I probably should have put a spoiler alert in the title.
A 2020 report from the British Royal Air Force (RAF) warns that it was able to use a swarm of affordable off-the-shelf decoys to “wreak havoc on enemy integrated air defense systems.”
“During the demonstration, a number of Callen Lenz drones were equipped with a modified Leonardo BriteCloud decoy, allowing each drone to individually deliver a highly-sophisticated jamming effect,” according to Leonardo’s press release. “They were tested against ground-based radar systems representing the enemy air defence emplacement. A powerful demonstration was given, with the swarm of BriteCloud-equipped drones overwhelming the threat radar systems with electronic noise.”
You may be wondering if this is the first successful test by an air force of affordable off-the-shelf decoys wreaking havoc on air defense systems.
To answer that quickly, I present to you an account of decoys in a 1946 report called “Paper Bullets” from the United States Office of War Information.
A Mitchell bomber crew, which had been bombing Italian rail communications, carried a couple of bundles of leaflets and some wine bottles every time they went out to bomb. Questioned by a psychological warfare officer, who failed to find this particular plane on his schedule, one member of the crew replied: “This is psychological warfare, Mac. Before we hit the target we take a fake bomb run over the nearest flak crew and throw these bottles and the leaflets out. They whistle just like bombs and the flak crew takes cover. Then we go on and bomb as per schedule.”
Set aside the possibility that the crew was joking, and that they came up with a funny story to hide the fact that they drank a lot of wine while flying, perhaps as some form of self-medication.
The idea of dropping whistling bomb decoys over air defense units makes a lot of sense, and wine bottles might disintegrate or disappear enough to avoid suspicion of decoys.
RAND’s first attempt to model a nuclear strategy ignored so many key variables that it nonsensically called for deploying a fleet of aging turboprop bombers that carried no bombs because the United States did not have enough fissile material to arm them; the goal was simply to overwhelm Soviet air defenses, with no regard for the lives of the pilots.
Looking back again, the 1946 Paper Bullets view of the world ends with these questions:
We are very well aware that the right words properly put together, delivered at the right spot at the right moment, can capture and kill. Why not use words and ideas as an instrument of peace, rather than as an instrument of death? A longing for peace is deep in the hearts of all decent peoples everywhere. There are good arguments for those who insist the best way to maintain the peace is to maintain a war machine to police the world and to keep the peace by force. Why not, then, the establishment of a U.S. Department of Information on the same status as the War Department and the Navy Department? Why not a U.S. Department of Information to police the world with words of truth?
We’ve come a long way from swarms being empty wine bottles, yet it seems also we haven’t moved very far along at all.
And I have to wonder if veterans talking about dropping bottles from planes is the kind of storytelling that inspired the iconic opening scene in The Gods Must Be Crazy…
The traditional drop cake (also called drop biscuit) was a popular historic treat in America copied from Europe. However, somehow in America the act of baking a common and popular British drop cake with common and popular chocolate turned into a fancy narrative about how chocolate chip cookies had just been “invented” by a woman in 1938.
Is the invention story true? Are they even American?
Let’s start by scanning through the typical drop cake recipes that can easily be found in the first recipe book publications in English:
1883: Ice-cream and Cakes: A New Collection
1875: Cookery from Experience: A Practical Guide for Housekeepers
1855: The Practical American Cook Book
1824: A New System of Domestic Cookery
1792: The London Art of Cookery
1765: The art of cookery, made plain and easy
Now let’s see the results of such recipes. Thanks to a modern baker who experimented with an 1846 “Miss Beecher’s Domestic Recipe Book” version of drop cake, here we have a picture.
Raisins added would have meant this would be a fruit drop cake (or a fruit drop biscuit). There were many variations possible and encouraged based on different ingredients such as rye, nuts, butter or even chocolate.
Here’s an even better photo to show drop cakes. It’s from a modern food historian who references the 1824 “A New System of Domestic Cookery” recipe for something called a rout cake (rout is from French route, which used to mean a small party or social event).
That photo really looks like a bunch of chocolate chip cookies, right? This food historian even says that herself by explaining “…[traditional English] rout cakes are usually a drop biscuit (cookie)…”.
Cakes are cookies. Got it.
This illustrates quickly how England has for a very long time had “tea cakes with currants”, which also were called biscuits (cookies), and so when you look at them with American eyes you would rightfully think you are seeing chocolate chip cookies. But they’re little cakes in Britain.
More to the point, the American word cookie was derived from the Dutch word koek, which means… wait for it… cake, which also turns into the word koekje (little cake):
Dutch: een koekje van eigen deeg krijgen = to get a little cake of your own dough (literal) = a taste of your own medicine (figurative)
So the words cake, biscuit and cookie all can refer to basically the same thing, depending on what flavor of English you are using at the time.
Expanding now on the above 1855 recipe book reference, we also see exactly what is involved in baking a drop cake/koekje/cookie:
DROP CAKES: Take three eggs, leaving out one white. Beat them in a pint bowl, just enough. Then fill the bowl even full of milk and stir in enough flour to make a thick, but not stiff batter. Bake in earthen cups, in a quick oven. This is an excellent recipe, and the just enough beating for eggs can only be determined by experience.
DROP CAKES. Take one quart of flour; five eggs; three fourths of a pint of milk and one fourth of cream, with a large spoonful of sifted sugar; a tea-spoon of salt. Mix these well together. If the cream should be sour, add a little saleratus. If all milk is used, melt a dessert-spoonful of butter in the milk. To be baked in cups, in the oven, thirty to forty minutes.
I used the word “exactly” to introduce this recipe because I found it so amusing to read the phrase “just enough” in baking instructions.
Imagine a whole recipe book that says use just enough of the right ingredients, mix just enough, and then bake just enough. Done. That would be funny, as it’s the exact opposite of how the precise science of modern baking works.
Bakers are like chemists, with extremely precise planning and actions.
And finally just to set some context for how common it became in America to eat the once-aristocratic drop cakes, here’s the 1897 supper menu in the “General Dining Hall Bill of Fare” from the National Home for Disabled Volunteer Soldiers:
Back to the question of chocolate chip cookies versus drop cakes: given all the current worry about disinformation, a story researched by Mental Floss explains that a myth has been created around someone making “Butter Drop Do” and inventing cookies because she accidentally used a “wrong” type of chocolate.
The traditional tale holds that Toll House Inn owner Ruth Wakefield invented the cookie when she ran out of baker’s chocolate, a necessary ingredient for her popular Butter Drop Do cookies (which she often paired with ice cream—these cookies were never meant to be the main event), and tried to substitute some chopped up semi-sweet chocolate instead. The chocolate was originally in the form of a Nestle bar that was a gift from Andrew Nestle himself—talk about an unlikely origin story! The semi-sweet chunks didn’t melt like baker’s chocolate, however, and though they kept their general shape (you know, chunky), they softened up for maximum tastiness. (There’s a whole other story that imagines that Wakefield ran out of nuts for a recipe, replacing them with the chocolate chunks.)
There are three problems with this story.
One, saying “butter drop do cookies” is like saying “butter cake do little cakes.” That’s hard on the ears. “Butter drop do” seems to be some kind of misprint or badly scanned text.
This very uniquely named recipe can be found under a cakes category in the 1796 American Cookery book (and don’t forget many drop cake recipe books in England pre-dated this one by decades).
Butter drop do.
No. 3. Rub one quarter of a pound butter, one pound sugar, sprinkled with mace, into one pound and a quarter flour, add four eggs, one glass rose water, bake as No. 1.
The butter drop cake (do?) here appears to be an import of English aristocratic food traditions, which I’ve written about before (e.g. eggnog). But what’s really interesting about this 1796 American Cookery book is that actual cookie recipes can be found in it, and they are completely different from the drop cake (do?) one:
One pound sugar boiled slowly in half pint water, scum well and cool, add two tea spoons pearl ash dissolved in milk, then two and half pounds flour, rub in 4 ounces butter, and two large spoons of finely powdered coriander seed, wet with above; make roles half an inch thick and cut to the shape you please; bake fifteen or twenty minutes in a slack oven–good three weeks.
And that recipe using pearl ash (early version of baking powder) is followed by “Another Christmas Cookey”. So if someone was knowingly following the butter drop cake (do?) recipe instead, they also knew it was explicitly not called a cookie by the author.
Someone needs to explain why the chocolate chip cookie “inventor” was very carefully following a specific cake/koekje recipe instead of a cookie one yet called her “invention” a cookie.
Two, a drop cake with chocolate chips looks almost exactly like drop cakes have looked for a century, with chips or chunks of sweets added. How inventive is it really to put the popular chocolate in the popular cake and call it a cookie?
Three, as Mental Floss points out, the baker knew exactly what she was doing when she put chocolate in her drop cakes and there was nothing accidental.
The problem with the classic Toll House myth is that it doesn’t mention that Wakefield was an experienced and trained cook—one not likely to simply run out of things, let accidents happen in her kitchen, or randomly try something out just to see if it would end up with a tasty result. As author Carolyn Wyman posits in her Great American Chocolate Chip Cookie Book, Wakefield most likely knew exactly what she was doing…
She was doing what had been done many times before, adding a sweet flavor to a drop cake, but she somehow then confusingly marketed it as a chocolate chip cookie. I mean, she came up with a recipe, sure, but did she really invent something extraordinary?
Food for thought: is the chocolate chip cookie really just Americans copying unhealthy European habits (e.g. tipping) to play new world aristocrats instead of truly making something new and better?
What about the chocolate chip itself? Wasn’t that at least novel as a replacement for the more traditional small pieces of sweet fruit? Not really. The chocolate bar, which precipitated the chips, has been credited in 1847 to a British company started by Joseph Storrs Fry.
Thus it seems strange to say that an American putting a British innovation (chocolate bar chips) into a British innovation (drop cake/biscuit/cookie) is an American invention, as much as it is Americans copying and trying to be more like the British.
The earliest recipe I’ve found that might explain chocolate chip cookies is from 1912 (20 years before claims of invention) in “The Twentieth Century Book for the Progressive Baker, Confectioner, Ornamenter and Ice Cream Maker: The Most Up-to-date and Practical Book of Its Kind” by Fritz Ludwig Gienandt.
Amazon basically operates like the mob by seeking markets where regulation or justice is too weak to stop it from taking payments for unethical business practices.
It allegedly will muscle into markets as an engine of exploitation, which measures margin in the amount of harms it can get away with. Some say this is “natural” in the sense that it fits a pattern of American history:
Inequality in America was not born of the market’s invisible hand. It was not some unavoidable destiny. It was created by the hands and sustained effort of people who engineered benefits for themselves, to the detriment of everyone else.
Thus it somewhat predictably has been accused of building “successful growth” on fake and unsafe services and products that damage or kill, with no accountability for the widespread harms borne by others.
Moreover, such ill-gotten profits seem intentional, as they are concentrated in the hands of one man who spends a very small percentage on attempts to fix the harms. Just a few examples:
“Amazon’s Enforcement Failures Leave Open a Back Door to Banned Goods… Sold and Shipped by Amazon Itself”
“Amazon gives extremists and neo-Nazis banned from other platforms unprecedented access to a mainstream audience — and even promotes [dangerous and violent hate].”
“Amazon’s gigantic, decentralized, next-day delivery network brought chaos, exploitation, and danger to communities across America.”
“While the scale and severity may vary, a single theme often unites each newsworthy incident: An unsecured Amazon…”
“Amazon executive Joy Covey was killed [while riding her bike by a] van delivering Amazon packages….”
Here’s a deeper look into one case (pun not intended) that has been going on for a while now, where we can see flagrant violation of health for profits.
In 2020 Consumer Reports called out Amazon’s “Starkey” brand water, bottled in Idaho, because it violates safety standards that limit contaminants in water.
The bottled water, sold in most Whole Foods stores and on Amazon.com, was the only brand of the 45 tested by Consumer Reports scientists between February and May of this year that exceeded 3 parts per billion (ppb)…. Last year, CR tests found Starkey Spring Water exceeded the federal level…
Amazon was the ONLY brand of the 45 tested to fail the arsenic test. Many had undetectable amounts, which is great when you consider how dangerous arsenic is to human health.
Note that the report points out it also failed last year.
FDA told Whole Foods that tests had found levels as high as 12 ppb, which resulted in recalls of the water in 2016 and 2017… legal to sell in a bottle across the U.S., but it would be illegal if it came out of the tap…
Recalled in 2016 and 2017, failed tests in 2019 and 2020. Why is this water, which would be illegal to sell if it came from a tap, still being bottled and sold by Amazon?
Amazon explains on their Starkey information site in 2020 that trying to make this water safer would impact Amazon profits, so they’re not doing it.
Arsenic levels above 5 ppb and up to 10 ppb are present… it does contain low levels of arsenic. The standard balances the current understanding of arsenic’s possible health effects against the costs of removing arsenic from drinking water.
Possible health effects “balanced” is how they refer to not making their water safe for consumption.
Possible health effects?
Let it sink in how incredibly vague and misleading Amazon is being on a scientific topic of arsenic in order to say they won’t protect consumers from known harms. They should not be allowed to just casually blow off the harms as “possible health effects”.
Again, Amazon is the only brand of 45 to fail this test. Other brands have undetectable amounts. The other 44 competing brands are able to “balance” the correct way by investing in controls to make their products safe. Why doesn’t Amazon?
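To keep the numbers straight, here is a minimal sketch that classifies a water sample against the two limits quoted in the reporting above (Consumer Reports’ 3 ppb recommended maximum and the 10 ppb federal limit); the sample values are just the figures cited in this article:

```python
# Arsenic thresholds cited in the reporting (parts per billion).
CR_RECOMMENDED_LIMIT = 3    # Consumer Reports' recommended maximum
FEDERAL_LIMIT = 10          # U.S. federal limit for bottled water

def assess(ppb: float) -> str:
    """Classify a water sample against the two limits above."""
    if ppb > FEDERAL_LIMIT:
        return "exceeds federal limit"
    if ppb > CR_RECOMMENDED_LIMIT:
        return "exceeds CR recommended limit"
    return "within both limits"

# Figures quoted in the article:
print(assess(12))   # FDA tests behind the 2016-2017 recalls
print(assess(9.5))  # within Starkey's stated 5-10 ppb range
print(assess(1))    # typical of the other brands tested
```

Note that the middle case is the one Amazon defends as “balanced”: legal in a bottle, yet triple the level Consumer Reports considers acceptable.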
Starkey clearly states in their safety report they have decided not to invest in removing arsenic to safe levels, because they believe they can get away with it.
Amazon also clearly promotes this unsafe product as “bottled in Idaho,” as if that’s a helpful reference, yet nowhere includes the Idaho Department of Environmental Quality’s water contamination warnings:
Arsenic is a problem in some parts of Idaho.
“Some parts” is a reference to the area of Idaho (southwestern corner) where Starkey water is sourced.
In fact, the red area that shows up on the Idaho contaminant map stands out as having the worst levels in the entire US.
In summary, Amazon is selling water from the most arsenic-contaminated region of the US, putting it into harmful single-use plastic bottles, and continues to sell it despite years of public safety test failures.
FEW-View™ is an online educational tool that helps U.S. residents and community leaders visualize their supply chains with an emphasis on food, energy, and water. This tool lets you see the hidden connections and benchmark your supply chain’s sustainability, security, and resilience.
FEW-View™ is developed by scientists at Northern Arizona University and at the Decision Theater® at Arizona State University. FEW-View™ is an initiative of the FEWSION™ project, a collaboration between scientists at over a dozen universities (https://fewsion.us/team/).
FEWSION™ was founded in 2016 by a grant from the INFEWS basic research program of the National Science Foundation (NSF) and the U.S. Department of Agriculture (USDA). The opinions expressed are those of the researchers, and not necessarily the funding agencies.
However, there are two problems I see already with the map. First, it doesn’t go backward in time. The illustrations would be far more useful if I could pivot through 1880 to 1980. Second, the interactive maps allow you to break out a booze category but I have yet to find a way to filter on bananas and pineapples let alone ingredients for three flavors of ice cream.
The trials of the Dakota were conducted unfairly in a variety of ways. The evidence was sparse, the tribunal was biased, the defendants were unrepresented in unfamiliar proceedings conducted in a foreign language, and authority for convening the tribunal was lacking. More fundamentally, neither the Military Commission nor the reviewing authorities recognized that they were dealing with the aftermath of a war fought with a sovereign nation and that the men who surrendered were entitled to treatment in accordance with that status.
MNHS also relates how Dakota leaders have been recorded as clearly humane and civilized in their rationalizations of self-defense, yet received barbaric treatment by the white nationalist militants they fought against:
You have deceived me. You told me that if we followed the advice of General Sibley, and gave ourselves up to the whites, all would be well; no innocent man would be injured. I have not killed, wounded or injured a white man, or any white persons. I have not participated in the plunder of their property; and yet to-day I am set apart for execution, and must die in a few days, while men who are guilty will remain in prison. My wife is your daughter, my children are your grandchildren. I leave them all in your care and under your protection. Do not let them suffer; and when my children are grown up, let them know that their father died because he followed the advice of his chief, and without having the blood of a white man to answer for to the Great Spirit.
Those of the Dakota who had fought in the war either retreated for the winter or were killed or captured. The U.S. military decided it wasn’t staffed to pursue them. Thus the only Dakota people brought into custody by the U.S. were the elderly, women, and children; nearly 2,000 people who had nothing to do with the war were coaxed into surrender by the U.S. military and then death-marched for days into a concentration camp to be abused and die.
They lost everything. They lost their lands. They lost all their annuities that were owed them from the treaties. These are people who were guilty of nothing.
Henry Whipple traveled to Washington to meet with Lincoln; he explained to the president that Dakota grievances stemmed in large part from the greed, corruption, and deceit of government agents, traders, and other whites. Lincoln took what he called “the rascality of this Indian business” into consideration and granted clemency to most of those sentenced to die.
Minnesota History Magazine further relates that, a year later, a prominent leader of the Dakota people was murdered by white settlers who simply noticed him eating wild raspberries and decided to hunt, kill, decapitate, and scalp him for that alone:
Even if a state of war had existed in 1863, the Lamsons’ action could not be defended as legal. They were mere civilians, who under international law have no right to take up arms against the enemy and who will be hanged summarily if they do. The ordinary law of murder would apply to them. […] If killing in reliance upon the adjutant general’s orders would be murder under the law in force in 1863, obviously killing before any orders were issued would be an even stronger case of murder. Thus Little Crow was tendered a posthumous apology. One must reach the conclusion that in strict law the Lamsons were provocateurs and murderers.
Shot on sight without any questions, Little Crow was a nationally recognized and celebrated man who had negotiated Treaties of Traverse des Sioux and Mendota in 1851. It was he who had moved a band of Dakota from their massive 25 million acre territory into a tiny (20 mile by 70 mile) reservation.
In 1850, the white population of what would soon be the state of Minnesota stood at about 6,000 people. The Indian population was eight times that, with nearly 50,000 Dakota, Ojibwe, Winnebago and Menominee living in the territory. But within two decades, as immigrant settlers poured in, the white population would mushroom to more than 450,000.
Ten years later, by the war of 1862 (and after he was coerced into an even worse treaty in 1858), Little Crow became known as the Dakota leader who took a principled and fair stand against his former trading partner, U.S. General Sibley.
The U.S. government allegedly had offered the Dakota only a few cents per acre for their entire ceded territory space in treaties, and gave promises of annuity payments and food supplies. Yet while their land was taken away the agreed upon payments and food didn’t come. It was in this context that white settlers flooded the area historically inhabited by Dakota.
Congress passes the Homestead Act, a law signed by President Abraham Lincoln on May 20, 1862, offering millions of acres of free land to settlers who stay on the land for five years. The act brings 75,000 people to Minnesota over three years. To qualify for 160 free acres, settlers have to live on it for five years, farm and build a permanent dwelling. Those able to spend the money can buy the 160 acres at $1.25 an acre after living on it for six months.
The federal government was effectively buying land cheap and then selling 160-acre parcels of it for either $200 (20X the cost) or five years of farming and construction.
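The arithmetic behind that markup can be checked directly. The per-acre cost to the government is an assumption here: the treaties above say “a few cents per acre,” taken as roughly six cents to match the 20X figure:

```python
ACRES_PER_PARCEL = 160
SALE_PRICE_PER_ACRE = 1.25     # Homestead Act cash price per acre
EST_COST_PER_ACRE = 0.0625     # assumed: "a few cents per acre" paid for ceded land

parcel_price = ACRES_PER_PARCEL * SALE_PRICE_PER_ACRE
markup = SALE_PRICE_PER_ACRE / EST_COST_PER_ACRE

print(parcel_price)  # 200.0 -> the $200 parcel price
print(markup)        # 20.0  -> the roughly 20X markup
```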
Since the tiny reservation wasn’t producing food as promised to them, and the U.S. government was intentionally withholding the payments and supplies needed to survive, huge numbers of Dakota faced starvation and demanded quick restitution.
On top of that, white settlers had been illegally encroaching into even the tiny Dakota reservation area. The Dakota had no choice but to reassert rights to the money, food, and land they already had negotiated.
Tension grew from the U.S. refusing to help, withholding food and money from the now trapped Dakota population in an attempt to “force conformance to white ideals” of a “Christian” lifestyle.
While Dakota parents watched their children starve to death, pork and grain filled the Lower Sioux Agency’s new stone warehouse, a large square building of flat, irregularly shaped stones harvested from the river bottoms. […] “So far as I’m concerned, if they are hungry, let them eat grass or their own dung,” [warehouse owner] Myrick said.
The U.S. strategically reneged on agreements and intentionally starved Dakota populations into desperation, before ultimately using attempts at self-defense as justification for mass unjust executions and murder. This was followed by Minnesota settlers banishing the native population entirely from their own historic territory under penalty of death into concentration camps, offering rewards to anyone who could trap and kill the Native Americans (Minnesota’s government offered a reward up to $200 — roughly $4000 in 2019 terms — for non-white human scalps).
At a higher level the race in 1862 to settle territory inhabited and owned by Native Americans had been complicated the year before by militant southern states starting a Civil War to violently force expansion of slavery into any new states. Thus, just as John Brown’s attempt to incite abolition got him executed in 1859 as a “traitor” to America, the Dakota people fighting for freedom from tyranny three years after in 1862 were unjustly tried by Minnesota settlers and executed on December 26.
Old John Brown’s body lies moldering in the grave,
While weep the sons of bondage whom he ventured all to save;
But tho he lost his life while struggling for the slave,
His soul is marching on.
John Brown was a hero, undaunted, true and brave,
And Kansas knows his valor when he fought her rights to save;
Now, tho the grass grows green above his grave,
His soul is marching on.
He captured Harper’s Ferry, with his nineteen men so few,
And frightened “Old Virginny” till she trembled thru and thru;
They hung him for a traitor, they themselves the traitor crew,
But his soul is marching on.
…military historian Len Fullenkamp reflects on the importance of immersing oneself in the minds of strategic leaders facing dynamic and complex situations. One tool is the staff ride, an opportunity to walk a battlefield and understand the strategic perspective of the leaders…
I’ve walked countless battlefields and tried to relive the decisions of the time. One of the most unforgettable was a trench line perfectly preserved even to this day on a ridge that held off waves of attacks for several sleepless days.
On another long-gone battle ground I stumbled upon three live bullets that had been abandoned for decades, slowly rusting into the ground atop a lookout. I held them in my hand and stared across the dusty exposed road below for what seemed like hours.
Yet I rarely if ever have seen a similar opportunity in the field of security I practice most today. Has anyone developed a “staff ride” for some of the most notorious disasters in security leadership such as Equifax, Target, Facebook…? That seems useful.
In this podcast the speaker covers the disastrous Pickett’s Charge by pro-slaveholder forces in America. After a two-day investment, the bumbling General Lee miscalculated and ordered thousands of men to their deaths, in what he afterwards described plainly as “had I known… I would have tried something different.”
Fullenkamp then goes from this into a long exploration of risk management until he describes leadership training on how to make good decisions under pressure:
What is hard is making decisions in the absence of facts.
Who could be the Fullenkamp of information security, taking corporate groups to our battlefields for leadership training?
Also I have to point out Fullenkamp repeats some false history, as he strangely pulls in a tangent about how General Grant felt about alcohol. Such false claims about Grant have been widely discredited, yet it sounds like Fullenkamp is making poor decisions with an absence of facts.
Accusations of alcoholism were a smear and propaganda campaign, as historians today have been trying to explain. For example:
Grant never drank when it might imperil his army. […] Grant, in a letter to his wife, Julia, swore that at Shiloh, he was “sober as a deacon no matter what was said to the contrary.”
After Grant’s death, exaggerated stories about his drinking became ingrained in American culture.
First, the truth of the charges against Grant relates to America’s pre-Civil War political and military patronage system (corruption, basically) being unkind to him. He succeeded in spite of it, and he was living proof of someone using the past to better understand the present.
After extensive experience fighting in all the major battles of the Mexican-American War, he didn’t sit well with being idle and under-utilized. He was introverted and critical of low-performing peers. A superior officer in California used minor charges involving alcohol as a means to exercise blunt authority over the brilliant Grant.
Second, it was KKK propaganda campaigns of prohibition that pushed the false idea that Grant’s dispute with his superior was some kind of wild and exaggerated issue relevant to prohibition.
In fact history tells us how pro-slavery Generals literally became so drunk during battles they disappeared and were useless, every single time they fought. The KKK projected those real alcoholic events from pro-slavery leadership onto Grant to obscure their own failed history and try to destroy his name.
Apparently it worked because it’s 2019 and far past time for people to stop repeating shallow KKK propaganda about America’s greatest General and one of the greatest Presidents.
A Harvard man walks into a wildlife protection demo and an AI system made by Intel labels him a poacher. His reaction is fascinating. He criticizes machines in a way that seems just as fitting for humans. Would he have reacted the same to a human labeling him in this manner? Even more interesting is the man labeled a poacher is from an institution (Harvard) that has been known to perpetuate injustices like poaching.
This incident raises the question of whether we should expect human intelligence to be criticized as often or as vocally as machine intelligence. Is it right to expect more of machines than humans in this scenario? I’d like to explore with this post whether humans of “Harvard intelligence” could be expected to pass the bar set by Harvard for a machine of “Artificial intelligence.”
In other words what if people who graduate from Harvard, who claim to be intelligent, exhibited the same or worse behavior as a machine labeling people poachers in the wildlife protection demo?
POACHER: A poacher is generally defined as someone who unfairly or dishonestly takes and uses something for themselves when it belongs to someone else.
HARVARD: Harvard is generally defined as a school with a tarnished legacy that today remains affiliated with white men in positions of power who display very questionable ethics (Pompeo, Kobach, Zuckerberg…). Here’s a perfect example:
Harvard University is profiting from one of the earliest known photographs of an enslaved man, despite requests by his descendants to stop doing so, the man’s great-great-great-granddaughter says in a lawsuit…
A little deeper inquiry into that lawsuit reveals that Harvard was heavily invested in perpetuating white supremacy doctrines even after the US Civil War forcefully decided blacks should no longer have their bodies taken unfairly or dishonestly for use by white men.
In 1865, just as emancipation was being secured in the United States, [Harvard professor] Agassiz had more than a hundred photos taken of nude African-descended Brazilians to build support for white supremacy and polygenesis. With slavery in the United States ended, Agassiz’s work became even more critical: In a moment when America’s future regarding race was highly malleable, building a scientific foundation to support continued white supremacy was even more of an imperative.
Harvard has been extremely slow to address its racist and unethical foundations, which supported unfair and dishonest practices. It also should concern everyone how many white supremacists even today hold Harvard degrees. Shouldn’t they fail tests of intelligence?
INTELLIGENCE: Intelligence is defined here with Gottfredson’s perspective that it relates to a broad and deep capability for comprehending surroundings, such as making sense or figuring out what to do. For example, what should Harvard do when asked to stop unfairly or dishonestly taking and using something for themselves?
Example of Harvard “intelligence”
Kris Kobach of Kansas (KKK) is a good example: he earned a BA degree in Government in 1988, with distinction for being the top student in his department. We also should include Kobach’s adviser (trainer, if you will): the director of Harvard’s Center for International Affairs, Professor Samuel P. Huntington.
Huntington infamously taught Kobach nativist doctrine, such as how to block non-white participation in government. One of his crazier theories was that people of Central and South America who enter the US pose an existential threat to the “American identity.”
Mexican intellectual Enrique Krauze described Huntington’s method as a “crude civilizational approach.” Carlos Fuentes called Huntington “profoundly racist and also profoundly ignorant” and accused him of adopting the favored fascist tactic of creating a generalized fear of “the other.” Henry Cisneros noted that Professor Huntington was “hand-wringing over the tainting of Anglo-Protestant bloodlines.” Andres Oppenheimer of Miami called Huntington’s work “pseudo-academic xenophobic rubbish” and called for national protests against Harvard University and publisher Simon & Schuster. Even those sympathetic to Huntington’s anxiety about Mexican immigration stood their distance. Alan Wolfe said that at times Huntington’s writing bordered on hysteria, and that he appeared to be endorsing white nativism. The editors of the British magazine The Economist questioned Huntington’s notion of Anglo Protestant culture, noting that it had been “a long time since the Mayflower.”
Kobach earned top honors in government theory in the late 1980s, trained under this obviously racist and xenophobic adviser. Can you guess, based on world political events at that time, what came next?
In 1990 (when the fall of South Africa’s apartheid was still four years away) Kobach published a pro-apartheid book titled “Political Capital: The Motives, Tactics, and Goals of Politicized Businesses in South Africa” (University Press of America).
Kobach wrote about a white police state as good for business. He seemed to think that beating down non-white populations (those seeking equal rights with white police) was how to push wealth into white hands, all as a matter of “peacekeeping”.
Technically speaking, he wrote that “strict Verwoerdian apartheid enforced with an iron fist can be seen as a route to a more stable South Africa”. You can see it on page 28 of his Harvard thesis:
After graduating and publishing his pro-apartheid screed, Kobach embarked on a life quest “to enact a nativist agenda, often from within the government.”
In other words, intelligence doesn’t seem like the right word to describe this top student from Harvard. He did the wrong things over and over. What if a machine made these same mistakes? He literally made a career out of falsely labeling humans and declaring them a threat based on completely debunked white supremacist theories of species preservation (nativism).
Harvard criticism of Artificial “intelligence”
Fast forward to today’s debate on AI ethics and we have a Harvard man saying an “intelligent” system has unfairly labeled him a poacher, much to his astonishment.
Hey, did that system read history and know he was from Harvard, an institution known for its unauthorized appropriations? No.
Does looking at someone’s training environment, and probability of learning selfish supremacy doctrines, seem like a good way to find people who favor poaching? Maybe.
Those ideas are far more complicated as learning models, however, than what actually happened. The “poacher” label turns out to be very easily explained.
First, kudos certainly go to Latonero for speaking out from within the horribly tarnished halls of Harvard.
His article does seem a little overly “why me” and primarily concerned for his own welfare, yet it makes a fair point that he doesn’t understand the authority or perspective of the system labeling him.
Walking through the faux flora and sounds of the savannah, I emerged in front of a digital screen displaying a choppy video of my trek. The AI system had detected my movements and captured digital photos of my face, framed by a rectangle with the label “poacher” highlighted in red. […] I couldn’t help but wonder: What if this happened to me in the wild? Would local authorities come to arrest me now that I had been labeled a criminal? How would I prove my innocence against the AI? Was the false positive a result of a tool like facial recognition, notoriously bad with darker skin tones, or was it something else about me? Is everyone a poacher in the eyes of Intel’s computer vision?
Second, at no point does he say, for example, that 35,000 poached elephants is a catastrophe worthy of solving. Is there ever a case to be made for labeling? Perhaps this is one place where simple labels make sense, as a piece of a puzzle that trends towards more sophisticated answers and broader actions.
Those deaths are approaching extinction-level threats, and the elephants live in natural prisons where no human should be…hold that thought.
Latonero gets a good and clear answer to his question and just brushes it off as insufficient.
When I reached out to the head of Intel’s AI for Good program for comment, I was told that the “poacher” label I received at the TrailGuard installation was in error—the public demonstration didn’t match the reality. The real AI system, Intel assured me, only detects humans or vehicles in the vicinity of endangered elephants and leaves it to the park rangers to identify them as poachers.
There we go. Intel clearly says a simplistic algorithm looks for humans within a space that is authorized only for animals. When a human enters the space, they are labeled a poacher because they lack authorization, and it is assumed they entered unfairly or dishonestly.
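The logic Intel describes is almost trivially simple. Here is a minimal sketch of that labeling rule; the class names, the `demo_mode` flag, and the alert strings are my own illustrative assumptions, not actual TrailGuard code:

```python
from typing import Optional

# Hypothetical reconstruction of the labeling rule Intel described.
# Everything here (class names, the demo_mode flag, label strings)
# is an assumption for illustration, not actual TrailGuard code.

PROTECTED_CLASSES = {"elephant"}      # expected occupants of the zone
ALERT_CLASSES = {"human", "vehicle"}  # anything else that moves

def label_detection(detected_class: str, demo_mode: bool = False) -> Optional[str]:
    """Label a detection inside a space authorized only for animals."""
    if detected_class in PROTECTED_CLASSES:
        return None  # expected wildlife, no alert
    if detected_class in ALERT_CLASSES:
        # The deployed system reportedly stops at "human or vehicle"
        # and leaves identification to park rangers; the public demo
        # collapsed "unauthorized presence" straight into "poacher".
        return "poacher" if demo_mode else "human or vehicle detected"
    return None
```

The entire controversy sits in that one conditional: the demo skipped the ranger and printed the worst-case label.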
I can understand why Latonero was shocked to be labeled “unauthorized”. He probably wouldn’t have thought twice if the screen had said that, or even just “human”, instead of making the logical leap from unauthorized access to poacher.
Walking around at an “MIT conference on emerging AI technology”, he felt entitled to enter a space and approach the sensor. He did not appreciate being told his actions were a violation linked to extinction-level threats.
It sounds perhaps like what a Mexican immigrant to Texas (a state forcibly taken from Mexico) might feel when being labelled by Kobach as a violation and an extinction-level threat.
Using the Harvard critique of intelligent systems to assess Harvard graduates
Ok, now imagine Kobach is that AI system that Latonero walks up to. Let’s say Latonero is an American migrating into the US. Kobach would then label Latonero a threat and…nothing seems to happen. Am I right here?
I don’t see any Harvard ethics experts lining up to warn us of the “intelligent” people emerging from Harvard training who use simplistic and dangerous labels to harm society.
Again, I can give kudos to a Harvard expert for calling attention to simplistic labeling and calling it less than intelligent, yet I have to point out that his warnings would be far better aimed at a take-down of Kobach, banning him from any authority or office.
Graduates of Harvard who perpetuate its awful past and poaching ways are far worse than the AI system that Latonero is warning about.
We should fix both humans and machines, and by comparison we have easy solutions ready for the latter…but the real question here is whether an AI system designed to protect humanity from poachers would be seen as accurate if it labeled Kobach an existential threat to society.
After all, a Harvard affiliation really could get someone classified as a probable poacher.
And on that note the parallels are closer than you might realize:
…Kris Kobach is having a tough time finding support for a plan that would allow the [2012 Kansas] governor to distribute 12 big-game hunting permits at his discretion.
In other words, Kobach literally tried to pass a law to bypass wildlife safety authorities, which would shield himself and his associates from being labeled poachers. He could literally hand out a sort of get-out-of-jail card, the sort of thing the KKK famously used during Prohibition to limit alcohol to whites only.
Kobach’s failure to pass this self-entitlement bill led to this embrace, in early 2016, with an infamous elephant killer:
Kansas Secretary of State Kris Kobach sports an orange hunting cap, a long gun and a wide grin as he stands alongside the president’s son and 20 dead pheasants.
…Trump announced that the lifting of the ban [on importing dead elephants] was on hold, pending further review. In a follow-up tweet, he went on to say he’d “be very hard pressed to change my mind that this horror show in any way helps conservation of Elephants or any other animal.”
Hopefully this post has helped explain that Harvard makes the best case yet that Harvard should be criticizing Harvard more.
Michelle Obama, who obviously speaks truth to power, doesn’t believe at all in the “Lean In” concept. The title of this post comes from her being quoted in a new Wired story about the aristocratic methods of Facebook’s COO (referred to as an empress).
Wired points out how “Lean In” may soon instead be better known as some sick “shit”, fast becoming the “Let them eat cake” of our times.
The Wired piece is an excellent dive into how and why Facebook leadership worked to hack people into bits they could profit from: a form of human exploitation and mining of assets.
The “chiefs” overseeing Facebook’s industrial-scale hacking of humans took on such aristocratic airs that there’s probably a book to be written about what that looked like in terms of mental health. They arguably went mad in their race for wealth accumulation.
Some of my neighbors in San Francisco literally lost their grip on reality by working one day a week in Facebook’s human exploitation mills, amassing piles of cash to spend on luxury goods and cake-filled offices and homes of isolationism.
Critics are already claiming they can’t do business if they have to follow regulations meant to protect consumer privacy. These lobbying types are, of course, peddling risk-management nonsense in the face of far too many breaches and a long downward slide in consumer confidence in data platforms.
The current round of criticism reminds me of those opposed to food safety regulations even after Upton Sinclair’s 1906 book The Jungle pointed out how rats and workers’ body parts were being ground up and shipped as sausage.
Cloud providers are like sausage factories, especially the largest ones, and for far too long they have been allowed to operate without basic duties of care, deliberately avoiding investment in safety innovation because it would bring accountability for harms. And yes, Facebook is the wurst.
Those of us actively innovating in information technology see regulations such as CCPA as welcome guard rails, which spur long overdue innovations in data platform controls and help the data platform market grow more safely.
The proposed regulations set out some clear “shall not” of consumer personal information:
(3) A business shall not use a consumer’s personal information for any purpose other than those disclosed in the notice at collection. If the business intends to use a consumer’s personal information for a purpose that was not previously disclosed to the consumer in the notice at collection, the business shall directly notify the consumer of this new use and obtain explicit consent from the consumer to use it for this new purpose.
(4) A business shall not collect categories of personal information other than those disclosed in the notice at collection. If the business intends to collect additional categories of personal information, the business shall provide a new notice at collection.
(5) If a business does not give the notice at collection to the consumer at or before the collection of their personal information, the business shall not collect personal information from the consumer.
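Read as logic, these three provisions reduce to simple gates on both the category collected and the purpose of use. A rough sketch; the function names and data structures are my own, not from the regulation text:

```python
# Illustrative reduction of provisions (3)-(5) above to predicates.
# Names and structures are invented for this sketch; the regulation
# text is the authority.

def may_collect(category: str, notice_given: bool, disclosed_categories: set) -> bool:
    """(4) and (5): no notice at collection means no collection at all,
    and only disclosed categories may be collected."""
    return notice_given and category in disclosed_categories

def may_use(purpose: str, disclosed_purposes: set, consented_purposes: set) -> bool:
    """(3): a purpose must be disclosed in the notice at collection,
    or the consumer must have given explicit consent after direct notice."""
    return purpose in disclosed_purposes or purpose in consented_purposes
```

The point of writing it this way is that both checks fail closed: absent notice, disclosure, or consent, the default answer is no.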
They also set out clear timelines for requests to delete data:
(a) Upon receiving a request to know or a request to delete, a business shall confirm receipt of the request within 10 days and provide information about how the business will process the request. The information provided shall describe the business’s verification process and when the consumer should expect a response, except in instances where the business has already granted or denied the request.
(b) Businesses shall respond to requests to know and requests to delete within 45 days. The 45-day period will begin on the day that the business receives the request, regardless of time required to verify the request.
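The key detail is in (b): the clock starts the day the request is received, not when verification finishes. A small sketch of the two deadlines (the constants and names are mine, summarizing the quoted text):

```python
from datetime import date, timedelta

CONFIRM_DAYS = 10  # (a): confirm receipt and describe processing
RESPOND_DAYS = 45  # (b): substantive response to know/delete requests

def request_deadlines(received: date) -> dict:
    """Both deadlines run from the day the request is received,
    regardless of how long verification takes."""
    return {
        "confirm_by": received + timedelta(days=CONFIRM_DAYS),
        "respond_by": received + timedelta(days=RESPOND_DAYS),
    }

# e.g. a request received January 1 must be confirmed by January 11
# and answered by February 15.
```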