Of the roughly 600,000 troops Napoleon marched into Russia, his mistreatment left barely 20,000 alive. This scene captures the desperation of their existence: burning whatever they could find for warmth, including regimental standards and flags. These weren’t just pieces of cloth; they were sacred symbols of military honor and unit identity, and French soldiers burned them for basic survival, stripped of any pride. Source: Wojciech Adalbert Kossak’s woodcut depicting the French retreat on 29 November 1812.
For all the extravagant jewelry and fine dining the ruthless Napoleon loved to shower himself in, his troops basically died as disposable slaves.
“We have these paintings in the museums of soldiers in shiny armors, of Napoleon on his horse, fit young men marching into battle,” Binder says.
“But in the end, when we look at the human remains, we see an entirely different picture,” she says.
It’s a picture of lifelong malnutrition, broken feet from marching too far, too quickly, and bodies riddled with disease.
Napoleon was truly a horrible human. The Grande Armée marched without adequate supply lines because his plan was literally to rape and pillage the land—as if his soldiers could sustain themselves while marching hundreds of miles into hostile territory. When Russia came up empty, hundreds of thousands of his own men starved and froze to death. Meanwhile, his baggage train advanced and retreated with his expansive silver dinnerware and fresh steaks.
Scientists are thus confirming a subtext of the well-known disasters: Napoleon was never building a professional army. He was rapidly extracting every ounce possible from expendable human material, in service of a hopeless imperial ambition that couldn’t last.
Authoritarian systems consistently demonstrate this pattern of toxic leadership that treats humans as disposable, while maintaining elaborate fake performances of power and legitimacy to hide their dangerous extraction.
The gap that emerges between the storytelling of museum paintings and the facts from modern bone pathology isn’t just artistic license; it’s evidence of horribly corrupted power systematically erasing human cost from its own projects and records.
The devastating supply-line failure that killed his own men wasn’t logistical incompetence; it was a strategy of “efficiency” coming to bear. Napoleon’s fail-faster doctrine did indeed fail faster, to the tune of more than 400,000 of his own soldiers destroyed for… nothing.
Charles Minard’s renowned graphic of Napoleon’s 1812 march on Moscow. The tremendous casualties show in the thinning of the line (1 millimeter of thickness equals 10,000 men) across space and time.
Napoleon is still falsely framed as a military genius rather than as a mass murderer, someone who burned everything he touched, destroyed human lives at an industrial scale, and then “efficiently” lost it all. His “strong man” propaganda continues to work centuries later, which should make us deeply skeptical of how current authoritarian systems (e.g. Trump) present their own real costs.
The great myth of AI is that it will improve over time.
Why?
I get it. As I warned about AI in 2012, people want to believe in magic. A narwhal tusk becomes a unicorn. A dinosaur bone becomes a griffin. All fake, all very profitable and powerful in social control contexts.
What if I told you Tesla has been building an AI system that encodes and amplifies worsening danger, through contempt for rules, safety standards, and other people’s lives?
People want to believe in the “magic” of Tesla, but there’s a sad truth finally coming to the surface. Elon Musk has spent ten years promising that AI will make his cars driverless “a year from now,” as if Americans can’t recognize snake oil of the purest form.
Back in 2016 I gave a keynote talk about Tesla’s algorithms being murderous, implicated in the death of Josh Brown. I predicted it would get much worse, but who back then wanted to believe this disinformation historian’s Titanic warnings?
Source: My 2016 BSidesLV keynote presentation comparing Tesla autopilot to the Titanic
If there’s one lesson to learn from the Titanic tragedy, it’s that designers believed their engineering made safety protocols obsolete. Musk sold the same lie about algorithms. Both turned passengers into unwitting deadly test subjects.
I’ll say it again now, as I said back then despite many objections, Josh Brown wasn’t killed by a malfunction. The ex-SEAL was killed by a robot executing him as it had been trained.
Ten years later and we have copious evidence that Tesla systems in fact get worse over time.
NHTSA says the complaints fall into two distinct scenarios. It has had at least 18 complaints of Tesla FSD ignoring red traffic lights, including one that occurred during a test conducted by Business Insider. In some cases, the Teslas failed to stop, in others they began driving away before the light had changed, and several drivers reported a lack of any warning from the car.
At least six crashes have been reported to the agency under its standing general order, which requires an automaker to inform the regulator of any crash involving a partially automated driving system like FSD (or an autonomous driving system like Waymo’s). And of those six crashes, four resulted in injuries.
The second scenario involves Teslas operating under FSD crossing into oncoming traffic, driving straight in a turning lane, or making a turn from the wrong lane. There have been at least 24 complaints about this behavior, as well as another six reports under the standing general order, and NHTSA also cites articles published by Motor Trend and Forbes that detail such behavior during test drives.
Perhaps this should not be surprising. Last year, we reported on a study conducted by AMCI Testing that revealed both aberrant driving behaviors—ignoring a red light and crossing into oncoming traffic—in 1,000 miles (1,600 km) of testing that required more than 75 human interventions.
Let’s just start with the fact that everyone has been saying garbage in, garbage out (GIGO) is a challenge to overcome in AI, since forever.
And by that I mean, even common sense standards should have forced headlines about Tesla being at risk of soaking up billions of garbage data points and producing dangerous garbage as a result. It was highly likely, at face value, to become a lawless killing machine of negative societal value. And yet, its stock price has risen without any regard for this common sense test.
Imagine an industrial farmer announcing he was taking over a known dangerous Superfund toxic sludge site to suddenly produce the cleanest corn ever. We should believe the fantasy because… why? And to claim that corn will become less deadly the more people eat it and don’t die…? This survivor fallacy of circular nonsense from Tesla is what Wall Street apparently adores. Perhaps because Wall Street itself is a glorified survivor fallacy.
Let me break the actual engineering down, based on the latest reports. The AMCI Testing data (75 interventions in 1,000 miles) provides a quantifiable failure rate: a Tesla needing intervention roughly every 13 miles.
Holy shit, that’s BAD. Like REALLY, REALLY BAD. Tesla is garbage BAD.
Human drivers in the US average one police-reported crash every 165,000 miles. Tesla FSD requires human intervention to prevent violations or crashes roughly 12,000 times more often than that human baseline crash rate.
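A back-of-the-envelope check of those numbers, using only the figures cited above (75 interventions per 1,000 miles from AMCI Testing; one police-reported crash per 165,000 human-driven miles), with the obvious caveat that interventions and crashes are not the same event type, so this is a scale comparison rather than a like-for-like rate:

```python
# Rough scale comparison using only the figures cited above.
# Caveat: interventions are not crashes, so this compares how often
# a human had to step in versus how often humans crash on their own.

fsd_interventions = 75           # AMCI Testing: interventions observed
fsd_test_miles = 1_000           # AMCI Testing: miles driven under FSD
human_miles_per_crash = 165_000  # average miles per police-reported crash

miles_per_intervention = fsd_test_miles / fsd_interventions
ratio = human_miles_per_crash / miles_per_intervention

print(f"One intervention every {miles_per_intervention:.1f} miles of FSD")
print(f"Roughly {ratio:,.0f}x more frequent than one human crash "
      f"per {human_miles_per_crash:,} miles")
```

Run it and the output lands at one intervention every 13.3 miles, about 12,000 times more frequent than the human crash baseline.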
Elon Musk promised investors a 2017 arrival of a product superior to “human performance”, yet in 2025 we see code that is still systematically worse than a drunk teenager.
And, it’s actually even worse than that. Tesla re-releasing a “Mad Max” lawless driving mode in 2025 is effectively a cynical cover up operation, to double-down on deadly failure as normalized outcomes on the road. Mad Max was a killer.
I’ve quibbled with GIGO for as long as I’ve pointed out that Tesla will get worse over time. I could explain, but I’m not sure the higher bar even matters at this point. There’s no avoiding the fact that the basic GIGO tests show Tesla was morally bankrupt from day one.
The problem isn’t just that Tesla faced a garbage collection problem; it’s that their entire training paradigm was fundamentally flawed on purpose. They’ve literally been crowdsourcing violations and encoding failures as learned behavior. They have been caught promoting rolling stops, they have celebrated cutting lanes tight, and they even ingested a tragic pattern of racing to “beat” red lights without intervention.
That means garbage was being relabeled “acceptable driving.” Like picking up an old smelly steak that falls on the floor and serving it anyway as “well done”. Like saying white nationalists are tired of being called Nazis, so now they want to be known only as America First.
This is different from traditional GIGO risks because the garbage is a loophole that allows a systematic bias shift towards more aggressive, rule-breaking, privileged asshole behavior (e.g. Elon Musk’s personal brand).
The system over time was set up to tune towards narrowly defined aggressive drivers, not the safest ones.
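A minimal sketch of that selection bias, assuming a purely hypothetical labeling pipeline (none of these field names or rules are Tesla’s actual implementation): if the only negative signal is an intervention or a crash, every violation the driver got away with flows into the training set as “acceptable driving.”

```python
from dataclasses import dataclass

@dataclass
class DrivingClip:
    ran_red_light: bool
    rolled_stop_sign: bool
    crossed_center_line: bool
    human_intervened: bool
    crashed: bool

def naive_fleet_label(clip: DrivingClip) -> str:
    # Hypothetical rule: anything that didn't end in an intervention
    # or a crash counts as a positive training example.
    if clip.crashed or clip.human_intervened:
        return "negative"
    return "acceptable driving"

def rule_aware_label(clip: DrivingClip) -> str:
    # What a safety-aligned labeler would do instead: a violation is a
    # negative example even when the driver got away with it.
    if clip.crashed or clip.human_intervened:
        return "negative"
    if clip.ran_red_light or clip.rolled_stop_sign or clip.crossed_center_line:
        return "violation"
    return "acceptable driving"

# A clip where the car rolled a stop sign and nothing bad happened:
clip = DrivingClip(ran_red_light=False, rolled_stop_sign=True,
                   crossed_center_line=False, human_intervened=False,
                   crashed=False)
print(naive_fleet_label(clip))  # "acceptable driving" <- garbage relabeled
print(rule_aware_label(clip))   # "violation"
```

Train on enough of those “acceptable” clips and aggressive behavior stops being noise in the data; it becomes the target.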
What makes this particularly insidious is the feedback loop I identified back in 2016. “Mad Max” mode from 2018 wasn’t just marketing resurfacing in 2025; it’s a legal and technical weapon the company deploys strategically.
Source: My presentation at MindTheSec 2021
Explicitly offering a “more aggressive” option means Tesla moves the Overton window while creating plausible deniability: “The system did what users wanted.”
This obscures the fact that their baseline behavior was degraded by training on violations, and it reframes those failures as merely the tamer setting next to an even worse option. Disinformation, defined.
Musk’s snake oil promises that Teslas would magically become safer through fleet learning require people to believe that more data automatically equals better outcomes. That’s like saying more sugar will make you happier. It’s only true if you have labeled ground truth (so you know how close to diabetes you are), a reward function aligned with actual safety, and the ability to detect and correct systematic biases.
Tesla has none of these.
They have billions of miles of “damn, I can’t believe Tesla got away with it so far, I’m a gangsta cheating death,” which is NOT the same as software that drives the car legally, let alone safely.
Tesla claimed to be doing engineering (testable, falsifiable, improvable) while actually doing testimonials (anecdotal, survivorship-biased, unfalsifiable). “My Tesla didn’t crash” is not data about safety; it’s the absence of a negative outcome, which is how drunk drivers justify their behavior too… like a teapot orbiting the sun (unfalsifiable claims based on absence of observed harm).
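A toy simulation of why “billions of crash-free miles” is a testimonial rather than a safety measurement. Every rate here is invented for illustration: the violation rate doubles between the two fleets, yet crashes stay so rare that almost any individual owner in either fleet can truthfully say “my Tesla didn’t crash.”

```python
import random

random.seed(42)

def simulate(miles: int, violations_per_mile: float,
             crash_given_violation: float = 0.0005) -> tuple[int, int]:
    """Return (violations, crashes) over a stretch of miles.
    Every rate here is invented, purely for illustration."""
    violations = sum(random.random() < violations_per_mile for _ in range(miles))
    crashes = sum(random.random() < crash_given_violation for _ in range(violations))
    return violations, crashes

# The "later" fleet has drifted twice as aggressive as the "early" one:
early = simulate(1_000_000, violations_per_mile=0.02)
later = simulate(1_000_000, violations_per_mile=0.04)

for label, (violations, crashes) in (("early fleet", early), ("later fleet", later)):
    print(f"{label}: {violations:,} violations, {crashes} crashes per million miles")

# Violations doubled, yet crashes remain rare enough in both fleets that
# "absence of observed harm" looks identical from any single driver's seat.
```

Without ground-truth labels for the violations themselves, the crash-free testimonial stays flattering while the underlying behavior deteriorates, which is exactly the survivorship trap described above.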
“If we build this robot army, do I have at least a strong influence over that robot army?” Musk said on the call. “I don’t feel comfortable building that robot army if I don’t have at least a strong influence.”
And what does he say his army is for?
…you can actually create a world where there is no poverty…
Musk is deploying the classic utopian framing that’s preceded every authoritarian project: “eliminate poverty” through technological dominance and centralized control.
I’ve written extensively about how these narratives work – from Hitler’s Lebensraum promise of “living space” to apartheid theology’s “separate development” to the ACTS 17 preacher Peter Thiel’s “optimal governance.”
The promise is always paradise; the mechanism is always control.
The “no poverty” promise always comes with an implicit answer to “for whom?”
Historically, these projects define poverty as a problem of the wrong people existing in the wrong places – solved through displacement, containment, or elimination rather than redistribution of resources or power.
That Nazi promise of human extraction was posted at the gates of “labor camps” meant to end poverty, where prisoners were worked to death under the slogan “Arbeit macht frei, durch Krematorium Nummer drei” (work sets you free, through crematorium number three).
Tesla can’t even make steering systems that reliably keep vehicles in their lanes. Their “solution” to societal problems likely will be even more dangerous than their “vision” failing to respect double yellow lines.
With an “army” of millions of autonomous machines under Elon Musk’s individual control, failure modes will become systematized violence. Swasticars: Remote-controlled explosive devices stockpiled by Musk for deployment into major cities around the world.
Musk is not talking about oversight, regulation, or democratic accountability. He wants personal control of an army as a precondition. This maps directly onto the history of territorial sovereignty projects such as apartheid: his demand is for extreme governance exemption with concentrated control (e.g. Nazism).
Hitler promised to solve poverty too, but he just redefined who counted as people, then built an enforcement apparatus to murder those redefined as “the poor.”
No one shall be hungry, no one shall freeze. […] Within 4 years the German farmer must be freed from his misery. Within 4 years unemployment must be finally overcome.
That’s what Musk’s “robot army” + “no poverty” means in practice. It’s another Stanford killing machine, like the one 1800s America built and Hitler studied.
The 1800s American West wasn’t just the homework for Nazi Lebensraum architects – it was their template. “Manifest Destiny” was utopian framing for Indigenous elimination. “Civilizing the frontier” meant systematic displacement and extermination. The “problem” of poverty was solved by redefining who counted as human, then deploying enforcement mechanisms (cavalry, settler militias, reservation systems) against those excluded from the category.
Stanford University sits on stolen Ohlone land, built with fraud and railroad money extracted through Chinese labor that was then excluded from the prosperity it created. The “Stanford” in “Stanford killing machine” isn’t metaphorical, it’s the institutional genealogy of genocide that Musk is invoking today.
Stanford’s racist platform became increasingly violent over just 5 years.
We must remember Churchill was dismissed as alarmist, warmongering, and unreasonable throughout the 1930s for warning about men of exactly the sort we now see in Elon Musk and Peter Thiel. The British establishment, including his own party, marginalized him precisely because he was willing to say what the threat actually was while others counseled moderation, diplomacy, and “not inflaming tensions.”
Churchill sips his “tea”
Churchill would say this is a centrally planned and controlled distributed weapons system with humanitarian marketing.
And Musk has admitted out loud:
Operating under single-person command authority
Demanding exemption from democratic oversight
Failure modes causing death
Intending scale in civilian population centers
Integrating with surveillance and targeting networks
That is by definition another Stanford-born genocidal killing machine, regardless of its nominal purpose.
Kudos to writer Cory Doctorow for his high-profile entry-level economics literacy campaign. I have to assume there’s an audience for his ideas, because people don’t know basic economics?
It’s a fancy new spin on old ideas: the monopoly rent-seeking, regulatory capture, and market power dynamics he describes are retreads from decades of prior writers.
Interesting that he invokes one prior theory, while not admitting to all the others he is borrowing from:
In the same way that Tim Berners-Lee rolled out of bed one morning and said, “The web is too important for me to take out a patent on it. Everyone’s gonna be able to use it.” And the way Jonas Salk said, “The polio vaccine is too important.” He said that owning this vaccine would be like owning the sun, so he didn’t patent it. I’m not a “Great Man of History” guy by any stretch, but I think those people show us the downstream effect of being a real mensch when you start something, just a really solid person, and how it can create a durable culture where there’s an ethos of kindness and care.
Right. Cory is definitely not a man of history, as that interview is basically just repeating textbook stuff well understood since… Stigler’s “The Theory of Economic Regulation” in 1971? Mancur Olson’s “The Logic of Collective Action” in 1965? Earlier, if you count the trust-busters: Brandeis and the Clayton Act in 1914? Tarbell and Standard Oil in 1904? Veblen in 1899? The Sherman Act of 1890?
The HUGE elephant in the room is… are we at a point where basic entry-level economics is served as a “big new idea” to gain traction? Apparently we can’t have normal policy debates using actual technical language anymore, it has to be injected through a viral hook. Part of that blame goes to the toxic ideology that leaked out of the Chicago School labs, which for 40 years misled people that monopoly concerns were outdated/debunked. So now this generation has to rediscover what the previous ones warned about over and over.
Thanks Shit-cago.
Consider this timeline. Tim Wu coined “attention merchants” in 2016 to describe what? Advertising. Leave it to marketing to decide a new term for advertising will sell more books. Then Lina Khan’s 2017 “Amazon’s Antitrust Paradox” paper was treated as groundbreaking when it basically rebranded pre-1980s antitrust theory by using the word “platform.” Ooh. Wait until you hear what happened next. Zuboff’s 2019 book “Surveillance Capitalism” was heavily promoted as the revelation that companies collect data to sell ads. Shocking if true!
Original Concept (1970s) | 1990s Rebrand | 2010s Rebrand
Monopoly | Platform power | Enshittification
Externalities | Spillover effects | Systemic risk
Information asymmetry | Knowledge gaps | Dark patterns
Rent-seeking | Value extraction | Wealth transfer
We’ve built a system where expert consensus doesn’t matter, historical knowledge doesn’t accumulate, and basic economic principles have to be rediscovered and remarketed every generation to gain traction.
That’s… wait for it… the shitification.
It’s not how knowledge is supposed to work in a functioning civilization.
To be fair, Doctorow himself becomes a fascinating foil, someone who can’t decide whether he’s more into determinism or contingency in economic history:
Determinism: Once the internet became commercially important, monopolization was inevitable under capitalist logic. The specific policies were just accelerants.
Contingency: Different regulatory choices in the 1990s/2000s could have produced genuinely different market structures (more like the pre-consolidation internet).
Doctorow is in the middle, but leaning contingent—we could have had mandatory interoperability, stronger privacy law, preserved rights to modify purchased tech, etc. And those structural guardrails would have prevented monopolization regardless of who the entrepreneurs were.
The specific mechanisms (DMCA preventing competitive modification, stock-as-currency fueling consolidation, KPI-driven enshittification) are really just some lower-level institutional details within basic economic theory.
Of course, this amnesia about economic predation isn’t new—it’s embedded in how the country commemorates its predators. It’s bad that economic theory keeps getting forgotten and rebranded (intellectual amnesia), but underneath is something even worse! American predators exploiting the cycles are elevated and celebrated (moral amnesia).
Am I right? Epstein files, cough, cough.
America still brazenly celebrates the worst of the worst men like Stanford, Polk, Jackson… does anyone really believe a Bezos, Musk or Zuckerberg isn’t going to exploit the same loopholes if they haven’t been closed?
Stanford?
Yes, that supposed great man of history was “a primary facilitator of genocide” who oversaw Native American policy in the California legislature. His “killing machine” legacy is feted as if it were the true engine of Silicon Valley, celebrating a man implicated in fraud and genocide.
“Killing machine” is Benjamin Madley’s term of art from “An American Genocide” (Yale University Press, 2016), referring to the system of US soldiers, California militia, volunteers, and mercenaries that California officials created.
Stanford served on the Committee on Indian Affairs in the California state legislature in the 1850s, then as Governor (1862-1863) signed appropriation bills specifically funding extermination campaigns against California’s Native peoples.
The Native population of California rapidly dropped from 150,000 or more to fewer than 12,000 survivors. Multiple sources (UCLA’s Madley, the California State Library, the SF Chronicle) have all confirmed Stanford “helped facilitate genocide.”
Yeah, giant loopholes. Like the one Stanford still proves.
They not only haven’t been closed but people walk around boasting that they went to Stanford. They literally put his name on their hats and clothing. It’s very strange for anyone who understands history, let alone economic theories of monopoly based on annihilation. Can you imagine a Stalin hat, or a Pol Pot sweatshirt?
Hitler had a track record so bad his name was rightly banned and nobody wears it around. Stanford’s genocide, however…
I mean seriously, didn’t the White House give us a clue about the loopholes in America’s economic history when it said it would bring back Jackson’s ideas, another American known for fraud and genocide?
Donald Trump’s favorite president: Andrew Jackson, architect of the Trail of Tears, opponent of centralized banking, and grandfather of the MAGA “white republic.”
Is Doctorow useful? Entertaining, maybe. But if “enshittification” doesn’t bring actual antitrust enforcement back again, we’re just waiting for a 2045 reboot.