Recent developments in Ukraine’s UGV deployments force us to confront an uncomfortable reality about Europe’s autonomous vehicle concentrations. The Grünheide facility and its adjacent storage areas near Berlin represent what military planners term a “dual-use capability concentration” — a euphemism that barely masks its strategic implications.
AI drones with high-explosive cluster munitions being stockpiled by a Tesla factory outside of Berlin, Germany. Source: Sean Gallup (Getty Images)
The positioning of thousands of networked, autonomous-capable vehicles within striking distance of a major European capital isn’t just a supply chain curiosity. It’s a potential force multiplier that would make Cold War military planners blush. Each vehicle represents roughly 2,000kg of mobile, precisely controllable explosive chemistry, in effect a cluster bomblet on wheels, networked to a centralized command infrastructure. The mathematics of concentrated force here are stark.
While some might reference WWII motor pools, a more apt comparison is the pre-WWI railway mobilization networks. Like those railway timetables, modern autonomous vehicle networks represent a “use-it-or-lose-it” capability. The critical difference is that instead of requiring weeks of mobilization, modern software-defined vehicles can be repurposed almost instantaneously.
The Lyptsi victory over Russia just demonstrated exactly how UGVs can be effectively weaponized in rural terrain, but urban environments present an entirely different magnitude of potential.
The concentration of autonomous vehicles near Berlin isn’t just about industrial efficiency. It represents a latent capability that could, through software alone, transform from a commercial asset into something far more concerning. Unlike traditional military assets, these vehicles are already positioned in strategically significant locations, require no physical modification to repurpose, and can be activated simultaneously through existing command infrastructure.
From a historian’s perspective, the Ukrainian victory represents a watershed moment bearing similarities to the first deployment of tanks at the Battle of the Somme in 1916. While those early tanks were clumsy and unreliable, they signaled a fundamental shift in warfare. What we’re seeing now may be equally significant.
The widespread presence of connected autonomous vehicles in cities like Berlin represents an unprecedented network of potential dual-use technology, far surpassing previous examples of civilian-to-military conversion like Ford’s River Rouge plant during WWII. The key difference now is that the infrastructure wouldn’t need physical modification — merely trivial software updates, which the Tesla CEO promises (completely fraudulently) will always remain accurate, even in combat.
What’s particularly striking is the speed of adaptation versus the urgency of civilian defense. While it took years for armies to develop effective tank doctrine after WWI, we’re seeing tactical evolution happen in near real-time in Ukraine. The establishment of specialized units like Ukraine’s Typhoon unit suggests institutionalization of these capabilities, moving beyond ad hoc experimentation. This kind of organizational change historically presages major doctrinal shifts.
The Russians learned this lesson the hard way in Kharkiv. Urban warfare is no longer just about controlling physical space, but about controlling the networked assets within that space. The question isn’t whether such concentrations of autonomous vehicles represent a strategic concern — that much is evident from Ukraine. The question is how national security planners are adapting to this new reality of Tesla fleets developing rapidly into a clear and present threat.
A disused military airfield outside Berlin, as seen from satellite, filling up with Tesla autonomous vehicles awaiting command and control.
“Locally autonomous drone warfare,” Musk added, “is where the future will be.” Then he said, “I can’t believe I’m saying this, because this is dangerous, but it’s simply what will occur.”
The Dogs of Cyberwar
A Lowly Hacker’s Warning
LSE Commencement 2024
Distinguished faculty, dear students, and those venture capitalists or intelligence agencies inevitably lurking in the back hoping to recruit our graduates into their latest ethical catastrophe:
Thirty years ago, I sat where you’re sitting now, though with considerably fewer people and a significantly more embarrassing haircut. Back then, I was the American oddity who lived day and night in the computer lab while my half-dozen classmates assembled in the Three Tuns, competing to see whose understanding of the Cuban Missile Crisis would solve all of humanity’s problems over another round of pints that cost a pound twenty each.
I spent considerable effort to get LSE on something new called the “World Wide Web” – a phrase that now sounds as charmingly dated as “information superhighway” or “freeze-dried coffee”, which was, by the way, the only coffee you could find in London in 1993. Can you imagine a young American hacker stepping off a plane in London and realizing only too late that he was expected to drink tea and write with a pen?
I almost immediately died from caffeine and keyboard withdrawal.
To keep calm and carry on, I volunteered to write code helping a blind PhD student of political philosophy digitize his dozens of books into robotic speech. He taught me more about seeing the world clearly (page breaks, I learned, don’t really exist in the mind’s eye) than I ever taught him about data integrity flaws in OCR algorithms. I did, however, save him from accidentally submitting his thesis with hidden instances of the letter S replaced by the number five — a substitution that in retrospect could have meant his analysis of Hobbe5’ 5tate of Nature would be credited with the invention of modern passwords.
As you might guess, I arrived here still a raw and immature sod raised on the dirt roads of rural America. When an LSE student repeatedly left their World War I essay about military vulnerability completely exposed on one of our four shared lab computers, the irony proved as irresistible as… relieving myself on a hidden electric fence back home. A risky temptation that I really should have resisted. After watching the pattern repeat daily with the stubborn predictability of a BBC weather forecast, I did what any country bumpkin would do facing an open barn door: I scattered pointed commentary about undefended positions throughout their work. Professor Stevenson, to my great relief, marked every single edit with a bright red circle, proving he dutifully read each word we turned in — which is more than I could say for my fellow student and their own work.
Little did I know this penchant for exposing vulnerability would become the perfect metaphor for my later career. Professor Kent, the best advisor anyone could ever ask for, encouraged me upon graduation to throw myself straight into the tech industry. And so I did. For three decades I’ve helped organizations big and small see their vulnerabilities, in order to keep them grounded, to make their grandiose claims about safety less full of the stuff we knew in Kansas as meadow muffins. Or prairie pancakes.
As I learned quickly as a kid, one taste and you immediately knew it was a good thing you hadn’t stepped in it.
From LSE’s tiny computer lab up into the largest corporate skyscraper boardrooms, I have spilled gallons of red ink around tens of thousands of undefended positions. The stakes now are rather higher than a student’s marked-down essay, though, and the giant hidden electric fences are incredibly more… well, shocking.
Let me explain.
Take Palantir, named without a trace of irony after Tolkien’s all-seeing stones that invariably corrupt those who use them. They pitched venture capitalists a “revolutionary” surveillance system to “predict and prevent terrorism.” Of course you can imagine how the VCs’ eyes lit up with dollar signs, presumably the same way medieval merchants’ eyes lit up with doubloons at the prospect of selling torture devices to the Spanish Inquisition. “Think of the market opportunity!” they must have said. “Every Queen Isabel will want one!” Did you know studies today show that Palantir actually created the terrorists they promised to predict and prevent? The self-licking ISIS cream cone is real.
Similarly, Tesla’s ‘Autopilot’ promised to end traffic deaths, then proceeded to invent entirely new ways for cars to kill people. It has achieved the remarkable distinction of making the Ford Pinto look like a triumph of safety engineering. Who needs a faulty gas tank when you have AI that can turn cars into crematoriums? Henry Ford may have won the Third Reich’s highest honor, but at least he didn’t try to rebrand his Dearborn Independent newspaper with a Hakenkreuz and call himself Twittler.
You might think I’m being too glib about death, or unfair to visionaries. “Surely,” you say, “their companies must have some redeeming qualities. What about South Africans dreaming of turning Mars into New Rhodesia?” Well yes, I suppose, in the same way the East India Company really streamlined the tea trade. Have you seen the grand old counting house? I noticed the gift shop doesn’t mention how they balanced their moral ledger. The problem isn’t that the technology being assembled is unimpressive — it’s the question of who really pays for it.
Which brings me to why your LSE education, and mine, matters more than ever. You see, the world is perhaps being shaped by Silicon Valley today in much the same way Dresden was shaped by some pioneering Palo Alto radar engineers and their fire-bombing. Tech desperately needs people who can spot the rather subtle differences between innovations and repackaged historical tragedies. It needs people who, when presented with a “revolutionary” surveillance system called Bluesky, can say, “Ah yes, this is exactly like the Stasi, but with better UX design.”
You’ve been trained to see patterns that even the most brilliant engineers miss – not because they lack intelligence, but because they’ve never had to explain to Dr. Preston why Franco’s “move fast and break things” wasn’t about innovations in Jerry cans. You understand that every “disruption” has a history, every “innovation” a context, and every hot-rushed philosophy eventually breaks something rather important – like democracy, or human rights, or that quaint notion that public transit shouldn’t spontaneously combust.
Let me give you a current example. Are you aware of the thousands of networked autonomous vehicles quietly amassing at a former Cold War airfield outside Berlin? The press has cheered the deforestation around the German capital as “Tesla’s biggest European output.” With your training, you might recognize this as rather like how France celebrated the Maginot Line as its biggest investment in concrete. We’re staring down the barrel of a cybernetic equivalent of Chekhov’s gun: thousands of hackable vehicles in Act One are going to cause chaos by Act Three.
You’re entering a world where technology companies have more power than most nations, yet demonstrate all the ethical sophistication of a first-year philosophy student discovering moral relativism. They need people who can see through the Silicon Valley doublespeak, who understand that “making the world a better place” often means “making ourselves richer at everyone else’s expense.”
When I left LSE directly for California, with only $50 and dried coffee crystals in my pockets, I thought I was leaving behind the rigorous historical thinking this institution taught me. Instead, I found it was my most valuable skill. While engineers around me focused on rapid valuation of throwaway ideas, I was trained to ask whether those ideas should be built at all. And more importantly, I was trained to recognize when “unprecedented” innovation was actually a very precedented bad idea in a shiny new package.
At one point I sat in charge of software release gates that affected two billion users, navigating the dawn of modern mobile phones and gaming consoles. With an official title of “dedicated paranoid” I wore a t-shirt that simply said “why?” It turned out to be the most important question in Silicon Valley, though one that got me uninvited from a surprising number of launch events. Venture capitalists, I learned, prefer historical parallels to stop at the Wright brothers and skip the Hindenburg.
So, Class of 2024, as you leave these strangely sunny, bright and airy halls that I somehow remember as windowless and always wet from rain, please know that your historical training isn’t just about understanding mistakes of the past. It’s about recognizing when someone tries to repeat them while hoping nobody notices. In a world where tech companies are speedrunning through every bad idea of the 20th century, we desperately need people who can find the causes of things, to keep every AI implementation from becoming a case study in our successors’ dissertations.
You have been trained to see through a growing fog of cyberwar, whether it rises from hundreds of thousands of burning Model 3s attacking European cities or from the disinformation social media tycoons spread about their robots. Use your training and clarity of vision to improve society. The world needs your sharp tongues and sharper minds.
And to those venture capitalists in the back: yes, our graduates are available for hire. But I should warn you – they’ve been trained to spot patterns. Your term sheets look remarkably like Victorian labor contracts, just with time measured by TikToks.
Thank you, and congratulations.
Swasticars: Remote-controlled high-explosive vehicles stockpiled by Twittler outside Berlin.
The ghosts of female philosophers haunt Silicon Valley’s machines. While tech bros flood Seattle and San Francisco in a race to claim revolutionary breakthroughs in artificial intelligence, the spirit of Mary Wollstonecraft whispers through their fingers, her centuries-old insights about human learning and intelligence echoing unacknowledged through their algorithms and neural networks.
1790 oil on canvas portrait by John Opie of philosopher Mary Wollstonecraft (1759-1797). Source: Tate Britain, London
In “A Vindication of the Rights of Woman” (1792), Wollstonecraft didn’t just argue for women’s education, she dismantled the very mechanical, rote learning systems that modern AI companies are clumsily reinventing at huge cost. Her radical vision of education as an organic, growing system that develops through experience and social interaction reads like a direct critique of today’s rigid, mechanical approaches to artificial intelligence.
The eeriest part? She wrote this devastating critique of mechanical thinking 230 years before transformer models and large language models would prove her right. While today’s AI companies proudly announce their discovery that learning requires social context and organic development, Wollstonecraft’s ghost watches from the margins of history, her vindication as ignored as her original insights.
Notable history tangent? She died from infection eleven days after giving birth to her daughter, who then went on to write Frankenstein in 1818 and basically invent science fiction.
When we look at modern language models learning through massive datasets of human interaction, we’re seeing Wollstonecraft’s philosophic treatises on organic learning scaled to the digital age.
David Hume’s philosophical contributions are also quite striking, given they’re roughly 300 years old as well. His “bundle theory” of mind and identity reads like a prototype for neural networks.
When Hume argued that our ideas are nothing more than collections of simpler impressions connected through association, he was describing something remarkably similar to the weighted connections in modern AI systems. His understanding that belief operates on probability rather than certainty is fundamental to modern machine learning.
Every time an AI system outputs a confidence score, it is demonstrating Hume’s prescient writing about probability and our modern dependence on empiricism.
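As a toy illustration (the association scores below are invented for the example, not drawn from any real model), this is roughly how a modern system turns weighted associations into a graded degree of belief rather than a certainty:

```python
import math

def softmax(scores):
    """Turn raw association scores into a probability distribution:
    graded belief, never certainty."""
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical association strengths for three candidate interpretations
scores = {"cat": 2.1, "dog": 0.3, "fox": -1.0}
beliefs = dict(zip(scores, softmax(list(scores.values()))))
print(beliefs)  # roughly {'cat': 0.83, 'dog': 0.14, 'fox': 0.04} -- a confidence, not a verdict
```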
What’s particularly fascinating is how both thinkers rejected the clockwork universe model of their contemporaries. They saw human understanding as something messier, more organic, and ultimately more powerful than mere mechanical processes. Wollstonecraft’s insights about how social systems shape individual development are particularly relevant as we grapple with AI alignment and bias. She understood as a philosopher of the 1700s that intelligence, whether natural or artificial, cannot be separated from its social context.
The problem with our 1950s-style flowcharts that emerged from hard-fought victory in WWII isn’t just that they’re oversimplified, it’s that they represent a violent step backward from the sophisticated understanding of mind and learning that Enlightenment thinkers had already developed.
We ended up with such mechanistic models, simplistic implementations like passwords instead of properly messy heatmap authentication, because the industry was funded out of military-industrial contexts that too often prioritized command-and-control thinking over organic development. TCP/IP and HTTPS were academically driven exceptions, for example, prevailing over the Rochester and Stanford teams who fought hard to standardize on X.25.
When Wollstonecraft wrote about the organic development of understanding, or when Hume described the probabilistic nature of belief, they were articulating ideas that would take computer science centuries to rediscover and apply as “novel” concepts divorced from all the evidence presented in social science.
As we develop AI systems that learn from social interaction, operate on probabilistic inference, and exhibit emergent behaviors, we’re not just advancing beyond the simplistic war-focused mechanical models of early computer science, we’re finally catching up to the insights of 18th-century philosophy. Perhaps the real innovation in AI isn’t about technology itself, but our acceptance of a particular woman’s more sophisticated understanding from 1792 of what intelligence really means.
The next frontier in AI, not surprisingly, won’t be found in more complex algorithms, but in finally embracing the full implications of what Enlightenment thinkers understood about the nature of mind, learning, and society. When we look at the most advanced AI systems today, and where they are going with their fuzzy logic, their social learning, their emergent behaviors, we’re seeing the vindication of ideas that Wollstonecraft and Hume would have recognized immediately.
Unfortunately, the AI industry seems dominated by an American “bromance” that isn’t particularly inclined to give anyone credit for the ideas that are being taken, corrupted and falsely claimed as futuristic or even unprecedented. Microsoft summarily fired all their ethicists in an attempt to silence objections to OpenAI investment, not long before a prominent whistleblower about OpenAI turned up dead.
Nothing to see there, I’m sure, as philosophers rotate in their graves. We haven’t just forgotten the lessons of Enlightenment thinkers; the Sam Altmans and Mark Zuckerbergs may be actively resisting them in favor of more controlled, corporatized, exploitative approaches to technological innovation.
Let me give you an example of the kind of flawed and ahistoric writing I see lately. Rakesh Gohel posed this question on the proprietary, closed site ironically called “LinkedIn”:
Most people think AI Agents are just glorified chatbots, but what if I told you they’re the future of digital workforces?
What if?
What if I told you the tick tock of Victorian labor exploitation practices and inhumane colonialism don’t disappear if you just rebrand them to TikTok and use camera phones instead of paper and pen? Just like Victorian factory owners used mechanical timekeeping to control workers, modern platforms use engagement metrics and notification systems to maintain digital control.
The eyeball-grabbing “digital workforce” framing that Gohel stumps for is essentially a reimagined factory with APIs instead of steam engines and belts. Just as factory owners reduced skilled craftwork to mechanical processes, today’s AI companies are watering down complex social and cognitive processes into simple flowcharts that foreshadow their dangerous intentions. Gohel tries to sweeten his pitch with a colorful chart, which in fact illustrates just how fundamentally broken “AI influencer” thinking can be about thinking.
That, my fellow engineers, is a tragedy of basic logic. Contrasting a function call with a while loop… is promoting 1950s-era computer theory at best. A check loop after you plan and do something! What would Deming say about PDCA, given that he was famous fifty years ago for touring the world lecturing on what this brand-new chart claims to be the future?
The regression here goes beyond just technical architecture. When Deming introduced PDCA, he wasn’t just describing a feedback loop, he was promoting a holistic philosophy of continuous improvement and worker empowerment. The modern AI agent diagram strips away all of that context and social understanding, reducing it to the worst technical loop theory.
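To see how little is new here, consider a minimal sketch in Python of the “AI agent” control flow being marketed as revolutionary. The task is deliberately a toy (guessing a hidden number) so the example stays self-contained, but the structure is recognizably Deming’s Plan-Do-Check-Act wrapped in a loop:

```python
# A toy Plan-Do-Check-Act loop, the same control flow that "AI agent"
# diagrams present as revolutionary. The "tool" here is just a guess at a
# hidden number, to keep the sketch self-contained and runnable.

def run_pdca(target, low=0, high=100, max_iterations=20):
    for _ in range(max_iterations):
        plan = (low + high) // 2      # Plan: pick the midpoint of the current range
        result = plan                 # Do: execute the plan (make the guess)
        if result == target:          # Check: did the outcome meet the goal?
            return result             # Act: accept the result and stop
        if result < target:           # Act: adjust the plan and loop again
            low = plan + 1
        else:
            high = plan - 1
    return None

print(run_pdca(42))  # -> 42, after a handful of plan-do-check-act cycles
```

Strip the comments away and it is a while loop around a function call, which is exactly the point: the chart is not describing a new kind of intelligence, it is describing decades-old feedback control.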
This connects back to the earlier point about Wollstonecraft, because the AI industry isn’t just ignoring 18th-century philosophy; it’s also ignoring 20th-century management science and systems thinking. The “what if” diagram presents as revolutionary what Deming, decades ago, would have considered a primitive understanding of systematic improvement in intelligence.
Why does the American tech industry keep “rediscovering” and selfishly-corrupting or over-simplifying ideas that were better understood and presented widely decades or centuries ago?
A quick back-of-the-napkin sketch you would likely never see in today’s put-other-people’s-noses-to-the-grindstone American tech scene
Perhaps it’s because expecting technically raw upwardly mobile privileged skids (TRUMPS) to acknowledge any deep historical roots, such as giving real credit to the humanities or social science, would mean confronting the very harmful implications of their poorly constructed systems… implications that the world’s best philosophers, like Wollstonecraft, Hume, and Deming, have emphasized for hundreds of years.
The pattern is painfully clear — exhume a sophisticated philosophical concept, strip it to its mechanical bones, slap a technical name on it, and claim revolutionary insight. Here are just a few examples of AI’s philosophical grave-robbing:
“Attention Mechanisms” in AI (2017) rebranded William James’ Theory of Attention (1890). James described consciousness as selectively focusing on certain stimuli while filtering others in a dynamic, context-aware process involving both voluntary and involuntary mechanisms. The tech industry presents transformer attention as revolutionary when it is implementing a stripped-down version of 130-year-old psychology (a minimal sketch of just how stripped-down follows this list).
“Reinforcement Learning” (2015) rebranded Thorndike’s Law of Effect (1898). Thorndike described how behaviors followed by satisfying consequences tend to be repeated, developing sophisticated theories about the role of context and social factors in learning. Modern RL strips this to pure mechanical reward optimization, losing all nuanced understanding of social and emotional factors.
“Federated Learning” (2017) rebranded Kropotkin’s Mutual Aid (1902). Kropotkin described how cooperation and distributed learning occur in nature and society, emphasizing knowledge development through networks of mutual support. The tech industry “discovers” distributed learning networks but focuses only on data privacy and efficiency, ignoring the social and cooperative aspects Kropotkin emphasized.
“Explainable AI” (2016) rebranded John Dewey’s Theory of Inquiry (1938). Dewey wrote about how understanding must be socially situated and practically grounded, emphasizing that explanations must be tailored to social context and human needs. Modern XAI treats explanation as a purely technical problem, losing the rich philosophical framework for what makes something truly explainable.
“Few-Shot Learning” (2017) rebranded Gestalt Psychology (1920s). Gestalt psychologists described how humans learn from limited examples through pattern recognition, and developed sophisticated theories about how minds organize and transfer knowledge. Modern few-shot learning presents this as a novel technical challenge while ignoring that deeper understanding.
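To appreciate just how stripped-down that first rebranding is, here is a minimal sketch of scaled dot-product attention in Python with NumPy. The query/key/value names are the standard transformer conventions (nothing from James); the entire “revolutionary” mechanism is a weighted average whose weights come from similarity scores pushed through a softmax:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention: a weighted average of values,
    with the weights derived from query/key similarity."""
    scores = q @ k.T / np.sqrt(k.shape[-1])  # how strongly each query "attends" to each key
    weights = softmax(scores, axis=-1)       # normalize into a probability distribution
    return weights @ v                       # blend the values accordingly

# Toy example: 2 queries attending over 3 keyed values of dimension 4
rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(2, 4)), rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
print(attention(q, k, v).shape)  # (2, 4)
```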
These philosophical ghosts don’t just haunt our machines – they’re Wollstonecraft’s vindication made manifest, a warning echoing through centuries of wisdom. The question is whether we’ll finally listen to these voices from the margins of history, or continue pretending every thoughtless mechanical implementation of their ideas is cause to celebrate a breakthrough discovery. Remember, Caty Greene’s invention of the “cotton engine” or ‘gin (following her husband’s untimely death from over-exertion) came from intentions to abolish slavery, yet instead was stolen from her and reverse-patterned into the largest unregulated immoral expansion of slavery in world history. Today’s AI systems risk following the same pattern of automation tools intended to liberate human potential being corrupted into instruments of digital servitude.
“Naively uploading” our personal data into any platform that lacks integrated ethical design or safe learning capabilities is more like turning oneself into a slave exploited by the cruelty of emergent digital factory owners than like maintaining basic freedoms while connecting to a truly intelligent agent that can demonstrate aligned values. Agency is the opposite of what AI companies have deceptively been hawking as their vision of agents.
Kubrick’s 1964 film “Dr. Strangelove” presented what seemed an absurdist critique of automation and control systems. While most bombers in the film could be recalled when unauthorized launches occurred, a single damaged bomber’s “CRM 114 discriminator” prevented any override of its automated systems – even in the face of an end-of-world mistake. This selective communication failure, where one critical component could doom humanity while the rest of the system functioned normally, highlighted the kind of dangerous fragility that necessitates tight regulation of automated control systems.
The film’s “discrimination” device, preventing override and sealing the world’s fate, was comical because it was the invention of a character portrayed as a paranoid conspiracy theorist (e.g. a fictional Elon Musk). The idea that a single point of failure in communications could trigger apocalyptic consequences was considered so far-fetched as to be unrealistic in the 1960s. Yet here we are, with Tesla rapidly normalizing paranoid delusional automated override blocks as a valid architectural pattern without any serious security analysis or public scrutiny.
Traditional automakers since the Ford Pinto catastrophe understand these design risks intuitively — they build mechanical overrides that cannot be software-disabled, showing a fundamental grasp of safety principles that Tesla has willingly abandoned. In fact, other manufacturers specifically avoid building centralized control capabilities, not because of the difficulty, but because engineers should always recognize and avoid inherent risks — following the same precautionary principle that guided early nuclear power plant designers to build in physical fail-safes. The infamous low-quality, high-noise car parts assembly company known as Tesla, however, has apparently willfully recreated, at massive scale, the worst architectural vulnerabilities threatening civilian infrastructure.
Most disturbing is how Tesla masks a willful destruction of societal value systems using toddler-level entertainment. The “Light Show” is presented as frivolous and harmless, much like early computer viruses were dismissed as fun pranks rather than the serious security threats that would come to define devastating global harms. But engineers know the show is not just trivial LED audio-response code plugged into a car. What it actually demonstrates is a fleet-wide command and control system without sensible circuit breakers. It promotes highly explosive chemical cluster bombs mindlessly following centrally planned orders without any independent relation to context or consequences. It turns a fleet of 1,000 Teslas into automation warfare concepts reminiscent not just of the Gatling gun or the Maxim gun of African colonialism, but of the Nazi V-1 rocket program of WWII — a clear case of automated explosives meant to operate in urban environments that couldn’t be recalled once launched.
Finland 1940:
Threat? What threat? Soviet Foreign Minister Vyacheslav Molotov claimed he was just airlifting food into Finland. (Molotov’s “bread basket” technology — leipäkori — was in fact a cluster bomb. And yes, Finland was so anti-Semitic their air force really adopted the hooked-X for their symbol. REALLY!)
26 Jan 1940: “…the civil defense chief has named ‘Molotov’s Bread Basket.’ …equipped with 3 winged propeller devices. Its contents are divided into compartments containing dozens of different incendiary and ignition bombs. When the propeller sets the torpedo into a powerful spinning motion, the bombs open from its sides and scatter around the environment. …the Russians are throwing bread to us in their own way.” Source: National Library of Finland
Finland 2024:
Threat? What threat? Musk says it’s just a holiday light show. These are all just Tesla food delivery vehicles clustered for “throwing bread to us in their own way” like the fire-bombing of winter 1939 again.
The timing of propaganda is no accident. Tesla strategically launches these demonstrations during holidays like Christmas, using celebratory moments to normalize dangerous capabilities. It’s reminiscent of the “Peace is our Profession” signs decorating scenes in Dr. Strangelove, using festive imagery to mask dangerous architectural realities.
British RAF exchange officer Group Captain Mandrake in the film Dr. Strangelove. Note the automation patterns at play amid the propaganda.
Tesla’s synchronized light shows, while appearing harmless, demonstrate a concerning architectural pattern: the ability to push synchronized commands to large fleets of connected vehicles with potentially limited or blocked owner override capabilities. What makes this particularly noteworthy is not the feature itself, but what it reveals about the underlying command and control objectives of the controversial political activists leading Tesla. The fact that Tesla owners enthusiastically participate in these demonstrations shows how effectively the security risk has been obscured — it’s a masterclass in introducing dangerous capabilities under the guise of consumer features.
More historical parallels? I’m glad you asked. Let’s examine how the Cuban Missile Crisis highlights the modern risks of automated systems under erratic control.
During the Cuban Missile Crisis, one of humanity’s closest brushes with global nuclear catastrophe, resolution came through human leaders’ ability to identify and contain critical failure points before they cascaded into disaster. Khrushchev had to manage not just thorny U.S. relations but also prevent independent actors like Castro from triggering automated response systems that could have doomed humanity. While Castro controlled a small number of weapons in a limited geography, today’s Tesla CEO commands a vastly larger fleet of connected vehicles across every major city – with demonstrably less stability and even more concerning disregard for fail-safe systems than Cold War actors showed.
As Group Captain Mandrake illustrated so brilliantly to audiences watching Dr. Strangelove, having physical override capabilities doesn’t help if the system can fail unsafe and ignore them. Are you familiar with how many people were burned alive in Q4 2024 by their Tesla door handles failing to operate? More dead in a couple of months than from the entire production run of the Ford Pinto, and from essentially the same design failure — a case study in how localized technical failures become systemic catastrophes when basic safety principles are ignored.
Tesla’s ignorant approach to connected vehicle fleets presents a repeat of these long-known and understood risks at an unprecedented scale:
Centralized Control: A single company led by a political extremist maintains the ability to push synchronized commands to hundreds of thousands of vehicles or more
Limited Override: Once certain automated sequences begin, individual owner control may have no bearing regardless of what they see or hear
Network Effects: The interconnected nature of modern vehicles means system-wide vulnerabilities can cascade rapidly
Scale of Impact: The sheer number of connected vehicles creates potential for widespread disruption
As General Ripper in Dr. Strangelove would say, “We must protect our precious vehicular fluids from contamination.” More seriously…
Here are some obvious recommendations that seem to be lacking from every single article I have ever seen written about Tesla’s flashy “light discriminator” demonstrations:
Mandate state-level architectural reviews of over-the-air update systems in critical transportation infrastructure. Ensure federal agencies allow state-wide bans of vehicles with design flaws. Look to aviation and nuclear power plant standards, where mandatory human-in-the-loop controls are the norm.
Require demonstrable owner override capabilities (disable, reset) for all automated vehicle functions — mechanical, not just software, overrides (a minimal sketch of this fail-safe principle follows the list)
Develop frameworks for assessing systemic risk in connected vehicle networks, drawing on decades of safety-critical systems experience
Create standards for fail-safe mechanisms in autonomous vehicle systems that prioritize human control in critical situations
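None of this is exotic engineering. As a minimal, hypothetical sketch (the command names, allow-list, and handler below are invented for illustration, not drawn from any manufacturer’s software), this is what “fail-safe with owner override” means architecturally: a remote push is only a request, a mechanical interlock always wins, and missing confirmation defaults to refusal rather than execution.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Decision(Enum):
    EXECUTE = auto()
    REFUSE = auto()

# Hypothetical allow-list: the only commands a vehicle will even consider
# accepting as part of a synchronized, fleet-wide push.
ALLOWED_FLEET_COMMANDS = {"map_update"}

@dataclass
class RemoteCommand:
    name: str          # e.g. "light_show", "summon", "map_update"
    fleet_wide: bool   # pushed to many vehicles simultaneously?

def handle_command(cmd: RemoteCommand,
                   owner_confirmed: bool,
                   interlock_engaged: bool) -> Decision:
    """Fail-safe handler: local physical state and owner consent outrank
    the remote sender, and anything ambiguous defaults to REFUSE."""
    if interlock_engaged:              # a mechanical switch no OTA update can clear
        return Decision.REFUSE
    if not owner_confirmed:            # fail-safe default: no local consent, no action
        return Decision.REFUSE
    if cmd.fleet_wide and cmd.name not in ALLOWED_FLEET_COMMANDS:
        return Decision.REFUSE         # synchronized pushes limited to a vetted list
    return Decision.EXECUTE

# A fleet-wide "light show" push without owner confirmation is refused.
print(handle_command(RemoteCommand("light_show", fleet_wide=True),
                     owner_confirmed=False,
                     interlock_engaged=False))  # Decision.REFUSE
```

The point of the sketch is the default: when the handler cannot establish local consent or physical safety, it refuses, which is the opposite of a light-show architecture that executes whatever arrives.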
What Kubrick portrayed as satire — how a single failed override in an otherwise functioning system could trigger apocalyptic consequences — has quietly become architectural reality with Tesla’s rising threats to civilian infrastructure. The security community watches light shows while missing their Dr. Strangelove moment: engineers happily building systems where even partial failures can’t be stopped once initiated, proving yet again that norms alone won’t prevent the creation of doomsday architectures. The only difference? In 1964, we recognized this potential for cascading disaster as horrifying. In 2024, we’re watching people ignorant of history filming it to pump their social media clicks.
In Dr. Strangelove, the image of a single malfunctioning automated sequence causing the end of the world was played for dark comedy. Today’s Tesla demonstrations celebrate careless intentional implementations of equally dangerous architectural flaws.
Sixty years of intelligence thrown out? It’s as if dumb mistakes that could end humanity are meant to please Wall Street, the rest of us be damned. Observe Tesla propaganda celebrating the wrong things in the wrong rooms — again.
a blog about the poetry of information security, since 1995