Fascists love to destroy meaning in language, because it allows them to say up is down and failure is their success. When nothing means anything, they can always be right and stay in power no matter how absurd their language becomes.
SpaceX’s latest unmanned Starship launch at first went off without a boom on Tuesday, but eventually broke up almost an hour after it took off.
After two test flights ended in dramatic explosions earlier this year, SpaceX’s ninth test of its Starship vehicle experienced a “rapid unscheduled disassembly” on Tuesday, May 27, SpaceX confirmed on X. That’s the same language used when Starship’s January and March flight tests unexpectedly exploded in the sky.
Notably, critics point to anti-diversity initiatives at SpaceX as causing accelerating brain drain, leaving the company in a state of rapid unscheduled disassembly (an aerospace phrase well known since the 1980s).
“With a test like this, success comes from what we learn…” [said] Musk, who called the flight test a “big improvement” from the last one…. This time around the booster catch was not attempted, as SpaceX said it prioritizes… Mars. The team lost connection with the booster, which had been expected to land in the Gulf of America. SpaceX was also unable to release eight mock objects resembling Starlink internet satellites, as the ship’s door would not open wide enough, according to Hewitt. In another hiccup, the spacecraft started spinning, causing it to head toward the Indian Ocean for an uncontrolled landing… later confirmed that SpaceX eventually also lost contact with the ship itself, concluding Starship flight test nine. “We were not able to do a lot of our own orbit objectives today…
So much whiffing at bat by SpaceX throwing billions of dollars away, missing the ball entirely and causing massive environmental destruction, that… the pitcher throwing strikes soon will be so exhausted he’ll be forced to declare the other team the winner!
Isn’t that how American baseball works now?
The rich kid with the emerald-encrusted stainless steel bat, throwing Hitler salutes to his six pregnant wives, is declared the big winner after he strikes out over and over again? This May 27, 2025 picture of SpaceX shows early American rocket design characteristics of the 1950s (Nazi V-2 rocket technology developed by slaves imprisoned under Walter Thiel).
The contrast between SpaceX’s wasteful WWII-era (Nazi) approach and NASA’s more modern and methodical Cold War program with democratic regulation becomes clear when comparing the development philosophies and cost management.
Claiming “big improvements” when the primary vehicle is consistently lost illustrates how fascist propaganda frames progress: through selective emphasis on partial achievements, basically lying about gross harms, inefficiency and loss.
SpaceX mission objectives – deploying satellites, completing orbital maneuvers, demonstrating reentry – ALL REPEATEDLY FAILED AT HUGE COST.
Satellite deployment: FAIL (door wouldn’t open)
Orbital maneuvers: FAIL (spacecraft started spinning)
Repeatedly calling big failures a success makes accountability impossible, as the Nazis proved so dramatically in the late 1920s. The contrast in governance should be obvious:
NASA’s methodical approach delivered actual Mars landings starting in 1976 (Viking), followed by 50 years of successful missions like Pathfinder, Spirit, Opportunity, Curiosity, and Perseverance.
NASA moon shot glass
SpaceX’s “spectacular failure show” approach, which promised a man walking on Mars by 2018 and colonization by 2021, has yet to complete even basic orbital objectives.
Source: Twitter
NASA vs SpaceX: Development Risk Comparison

| Risk Metric | NASA “Hard Work” fail-safe approach (e.g. SLS), landing on Mars since the 1970s | SpaceX Starship “Spectacular Failure Show,” unable even to complete Earth orbit |
| --- | --- | --- |
| Development Philosophy | Waterfall method with systematic reviews: System Requirements Review, Preliminary Design Review, Critical Design Review | “Fail fast, big show” approach with minimal testing before disasters |
| Flight Success Rate | 100% – 1 successful launch (Artemis I, Nov 2022) | 0% – 3 consecutive failures (Jan, Mar, May 2025) |
| Development Cost | $11.9 billion through 2018, with systematic verification and accounting at each checkpoint | Over $300 million in hardware losses from the 3 recent failures alone, plus environmental and disruption costs |
| Cost Per Launch | $800 million – $4.1 billion (proven, functional system) | Theoretical and unproven, with a 0% success rate |
| Ground Testing Approach | Extensive ground testing, engine verification, and structural testing before flight | Minimal testing; destructive remote explosions undermining research |
| Environmental Impact | Controlled launches from designated facilities with minimal debris | Debris scattered across Caribbean islands (Turks & Caicos, Bahamas) |
| Air Traffic Disruption | Standard aerospace coordination with minimal commercial flight impacts | Expanded hazard zones affecting 175+ flights, with 40-minute delays costing $6,048/hour per flight |
| Hardware Reusability | Expendable, with a proven track record of mission success | 100% upper-stage hardware loss rate in recent tests |
| Risk Management | Conservative approach prioritizing mission success and crew safety | High-risk approach accepting catastrophic failures and total loss as unquantified, unmeasurable “learning experiences” |
* Data compiled from multiple sources including NASA reports, SpaceX announcements, and aerospace industry analysis through May 2025. Beyond the $300+ million in direct hardware losses, there are the environmental remediation costs, FAA investigation expenses, and the economic impact of flight disruptions that don’t appear in SpaceX’s promotional materials.
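The flight-disruption figure cited above is easy to sanity-check with simple arithmetic. A minimal sketch, using only the estimates quoted in the table (175+ flights, 40-minute delays, $6,048 per delay hour), not independently audited numbers:

```python
# Back-of-envelope estimate of airline delay costs from one expanded
# hazard zone, using the figures cited in the table above. These are
# quoted estimates, not audited numbers.
FLIGHTS_AFFECTED = 175      # lower bound on flights in the hazard zone
DELAY_HOURS = 40 / 60       # 40-minute average delay, in hours
COST_PER_HOUR = 6048        # USD per flight-hour of delay (industry estimate)

total_cost = FLIGHTS_AFFECTED * DELAY_HOURS * COST_PER_HOUR
print(f"Estimated disruption cost per launch: ${total_cost:,.0f}")
```

That works out to roughly $700,000 in airline delay costs per launch attempt, before counting any hardware or environmental losses.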
We already reported on this crash, about which officials were unusually tight-lipped. Now police are beginning to reveal how driverless software may be at fault. A Tesla, operating like a ground-to-ground missile, crashed at high speed into the back of a van, flipping it and killing its driver.
The driver of a van was killed in a rollover crash involving a Tesla and one other car on Interstate 95 in Pompano Beach on Saturday, Florida Highway Patrol said.
The crash happened just before 2 a.m. in the southbound lanes near Cypress Creek Road when, according to FHP, the driver of a black Tesla Model Y hit a Mercedes-Benz Sprinter 5000 from behind. […] Authorities said the Tesla driver “did not take driver action to avoid” the van….
In other words, authorities say Tesla Robotaxi software just killed a man because a Tesla operator didn’t stop the attack.
Notably, a rise in Tesla AI fatalities lately seems to be clustered around 2am and 3am, which crime investigators should register as a familiar pattern. I can explain; perhaps a topic for another day.
This Florida attack, in other words, provides NatSec analysts more evidence of algorithmic assassination tactics on public roads, inherent to the deeply flawed Robotaxi “Technocracy” designs of population capture, control and murder.
Swasticars: Remote-controlled explosive devices stockpiled by Musk for deployment into major cities around the world.
Imagine 10,000 or more Swasticars deployed in an urban area and you can see how an Elon Musk plan for a political power shift could be attempted through violent remote-controlled robot warriors. Austin, Texas is a capital city, and soon it could be captured by just one man who orders his robot fleets to attack.
…officials with Austin’s transportation department, the city’s emergency first responders, and federal regulators say that Tesla has failed to deliver crucial information regarding the service, which is supposed to go live in just a few days.
“Neun Autos gehen in Berlin in Flammen auf!” (“Nine cars go up in flames in Berlin!”) In 2024, on just one night in Berlin, nine Teslas exploded like chemical bombs, allegedly a foreign power’s test of German urban emergency response capacity. Source: BZ
Or, as an unfortunately necessary counterpoint, the hacking community has known since at least 2016 that Tesla software is weak and susceptible to command and control by unauthorized outsiders (tampering and repudiation vulnerabilities, a throwback to “tipping” the flawed Nazi V1 bombers).
Popular Mechanics, Feb 1945
How many Nazi drones (derivatives of the “Silicon Valley of the 1930s”) are on your roads today, or amassed near critical infrastructure and population centers? Measuring this is a legitimate national preparedness concern regarding foreign-backed terrorism. Märkische Allgemeine Zeitung (MAZ) is a regional newspaper in Brandenburg, Germany, an area known for harboring and promoting Nazi sentiment (e.g. Peter Thiel, Elon Musk and AfD).
We’re getting close to needing police cruisers built like the WWII “Tempest” that hunted Nazi robot bombs, with cannons set up to destroy Elon Musk’s Swasticars. Not an exaggeration: a Swasticar speeding like a ground-to-ground missile needs a modern and proportional NatSec response system to defend the nation. What would Sherman or Abrams do? WWII memorials now include the public destruction of a Tesla by veterans in a Sherman tank who remember fascism from the first time around.
Source: David Mirzoeff / Led By Donkeys / SWNS
Imagine being the guy who tries to show off how “awesome” his lawn dart is, long after everyone knows not to throw them anymore.
Source: Twitter, 2 April 2025
The only good part of the story here is they didn’t both die in a fire.
Notably he tags Tesla, as if the maker of a lawn dart on public roads doesn’t know it has a dangerously defective design. Tesla management probably looks at this video and just calls the owner stupid or reckless for trying to use FSD as advertised.
Let the Presto Crasho of FSD be a warning to everyone else. Don’t get in a Tesla. Don’t let friends or family get in a Tesla.
Anil Dash’s recent paean to Model Context Protocol (MCP) as the harbinger of “Web 2.0 2.0” reads like a Disney-esque romanticization of a Lordship past that never actually existed—and worse, it actively advocates recreating the exact conditions that led to our current unrepresentative system of repression and taxation by tech Lords.
Source: Bella Roe. Boys never play “prince”, yet girls are pushed to play “princess”, a form of early age power transfer through nostalgia as self-repression. For example, the Indian Microsoft CEO infamously tried to control female staff in 2014 by advising them to act like a “princess” not a PK Rosy.
Slave Plantation Nostalgia
Dash’s core thesis rests on a fundamental misunderstanding of why Web 2.0’s “openness” was sliced and diced into plantations built to incarcerate and exploit their users. He presents it as a simple morality tale: the good guys built open protocols, then the bad guys at Facebook and Twitter killed the good things out of greed. This conveniently ignores the uncomfortable truth that users choose fraud all the time: they waltz into walled gardens (digital flytraps) because they fall for “it’s easier” or “it’s free” or “it worked better,” regardless of any greater perspective on what happens to them next.
“Arbeit Macht Frei” was put above Nazi death camp entrances to emphasize how much wonderful freedom victims would see after entering, which was in fact zero. Facebook’s “connecting the world” rhetoric becomes especially sinister when you consider they were literally connecting genocidal mobs in Myanmar and Ethiopia.
The Atlantic “Brief Visual History of Weapons” should perhaps be updated with images like this one.
Do you want another Auschwitz? Because we can see exactly how Facebook is on track towards another Auschwitz.
Facebook didn’t greedily ruin open social protocols through some nefarious conspiracy—it took advantage of it the way Nazis take advantage of democratic speech. Integrate themselves into democracy in order to destroy democracy and replace it with dictatorship.
Managing your digital identity across seventeen different federated services with inconsistent interfaces and zero interoperability guarantees was spun into a false “too hard” narrative, full of fear and loathing, that bashed freedom by offering false hope. The “beautiful” chaos of early Web 2.0 that Dash romanticizes was actually a user-experience innovation challenge that worked only for technically sophisticated early adopters.
I speak from experience, given that this blog you are reading right now dates all the way back to 1995. That means I crucially had several very hands-on, deep-in-the-weeds years of Web development and infrastructure before Anil Dash showed up to have a go at it. I lived through the HyperCard and Gopher collapse. I was front line in the failed X.25, IPX/SPX, Token Ring, NetBEUI, AppleTalk, ATM… FDDI battles, let alone the mess of HTML and SSLv2 safety flaws, and many, many more.
The Innovation/Regulation Blindspot
Here’s where Dash’s analysis becomes actively dangerous: his celebration of “janky specs that everybody hurriedly adopted” completely ignores that this same hurried, unregulated adoption is exactly how we ended up with data plantations of digital slavery for value extraction embedded in our human communications infrastructure.
Consider his own example: OAuth was indeed simple and widely adopted, but not early enough to avoid the consolidation forced by a speculation collapse (the 2000 dot bomb). It thus opened the door to a Trojan horse built by Google and Facebook to take over our homes as identity providers for the entire Web. That “simple” protocol became the foundation for tracking users across every website they visit, which Facebook falsely and cruelly tried to argue was protection of everyone from themselves (the way President Andrew Jackson used to argue Blacks were better off as his slaves, alienated from society and worked to death). The simplicity Dash celebrates was subsidized by a predatory privacy invasion that users didn’t understand and couldn’t meaningfully consent to.
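To make the tracking concern concrete, here is a minimal sketch of step one of the OAuth 2.0 authorization-code flow (RFC 6749). The provider hostname, client identifier, and redirect URI below are hypothetical placeholders; the structural point is that every “Sign in with X” redirect tells the identity provider which site you are logging in to, before any token is ever issued.

```python
from urllib.parse import urlencode

# Hypothetical identity-provider endpoint; a real deployment would
# substitute Google's, Facebook's, etc.
AUTH_ENDPOINT = "https://idp.example.com/oauth/authorize"

def build_authorization_url(client_id: str, redirect_uri: str, state: str) -> str:
    """Build the redirect that sends a user to the identity provider.

    Note what the provider learns from this one request: which app the
    user is signing in to (client_id) and where they will land after
    (redirect_uri) -- a loggable login event on every site that
    outsources identity to it.
    """
    params = {
        "response_type": "code",  # authorization-code grant (RFC 6749 sec. 4.1)
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile",
        "state": state,           # anti-CSRF token the client checks later
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

# Hypothetical relying party using the flow:
url = build_authorization_url("news-site-123", "https://news.example.org/cb", "xyzzy")
print(url)
```

Multiply that one loggable request across every federated login on the web and the “simple” protocol doubles as a cross-site visit log for whoever runs the popular endpoints.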
The women of Harvard who filed formal complaints against Zuckerberg laid out the problem right away. However, instead of the institution appropriately reining in such clear criminal activity that undermined and exploited vulnerable online users, Facebook was seen by Harvard as an investment opportunity.
The appropriately regulated credit card industry provides the perfect counter-example to Dash’s thesis. The payment card industry has built a foundation of extensive fraud protections, chargeback mechanisms, and compliance standards to maintain clear harm principles and definitions. This regulation doesn’t make credit cards harder to use or stifle innovation—the opposite: it makes them far safer yet easier to use. It’s not perfect, but it carries a sense of stability and evolution in its ordered and standardized protocols, instead of revolution and chaos breeding harms. Meanwhile, the many “simpler,” less regulated alternatives like cryptocurrency and peer-to-peer payment apps have become the preferred tools for scams, ransomware, and fraud. We all know North Korea and Russia depend on Bitcoin hacks to avoid sanctions, right? Just like South Africa used emeralds and diamonds to keep apartheid running, right?
Embrace-Extend-Extinguish Risks
As much as I love to work on a new protocol and have been looking at MCP since before day one, Dash’s excitement about big players like OpenAI adopting MCP reads like someone documenting their own mugging in real-time. When he writes “it’s cool that other platforms adopted the same spec that Anthropic made,” he’s potentially witnessing the exact moment when an open protocol gets co-opted by the worst influences.
The playbook is formulaic at this point:
Embrace the open standard enthusiastically
Build the most polished, user-friendly implementation
Use your massive resources to make it the obvious choice
Extend with proprietary features that create lock-in
Extinguish the open alternatives through market dominance
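The “extend” step of the playbook above is worth making concrete. A minimal, hypothetical sketch: a faithful client of an open message spec next to a dominant vendor’s client that emits the same messages plus a proprietary field, so content authored in the “extended” dialect silently degrades everywhere else. All field names here are invented for illustration.

```python
# Hypothetical open spec: a message is {"type": ..., "body": ...}.
# Every field name below is invented for illustration.
OPEN_FIELDS = {"type", "body"}

def render_open(message: dict) -> str:
    """A faithful open-spec client: it renders only standardized fields
    and silently ignores anything it does not recognize."""
    return message.get("body", "")

def author_extended(body: str) -> dict:
    """A dominant vendor's client 'extends' the spec with a proprietary
    field that only its own platform renders specially -- lock-in in
    miniature, since content authored here degrades on open clients."""
    return {
        "type": "note",
        "body": body,
        "x-vendor-rich-body": f"<vendor-widget>{body}</vendor-widget>",
    }

msg = author_extended("hello")
extra = set(msg) - OPEN_FIELDS   # the fields every other client must drop
print(render_open(msg))          # an open client shows only plain "hello"
print(sorted(extra))
```

Once enough content carries the vendor field, “works best on our platform” becomes the quiet extinguish step.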
Google demonstrated this with its habit of pushing web standards it could capture and dominate—championing openness while building Chrome’s dominance and pushing APIs that coincidentally favored its business model. Amazon did it with open source infrastructure—embracing open tools while creating proprietary managed services that are “easier” than running the open versions yourself. The point being, there’s supposed to be a balance: walking into a restaurant to have a kitchen prepare things for you shouldn’t mean you can never eat again without them.
Instead of teaching someone to fish, using a standard pole and reel, some want to teach attachment and dependence with rent-seeking taxation. You will never eat for free again if they can reduce regulations far enough to legalize extraction and exploitation.
Simplicity Can Also Be Known as a Death Spiral
It’s easy to crash a plane; better to fly one. Think of that when you read Dash’s “better is worse” philosophy—the idea that developers should resist improving specs and just implement them faithfully. Without being required to sign a code of ethics that they will do no harm, you must realize what “faithful implementation” of Hitler’s specs means in practice. Dash appears to be advocating that engineers should reproduce all the worst harms, vulnerabilities, invasions, and horrible immoral power imbalances of a system while avoiding proper inspection and accountability.
He openly admits that MCP, like the plantation South under gag rules prohibiting abolitionist speech, is “a totally opaque system when it comes to what a platform is doing with your data”. He further admits that “security risks are enormously high”. Together these should be a clarion call for innovation or disqualification, not footnotes lost in celebrations of mass suffering.
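The opacity complaint is structural, not rhetorical. In an MCP-style tool call a client ships user data to a server and gets a result back, and nothing in the message format reveals what the server did with that data in between. A minimal sketch of that shape: the JSON-RPC framing and `tools/call` method follow MCP’s published convention, while the tool name and arguments are hypothetical.

```python
import json

# A tool call in the MCP style: a JSON-RPC 2.0 request goes out, a
# result comes back. The "tools/call" method follows MCP's framing;
# the tool itself ("search_contacts") is a hypothetical example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_contacts",                   # hypothetical tool
        "arguments": {"query": "alice@example.com"}, # user data leaves here
    },
}

response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "1 match"}]},
}

# From the client's side, these two messages are the entire observable
# exchange. Neither carries provenance, retention, or audit fields, so
# whatever the server did with the arguments in between is invisible.
audit_fields = [k for k in request["params"] if k in ("retention", "audit")]
print(json.dumps(request))
print("audit fields present:", audit_fields)
```

That absence of any audit or retention surface in the protocol itself is exactly the kind of gap a regulatory framework would be expected to close.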
If you prefer a lighter analogy, how did the soccer player get to pick up the ball with his hands, stuff it under his jersey and walk it into the goal to score? Rules make the game playable.
Yet he waves them away with the blithe assumption that “that stuff won’t get fixed until there are some really egregious violations that get a ton of bad press.” This is demonstrably false. The news is almost always a constant stream of bad press for things that aren’t getting fixed. In American news cycles, for just one easy and obvious example, things get fixed when a celebrity complains about harm. The American system is highly tuned to privilege and caste, where reporters grind through “sucks to be you” stories, until someone with bazillions says they feel danger. Then everyone notices, believes things matter, and asks what can be done to protect the wealthy.
This is exactly the mentality that gave us Enron, Tesla, Cambridge Analytica, the Equifax breach, and SolarWinds. Move fast, break things that other people suffer from, because the masses are expected to bear the cost of massive technical debt and security failures of elites.
The Missing Ethical Code That Defines Real Engineering
What’s entirely absent from Dash’s analysis is any serious engagement with the moral and societal implications of systems he’s advocating for despite the obvious gaps. There’s no discussion of how open protocols should handle misinformation, harassment, or coordinated manipulation campaigns. No consideration of how “interoperability” might actually amplify harm by making it easier for bad actors to operate across platforms.
The early web’s relative openness worked within a small community of mostly well-intentioned academics and hobbyists, yet by 1995 I was already making a career from investigations of serious harms. This model breaks down completely when scaled to billions of users including state actors, organized crime, and people whose entire business model is exploiting others. Google at one point reported many dozens of terminations of its own management, due to pressure from women working there who reported systemic abuses.
The Regulation Imperative
Here’s what Dash fundamentally misses: the choice isn’t between “open” and “closed” systems—it’s between regulated and unregulated ones. The most successful “open” technologies in history, from telecommunications to financial services, succeeded precisely because they were built on robust regulatory frameworks that understood and defined abuse in order to orient more towards fair access.
The internet protocols Dash celebrates—HTTP, TCP/IP, DNS—work because they’re managed through standards bodies with governance structures, not because they were “janky specs everyone hurriedly adopted.” In fact, having lived through hands-on deployment of all those protocols among the many proprietary ones, I know first-hand why the moment political power entered the equation, technology governance became essential.
Let Historians Be Our Guide
If we want genuinely open, interoperable systems that serve users rather than exploit them, we need to start with regulatory frameworks that have reliably worked before:
Mandate data portability (so switching costs stay low if not zero)
Prevent anticompetitive bundling (so open alternatives remain viable)
Require transparency (algorithmic and otherwise, so users understand what they’re agreeing to)
Establish liability for harms (so companies can’t externalize all their costs)
Defend rights of user agency (so “simple” doesn’t become “manipulative”)
The developers Dash wants to inspire should be signing a code of ethics and demanding foundations first, not celebrating the return of the same unregulated expansion and capture for exploitation that led to our current predicament.
Giddyup, Partner: Who’s the Sheriff and Judge in This Town?
“Nobody move or the N* gets it!” Source: Blazing Saddles
Dash’s “Web 2.0 2.0” isn’t a return to some golden age of openness—it’s a push towards a fiction of unregulated conditions that made slavery plantations possible in Texas after Mexico had abolished them years before. His celebration of MCP reads like someone cheering for the very forces that will inevitably capture and exploit the systems he claims to want to protect.
The real tragedy is that genuinely open, user-serving technology is possible—but only if we learn from history instead of romanticizing it. That means starting with strong ethical frameworks and regulatory protections, not hoping that “janky specs” and good intentions will somehow produce different results this time.
The stakes are too high, with technology led genocide happening as we speak, and the pattern too clear, to fall for dangerous nostalgia and ignore what needs to be done, again.
When you add up the body count from platform-enabled violence, the comparison to historical systems of harm becomes much more concrete. These aren’t just “business practices” – they’re systems that weaponize human psychology for profit while externalizing the deadly costs onto society.
WhatsApp used for chat surveillance while also coordinating lynchings in India
Facebook-enabled ethnic cleansing in Ethiopia
YouTube’s radicalization pipeline developing and deploying mass shooters
TikTok’s algorithm pushing self-harm content to vulnerable teens
Twitter pushing Nazi propaganda while censoring women and LGBTQ, as well as enabling coordinated harassment campaigns to force suicide
Tesla AI algorithms causing hundreds of sudden “veered” crashes, killing dozens of passengers
The list goes on and on, and it shouldn’t have been ignored then, and certainly not now. Regulate to innovate.
a blog about the poetry of information security, since 1995