Andrew Jackson is often associated with threatening to ignore the law, particularly in his conflict with the justice system itself. The most famous instance involves his response to the 1832 case Worcester v. Georgia.
The U.S. Supreme Court, led by Chief Justice John Marshall, had ruled that the state of Georgia could not impose its laws on the Cherokee Nation and that the Cherokee people were entitled to their lands under federal protection. Sensible, I know.
However, the horribly corrupt and deceitful Jackson, who was a strong proponent of rushed, barbaric deportations, reportedly responded to the ruling with a declaration that he was above the law.
President Jackson was one of the most unjust, immoral, and corrupt leaders in American history, if not the most.
Although Jackson’s exact words may not be definitively recorded, the essence of his position reflected an unwillingness to abide by a court ruling. Jackson was not inclined to enforce a decision he unilaterally disagreed with.
Jackson cruelly ordered the execution of his own men during the War of 1812. As President he destabilized the financial system and economy so badly that a banking panic in 1837 drove the country rapidly into a severe depression that lasted until 1844.
Thus, his administration continued with forced removal of the Cherokee people, known as the Trail of Tears, despite the ruling to halt immediately. The Jackson deportation has since been recognized as mass armed arrest to push non-whites into concentration camps for ethnic cleansing.
…we will get clear of all Indians in Mississippi, and have a white population in their stead.
This incident is emblematic of the tension between Jackson and the judicial branch: a President simply ignoring the Court’s authority. His defiance caused great suffering, foreshadowing today’s challenges to U.S. checks and balances.
Each president is allowed to select their preferred carpet and drapery colors, as well as statues and portraits. On Monday, President Donald Trump brought a portrait of Andrew Jackson, the seventh president of the United States, back to the Oval Office.
The White House has said, as if to invoke the racist, immoral ghost of Jackson, it will ignore the Justice system and maybe even try to impeach judges who disagree with Trump.
Chief Justice John Roberts pushed back on President Donald Trump’s escalating rhetoric against the federal judiciary on Tuesday in a highly unusual statement that appeared to be aimed at the president’s call to impeach judges who rule against him. […] Trump is attempting to invoke a 1798 law that allows the federal government to expedite deportations of citizens of a “hostile nation” in times of war or when an enemy attempts an “invasion or predatory incursion” into the United States. […] Roberts’ statement Tuesday was similar to a rebuke the chief justice issued in 2018, when he responded to Trump’s remarks by saying that, “we do not have Obama judges or Trump judges, Bush judges or Clinton judges.”
The political threat of impeaching judges goes back even earlier, to 1804, when Federalist judge Samuel Chase was accused of bias. The US House has impeached only 15 judges since the country’s founding. Of those, only eight were convicted in a US Senate trial, as you can see here:
As of March 2025, the Tesla driverless experience is still blind to objects and humans in the road. Arguably it has only gotten worse, as the company intentionally removed critical safety equipment, slashing costs despite known risks to life and property. Source: Screen grab from Mark Rober video
Tesla’s response to findings like this has of course never been to address safety concerns with engineering, but instead to unleash a barrage of debate tactics and threats. Thus, it’s time again to watch a masterclass in military-intelligence methods: the usual terminological obfuscation and smoke game.
The media machine [using Soviet scaffolding repurposed by KGB officers to run their Russian dictatorship] seeks to not only provide an alternative narrative with a Russian version of events, but also to cause general confusion and question the whole notion of the truth. It provides varying accounts of events, often based in truth, that work to sow discord and confusion.
This is not a theoretical experience for American engineers working on safety reports; it has long been a fundamental public safety issue by design.
Everyone knows the Kremlin seeks to use information to deny, deceive, and confuse… You could spend every hour of every day trying to bat down every lie, to the point where you don’t achieve anything else. And that’s exactly what the Kremlin wants
An engineer’s hands are tied up with truckloads of misdirection and misinformation so they can’t possibly do engineering. A truth-teller delivering transparent results is accused of manipulation by the biggest manipulators; up is down, math and physics can no longer be real… in the fog of information warfare. We know how and why many people will die, yet we’re facing a tidal wave of “nothing is real” attacks.
As of March 2025, Tesla Autopilot still runs over children like it’s 2016, with sensors unable to handle normal road conditions, as if negligent by design. Over 50 people have so far been killed by Tesla Autopilot flaws.
Notably the angry mob spins their attacks as a “defense” strategy to protect their assets, their way of life, as if they are the real victims and not the people who will be killed by design. They’re blasting information weapons out into the Internet with a claim to be protecting something they consider so valuable, so critical to their own survival, any lives lost by others (e.g. killed by Tesla) get reframed as just collateral damage.
A Tesla balloon designed to be made of lead has a “good” reason for never getting off the ground…
Consider the irony. A Tesla vision failure means it can’t “see” a child mannequin and runs it over without any regard for human life. Tesla defenders don’t “see” this as design failure, but rather focus on what they can “see” as an attack by anyone who dares to speak the truth of exactly how and why a child would be killed.
It’s a kind of consistency in trained and limited vision, an inability to process real outcomes, that’s a result of military-like basic training about who deserves to live or die.
“You’ll Believe What We Tell You To” Say Tesla PK Shock Troops
Tribune.com.pk’s recent mob-rule-sounding propaganda blast attacking Mark Rober’s Tesla test is a perfect example of how the military intelligence of an unnamed nation state can unleash weaponized words to deflect meaningful criticism to float the stock value underpinning one of their key foreign assets.
The techniques we’re seeing mirror Soviet “Operation Infektion” that falsely claimed AIDS was a US bioweapon – a playbook preserved and upgraded by those who deployed it originally. Despite having an economy smaller than Italy’s, this nation maintains disproportionate global influence operations, as essential to its power as oil revenues. Like inheriting a Cold War nuclear arsenal then repurposing it for neighborhood extortion, former intelligence operatives now running a dictatorship deploy their keyboard armies against threats to their investments. Tesla’s terminology battles represent just one theater in this broader campaign – flooding discourse with confusion to exhaust experts and undermine regulation. The ultimate goal remains unchanged: enable rapid wealth extraction by using asymmetric information attacks to prevent accountability for preventable harms and deaths.
With that in mind, thousands of keyboard warriors from an unknown country are now on a campaign to attack Rober for having “misrepresented Tesla” because he supposedly tested “Autopilot” rather than “Full Self-Driving”, as if those words have actual meaning and the distinction matters. The fundamental issue is that Tesla in 2025 demonstrates complete failure to detect a wall and mannequins, a problem it claimed a decade ago to have solved. More to the point, Tesla claimed it would be the first to solve this safety issue and would be the safest car on the road, placing itself above all other designs and engineers unequivocally and without exception.
Tesla fails 50% of the safety tests, meaning three child mannequins were run over by its flawed camera-only driverless system, compared with a car wisely using LiDAR. Source: Screen grab from Mark Rober video
Here’s the absurd logic at work, just to make clear how cruel and cynical the military intelligence system is at pushing Tesla into certain death of Americans (remember for purposes of information warfare severity, millions of people died during the Cold War from its targeted application):
Tesla markets the term “Autopilot” without shame in 2016, announcing autopilot capabilities removing any need for a human by 2017, and their CEO repeatedly states that anyone criticizing autopilot with caution about adoption should be held responsible for deaths — BECAUSE AUTOPILOT IS SOLD AS CAPABLE OF PREVENTING DEATH
People start to die because they trust Tesla marketing, with two fatal crashes immediately in 2016 and a pedestrian dead in 2018…
Tesla starts to passively criticize Autopilot itself by 2020, announcing “Full Self-Driving” that will do what Autopilot was sold to do.
Tesla in late 2024 changes the name of FSD to “Supervised”, passively criticizing both Autopilot and FSD as incapable of achieving their meaning, admitting they’ve never been using language correctly. Musk pumps the propaganda even harder, claiming there will be ZERO CRASHES IN 2025, despite at least 52 deaths from Autopilot and FSD combined so far.
Anyone testing these systems is accused of the crime that Tesla is committing, as if misuse of language is applied to anyone pointing out the misuse of language. It’s always “didn’t test the right system” because there is no actual system to test, just a shell game of opaque unaccountable abusive behavior that puts everyone in danger except Tesla.
This terminological methodology, well known to scholars of military intelligence and targeted attacks on populations, is designed for Tesla to never be held accountable. When deaths occur, the response isn’t to investigate and fix the technology, but to revise words and change definitions. When tests demonstrate failures, the reaction isn’t engineering revised and better safety systems, but semantic arguments to avoid engineering at all. Meanwhile, the body count continues to rise while Musk makes increasingly absurd safety claims detached from reality and attacks his critics with baseless claims they are doing what he does. It’s a casino mentality where he sets up mirrors and tables to unjust house rules such that anyone who dares to enter his realm can never win.
Deadly Tesla Disengagement
Learning how magicians lie is such a disappointment because the magic is lost. This is what the Electrek journalist discovered after being attacked by Tesla’s investors who demanded he believe in the magic:
NHTSA’s investigation of Tesla vehicles on Autopilot crashing into emergency vehicles on the highway found that Autopilot would disengage within less than one second prior to impact on average in the crashes that it was investigating…
Rober’s video captured this exact behavior! The magic gone in an instant. Watch carefully as the system disengages 17 frames before impact. This is a damning example of Tesla engineering designing cover-ups into friendly-fire situations. They built a feature to generate maximum plausible deniability and reduce their liability for a known deadly outcome they are responsible for creating. “The system wasn’t engaged during the crash” becomes the technical truth that masks the killer reality: Tesla’s promises since 2016 of solving driverless completely by 2017 still fail to prevent a crash in 2025 that it should detect well in advance.
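To put “17 frames” in wall-clock perspective, here is a quick back-of-the-envelope conversion. The video’s actual frame rate isn’t stated here, so common rates are assumed; every plausible rate lands under the one-second window NHTSA described:

```python
# Hedged sketch: how long is "17 frames before impact" in real time?
# The actual frame rate of the footage is an assumption, so try common rates.
FRAMES_BEFORE_IMPACT = 17

for fps in (24, 30, 60):  # common video frame rates (assumed, not confirmed)
    seconds = FRAMES_BEFORE_IMPACT / fps
    print(f"At {fps} fps: {seconds:.3f} s before impact")
```

At any of these rates the disengagement falls well inside the “less than one second prior to impact” window NHTSA reported, which is why the frame count matters.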
Seventeen LONG Frames Before Death
Other cars can do it today. Other cars didn’t promise to solve crashes by 2017. Tesla can’t do it today. Tesla promised to have it solved by 2017. You think it matters what words Tesla uses when they’ve proven since even before 2016 that none of their words can be trusted? Accepting their preference in terminology is like agreeing to let a toddler rewrite the dictionary in a way that helps them never be responsible for anything.
Tesla has been selling people a word salad unsafe for consumption. Their “apple” is actually a painted rock. And when someone breaks a tooth trying to bite into it, Tesla argues “this is our LOOKING apple, it can’t yet be bitten.” After many people lose their teeth Tesla announces “we have a banana for you to go with our apple.” Should someone test either the “apple” or the new “banana” they would discover both are painted rocks, to which Tesla says “forget the apple, we replaced the banana with another banana, and another one, and another one, next year the banana will be so edible nobody will break a tooth ever again”… and the next year more teeth are broken, repeating this advance fee fraud forever. It’s really no different than the 419 African email scam.
In this new safety test video by the ex-NASA engineer, we see someone showing a Tesla apple for what it is and always has been: just a painted rock. It’s a LIE that has dragged on since 2016. Because LiDAR don’t LIE. There shouldn’t be controversy in this VERY OLD NEWS. The exact opposite, in fact: this video should be welcomed the way someone who just placed 154th in a group event gets congratulated. Hey Mark, welcome, and thanks for participating in something that has been operating for over ten years with the same results. Welcome, Mark, into the big tent with everyone who already understands that since 2016 Tesla has been selling “driverless” for hundreds of millions and hundreds of millions more without ever providing what it claimed from the start.
Another Brick in the Wall Tesla Can’t See
While Tesla plays word games to undermine safety, the reality remains unchanged: their low-quality consumer-grade camera-only system simply and predictably fails basic tests that LiDAR-equipped vehicles have passed for a decade. This isn’t new to anyone with a clue because engineers have been demonstrating this fundamental flaw repeatedly and dramatically (although, I’ll admit, not as dramatically as this high-production new Disney-like video). The Dawn Project and numerous safety experts have shown these exact same failures in many media formats with the same conclusive results, yet Tesla removed safety in the false name of a fictional “efficiency”.
Elon Musk… has expressed his admiration for Rand’s work, particularly “The Fountainhead.”
In Ayn Rand’s novel “The Fountainhead,” the character Dominique Francon purchases a beautiful Greek statue that she genuinely admires, then deliberately destroys it by throwing it out the window. It’s almost like Elon Musk is that character, destroying everything he touches to prove that destruction is better (for him) than letting it exist in a world that doesn’t appreciate him enough. Musk’s “the best part is no part” psychosis is the destructive thinking that removed critical safety sensors from Tesla vehicles, despite warnings from experts. In the same way, he created DOGE to force a false “efficiency” of minimal human safety, resulting in preventable deaths (targeting non-whites).
The philosophy of the malignant narcissist isn’t a mystery, the intent to deny/withhold and harm aren’t hidden. Elon Musk repeatedly implies deaths of non-white children will be consistent with his life’s eugenicist mission to generate more white people as quickly as possible.
Killing children is by design, I’m afraid. “Pro-natalists” like Musk claim they aren’t racist, but their pressure to have children is solely focused on white women, while they back policies that literally kill non-white children. He’s a eugenicist.
Tesla killing children in the road thus is the outcome of his racist game, given the majority of people at risk will statistically be non-white. DOGE eliminating USAID is projected to kill at least 3 million non-white people, far greater than Tesla death tolls. Elon Musk is consistent in his plotting to do harm to very specific groups of people.
That’s why you have to understand in the fog of information warfare that Elon Musk makes increasingly absurd claims on purpose, recently promoting the nonsense that Tesla vehicles “won’t crash” in 2025 even as Tesla crash rates have actually accelerated even faster than fleet growth, according to NHTSA data.
Key Observations: the data clearly show that both serious incidents (orange line) and fatal incidents (pink line) are increasing at a steeper rate than fleet size (blue line). This is particularly evident from 2021 onwards, where:
Fleet size (blue) shows linear growth of about 1x per year.
Serious incidents (orange) show an exponential growth curve, reaching nearly 5x by 2024.
Fatal incidents (pink) also show steeper-than-linear growth, though not as dramatic as serious incidents.
The divergence between the fleet line (blue) and the incident lines (orange and pink) indicates that incidents are accelerating faster than the production and deployment of new vehicles. Source: Tesladeaths.com and NHTSA
Tesla at War, Casualties Mounting
The public deserves better than semantic games. When a vehicle can’t detect a wall or mannequins in the road, the terminology used to market its driver assistance features becomes irrelevant. The question isn’t whether it was “Autopilot” or “Full Self-Driving” that failed, but why Tesla continues to deploy systems with demonstrated safety flaws and fights regulation rather than improving their technology.
As the Tribune.com.pk article unwittingly reveals, we’re witnessing a coordinated effort to shift discussion from “does this system have a correct outcome” to “which term was used at the moment of failure”. That’s a shell game designed to exhaust and confuse the public while real safety concerns go unaddressed and more and more people die by design.
The truth is simple: if your vehicle that has been vehemently and angrily defended since 2016 as “driverless” still can’t detect a child in the road or a giant wall, the terms don’t really matter. Over 52 people are dead. What matters is Tesla intentionally misleads people, they’re dead, and it shouldn’t be on the road anymore. At this point, Tesla should be recognized as a foreign-backed threat even worse than domestic terrorism, literally…
Welcome to the Stupidity of AI-Powered Policy: When Governance is Reduced to One-Move Chess
Send it!
A profound shift in American governance has been signaled by three recent “AI” developments in the news.
First, the BBC says that United Nations aid agencies have received a dubious 36-question form from the US Office of Management and Budget asking if they harbor “anti-American” beliefs or communist affiliations. That in itself should be proof enough that AI systems are totally incapable of preventing themselves from making an accidental launch of nuclear missiles.
Second, the Atlantic tells us how the Department of Government Efficiency (DOGE) appears to be rapidly implementing AI systems in federal agencies despite significant concerns about their readiness, with plans to replace human workers with incompetent-robot operators at the General Services Administration (GSA). This is much the same as Tesla initially boasting it would replace all workers with robots, which failed horribly and caused a rapid roll-back in disaster mode.
This all comes as Facebook, just as one obvious example, has said content generation and moderation is a bust because of unavoidable integrity breaches in automated communications systems.
Zuckerberg acknowledged more harmful content will appear on the platforms now.
The “best” attempts by Facebook (notably started by someone accused at Harvard of making no effort at all to avoid harm) have been just wrong, like laughably wrong and in the worst ways, such that they can’t be taken seriously.
This week [in 2021] we reported the unsurprising—but somehow still shocking—news that Facebook continues to push anti-vax groups in its group recommendations despite its two separate promises to remove false claims about vaccines and to stop recommending health groups altogether.
Foreshadowing the clumsy and toxic American social media platforms of 2025, Indian troops in the Egyptian desert get a laugh from one of the leaflets Field Marshal Erwin Rommel took to dropping behind British lines after his 1942 ground attacks failed. The leaflets, which were of course strongly anti-British in tone, were printed in Hindustani, but far too crude to be effective. (Photo was flashed to New York from Cairo by radio. Credit: ACME Radio Photo)
However, despite the best engineers warning AI technology is unsafe and unable to deliver safe communications without human expertise, we see the three parallel developments above are not isolated policy shifts.
They appear to be lazy, rushed, careless initiatives that represent a fundamental transformation in governance from thoughtful outcome-oriented service to an unaccountable extract-and-run gambit. It’s a shift from career public servants making things work through a concentration of significant effort, to privileged disruptive newcomers feeling entitled to rapid returns without any idea of what they are even asking. The contextless, memory-less nature of both the latest AI systems and certain rushed anti-human leadership styles is now upon us.
The One-Move-at-a-Time Problem in International Relations
When powerful AI systems are deployed in policy contexts without proper human oversight, governance begins to resemble what international relations theorist Robert Jervis would call a “perceptual mismatch” and actors will fail to understand the complex interdependence that shapes the global system.
It becomes a game of chess played one move at a time, with no strategy beyond the immediate decision other than selfish gains.
There is a query [to the UN] about projects which might affect “efforts to strengthen US supply chains or secure rare earth minerals”.
This is the worst possible way to play on the world stage, revealing evidence of an inability to think, learn, adapt or improve. America looks sloppy and greedy, a kind of desperation for wealth extraction, like a 1960s dictatorship.
A Tofflerian Acceleration Crisis
Alvin Toffler, in his seminal work “Future Shock” (1970), warned about the psychological state of individuals and entire societies suffering from “too much change in too short a period of time.” What Toffler warned us about is how AI-driven governance would accelerate our political systems in ways that frighten anti-science communities into a panic. The domain shift opens a vacuum of trust that we might call “policy shock”, enabling “strong man” (snake oil) decisions made in spite of historical context, by removing consideration of second-order effects.
We go from a line with points on it to no lines at all, just a bunch of points.
The UN questionnaire perfectly embodies this anti-science acceleration crisis: complex geopolitical relationships developed over decades since World War II reduced to thoughtless binary questions, processed in a flawed algorithmic rush to check unaccountable lists rather than an intelligent/diplomatic pace for measured outcomes.
Similarly, the GSA’s rapid deployment of AI chatbots, conceived as an experimental “sandbox” under the previous administration, is being fast-tracked as a productivity tool amid mass layoffs. It represents exactly the kind of technological acceleration Toffler warned would be devastatingly self-defeating.
The State Department’s AI-powered “Catch and Revoke” program amplifies acceleration as well, with a senior official boasting that “AI is one of the resources available to the government that’s very different from where we were technologically decades ago.” Well, well General LeMay would say, now that we have the nuclear bombs, what are we waiting for, let’s drop them all and get this Cold War over with already! He literally said that, for those of us who appreciate the importance of studying history.
Source: “Dar-win or Lose: the Anthropology of Security Evolution,” RSA Conference 2016
As The Atlantic reports, what was intended to be a careful testing ground immersed in scientific rigor is being transformed into a casino-like gambling table to replace human judgment across federal agencies. At the very moment human judgment is most needed for complex social and political determinations with disruptive technology, the administration keeps talking about rapid speed of implementation to replace any careful consideration of potential consequences.
You could perhaps say Elon Musk has been pulling necessary sensors from autopilot cars as an “efficiency” move (ala DOG-efficiency), at the very moment every expert in transit safety says such a mistake will predictably cause horrible death and destruction. We in fact need the government workers, we in fact need the agencies, just like we in fact need LiDAR in autopilot cars detecting dangers ahead to ensure the system is designed for necessary action to avoid disaster.
The Chaotic Actor Problem
Political scientist Graham Allison introduced the concept of “organizational process models” to explain how bureaucracies function based on standard operating procedures rather than rational calculation. But what happens when leadership resembles what computer scientists call a “memoryless process” of self-serving chaos, where each new state depends only on the current inputs, not on any history that led there?
A leader who approaches each day with no memory of previous positions, much like an AI chatbot that restarts each conversation with limited context due to token constraints, creates a toxic tyrannical governance pattern that:
Disregards Path Dependency: Ignores how previous decisions constrain future options
Fails to Recognize Patterns: Misses recurring issues that require consistent approaches
Creates Strategic Incoherence: Generates contradictory policies that undermine long-term objectives
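The “memoryless process” analogy above can be made concrete. A minimal sketch (purely illustrative, not a model of any real system) contrasts a process that discards all history with one where prior decisions constrain the next state:

```python
# Memoryless vs path-dependent processes, in miniature.
def memoryless_step(current_state: str, new_input: str) -> str:
    # History (and even the current state) is ignored; only the latest input matters.
    return new_input

def stateful_step(history: list, new_input: str) -> list:
    # Path-dependent: every prior decision remains part of the state.
    return history + [new_input]

# Same sequence of inputs, radically different resulting state:
inputs = ["treaty signed", "treaty broken", "treaty signed"]

state = "start"
for x in inputs:
    state = memoryless_step(state, x)
print(state)    # only the final input survives; the broken treaty is forgotten

history = []
for x in inputs:
    history = stateful_step(history, x)
print(history)  # the full path is retained and constrains what comes next
```

The memoryless actor ends up unable to distinguish a partner who has never broken a treaty from one who just did, which is exactly the pattern-recognition failure described above.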
Historians have noted how authoritarian systems in the 1930s disrupted institutional stability through what scholars later termed “permanent improvisation”, replacing rule of law with unpredictable governance and a loyalty test to Hitler. The current administration’s approach shares concerning similarities with those historical systems, relying on constant policy shifts and disregard for factual consistency.
The danger of the memoryless paradigm appears to be materializing in real time. The Atlantic reports that the GSA chatbot, which could be used to “plan large-scale government projects, inform reductions in force, or query centralized repositories of federal data”, now operates with the same limitations as commercial AI systems.
Systems that very notoriously struggle to reach factual accuracy, that exhibit dangerous biases, and that have no true understanding of context or consequences, are unfit to be implemented without governance. But for the memoryless anti-governance actor, it’s like pulling the trigger on an automatic weapon swinging wildly without caring at all about who or what is being hurt.
The State Department’s “Catch and Revoke” program represents perhaps the most alarming implementation of this memoryless approach. Policing speech with faulty technology is a nightmare straight out of the President Jackson experience (leading into Civil War) or the President Wilson experience (leading into WWI). Some have compared today’s AI surveillance to the more recent President Nixon experience and “Operation Boulder” from 1972. Remember when Dick Cheney admitted he had been hired into the Nixon administration to help find students to jail for opposing Nixon? America does not have the best track record here, and yet today’s technology is different: it makes the scope vastly more expansive and the consequences more immediate.
As one departed GSA employee noted regarding AI analysis of contracts: “if we could do that, we’d be doing it already.”
The rush into flawed systems creates “a very high risk of flagging false positives,” yet there appears to be little consideration of checks against this risk, further proving memoryless governance fails to learn from past technological overreach. This concern becomes even more acute when the stakes involve not just contracts but people’s citizenship status, as evidence emerges of students leaving the country after visa cancellations related to their speech.
Constructivism vs. Algorithmic Reductionism
International relations theorist Alexander Wendt’s constructivist approach argues that the structures of international politics are not predetermined but socially constructed through shared ideas and interactions. AI-driven policy, by contrast, operates on algorithmic reductionism, that horribly reduces complex social constructs to simplified computable variables.
Imagine trying to represent social interaction as a simple mathematical formula. Hint: Jeremy Bentham tried hard and failed. We know from his extensive work that it doesn’t work.
The AI-generated questionnaire sent to the UN is an attempt to categorize humanitarian organizations as either aligned or misaligned with American interests. Such a stupid presentation of American thought reflects a reductionist approach, ignoring what constructivists would recognize as the evolving, socially constructed nature of international cooperation.
It’s like American foreign policy being turned into a slow robot wearing a big hat and saying repeatedly “Hello, I am from America, please answer whether I should hate you”.
The State Department’s new “Catch and Revoke” program employs AI to scan social media posts of foreign students for content that “appears to endorse” terrorism. This collapses complex political discourse into binary classifications that leave no room for nuance, context, or constructivist understanding of how meaning is socially negotiated. And that’s not to mention, again, Facebook says that they’ve conclusively proven that the technology isn’t capable of this application so they’re disabling speech monitoring.
Think about the politics of Facebook saying all speech has to be allowed to flow because even their best and most well-funded tech simply can’t scan it properly, while the federal government plows into execution of harsh judgment based on rushed, low-budget tech with dubious operators.
Orwellian Optimization Without Context
Chess algorithms excel at optimizing for clearly defined objectives: capture the opponent’s pieces, protect your own, and ultimately checkmate the opposing king. Similarly, an AI tasked with “reducing foreign aid spending” or “prioritizing America first” is surely going to generate questions designed to create easily broken (gamed, if you will) classifications without grasping even a little of the complex ecosystem of international humanitarian work.
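The one-move failure mode is easy to demonstrate. In this toy sketch (the game tree and payoffs are invented purely for illustration), a greedy optimizer takes the biggest immediate payoff and walks into a trap, while a single step of lookahead avoids it:

```python
# Toy game tree (invented for illustration): each move leads to a child node
# whose own moves carry payoffs. Greedy sees only the first layer.
tree = {
    "start": {"grab": "trap", "wait": "setup"},
    "trap":  {"end_a": None},
    "setup": {"end_b": None},
}
payoff = {"grab": 10, "wait": 1, "end_a": -100, "end_b": 50}

def greedy(node):
    # Optimize only the immediate move.
    return max(tree[node], key=lambda m: payoff[m])

def lookahead(node):
    # Optimize the immediate move plus the best available follow-up.
    def value(move):
        child = tree[node][move]
        follow = max(payoff[m] for m in tree[child]) if child in tree else 0
        return payoff[move] + follow
    return max(tree[node], key=value)

print(greedy("start"))     # "grab"  (10 now, -100 next: net -90)
print(lookahead("start"))  # "wait"  (1 now, +50 next: net +51)
```

The greedy player is not wrong about the immediate payoff; it is wrong about everything after it, which is the whole objection to one-move governance.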
Playing Tic-Tac-Toe With Baseballs
Political scientist Joseph Nye’s concept of “soft power” — the ability to shape others’ preferences through attraction rather than force and coercion — becomes particularly relevant here. A chess player who can only ever focus on the next move will inevitably lose to someone thinking five moves ahead (assuming they both play by the rules, instead of believing they can never lose). Similarly, questionnaires that reduce complex international relationships to yes/no questions miss how the dismantling of humanitarian cooperation rapidly diminishes America’s soft power projection. Trust in America is evaporating, and it’s not hard to see why if you can think more than a single move ahead.
Human Cost of Algorithmic Governance
We know from Elon Musk’s use of AI at Tesla that more people are dying than would have without it. The cars literally run over people because operators fail to appreciate and prepare for when their car will run over people. Why? Because Musk’s aggressive promotion of emerging technologies despite documented limitations raises questions about… the ability to see harms. His well-honed methods of manipulating public sentiment — similar to advance-fee fraud — are known to be highly successful at disarming even the most intelligent targets (e.g. doctors, lawyers, engineers) when they lack the domain expertise necessary to judge his fantasy-level claims of a miraculous future. So if such a deadly pattern of deceptive planning becomes normalized in the federal government, what might we expect?
Safety Margin Collapse: Complex humanitarian principles grounded in deep knowledge, like neutrality and impartiality, become impossible to maintain when forced into binary classifications. Similarly, as The Atlantic reports, the nuanced judgment of civil servants is being replaced by AI systems that struggle with “hallucination,” “biased responses,” and “perpetuated stereotypes” — all acknowledged risks on the GSA chat help page. This loss of nuance extends to political speech, where the State Department is using AI to determine if social media posts “appear pro-Hamas,” a standard so vague it could capture legitimate political discourse about protecting Israelis from harm. I can’t overstate the danger of this collapse: it is like warning how the machine gun poking out of a Las Vegas balcony exploited the binary mindset on gun control forced by the NRA.
Accelerated Policy Shifts: What the infamous Henry Kissinger liked to call the “architecture of the international order” will degrade rapidly, not through deliberative process but through algorithmic errors reminiscent of the Cuban Missile Crisis. Domestically, we’re already seeing this acceleration, with DOGE advisers reportedly feeding sensitive agency spending data into AI programs to identify cuts and using AI to determine which federal employees should keep their jobs. Need I mention that these AI programs lack privacy controls? The OPM breach was minor compared to DOGE levels of security negligence. The State Department’s AI initiative has already resulted in push-button visa revocations and at least one student leaving the country, like a scene from a Kafka novel, bypassing deliberative process and human representation in judgment.
Feedback Loops: As organizations adapt their responses to pass algorithmic filters, we risk creating what sociologist Robert Merton called a “self-fulfilling prophecy”: a system that outputs the adversarial relationships it was designed to detect. This dynamic resembles how some surveillance technology companies may inadvertently create the very problems they claim to solve, building systems (e.g. Palantir) that generate false positives while marketing themselves as solutions. It mirrors the current situation where, as one former GSA employee told The Atlantic, AI flagging of “potential fraud” will likely generate numerous false positives, with no checks apparently in place. Free speech advocates are already noting the “chilling effect” on visa holders’ willingness to engage in constitutionally protected speech — exactly the kind of feedback loop that reinforces compliance through false positives at the expense of democratic values.
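The false-positive problem isn’t speculative; it follows from base-rate arithmetic. A quick worked example (the prevalence and accuracy figures below are illustrative assumptions, not reported numbers) shows why flagging rare events like “potential fraud” at scale produces mostly innocent flags:

```python
# Base-rate calculation: what fraction of flagged cases are innocent?
# All numbers here are illustrative assumptions for the sake of the math.

def false_positive_share(prevalence, sensitivity, specificity):
    """Fraction of flagged cases that are actually innocent (1 - precision)."""
    true_pos = prevalence * sensitivity            # real fraud, correctly flagged
    false_pos = (1 - prevalence) * (1 - specificity)  # innocent, wrongly flagged
    return false_pos / (true_pos + false_pos)

# Even a detector that is 95% sensitive and 95% specific, applied where
# only 1 in 1,000 records is fraudulent, flags mostly innocent records:
share = false_positive_share(prevalence=0.001, sensitivity=0.95, specificity=0.95)
print(f"{share:.1%} of flags are false positives")  # ~98% of flags are innocent
```

With no human checks in place, almost every flag a system like this raises would point at someone who did nothing wrong, which is precisely how the feedback loop above gets fed.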
Closing One Eye Around the Blind, Making Moves Against One-Move Thinking
Francis Fukuyama, despite his “End of History” thesis, later recognized that liberal democracy requires ongoing maintenance and adaptation. Similarly, effective governance, like chess mastery, requires thinking many moves ahead and understanding the entire board. It demands appreciation for strategy, history, and the complex interplay of all pieces far beyond mechanical application of rules.
The contrast between governance approaches is striking. The previous administration’s executive order on AI emphasized “thorough testing, strict guardrails, and public transparency” before deployment. As a long-time AI security hacker I can’t agree enough that this is the only way to get to where we need to go, to innovate in security necessary to make AI trustworthy at all. However, the current radical approach by anti-government extremists dismantling representative government, as The Atlantic reports, appears to treat “the entire federal government as a sandbox, and the more than 340 million Americans they serve as potential test subjects.”
Tesla’s autopilot technology has been associated with a rapid rise in preventable fatalities, raising serious questions about whether the technology was deployed before adequate safety testing. The rapid deployment of unproven AI systems with life-or-death consequences represents a concerning pattern that appears to prioritize technological short-cuts and false efficiency over rigorous safety protocols to emphasize long term savings.
This divergence is plainly visible in policy moves that have all the hallmarks of loyalists appointed by Trump to gut the government and replace it with incompetence and graft machines. Whereas determining whether a move constitutes risk traditionally required careful human judgment weighing multiple factors and their likely outcomes, the “Catch and Revoke” program reflects a chess player focused solely on the current move and completely blind to what’s ahead. When AI flags a social media post as “appearing pro” anything, that alone can now trigger a massive change in civil rights. This is having real-world consequences, just as Tesla keeps killing people with no end in sight. Alarm about the constitutional implications of unregulated AI should be raised in the context of Tesla being allowed to continue operating manslaughter robots on public roads.
All of these AI developments exemplify a radical difference in concepts of integrity, and of what constitutes a breach, between strategic chess thinking and playing one move at a time.
If we’re entering an era where AI systems—or leaders who operate with similar memoryless, contextless approaches—are increasingly involved in policy implementation, we must find ways to reintroduce institutional memory, historical context, and strategic foresight.
Otherwise, we risk a future where both international relations and domestic governance are reduced to a poorly played game ruled by self-defeating cheaters—as real human lives hang in the balance. The binary questionnaire to UN agencies, the rapid deployment of AI across federal agencies, and the algorithmic policing of social media aren’t just parallel developments—they’re complementary moves in the same dangerous game of governance without memory, context, or foresight.
We’re a decade late on this already. Please recognize the pattern before the game reaches its destructive conclusion. The Cuban Missile Crisis was a race to a place where nobody wins, and we’re not far from repeating that completely stupid game, taking one selfish and suicidal step at a time.
The rapid and humiliating defeat of poorly trained and disorganized Romanian mercenaries in eastern Democratic Republic of Congo (DRC) last January offers more than just another chapter in Congo’s troubled history. It provides a critical lens through which to understand a troubling reality: systems corrupted by external forces often cannot be reformed solely from within—a lesson Americans must urgently confront.
The Cold War Template: Foreign Capture of National Resources
Mobutu Sese Seko’s ascent to power represents the quintessential foreign-directed coup designed to secure resource extraction, a playbook America once deployed abroad but now faces at home:
Phase One: Vilify Democratic Leadership
Just months after Congo gained independence from Belgium in June 1960, Patrice Lumumba became the country’s first democratically elected Prime Minister. His nationalist agenda and willingness to work with the Soviet Union to counter continued Belgian control over mineral-rich provinces triggered immediate Western intervention.
By September 1960, the CIA, in coordination with Belgian intelligence, backed Colonel Joseph Mobutu to stage his first coup, suspending parliament and neutralizing Lumumba. Declassified documents later revealed that the CIA had authorized Lumumba’s assassination, which occurred in January 1961 with Belgian complicity after he was transferred to the mineral-rich Katanga province.
The message was clear: nationalist leaders who threatened Western access to strategic minerals would not be tolerated. “They” even assassinated the UN Secretary General (1961) and the President of the United States (1963).
Phase Two: Install and Maintain Corrupt Puppet
After five years of political maneuvering and continued Western support, Mobutu seized power again in a second, more decisive coup in November 1965. He replaced democratic leadership that had dared to suggest rights and regulations, things that threatened foreign powers’ extraction of national resources.
“Army head, Mobutu seizes control in Congo Republic”, Indianapolis Recorder, 4 December 1965
Explicitly backed by foreign powers (the US, Belgium, and France), Mobutu rapidly established what would become a 32-year dictatorship characterized by:
Complete consolidation of power
Elimination of opposition
Direct foreign backing for explicit purposes of resource extraction
Suspension of democratic institutions while maintaining their facade
What followed was decades of authoritarian rule that hollowed out democratic institutions while maintaining their outward appearance—a pattern now disturbingly visible in America’s democratic erosion.
Critical Lesson: Internal Reform Fails Under External Capture
For decades, Congolese citizens suffered under Mobutu’s kleptocratic regime with no internal path to democratic restoration. Despite extensive suffering, corruption, and human rights abuses, the Mobutu regime’s external backing made internal reform impossible. What should have happened—Mobutu hauled out for his illegal coup and Congo returned to democratic governance—was prevented by Cold War geopolitics.
Only external intervention—Rwanda-backed forces (led by Laurent-Désiré Kabila) in the First Congo War (1996-1997)—finally ended Mobutu’s reign. This was a dictator responsible for thousands of extrajudicial killings, torture, and disappearances of political opponents, while embezzling an estimated $4-15 billion from his impoverished country. Despite these horrific crimes, witness accounts reveal his shocking disconnection from reality:
When Mobutu came through Pointe Noire, and although I had known Mobutu for a long time, it was still remarkable to see him at the airport in Pointe Noire and all the Congo… was out there just really cheering and obviously respecting this guy as someone who was a big man, and respected as a big man for all of his warts and faults. …He was not prepared to accept that after, whatever it was, 25 years, somehow the Zairian people wouldn’t stand up and defend him. He truly believed, and with some reason, that he had been a wonderful President for Zaire. He didn’t recognize that there was a very good argument that could be made he’d been a terrible President for Zaire.
This brutal dictator’s eventual fall through external intervention rather than internal resistance demonstrates a crucial truth: once powerful foreign interests have sufficiently undermined a nation’s power structures, internal democratic mechanisms alone rarely succeed in restoring sovereignty—especially when facing a regime willing to use extreme violence against its own citizens.
Modern Parallel: Rwanda-Backed M23 vs. Russian/Chinese Proxies
On January 28, 2025, the March 23 Movement (M23) rebels, backed by Rwanda, captured mineral-rich Goma, defeating so-called “Romeo” contractors funded by Russian oligarchs and Chinese investors through the DRC government. To put the significance of this strategic rout in perspective: fewer than 5,000 M23 rebels rapidly defeated over 100,000 Congolese soldiers and their 10,000 foreign mercenaries. Seemingly entrenched systems can collapse quickly when they are so thoroughly corrupted; it just takes determined external intervention. It is essentially the same thing we’ve been seeing in Ukraine with the colossal failure of Russia.
What makes the M23 wins particularly revealing is how they expose tactics seen elsewhere in the world. Congo’s disorganized deployment of poorly trained European personnel, each given an AK-47 and a flak vest but nothing else — “supermarket guards,” according to The Guardian’s investigation — resembles the approach used by Russian PMCs in Mali and the Central African Republic, for example. The idea was to deploy low-skilled militants desperate for rapid enrichment (despite low chances of survival) as a foreign intervention “force” to maintain remote access to strategic resources while avoiding direct accountability.
Congolese leaders have a history of employing white mercenaries. They led infamous campaigns against rebels in the turbulent years after independence from Belgium in 1960. Former Congolese dictator Mobutu Sese Seko also hired ex-Yugoslav mercenaries as his regime collapsed in the 1990s. In late 2022, with the M23 surrounding Goma, the DRC government hired two private-military firms. One, named Agemira, was made up of about 40 former French security personnel who provided intelligence and logistical support to the Congolese army.
Following Mobutu’s coup in November 1965, Maj Gen Louis Bobozo (left) was appointed his commander-in-chief of the ANC, seen here in Kisangani, 1966, with French mercenary Col Bob Denard (right). The recent Romanian mercenary collapse follows a long history of dubious foreign fighters paid to influence control of Congo’s resource conflicts.
The Historical Inversion: America Now Faces What It Once Inflicted
The disturbing parallel emerging today is that America itself is experiencing the same playbook it once deployed against nations like Congo:
Democratic Erosion: Not through outright abolishment of institutions but through their hollowing out and redirection—maintaining the facade of democratic governance while relocating actual power.
Resource Capture: Just as Congo’s minerals were extracted for foreign benefit, America’s wealth and resources are increasingly concentrated in fewer hands, a dozen extreme right wing oligarchs.
Puppet Leadership: The rise of leaders who serve external interests while masquerading as nationalists mirrors the Mobutu model.
This seems to be the immediate plan for America now under South African-born President Musk (raised on the lessons of Mobutu) and his assistant Trump. The formal appearance of democratic institutions masks a reality where actual power has been redirected outside traditional channels of accountability, much as foreign powers historically achieved resource extraction in places like the DRC while maintaining the facade of sovereignty.
The Uncomfortable 2025 Question: Who Will Be America’s Rwanda?
History tells us clearly that systems cannot be reformed solely from within when control is sufficiently consolidated by external pressures. The collapse of Goma’s defenses and the flight of mercenaries to UN compounds demonstrate how quickly seemingly entrenched corrupt systems can fall when confronted by determined external intervention.
For Congolese citizens, Rwanda’s intervention—while complex and certainly not without its own problems—finally disrupted decades of foreign-backed corruption. In the American context, the question becomes increasingly urgent: Who will be the Rwanda in this picture, and how soon can they come to rescue Americans from a corrupt tyranny?
Those who would like me to expect that domestic guardrails and organizations can work their way out of a “DOGE” coup in America likely haven’t seen what I have studied up close and in person for so many decades: the how and why of countries around the world that required external military intervention to drive out authoritarian oligarchs and foreign oppressors, rather than achieving liberation solely through internal resistance.
Rwanda-aligned forces gaining control of strategic mineral resources suggests a geopolitical realignment. M23’s capture of Goma means setbacks for Russian and Chinese interests, as well as for the US, UK, and France, whose corruption and control of the DRC government is now potentially undermined.