DeepSeek Jailbreaks and Goldilocks Power as Geopolitical Gaming

Coverage of AI safety testing reveals a calculated blind spot in how we evaluate AI systems – one that prioritizes geopolitical narratives over substantive ethical analysis.

A prime example is the recent reporting on DeepSeek’s R1 model:

DeepSeek’s R1 model has been identified as significantly more vulnerable to jailbreaking than models developed by OpenAI, Google, and Anthropic, according to testing conducted by AI security firms and the Wall Street Journal. Researchers were able to manipulate R1 to produce harmful content, raising concerns about its security measures.

At first glance, this seems like straightforward security research. But dig deeper, and we find a web of contradictions in how we discuss AI safety, particularly when it comes to Chinese versus Western AI companies.

The same article notes that “Unlike many Western AI models, DeepSeek’s R1 is open source, allowing developers to modify its code.”

This is presented as a security concern, yet in other contexts we champion open-source software and the right to modify technology as fundamental digital freedoms. When Western companies lock down their AI models, we often criticize them for concentrating power and limiting user autonomy. Even more to the point, many of the most prominent open-source models actually come from Western organizations: Pythia (EleutherAI), OLMo (AI2), Amber and CrystalCoder (LLM360), T5 (Google), BLOOM (BigScience), StarCoder2 (BigCode), and Falcon (TII), to name a few.
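
To make concrete what “open” means here, consider a minimal sketch of downloading and locally modifying one of the Western open-weight models listed above. This assumes the Hugging Face transformers library and the small EleutherAI Pythia checkpoint; the model ID, layer names, and the fine-tuning choice are illustrative assumptions, not anything specific to DeepSeek’s R1:

    # Minimal sketch of what "open weights" means in practice: anyone can
    # download, inspect, and modify a model like Pythia on their own machine.
    # Assumes the Hugging Face `transformers` library is installed; the model ID
    # and layer names below are illustrative of the GPT-NeoX/Pythia family.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "EleutherAI/pythia-70m"  # small open-weight model from the list above
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Freeze everything except the final transformer block and the output head,
    # the kind of local modification (fine-tuning) that closed APIs do not allow.
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith("gpt_neox.layers.5") or "embed_out" in name

    prompt = "Open-weight models can be"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The point is simply that anyone with the weights can inspect and retrain them, which is the very property the article frames as uniquely concerning in R1.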

Don’t accept an article’s framing of open source as “unlike many Western AI” without thinking deeply about why they would say such a thing. It reveals how even basic facts about model openness and accessibility are mischaracterized to spin a “China bad” narrative.

Consider this quote:

Despite basic safety mechanisms, DeepSeek’s R1 was susceptible to simple jailbreak techniques. In controlled experiments, the model provided plans for a bioweapon attack, crafted phishing emails with malware, and generated a manifesto containing antisemitic content.

The researchers focus on dramatic but relatively rare potential harms while overlooking systemic issues built into AI platforms by design. We’re more concerned about the theoretical possibility of a jailbroken model generating harmful content than we are about documented cases of AI systems causing real harm through their intended functions – from hate speech and chatbot interactions that have influenced suicides to autonomous vehicle accidents.

The term “jailbreak” itself deserves scrutiny. In other contexts, jailbreaking is often seen as a legitimate tool for users to reclaim control over their technology. The right-to-repair movement, for instance, argues that users should have the ability to modify and fix their devices. Why do we suddenly abandon this framework when discussing AI?

DeepSeek was among the 17 Chinese firms that signed an AI safety commitment with a Chinese government ministry in late 2024, pledging to conduct safety testing. In contrast, the US currently has no national AI safety regulations.

The article presents a concerning lack of safety measures, while simultaneously noting formal safety commitments, and then criticizes the model for being too easily modified to ignore those commitments. This head-spinning series of contradictions reveals how geopolitical biases can distort our analysis of AI safety.

We need to move beyond simplistic Goldilocks narratives about AI safety that automatically frame Western choices as inherently good security measures while Chinese choices can only be either too restrictive or too permissive. Instead, we should evaluate AI systems based on:

  1. Documented versus hypothetical harms
  2. Whether safety measures concentrate or distribute power
  3. The balance between user autonomy and preventing harm
  4. The actual impact on human wellbeing, regardless of the system’s origin

The criticism that Chinese AI companies engage in speech suppression is valid and important. However, we undermine this critique when we simultaneously criticize their systems for being too open to modification. This inconsistency suggests our analysis is being driven more by geopolitical assumptions than by rigorous ethical principles.

As AI systems become more prevalent, we need a more nuanced framework for evaluating their safety – one that considers both individual and systemic harms, that acknowledges the legitimacy of user control while preventing documented harms, and that can rise above geopolitical biases to focus on actual impacts on human wellbeing.

The current discourse around AI safety often obscures more than it reveals. By recognizing and moving past these contradictions, we can develop more effective and equitable approaches to ensuring AI systems benefit rather than harm society.

Hegseth’s Gross Violation of Efficiency Orders: Millions Wasted on Frivolous Base Name Change

It would be funny if it weren’t so blatantly stupid. Imagine a childish racist troll put in charge of the U.S. military, one who defies orders, to say nothing of common decency, and you get this:

Secretary of Defense Pete Hegseth signed a memorandum renaming Fort Liberty in North Carolina to Fort Bragg. The new name pays tribute… [to those who will profit from repeatedly changing base names instead of fixing actual base needs].

We’re talking about a base in need of actual fixes, not performative ones, like solutions to widely reported problems (e.g., black mold).

Last year at Fort Bragg, roughly 1,100 soldiers had to be relocated to new living quarters due to spiraling mold issues in the damp climate. Twelve barracks at Fort Bragg were set to be demolished this year, five years ahead of schedule, and an additional five are set to be remodeled.

Garrison officials at Bragg scrambled over the summer to accommodate the displaced soldiers after the barracks were effectively condemned. Roughly half were given a housing allowance, a move typically reserved for higher-ranking or married troops.

Instead, Hegseth, famously thrown out of the military for spending too much time and money on white supremacist tattoos, has now focused the entire nation’s time and money on… an expensive and demoralizing reversal: replacing letters on signs.

The service spent … more than $2 million for Fort Liberty [renaming it from racist Confederate traitor Bragg]

Putting the military into full retreat, making base names anti-American again, is a direct violation of the White House efficiency mandates and should get Hegseth immediately dismissed for gross insubordination.

As some have pointed out, the Liberty name may have been a tactical weakness and a concession, blocking a proper tribute to a soldier: a name chosen intentionally to be weak so that a case could be made later for a different tribute. That gives too much credit, as if there were a long-running plan by white supremacists in the armed forces to one day remove Liberty, literally and metaphorically.

Even so, the change to Liberty was lawful and settled. On top of that, there have been orders from the top for absolute efficiency. This rushed move to turn a long racist plan into an immediate one, wasting millions, must be the most inefficient order imaginable.

And the mold. What about the mold?

Levitsky and Way’s “Foreign Affairs” Dictatorship Analysis: A Critical Response

The recent Foreign Affairs piece on American authoritarianism fundamentally misses how AI will supercharge authoritarian power in unprecedented ways.

While the authors correctly identify the risk of democratic breakdown, their analysis is unfortunately trapped in an outdated framework that fails to grasp two critical accelerants.

First, they underestimate how AI already weaponizes America’s buried atrocities. Unlike human narratives that often gloss over historical trauma, AI can instantaneously surface, connect, and normalize centuries of state violence – from President Jackson’s genocidal Trail of Ears to the Red Summer of 1919 under President Wilson and the Tulsa Massacre of 1921 that followed. AI doesn’t miss the subtext of racist deception in the Missouri Compromise or the brutally racist and illegal conquests of Texas and Florida to expand slavery. It can relentlessly illuminate how “America First” movements have consistently and repeatedly enabled American race-based authoritarianism since the late 1800s.

The authors vaguely suggest institutional guardrails could contain authoritarian power. But they fail to recognize how AI can weave foundational historical threads into devastating narratives that undermine faith in those very institutions. When AI connects the dots between a past of systemic state violence and the present institutional power of non-governmental “efficiency” (totalitarian) mercenaries called DOGE, it becomes much harder to believe in the protective power of courts or federalism.

Second, they dramatically underestimate the velocity of AI-powered narrative control. Their analysis feels like watching someone explain in 1933 how decades of print media will hold the line on public opinion, while completely missing how the Nazi regime flooded the airwaves and rewrote reality in just three months. AI in 2025 is far more powerful than radio in 1933 – it can generate, target, and amplify hateful messages at a scale that makes Hitler’s genocidal machinery look primitive.

The authors worry about gradual institutional capture through bureaucratic maneuvering. But they miss how AI can simply flatten institutional resistance through overwhelming narrative force.

Why bother carefully pressuring judges and abiding by their rulings when AI can flood every platform with unaccountable sock-puppet messages demanding that judges be eliminated if deemed “woke” or opposed to “efficiency”? Already the White House has announced it will be “looking into” any judge who disagrees with “efficiency”. The speed and scale of AI-powered propaganda makes the old ways of careful institutional analysis feel quaint, like marching troops with slow-firing, inaccurate muskets into machine-gun fire. Domain shifts are devastating to analysis that doesn’t account for what has changed.

Therefore the Foreign Affairs assessment is not just wrong, it’s dangerously overconfident in a way that reduces opposition to mass unjust incarceration and death. By suggesting American institutions can weather authoritarian pressure through quaint concepts of traditional resistance, the authors underestimate how AI already fundamentally changes the game.

Quantum threats are basically here and some people still don’t know how to change their passwords.

Does anyone really think executive orders pumped out by the hundreds aren’t being written with software? Does anyone really not understand why a few college-aged kids who barely write software are being called “auditors” of “efficiency” on a highly complex financial system they can’t possibly understand?

They are feeding all, and I mean all, American citizen data into Elon Musk’s private unsafe AI infrastructure and asking it “what would Hitler do, in the voice of Goebbels?”

This won’t be a slow erosion of democracy through bureaucratic weaponization, with waters creeping up on those who don’t have boats. It is already a tsunami warning: AI-powered narrative control that will catastrophically sweep away democratic institutions faster than any previous authoritarian transition.

The authors claim America won’t face “classic dictatorship.” But by failing to grasp how AI supercharges authoritarian power, they miss that we’re facing something potentially worse – a form of technologically enhanced authoritarianism that could exceed anything in history. And this “Technocracy” disaster has been many decades in the making, a Musk family obsession since the 1930s that played out in South African apartheid, not something political scientists today should be unfamiliar with.

Elon Musk’s grandfather making national security news with racist totalitarian “Technocracy”. Source: The Leader-Post, Regina, Saskatchewan, Canada, Tue, Oct 8, 1940, Page 16
Elon Musk has repeatedly promoted fascism on social media, such as polling followers on whether he should bring his grandfather’s racist totalitarian Technocracy back by “colonizing Mars” and ignoring all laws. Source: Twitter

In case Elon Musk’s encoded speech pattern is unclear, planet “Mars” is used (incorrectly) to promote open violation of the law and disobeying law enforcement, like saying America will finally be as good as Mars when the white men who occupy it can’t be regulated: Occupy Mars = Aryan Nation.

We all know the children’s tale about what comes next if we don’t understand the threat. The institutional safeguards appear as straw huts against a coming huff-a-puff wolf. We need to wake up to the true scale and speed of the threat before it’s too late.

The simple reality is this: AI-powered authoritarianism won’t respect and carefully navigate around slow democratic institutions – it will overwhelm them with raw narrative force at unprecedented speed, causing disasters that force surrender and complacency.

To put it another way, Nazi generals recklessly raced at full speed into France to overwhelm their targets while leaving themselves dangerously exposed. The French capitulated and resigned themselves to occupation instead of mounting the rapid counterattack that would have destroyed the Nazis. General Grant understood this in the 1860s, yet the French failed to adequately grasp the domain-shift tactics of radios, planes, and trucks.

The Foreign Affairs authors are analyzing how to defend against 20th-century authoritarianism while missing that an AI invasion force has already landed and is expanding. They’re not just wrong about defenses, they’re complacent and leaving America dangerously unguarded.

DOGE Is Hacking America

I co-authored an expert analysis of the high-profile DOGE attacks on America, published today in Foreign Policy.

The U.S. government has experienced what may be the most consequential security breach in its history.

In the span of just weeks, the U.S. government has experienced what may be the most consequential security breach in its history—not through a sophisticated cyberattack or an act of foreign espionage, but through official orders by a billionaire with a poorly defined government role. And the implications for national security are profound.