OpenAI CISO Admits They Have Become the Theranos of AI

A CISO announces a dangerous “unsolved security problem” in his product when it ships. We’ve seen this playbook before.

“Theranos’ Elizabeth Holmes Plays the Privileged White Female Card” Source: Bloomberg Law

OpenAI’s Chief Information Security Officer (CISO) Dane Stuckey just launched a PR campaign admitting the company’s new browser exposes user data through “an unsolved security problem” that “adversaries will spend significant time and resources” exploiting.

This is a paper trail for intentional harm.

The Palantir Playbook

Dane Stuckey joined OpenAI in October 2024 from Palantir – the self-licking ISIS cream cone that generates threats to justify selling surveillance tools to anti-democratic agencies (e.g. Nazis in Germany). His announcement of joining the company emphasized his ability to “enable democratic institutions” – Palantir double-speak for selling into militant authoritarian groups.

The timeline:

  • Oct 2024: Stuckey joins OpenAI from Palantir surveillance industry
  • Jan 2025: OpenAI removes “no military use” clause from Terms of Service
  • Throughout 2025: OpenAI signs multiple Pentagon contracts
  • Oct 2025: Ships Atlas with known architectural vulnerabilities while building liability shield

He wasn’t hired to secure OpenAI’s products. He was hired to make insecure products acceptable to government buyers.

The Admission

In his 14-tweet manifesto, Stuckey provides a technical blueprint for the coming exploitation:

Attackers hide malicious instructions in websites, emails, or other sources, to try to trick the agent into behaving in unintended ways… as consequential as an attacker trying to get the agent to fetch and leak private data, such as sensitive information from your email, or credentials.

He knows exactly what will happen. He’s describing attack vectors with precision – like a Titanic captain announcing “we expect to hit an iceberg, climb aboard!” Then:

However, prompt injection remains a frontier, unsolved security problem, and our adversaries will spend significant time and resources to find ways to make ChatGPT agent fall for these attacks.

“Frontier, unsolved security problem.”

OpenAI’s CISO is telling you: we cannot solve this, attackers will exploit it, we’re shipping anyway.

Technical Validation

Simon Willison – one of the world’s leading experts on prompt injection who has documented this vulnerability for three years – immediately dissected Stuckey’s claims. His conclusion:

It’s not done much to influence my overall skepticism of the entire category of browser agents.

Simon builds with LLMs constantly and advocates for their responsible use. When he says the entire category might be fundamentally broken, that’s evidence CISOs must heed.

He identified the core problem: “In application security 99% is a failing grade.” Guardrails that work 99% of the time are worthless when adversaries probe indefinitely for the 1% that succeeds.

We don’t build bridges that only 99% of cars can cross. We don’t make airplanes that only land 99% of the time. Software’s deployment advantages should increase reliability, not excuse lowering it.

Simon tested OpenAI’s flagship mitigation – “watch mode” that supposedly alerts users when agents visit sensitive sites.

It didn’t work.

He tried GitHub and banking sites. The agent continued operating when he switched applications. The primary defensive measure failed immediately upon inspection.

Intentional Harm By Design

Look at what Stuckey actually proposes as liability shield:

Rapid response systems to help us quickly identify attack campaigns as we become aware of them.

Translation: Attacks will succeed. Users will be harmed first. We’ll detect patterns from the damage. That’s going to help us, we’re not concerned about them.

This is the security model for shipping spoiled soup at scale: monitor for sickness, charge for cleanup afterwards, repeat.

“We’ve designed Atlas to give you controls to help protect yourself.”

Translation: When you get pwned, it’s your fault for not correctly assessing “well-scoped actions on very trusted sites.”

As Simon notes:

We’re delegating security decisions to end-users of the software. We’ve demonstrated many times over that this is an unfair burden to place on almost any user.

“Logged out mode” – don’t use the main feature.

The primary mitigation is to not use the product. Classic abuser logic: remove your face if you don’t want us repeatedly punching it. That’s not a security control. That’s an admission the product cannot be secured, is unsafe by design, like a Tesla.

Tesla officially says relying on their warning system for warnings is deadly. Unofficially they upsell it as AI capable of collision avoidance. Source: Tesla

Don’t want to die in a predictable crash into the back of a firetruck with its lights flashing? Don’t get in a Tesla, since it says its warning system can’t be trusted to warn.

Juicy Government Contracts

Why would a CISO deliberately ship a product with known credential theft and data exfiltration vulnerabilities?

Because that’s a feature for certain buyers.

Consider what happens when ChatGPT Atlas agents – with unfixable prompt injection vulnerabilities – deploy across:

  • Pentagon systems (OpenAI has multiple DoD contracts)
  • Intelligence agencies (the board has NSA links)
  • Federal government offices (where “AI efficiency” mandates are coming)
  • Critical infrastructure (where “AI transformation” is being pushed)

Every agent becomes an attack surface. Every credential accessible. Every communication interceptable.

The NSA, sitting on the board of OpenAI must be loving this. Think of the money they will save by pushing backdoors by design through OpenAI browser code.

Who benefits from an “AI assistant” that can be exploited to exfiltrate data but has plausible deniability because “prompt injection is unsolved”?

State actors. Intelligence services. The surveillance industry Stuckey was getting rich from.

Paul Nakasone on the board goes beyond decoration and becomes their new business model.

The Computer Virus Confession

Stuckey closes with this analogy:

As with computer viruses in the early 2000s, we think it’s important for everyone to understand responsible usage, including thinking about prompt injection attacks, so we can all learn to benefit from this technology safely.

He’s describing the business model: Ship broken, ship often, let damage accumulate, turn security into rent-seeking over years.

Rent seeking. Look it up.

Source: Reddit

Remember Windows NT? Massive security holes from first release, on purpose. “Practice safe computing.” Cracked in 15 seconds on a network. Viruses everywhere. Years of damage before hardening.

But here’s the difference: Microsoft eventually had to patch vulnerabilities under regulatory pressure from governments, as well as stop making a monopolistic browser. How LLMs process instructions isn’t yet regulated at all. So this vulnerability is architectural, the NSA is going to drive hard into it, and there’s little to no prevention on the horizon. It’s not a bug to fix – it’s a loophole so big it prevents even acknowledging the risk.

Stuckey knows this. That’s why he calls it “unsolved” and invokes Stanford-sounding revisionist rhetoric about the “frontier.”

Typical American frontier town signs banned guns and promoted healthy drinks. OpenAI, for being so unsafe and unhealthy, likely would have been banned in the 1800s frontier.

Documented Mens Rea

Let me be explicit what I see in the OpenAI strategy:

  1. Knowingly shipping a system that will be exploited to steal credentials and exfiltrate private data
  2. Documenting this in advance to establish legal cover
  3. Marketing to government and enterprise customers with a Palantir veteran providing security rubber stamp
  4. Responding to exploitation reactively, after damage occurs, while collecting revenue
  5. Treating infinite user harm as acceptable externality

This isn’t a CISO making a difficult call under pressure. This is a surveillance industry plant deliberately enabling vulnerable systems for undisclosed predictable harms.

That’s documented mens rea.

Theranos Comparison They Fear

Elizabeth Holmes got 11 years for shipping blood tests that gave wrong results, endangering patients.

Dane Stuckey is shipping a browser that his own documentation says will be exploited to steal credentials and leak private data – at internet scale, including government systems.

The difference? Holmes ran out of money. Stuckey has a $157 billion company backing him. And unlike Holmes and Elon Musk who claimed their technology worked, Stuckey admits it doesn’t work and ships anyway.

That’s not fraud through deception. That’s disclosure.

“Buyer beware. I warned it would hurt you. You used it anyway. Now where’s my bonus?”

It’s a police chief announcing a sundown town: Our officers will commit brutality. There will be friendly fire. This is an unsolved problem in policing. We apologize in advance for all the dead people and deny the pattern of it only affecting certain “poor” people. Sucks to be them.

The Coming Harm

Stuckey’s own words describe what’s coming:

  • Stolen credentials across millions of users
  • Exfiltrated emails and private communications
  • Compromised government systems
  • Supply chain attacks through developer accounts
  • State-sponsored exploitation (he admits “adversaries will spend significant time”)

OpenAI will respond reactively. Publish more specific attack patterns after exploitation and deploy temporary fixes. Issue updates to “safety measures.” Settle lawsuits with NDAs to undermine justice.

But the fundamental problem – that LLMs cannot distinguish between trusted instructions and adversarial inputs – remains unregulated and insufficiently challenged.

The Myanmar Precedent

I’ve documented before how CISOs can be held liable for atrocity crimes when they enable weaponized social media.

Facebook’s CSO during the Rohingya genocide similarly:

  • Was warned repeatedly about platform misuse enabling violence
  • Responded with “hit back” PR campaigns claiming critics didn’t understand how hard the problems were
  • Argued that regulation would lead to “Ministry of Truth” totalitarianism
  • Enabled nearly 800,000 people to flee for their lives while saying consequences matter

New legal research on Facebook established that Big Tech’s role in facilitating atrocity crimes should be conceptualized “through the lens of complicity, drawing inspiration… by evaluating the use of social media as a weapon.”

Stuckey is wading into the same dangers and with government systems.

Professional Capture

This isn’t about one vulnerable product. It’s about what it represents: the security industry doesn’t have any prevention let alone detection standards for a captured CISO.

The first CSO role was invented by Citibank after a breach as a PR move, but there was hope it could grow into genuine protection. Instead, we’re seeing high-dollar corruption – an extension of the marketing department. CISOs are paid more than ever for leaning into liability management layers that simply document concerns regardless of harms. When I was hacking critical infrastructure in the 1990s, I learned about a VP role the power companies called their “designated felon” who was paid handsomely to go to jail when regulators showed up.

Stuckey could have refused. He could have resigned. He could have blown the whistle.

Instead, he joined OpenAI from Palantir to enable this, shipped it with a useless warning label, and is planning to collect equity in mass harms.

That’s not a security professional making a hard call. That’s a paper trail of safety anti-patterns (reminiscent of the well-heeled CISO of Facebook enabling genocide from his $3m mansion in the hills of Silicon Valley).

When Section 83.19(1) of the Canadian Criminal Code says knowingly facilitating harmful activity anywhere in the world, even indirectly, is itself a crime – and when legal scholars argue we should conceptualize weaponized technology “through the lens of complicity” – Stuckey’s October 22, 2025 thread becomes evidence… of documented intent to profit from failure regardless of harms.

And what does a BBC reporter think about all this?

OpenAI says it will make using the internet easier and more efficient. A step closer to “a true super-assistant”. […] “Messages limit reached,” read one note. “No available models support the tools in use,” said another. And then: “You’ve hit the free plan limit for GPT-5.”

Rent seeking, I told you.

School AI System SWATs Kid for Eating Doritos: Handcuffed With Guns in His Face for What?

Police descended on a school, guns drawn, and handcuffed a kid for eating a bag of chips, because AI.

Armed police handcuffed and searched a student at a high school in Baltimore County, Maryland, this week after an AI-driven security system flagged the teen’s empty bag of chips as a possible firearm.

Baltimore County officials are now calling for a review of how Kenwood High School uses the AI gun detection system and why the teen ended up in handcuffs despite school safety officials quickly determining there was no weapon.

This is a foreshadowing of Elon Musk’s robot army. Teens will face stiff competition and lethal threats from armies of centrally planned and controlled machines. It’s basically the plot of Red Dawn come to life.

Red Dawn was John Milius’ (Apocalypse Now screenwriter) comic book vision of how teenagers could stop huge waves of mechanized conventional forces attacking America.

Elon Musk Admits to Building Fascist Robot Army

He said it out loud.

If we build this robot army, do I have at least a strong influence over that robot army?” Musk said on the call. “I don’t feel comfortable building that robot army if I don’t have at least a strong influence.”

And what does he say his army is for?

…you can actually create a world where there is no poverty…

Musk is deploying the classic utopian framing that’s preceded every authoritarian project: “eliminate poverty” through technological dominance and centralized control.

I’ve written extensively about how these narratives work – from Hitler’s Lebensraum promise of “living space” to apartheid theology’s “separate development” to the ACTS 17 preacher Peter Thiel’s “optimal governance.”

The promise is always paradise; the mechanism is always control.

The “no poverty” promise always comes with an implicit answer to “for whom?”

Historically, these projects define poverty as a problem of the wrong people existing in the wrong places – solved through displacement, containment, or elimination rather than redistribution of resources or power.

This Nazi phrase of human extraction was posted to “labor camps” to end poverty, where prisoners were worked to death to the tune of “Arbeit macht frei, durch Krematorium Nummer drei.”

Tesla can’t even make steering systems that reliably keep vehicles in their lanes. Their “solution” to societal problems likely will be even more dangerous than their “vision” failing to respect double yellow lines.

With an “army” of millions of autonomous machines under Elon Musk’s individual control, failure modes will become systematized violence.

Swasticars: Remote-controlled explosive devices stockpiled by Musk for deployment into major cities around the world.

Musk is not talking about oversight, regulation, or democratic accountability. He wants personal control of an army as a precondition. This maps directly onto the history of territorial sovereignty projects such as apartheid — his demand is for extreme governance exemption with concentrated control (e.g. Nazism).

Hitler promised to solve poverty too, but he just redefined who counted as people, then built an enforcement apparatus to murder those redefined as “the poor“.

No one shall be hungry, no one shall freeze. […] Within 4 years the German farmer must be freed from his misery. Within 4 years unemployment must be finally overcome.

That’s what Musk’s “robot army” + “no poverty” means in practice. It’s another Stanford killing machine, like the 1800s in America that Hitler studied.

The 1800s American West wasn’t just the homework for Nazi Lebensraum architects – it was their template. “Manifest Destiny” was utopian framing for Indigenous elimination. “Civilizing the frontier” meant systematic displacement and extermination. The “problem” of poverty was solved by redefining who counted as human, then deploying enforcement mechanisms (cavalry, settler militias, reservation systems) against those excluded from the category.

Stanford University sits on stolen Ohlone land, built with fraud and railroad money extracted through Chinese labor that was then excluded from the prosperity it created. The “Stanford” in “Stanford killing machine” isn’t metaphorical, it’s the institutional genealogy of genocide that Musk is invoking today.

Stanford’s racist platform became increasingly violent over just 5 years.

We must remember Churchill was dismissed as alarmist, warmongering, and unreasonable for warning about men like Elon Musk and Peter Thiel throughout the 1930s. The British establishment – including his own party – marginalized him precisely because he was willing to say what the threat actually was while others counseled moderation, diplomacy, and “not inflaming tensions.”

Churchill sips his “tea”

Churchill would say this is a centrally planned and controlled distributed weapons system with humanitarian marketing.

And Musk has admitted out loud:

  • Operating under single-person command authority
  • Demanding exemption from democratic oversight
  • Failure modes causing death
  • Intending scale in civilian population centers
  • Integrating with surveillance and targeting networks

That is by definition another Stanford-born genocidal killing machine, regardless of its nominal purpose.

German Police Shoot at German Army in Botched Training

DW reports that the German police shot a German army soldier, after a civilian reported masked men with guns. The army thought responding police were part of a “Verteidigungsfall” (Vfall) exercise, and started the firefight by shooting blanks.

According to German daily Bild, the military police fired practice ammunition at the arriving police officers, believing this was part of the military exercise.

The police officers then reportedly fired back with live ammunition, hitting one of the soldiers.

“Due to a misinterpretation at the scene, shots were fired,” the Bavarian police said in a statement.

“It later transpired that the person carrying a weapon was a member of the German armed forces, who was on site as part of an exercise,” the statement added.

Local reports explain how civilians were alarmed by the defense drill.

Around 6 p.m., residents along Hohenlindener Strasse reported masked individuals carrying weapons near barns and industrial buildings. They had no idea the Feldjäger — the Bundeswehr’s military police — were conducting a planned defence drill called “Marshal Power 2025.”

State police units were immediately dispatched by the local control centre. When they arrived at the scene, they reportedly believed they were confronting a real armed situation. At the same time, the Bundeswehr soldiers thought the newly arrived officers were participants in the same scenario. Within seconds, confusion escalated: the soldiers fired training ammunition, while the police responded with live rounds.

One of the main German government concerns about Vfall, from the beginning, was public protest.

Auch gibt es Befürchtungen einer öffentlichen Protestwelle…