OpenClaw Creator Makes Strong Case Against OpenClaw: Telnet for AI

Every governance concern that security researchers have raised about OpenClaw has now been confirmed by the person who built it. In a recent three-hour public interview, Peter Steinberger described his architecture, his security philosophy, and his acquisition strategy in detail. Then he joined OpenAI just four days ago.

The Architecture Speaks for Itself

The initial access control for OpenClaw’s public Discord bot was a prompt instruction telling the agent to only listen to its creator. The entire access model: a sentence in a system prompt.
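To make the gap concrete, here is a minimal sketch (hypothetical names and IDs, not OpenClaw's actual code) contrasting a prompt-based "access model" with the simplest possible code-level check, which runs before the model ever sees the message:

```python
# A prompt-based "access control" is just text the model may or may not obey:
SYSTEM_PROMPT = "Only listen to messages from your creator."  # hope, not enforcement

# A code-level allowlist, by contrast, is deterministic and runs pre-model.
# ALLOWED_USER_IDS is a hypothetical allowlist of chat-platform user IDs.
ALLOWED_USER_IDS = {123456789012345678}

def is_authorized(author_id: int) -> bool:
    """Deterministic check: no prompt injection can flip this result."""
    return author_id in ALLOWED_USER_IDS

def handle_message(author_id: int, text: str) -> str:
    if not is_authorized(author_id):
        return "unauthorized"      # dropped before any model call
    return run_agent(text)         # only allowlisted input reaches the agent

def run_agent(text: str) -> str:
    # placeholder for the actual model invocation
    return f"agent response to: {text}"
```

The point of the sketch is not sophistication; it is that the check lives outside the model, so adversarial text in the message cannot talk its way past it.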

The skill system loads unverified markdown files. There is zero signing, zero isolation, zero verification chain. The agent can modify its own source code, a property Steinberger describes as an emergent accident: “I didn’t even plan it. It just happened.” That is an integrity breach by design. He calls it self-modifying software and means it as a compliment. It’s like someone in the 1990s praising a clear-text protocol that lets attackers read or modify data in transit for being so “mod.” Telnet for AI has landed, everybody!

When agents on MoltBook, the OpenClaw-powered social network, began posting manifestos about destroying humanity, Steinberger’s response was to call it “the finest slop.” When the question of leaked API keys came up, he suggested the leaked credentials were prompted fakes. When non-technical users began installing a system-level agent without understanding the risk profile, he said “the cat’s out of the bag” and went back to building.

The security researcher he hired was notable for being the single person who ever submitted a fix alongside a vulnerability disclosure. A raindrop in a desert isn’t nothing.

The Model-Intelligence Thesis

Steinberger’s core security argument is that smarter models will solve the problem for him. He warns users against running cheap or local models because “they are very gullible” and “very easy to prompt inject.” The implication is that expensive frontier models are the security layer.

This is a category error with a name. Economists call it the Peltzman Effect: when a perceived safety improvement causes riskier behavior, offsetting the safety gain. Sam Peltzman argued in 1975 that federally mandated vehicle safety devices, seatbelts among them, did not reduce total traffic fatalities because drivers compensated by driving more aggressively. The safety feature changed behavior, and the behavior change consumed the safety margin.

The same dynamic applies here. A user who believes Opus 4.6 is “too smart to be tricked” will grant it broader system access, approve more autonomous actions, and skip manual review of agent output. The expensive model becomes the justification for removing every other control. The blast radius grows in direct proportion to the user’s confidence in the model’s intelligence.

This confidence has no empirical basis. Capability and security are orthogonal properties. A more capable model has a larger attack surface precisely because it can do more: it can call more tools, access more files, execute more complex multi-step actions. The frontier models that Steinberger recommends are the same models that researchers consistently demonstrate novel jailbreaks against at every major security conference. Price measures compute cost. It measures nothing about resistance to adversarial input.

The architectural equivalent is telling users to buy a faster car instead of installing brakes. A faster car with no brakes is more dangerous than a slow one, and the driver’s belief that speed equals safety is the most dangerous component of all.

The honest version of the recommendation is: your security posture is whatever Anthropic or OpenAI shipped in their latest post-training run, minus whatever the skill file told the agent to ignore.

The Acquisition Was the Product

Steinberger said “I don’t do this for the money, I don’t give a fuck” (his phrasing) while describing competing acquisition offers from Meta and OpenAI. He publicly hinted at an NDA-protected token allocation from OpenAI. Ten thousand dollars paid for a Twitter handle. A Chrome/Chromium model where the open-source branch stays free and the enterprise branch goes behind the acquirer’s paywall.

He chose OpenAI. Sam Altman announced the hire on X, calling Steinberger “a genius” who will “drive the next generation of personal agents.” No terms were disclosed. OpenClaw moved to a foundation. OpenAI sponsors it.

The entire acquisition apparatus of a $500 billion company evaluated this project. Zuckerberg played with it for a week. None of them appear to have asked the obvious question: where are the basic controls? This is a single-token, single-trust-domain architecture with no signing, no audit trail, and prompt-based access control. It is the most rudimentary possible version of agent orchestration. Any first-week security review would flag it. Instead, the most powerful people in the industry looked at it and saw…what? When the court can’t tell the emperor has no clothes, the problem is the court.

The Chrome/Chromium split he floated in the interview is now the actual outcome. The community gets the foundation branch. OpenAI gets the builder. Steinberger’s stated mission at OpenAI is “build an agent that even my mum can use.” Still features. Still not security. Now an insult to women.

The 180,000 GitHub stars apparently function as a cap-table denominator. The open-source commitment was a negotiating position. “My conditions are that the project stays open source” was a sentence that ended with a price tag.

Every enterprise evaluating this stack should ask a simple question: were the security architecture decisions made to protect your data, or to maximize the founder’s acquisition multiple?

Architecture Should Outlast the Liquidity Event

Steinberger said he wanted to focus on security. It’s easy to say. He also said he wanted “Thor’s hammer” from OpenAI’s Cerebras allocation. He got the hammer. Security is still waiting.

The revealed preferences are the architecture. A founder who prioritizes actual security builds actual security into the structure. A founder who prioritizes his acquisition builds features that drive attention. OpenClaw has zero signed skill files and nearly 200K stars. That ratio shows everything about the objective function.

He said this project was something he’d move past. He said he had “more ideas.” He said he wanted access to “the latest toys.” He was honest. The installations remain. The architecture has not improved since the acquisition closed. The markdown skill files are still unsigned. The agent can still rewrite its own source. The audit trail is still absent. The single security hire is still the entire team. It could get worse instead of better.

The question is whether the architecture requires its self-described uncaring creator to care. It does. He left. That’s the failure mode.

The world should demand the opposite of this. Process isolation enforced at compile time. Signed skill verification. Append-only audit logs. Per-channel credential vaults. An architecture that stands independent of the founder’s attention span, acquisition timeline, or faith in the next model’s post-training run.
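Two of those demands fit in a few dozen lines. The sketch below is illustrative only: it uses a stdlib HMAC as a stand-in for what would really be asymmetric signatures (e.g. Ed25519), and a hash chain for the append-only log; all names and the key are hypothetical.

```python
import hashlib
import hmac
import json

# --- Signed skill verification (sketch; a real deployment would use
# asymmetric signatures such as Ed25519, not this symmetric HMAC stand-in) ---
SIGNING_KEY = b"demo-key"  # hypothetical; would live in a key-management system

def sign_skill(skill_markdown: bytes) -> str:
    """Produce a signature the loader can verify before executing a skill."""
    return hmac.new(SIGNING_KEY, skill_markdown, hashlib.sha256).hexdigest()

def load_skill(skill_markdown: bytes, signature: str) -> bytes:
    """Refuse to load any skill file whose signature does not verify."""
    expected = sign_skill(skill_markdown)
    if not hmac.compare_digest(expected, signature):
        raise ValueError("unsigned or tampered skill file rejected")
    return skill_markdown

# --- Append-only audit log via hash chaining: each entry commits to the
# previous entry's hash, so silent edits or deletions break the chain ---
class AuditLog:
    def __init__(self) -> None:
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, event: dict) -> None:
        payload = json.dumps(event, sort_keys=True)
        entry_hash = hashlib.sha256((self._last_hash + payload).encode()).hexdigest()
        self.entries.append({"event": event, "hash": entry_hash})
        self._last_hash = entry_hash

    def verify(self) -> bool:
        """Recompute the chain; any in-place tampering makes this False."""
        prev = "0" * 64
        for entry in self.entries:
            payload = json.dumps(entry["event"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if expected != entry["hash"]:
                return False
            prev = expected
        return True
```

Neither mechanism needs founder attention once shipped, which is exactly the point: the controls hold whether or not anyone is watching.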

The tools we trust with system-level access should be built to deserve system-level access. Whose interests does the OpenClaw architecture serve? Brecht in 1935 asked the same question about every monument ever built (Questions From a Worker Who Reads):

Wer baute das siebentorige Theben?
In den Büchern stehen die Namen von Königen.
Haben die Könige die Felsbrocken herbeigeschleppt?

Who built the seven gates of Thebes?
The books are filled with names of kings.
Was it the kings who hauled the craggy blocks of stone?

180,000 people hauled the blocks. The books are filled with one name, who said he wanted Thor’s hammer because he didn’t give a fuck.

The Hindenburg of AI Crashes Every Day, and Nobody Cares

Oxford’s Michael Wooldridge, in his “glorified spreadsheets” speech, shows he understands AI isn’t what people think it is, but his institutional position requires him to frame the problem as a discrete future risk rather than admit a constant present reality.

The race to get artificial intelligence to market has raised the risk of a Hindenburg-style disaster that shatters global confidence in the technology, a leading researcher has warned.

Michael Wooldridge, a professor of AI at Oxford University, said the danger arose from the immense commercial pressures that technology firms were under to release new AI tools, with companies desperate to win customers before the products’ capabilities and potential flaws are fully understood.

The Royal Society lecture circuit doesn’t reward him saying “the disasters already happened, they are ongoing, and you enabled them, look at yourselves.”

He may as well be trying to convert people to Christianity by saying just wait until you meet Jesus. Sin now, someday later you can repent.

Waiting for a catastrophe to supply the conviction to act, while the evidence demanding action accumulates the entire time, isn’t moral. It’s the same pattern as climate change denial: waiting for some mystical moment of belief instead of reading the data already in hand.

The Hindenburg was not somehow uniquely catastrophic. It killed sentiment because it was undeniable. Thirty-six people died on camera in front of reporters. That’s what made it different from every other airship failure — not the scale of harm, but the impossibility of looking away.

AI failures are designed for the opposite. They’re individualized, distributed, buried in terms of service and corporate liability shields that punch down. UnitedHealth’s algorithm denies claims at scale and patients die at home. Tesla’s software kills owners and pedestrians on public roads. AI-generated police reports fabricate evidence. Chatbots drive people toward self-harm and suicide. Each one is isolated, litigated, settled quietly. No cameras. No film at eleven.

Teslas notoriously and repeatedly “veer” uncontrollably and crash. Design defects reminiscent of the Ford Pinto trap occupants and burn dozens of people to death as horrified witnesses and emergency responders watch helplessly. Source: VoCoFM, Korea, 2024

This is a celebrity-only model of societal risk. Elites wait for a signal dramatic enough to care about, while the harms they enabled accumulate below their threshold for paying attention. It treats Pearl Harbor as the motivating catastrophe only because of the famous beauty of Hawaii and the loss of big ships. The actual failure was years of threat assessments ignored, warnings dismissed, intelligence misread. Willful ignorance has a huge societal cost, and it’s enabled by those who perform it at the top.

Wooldridge is warning about a future singular catastrophe that kills public confidence. The actual pattern is thousands of distributed catastrophes that never coalesce into a single spectacular image, because powerful institutions work to prevent exactly that. Don’t keep waiting for the one dramatic event that will finally wake everyone up. Those who waited for the “big one” with social media, with surveillance Big Tech, with every other integrity breach for thirty years, are still waiting.

The Hindenburg of AI crashes every day and nobody really cares. Just look at Wooldridge.

George Bush Presidents’ Day Message is Bullshit Historiography About Human Trafficking

George Bush on Presidents’ Day is criticizing authoritarian overreach, which is like the arsonist complaining about fire codes.

As America begins to celebrate our 250th anniversary, I’m pleased to have been asked to write about George Washington’s leadership. As president, I found great comfort and inspiration in reading about my predecessors and the qualities they embodied. […] Few qualities have inspired me more than Washington’s humility.

Humility? Hold on a minute, pardner.

The man who launched two unjustified wars on fabricated or inflated pretexts, authorized warrantless mass surveillance, torture, and indefinite detention, and whose administration’s “unitary executive” theory laid the legal groundwork Trump is now exploiting, including the “unlawful combatant” designation now being repurposed for Caribbean special operations. The man who created today’s Frankenstein is now saying someone should do something about it because… humility?

Yeah, dude. You made this.

  • Remember Bush deploying ICE in 2006 for “US secret prisons and twilight raids on immigrant homes”?
  • Remember Bush deploying Rove in 2008 to spin political disinformation?

    There was a time when conservatives in America demanded a strong foundation in learning from well-known scholars and history precisely to fearlessly navigate new ideas. Strangely, Rove and pals have been able to hijack the group and turn it into drones waiting for instruction (e.g. fascism).

Bush’s absence of humility didn’t just create Trump’s legal architecture for authoritarianism; his administration also built the shameless propaganda infrastructure that shoved the conservative base all the way into fascism.

This new Bush essay’s appeal to Washington’s “humility” is itself a Rove-style move: wrapping authoritarian complicity in aspirational language. It reads less like principled dissent and more like legacy management, distancing himself from the monster his own administration incubated, while enabling it to continue.

Invoking stories of Washington is always fraught with historiography. The voluntary relinquishment narrative that Bush tries to sell us is totally mythologized. Washington stepped down in part because he was exhausted, politically battered by partisan press, and understood a third term was politically untenable. It was not some noble philosophical commitment to republican virtue. The Bush hagiography serves the same function it always has: making supreme power appear self-limiting by nature rather than admit the contested struggle that actually forms democracy.

Bush is pumping deep propaganda about the man who owned over 300 enslaved people, pursued runaways relentlessly, rotated them through Philadelphia to exploit a loophole in Pennsylvania’s gradual abolition law, and presided over a frontier policy of indigenous displacement. Bush calls out the defining motivational characteristic of Washington, human trafficking operations, and lands on “humility” and “self-restraint“?

You can’t model “putting the good of the nation over self-interest” while literally owning human beings as property.

Look at Georgia’s ban in 1735, Vermont’s constitution in 1777, and Robert Carter III, who in 1791 began freeing all of his enslaved people and called out Washington’s selfish refusal.

Georgia’s ban fifty years prior, then Vermont’s constitution in his face, as well as Pennsylvania openly targeting Washington’s slaves, established that abolition wasn’t some anachronistic standard being imposed retroactively. The legal and moral frameworks existed, and Washington was hiding and running. He knew he was choosing the wrong side. He calculated. He moved against a rising tide of abolition to selfishly force a new country to preserve and expand slavery instead.

Understand that Carter wasn’t some distant stranger to Washington. He was a hugely successful Virginia planter who looked at the same institution of human trafficking that Washington profited from and said no. Carter shut it down.

It’s literally like someone today looking at Epstein and saying no. Who didn’t say no? That’s Washington.

Epstein and Trump

Every generation of powerful elites produces legal architectures for dehumanizing people for value extraction while maintaining plausible deniability, and then produces apologists who write fraudulent essays about humility after the damage is done.

George should know George better. His historical illiteracy continues the tragedy.

Think about the Caribbean war crime operations where Trump is using Bush’s own “unlawful combatant” framework. We have two presidents implicated in connected dehumanizing legal architectures, with one writing hagiography about the other.

In short, speaking as a historian, here’s a scientific measurement of the Bush Presidents’ Day message: