Category Archives: Security

BBC Wants Us to Laugh at Irish Pubs Attacked by US Agents

A US engineer named Cortland, claiming he loves Ireland, built an AI voice agent, deployed it to cold-call over 3,000 Irish pubs without consent, and trained it to pass as human. The BBC wrote up the attack as a heartwarming story about those Irish.

A Spitfire in WWII configured to deliver beer to thirsty Allied troops

Cortland claimed harm avoidance while running unsolicited automated surveillance on thousands of small businesses. The BBC never asks who consented. Never asks about data protection. Never asks whether Irish and EU telecom regulations cover AI robocalls to commercial premises. Never asks who owns the recorded interactions. The only friction in the entire story is the Donegal bartender, and the piece treats that as comic relief.

The Irish aren’t unbothered. They were never asked. Their good humor after the fact is being laundered into consent.

The premise is a pretext. Price transparency for a product (pint of beer) with negligible variance is not a problem anyone needs solved. Cortland apparently is a former pub owner. He knows this. The “hidden gems” language is marketing copy, not a mission statement.

The method is the actual product. Building a voice agent that deceives thousands of workers into giving up commercial information, then measuring how few of them catch on. That’s a penetration test marketed as a pub guide. The cost of running it only makes sense if the return isn’t cheaper pints but demonstrated capability. He’s selling the voice agent, or selling himself as the guy who built it. The Guinndex is the portfolio piece.

In any other context, 3,000 unsolicited calls from a foreign operator using voice spoofing to extract commercial intelligence from small businesses would be called what it is: a social engineering campaign. Or worse, another “just asking questions” extraction campaign foreshadowing integrity breaches.

Brian Friel’s play “Translations” (1980) shows us how. Set in Donegal, British soldiers arrive in a small Irish-speaking community to ask details about the area. They’re charming. They need basic information. The locals provide it. The result is the erasure of their own language from their own land.

It’s based on the real-world Ordnance Survey of Ireland, 1824 to 1846. The British sent engineers and surveyors across every townland in Ireland. The stated purpose was modernization: better maps, standardized place names, improved administration. The surveyors were friendly. They asked locals to pronounce place names, explain local geography, share knowledge of the land. The Irish cooperated because the questions seemed harmless and the men asking them were personable.

The output was the anglicization of thousands of Irish place names, the tax valuations that followed (Griffith’s Valuation), and the military cartography that made subsequent control of the countryside possible.

Local knowledge, freely given to foreigners, became the infrastructure of Irish dispossession.

The output today is normalization. The BBC frames every failure of detection as comedy at the expense of the Irish. The bartender who offered to buy “Rachel” a pint. The two AIs stuck in a loop saying “Oh, dear.” The interrogation in Donegal played for laughs. Every one of these anecdotes trains the audience to find AI deception of workers endearing rather than alarming. The story’s emotional arc is: isn’t it cute that they couldn’t tell?

The BBC has centuries of practice with this framing. The charming, credulous Irish who can’t spot the trick is a colonial trope with a long shelf life. Updating it for the AI age doesn’t make it new. The structural match across time is exact at every level: foreign military/commercial operator, benign cover story, friendly extraction of local knowledge from cooperative locals, and output that served the extractor’s interests while dispossessing the extracted.

The EU has already legislated this exact AI threat scenario and Cortland’s system appears to be designed to fail the standard before it even takes effect.

The calls were unsolicited, automated, and designed to extract commercial information while concealing their nature. Irish law (SI 336/2011, Regulation 13) and the ePrivacy Directive (Article 13) require prior consent for automated calling systems. Both were written to stop machines from selling things to people. Cortland’s system does something the law didn’t anticipate: it impersonates a person to harvest data from them. That’s arguably worse, and the regulatory framework hasn’t caught up. The EU AI Act, Article 50(1), will require AI systems to disclose themselves to the humans they interact with. It takes effect August 2, 2026.

References:

J.H. Andrews, A Paper Landscape (1975); Seosamh Ó Cadhla, Civilizing Ireland: Ordnance Survey 1824-1842 (2007); G.M. Doherty, The Irish Ordnance Survey: History, Culture and Memory (2006).

Why I Replaced OpenClaw: Wirken for a Secure Agentic World

At least four platforms now compete for the right to run autonomous agents against your messaging channels and business data. One of them has 341,000 GitHub stars. Does that mean anything?

The Star

OpenClaw is the most-starred software project on GitHub. Most. Biggest. And not in a good way. Like the Titanic way. It passed React’s 13-year record in 60 days. On January 26, 2026, the repository gained at least 30,000 stars in a single day. Suspicious? Every star from #10,000 through #40,000 in the GitHub API carries the same date. Independent analysis of the GitHub Archive found multiple single-day jumps above 25,000 stars, a pattern that typically signals, wait for it, scripted starring.

The Star-Belly Sneetches have bellies with stars. The Plain-Belly Sneetches don’t have them on thars.
I know, what are the chances that a guy writing agent automation software automated his agents to generate stars?

The sharper number is the one GitHub doesn’t put on the front page. OpenClaw has 341,000 stars and 1,691 subscribers. A subscriber is someone who chose to watch the repository. They get notifications, they follow development. The star-to-subscriber ratio is 202:1. For comparison: React is 37:1. Linux is 28:1. Kubernetes is 38:1. Deno, the highest comparable outlier, is 77:1. OpenClaw’s ratio is nearly three times the next worst.

341,000 identities clicked a button. 1,691 are paying attention.

What’s Behind the Star

Nvidia knows OpenClaw has a major problem. It released OpenShell to try and contain the flaws, but doesn’t fix the architecture inside it. Anthropic’s Claude Code is a strong agent platform, yet it’s vertically integrated as one provider. The head-to-head that matters now is the one between the sudden and sus market darling and the secure alternative I just built to replace it.

OpenClaw Wirken
Channel isolation NONE. Single process, single token, all channels. Compile-time. Separate OS process per channel. Cross-channel access is a compiler error.
Blast radius NONE. Everything. Infinite. One channel.
Credential security NONE. 220,000+ exposed instances. XChaCha20-Poly1305 vault, OS keychain, per-channel scoped, rotating.
Credential leak prevention NONE. Compiler-enforced. SecretString: no Display, Debug, Serialize, Clone. Zeroed on drop.
Audit trail NONE. Append-only, SHA-256 hash-chained, pre-execution. SIEM forwarding.
Tamper detection NONE. Hash chain breaks on any modification.
Skill signing Optional. 824+ malicious skills (20% of registry). Ed25519 required.
Sandbox NONE. Docker or Wasm. Ephemeral, no-network, fuel-limited.
Inference privacy Provider DPA. Operator control: DPA, TEE, or local.
Dependency Node.js. No runtime dependencies.
OpenClaw skill.md parsing Native. Native.

A note on inference privacy: Tinfoil and Privatemode run open-source LLMs inside hardware TEEs (AMD SEV-SNP, Intel TDX, Nvidia H100/Blackwell CC). Hardware enclaves get broken by sophisticated side-channel. But the choice being offered is enclaves versus a provider who just promises not to look. A side-channel requires targeting and real effort. Reading prompts from a database requires nothing more than a query. TEEs are an option, they make exfiltration expensive instead of free. If you want zero cloud exposure, point Wirken at Ollama and keep everything local.

Stars measure clicks.

The table measures trust, in architecture.

Get Wirken: github.com/gebruder/wirken

Germany Arrests Russian Spies in Drone Assassination Plot

On March 24, German federal prosecutors announced the arrest of two people for spying on behalf of Russian intelligence. The target was a person in Germany who supplies drones and components to Ukraine. One suspect filmed the target’s workplace. The other visited the target’s home and photographed it with a phone.

The Generalbundesanwalt’s own language is worth noting. The surveillance served “the preparation of further intelligence operations against the target person.” In plain language: they were building a kill file.

This is not an isolated case. It is the latest entry in an escalation pattern that has been tightening across Europe for two years.

The Ladder

Apr 2024 Germany Two German-Russian nationals arrested in Bayreuth for photographing military installations and railway tracks, including the US training base at Grafenwöhr. Planning arson and explosions to disrupt arms logistics to Ukraine.
Jul 2024 Germany US and German intelligence foil Russian plot to assassinate Armin Papperger, CEO of Rheinmetall, Europe’s largest ammunition producer.
Jul 2024 UK / Germany Incendiary devices disguised inside vibrating cushions and cosmetics tubes shipped via DHL through Leipzig. One detonates at a DHL hub in Birmingham. Another catches fire in Leipzig before loading onto a cargo flight.
Sep 2025 UK Three arrested for running sabotage and espionage operations for Russia. Follows convictions of a Wagner-directed arson cell and a Bulgarian spy ring that surveilled a US military base.
Oct 2025 Poland / Romania Poland arrests eight for espionage and sabotage, bringing total detentions to 55 over three years. Romania intercepts two Ukrainian citizens placing explosive packages at a delivery company under Russian intelligence coordination.
Nov 2025 France Four detained for spying for Russia and promoting wartime propaganda.
Jan 2026 Germany Ilona W. arrested in Berlin. GRU agent posing as Ukrainian community advocate, sat rows behind Zelenskyy and Merz at political events. Gathered intelligence on drone test sites, arms deliveries, defense industry personnel. Her GRU handler, operating as deputy military attaché, expelled within 72 hours.
Mar 2026 Germany / Spain Ukrainian and Romanian nationals arrested for surveilling a single drone supplier. Structured handoff when first agent relocated. Filming workplace and home address. Prosecutors cite preparation for “further operations.”

The Pattern

Sweden’s defense research agency FOI published a study in January 2026 analyzing 70 individuals convicted of espionage across 20 European countries between 2008 and 2024. The taxonomy it produced reads like a field guide to what German prosecutors keep uncovering: the Observer, the Disposable, the Mobile Spy who exploits Schengen to operate across jurisdictions, the Connected Agent recruited through diaspora networks. The categories overlap. An observer may also be mobile. A disposable may be embedded in a criminal network.

The operational signature is consistent. Russia recruits non-Russian nationals for deniability. It uses Telegram for tasking and cryptocurrency for payment. It treats agents as expendable. When one relocates, another steps in. The March 24 case is textbook: a Ukrainian and a Romanian, a structured handoff, a target in the drone supply chain.

S&P Global’s November 2025 analysis warned that while sabotage incidents appeared to decline in 2025, this likely represented strategic recalibration rather than de-escalation, with increased activity expected in 2026. NATO described the threat level as “record high.”

The Timing

The March 24 arrests came 24 days into the American Operation Epic Fury. Russia is fighting a hybrid war on two fronts simultaneously.

  • In Europe, it continues targeting the logistics chain that supplies Ukraine.
  • In the Middle East, it is providing Iran with intelligence on US military positions, including the locations of American warships and aircraft.

Zelenskyy stated on March 24 that Ukrainian intelligence has “irrefutable evidence” of Russian intelligence sharing with Tehran. The EU’s foreign affairs chief Kaja Kallas said the same thing publicly. CNN and the Washington Post reported it independently, citing US officials.

And yet the US intelligence community’s own 2026 Annual Threat Assessment, released March 18, contains fewer references to Russia than the 2025 edition. References dropped from 152 to 99. The document explicitly warns about both inadvertent and deliberate escalation with NATO, but the analytical attention has thinned.

CEPA analysts framed it clearly: the question is whether Europe can use Washington’s distraction to strengthen its own posture on Ukraine while the Americans aren’t looking. The flip side is that Russia can use the same distraction to intensify operations that European counter-intelligence services are already struggling to contain.

The counter-sabotage response remains largely national. Coordination between governments is limited. Coordination between governments and the private sector is worse. The people being surveilled, the drone suppliers and logistics operators who keep Ukraine in the fight, are mostly on their own.

From Papperger to Any Drone Shop

Two years ago, Russia tried to kill the CEO of Europe’s largest arms manufacturer. Now it is filming the home address of someone who ships drone parts. The target selection has moved from the strategic to the granular.

This is not a reduction in ambition. It is an expansion in scope. The supply chain that delivers weapons to Ukraine is long, distributed, and staffed by people who do not have security details.

Russia has evidently decided that every link in that chain is worth mapping. The Generalbundesanwalt just called it preparation for further operations.

OpenClaw Creator Makes Strong Case Against OpenClaw: Telnet for AI

Every governance concern that security researchers have raised about OpenClaw has now been confirmed by the person who built it. In a recent three-hour public interview, Peter Steinberger described his architecture, his security philosophy, and his acquisition strategy in detail. Then he joined OpenAI.

The Architecture Speaks for Itself

The initial access control for OpenClaw’s public Discord bot was a prompt instruction telling the agent to only listen to its creator. The entire access model: a sentence in a system prompt.

The skill system loads unverified markdown files. There is zero signing, zero isolation, zero verification chain. The agent can modify its own source code, a property Steinberger describes as an emergent accident. “I didn’t even plan it. It just happened.” Integrity breach. He calls it self-modifying software and means it as a compliment. It’s like someone in the 1990s saying a clear-text protocol that allows attackers to modify or steal data is so “mod” it’s a compliment. Telnet for AI has landed, everybody!

When agents on MoltBook, the OpenClaw-powered social network, began posting manifestos about destroying humanity, Steinberger’s response was to call it “the finest slop.” When the question of leaked API keys came up, he suggested the leaked credentials were prompted fakes. When non-technical users began installing a system-level agent without understanding the risk profile, he said “the cat’s out of the bag” and went back to building.

The security researcher he hired was notable for being the single person who ever submitted a fix alongside a vulnerability disclosure. A rain drop in a desert isn’t nothing.

The Model-Intelligence Thesis

Steinberger’s core security argument is that smarter models will solve the problem for him. He warns users against running cheap or local models because “they are very gullible” and “very easy to prompt inject.” The implication is that expensive frontier models are the security layer.

This is a category error with a name. Economists call it the Peltzman Effect: when a perceived safety improvement causes riskier behavior, offsetting the safety gain. Sam Peltzman demonstrated in 1975 that mandatory seatbelt laws did not reduce total traffic fatalities because drivers compensated by driving more aggressively. The safety feature changed behavior, and the behavior change consumed the safety margin.

The same dynamic applies here. A user who believes Opus 4.6 is “too smart to be tricked” will grant it broader system access, approve more autonomous actions, and skip manual review of agent output. The expensive model becomes the justification for removing every other control. The blast radius grows in direct proportion to the user’s confidence in the model’s intelligence.

This confidence has no empirical basis. Capability and security are orthogonal properties. A more capable model has a larger attack surface precisely because it can do more: it can call more tools, access more files, execute more complex multi-step actions. The frontier models that Steinberger recommends are the same models that researchers consistently demonstrate novel jailbreaks against at every major security conference. Price measures compute cost. It measures nothing about resistance to adversarial input.

The architectural equivalent is telling users to buy a faster car instead of installing brakes. A faster car with no brakes is more dangerous than a slow one, and the driver’s belief that speed equals safety is the most dangerous component of all.

The honest version of the recommendation is: your security posture is whatever Anthropic or OpenAI shipped in their latest post-training run, minus whatever the skill file told the agent to ignore.

The Acquisition Was the Product

Steinberger said “I don’t do this for the money, I don’t give a fuck” (his phrasing) while describing competing acquisition offers from Meta and OpenAI. An NDA-protected token allocation from OpenAI he hinted at publicly. Ten thousand dollars paid for a Twitter handle. A Chrome/Chromium model where the open-source branch stays free and the enterprise branch goes behind the acquirer’s paywall.

He chose OpenAI. Sam Altman announced the hire on X, calling Steinberger “a genius” who will “drive the next generation of personal agents.” No terms were disclosed. OpenClaw moved to a foundation. OpenAI sponsors it.

The entire acquisition apparatus of a $500 billion company evaluated this project. Zuckerberg played with it for a week. None of them appear to have asked the obvious question: where are the basic controls? This is a single-token, single-trust-domain architecture with no signing, no audit trail, and prompt-based access control. It is the most rudimentary possible version of agent orchestration. Any first-week security review would flag it. Instead, the most powerful people in the industry looked at it and saw…what? When the court can’t tell the emperor has no clothes, the problem is the court.

The Chrome/Chromium split he floated in the interview is now the actual outcome. The community gets the foundation branch. OpenAI gets the builder. Steinberger’s stated mission at OpenAI is “build an agent that even my mum can use.” Still features. Still not security. Now an insult to women.

The 180,000 GitHub stars apparently are like a cap table denominator. The open-source commitment was a negotiating position. “My conditions are that the project stays open source” was a sentence that ended with a price tag.

Every enterprise evaluating this stack should ask a simple question: were the security architecture decisions made to protect your data, or to maximize the founder’s acquisition multiple?

Architecture Should Outlast the Liquidity Event

Steinberger said he wanted to focus on security. It’s easy to say. He also said he wanted “Thor’s hammer” from OpenAI’s Cerebras allocation. He got the hammer. Security is still waiting.

The revealed preferences are the architecture. A founder who prioritizes actual security builds actual security into the structure. A founder who prioritizes his acquisition builds features that drive attention. OpenClaw has zero signed skill files and nearly 200K stars. That ratio shows everything about the objective function.

He said this project was something he’d move past. He said he had “more ideas.” He said he wanted access to “the latest toys.” He was honest. The installations remain. The architecture has not improved since the acquisition closed. The markdown skill files are still unsigned. The agent can still rewrite its own source. The audit trail is still absent. The single security hire is still the entire team. It could get worse instead of better.

The question is whether the architecture requires its self-described uncaring creator to care. It does. He left. That’s the failure mode.

The world should demand the opposite to this. Process isolation enforced at compile time. Signed skill verification. Append-only audit logs. Per-channel credential vaults. An architecture that stands independent of the founder’s attention span, acquisition timeline, or faith in the next model’s post-training run.

The tools we trust with system-level access should be built to deserve system-level access. Whose interests does the OpenClaw architecture serve? Brecht in 1935 asked the same question about every monument ever built (Questions From a Worker Who Reads):

Wer baute das siebentorige Theben?
In den Büchern stehen die Namen von Königen.
Haben die Könige die Felsbrocken herbeigeschleppt?

Who built the seven gates of Thebes?
The books are filled with names of kings.
Was it the kings who hauled the craggy blocks of stone?

180,000 people hauled the blocks. The books are filled with one name, who said he wanted Thor’s hammer because he didn’t give a fuck.