
Eric Schmidt Booed For Commencement Speech

People are focused on the AI aspect of Eric Schmidt’s commencement speech, because it is what got him repeatedly booed.

While other speakers received cheers and applause, Schmidt’s speech about the impact of modern technology on society struck a nerve.

“We thought that we were adding stones to a cathedral of knowledge that humanity had been constructing for centuries, but the world we built turned out to be more complicated than we anticipated,” Schmidt said, referring to his own contributions to modernization. “The same tools that connect us also isolate us. The same platforms that gave everyone a voice — like you’re using now — degraded the public square.”

Schmidt added, “In the years after I graduated, no one sat down and resolved to build technology that would polarize democracies and unsettle a generation of young people. That was not the plan, but it happened.”

Students’ boos grew louder when he mentioned AI.

There’s something I want to draw your attention to that isn’t his mention of AI. Look at this line:

…no one sat down and resolved to build technology that would polarize democracies…

I call bullshit.

First of all, in 2012 I gave a presentation about exactly this risk of “Big Data”. I showed charts of rapid mobile technology adoption in different countries and described the threat to governments.

Second, both Russian and U.S. military analysts at the time were known to be working on “seed set” analysis of how to cause polarization in large populations using social media.

Third, come on Eric, do you think nobody remembers Google’s history? Maybe I’m rare, but I’m not the only one. You said no one sat down and resolved to build technology that would polarize democracies. That is a bald-faced lie.

Google built a global system for ranking, recommending, sorting, and advertising to several billion people. Leadership knew all along that the system shaped what users saw and what they believed. They knew it was changing how elections worked, how news spread, how teenagers felt about their own bodies. Google was warned by its own engineers, by outside researchers, and by foreign governments.

They kept going because the system made them rich and powerful. They felt so powerful that by early 2009, when they called me in to help them prevent the deprecation of SSLv3 (I instead engineered for them a smoother upgrade path to TLS), they told me they were bigger than, and becoming more relevant than, any nation in the world.

When the system then came under attack from a foreign state, they immediately changed their tune and ran to the US government for protection. The Washington Post reported on February 4, 2010 that Google had contacted the NSA immediately after the attack; the Wall Street Journal reported the NSA’s general counsel drafted a cooperative research and development agreement within 24 hours of Google’s public disclosure. EPIC filed a FOIA request the same day as the Post story. NSA issued a Glomar response under Exemption 3 and Section 6 of the NSA Act, and the D.C. Circuit affirmed it. Here we are sixteen years later and the records remain sealed.

When the US government later wanted help with AI weapons and AI national-security policy, it was Schmidt who personally chaired the commissions that delivered it. He invested in AI startups while authoring the commission recommendations that Congress wrote into federal law.

Am I surprised by the anti-democratic shenanigans of Googlers? No. I studied how American merchants treated naval protection as a tax on innovation until Algerian corsairs captured the Maria and the Dauphin in 1785 and seized eleven more American ships in 1793, after which the same shipowners petitioned Congress to fund the navy that became the institutional core of US power projection. No, I’m not surprised, I’m disappointed that Schmidt and his commencement speech hosts don’t think anyone remembers.

The polarization of democracy was a result of the intentional choices Google’s leaders made and kept making for twenty years, and Schmidt was THE GUY in the room for every one of them. That’s what his stage presence represents.

When he says nobody sat down and resolved to break democracy, he is challenging us to Google who made those actual decisions. And…

He was the chairman. It was him.

You want receipts? In October 2010, Schmidt described running Google so hot that it would get “right up to the creepy line and not cross it”. Let me explain. Democratic deliberation runs on individuals deciding what to do. The head of Google was describing how they had been building the intentional opposite and trying to get away with it. The system was built to know where users were, where they had been, and roughly what they were thinking about, with computers becoming assistants that wandered with people and tracked what they were doing.

If that wasn’t anti-democratic enough, the Silicon Valley übermensch posture went on the record in 2013. Larry Page complained at Google I/O that regulators impeded them from doing things “illegal or not allowed by regulation” and suggested “a part of the world” be set aside “to allow experimentation”. Schmidt did his part by publishing a digi-realpolitik book arguing that Big Tech could rise to peer status with states, inviting corporations into co-sovereign status to replace democracy (demoting citizens to mere “user” status, without representation).

The 2026 disavowal has to contend with the 2010-to-present design program in which Schmidt personally declared Google’s policy was to test the limits of rights removal, co-authored the manual for a sovereignty system replacing democracy, chaired the federal commission that wrote AI into national security law, invested in the companies the commission’s recommendations would enrich, and founded a successor body to extend that toxic agenda after the commission expired.

“No one planned this” requires forgetting that he landed a New York Times bestseller in which he and a former State Department official planned it.

The Arizona stadium saw a man who spent two decades arguing in print and in policy that the citizen-state relationship should be replaced. His ask that he not be held accountable for it all, while he profited so directly from it, is disgusting and disrespectful to his audience.

U.S. State Dept Declares Privacy a National Security Threat

A State Department cable has expanded a headline that should be from The Onion: social media vetting now covers roughly twenty visa categories, cementing a project that began in June 2025. It actually, unapologetically, converts privacy itself into mens rea evidence. The cable is where privacy got weaponized, while the public release has been providing sanitized cover.

Under new guidance, we will conduct a comprehensive and thorough vetting, including online presence, of all student and exchange visitor applicants in the F, M, and J nonimmigrant classifications.

To facilitate this vetting, all applicants for F, M, and J nonimmigrant visas will be instructed to adjust the privacy settings on all of their social media profiles to “public.”

Privacy, when you read the cable, is being framed as a threat to national security. Not the withholding of details from the agent or the government. No. Any privacy at all in social media is the threat. Threat to America. Settings have to be changed generically to “public” in order to apply for a visa. That is actually two moves being mixed together.

First, open disclosure becomes the default state for applicants, while any privacy requires justification. Think about what kind of applicant that really selects for. The quiet applicant who never spent time on posts carries the same suspicion as one scrubbing accounts to hide something. Both look identically suspicious to the officer.

Second, the cable constructs privacy as intent, reading it as an “effort to evade”. Evade what? The surveillance regime that generated the suspicious framing in the first place? Adequate disclosure or suspicion become the only available binary readings. A neutral position is eliminated, forcing an “openly for or openly against, pick one” choice under Trump.

Cold War loyalty boards used the same structure. Refusal to enumerate associations counted as evidence of disloyal associations.

Under Truman’s EO 9835 (1947) loyalty review boards and Eisenhower’s EO 10450 (1953), invoking the Fifth or declining to enumerate associations was treated as substantive evidence of disloyalty. HUAC operated on the same principle, and the Hollywood blacklist ran on procedural silence as proof of guilt. Across loyalty boards and congressional venues, through different procedural routes, refusal to cooperate counted as evidence of disloyal associations.

The State’s disclosure ritual is purely a Trump loyalty test, because it is entirely decoupled from any content the disclosure actually contains.

One after-effect is the performance pressure this loyalty test creates. Applicants have to curate whatever they will disclose. The policy manufactures a global population of foreign nationals constructing sanitized public personas calibrated to anticipated consular tastes. That curation is the State generating information distortion at scale, separate from whatever screening value the review might produce. The system incoherently trains its own inputs, making it less effective than ever at discovery.

So it pushes away good candidates and becomes less effective at finding bad ones. Very on brand for Trump.

The Era of Agent Swarm Control Infrastructure

Ontario’s AI Audit is What’s Coming for CISOs

Ontario’s auditor general published findings on May 12 that every CISO probably already knew in private. Those findings are why I released Wirken as free and open-source back in late February.

The Ontario case tells us twelve thousand public servants visited four hundred AI sites in four months. Two hundred and forty-four were classified unsafe. Six percent of usage went through the approved tool. The training course was marked complete on just three percent of laptops.

The press turns this into a “shadow AI” story because it’s a simple construction. But there’s far more going on here. It’s like calling virtual machines “shadow computers”; the label misses what is actually happening: a new cycle of internal workload reassignment and delegation. The shadow is the point, not a threat.

Ontario, like most of the Microsoft environments I’ve had to parachute into lately, already had Purview and Defender. It had an AI directive and a training course on top, with nothing to back them up. The shadow control gap is a symptom. The real problem is a lack of internal visibility into how people are getting their work done now. Ontario tells the easy version of the story: one agent per user, one conversation per session. The hard version is coming, and shadow AI controls will keep operating at the wrong level.

What Security History Teaches Us

VMware was a curiosity in 1999 when I ran it on an 8-CPU IBM server with Red Hat to push eight workspaces over X. Within five years it became a security crisis as organizations planned to run tens of thousands of virtual machines across data centers. Why? The hypervisor created a black-box effect within environments that had spent years building visibility and control on physical networks. Existing tools still watched hardware while the workload shifted away into software. Visibility broke because workloads moved into a hidden layer and scaled.

In AI terms it’s already clear that existing DLP isn’t set up to catch the traffic flows moving into a hidden layer. Organizations investing heavily in Purview, Symantec, Forcepoint, or Netskope are blind to agents. They may run Envoy and Istio for service mesh, and Zscaler and Fastly for egress. What they still don’t have is AI-level visibility into agent flows: a switchboard for every agent connection to pass through. AI needs to be made visible, and more importantly its users deserve an agnostic control plane.
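To make the switchboard idea concrete, here is a minimal sketch in Python, not any vendor’s product and not Wirken’s actual API: one broker that refuses unknown providers and records every agent call before forwarding it. The names route_call, PROVIDERS, and forward are illustrative assumptions.

    import json
    import time
    import uuid

    # Illustrative registry: only brokered, pre-approved endpoints exist here.
    PROVIDERS = {
        "chat": "https://api.openai.com/v1/chat/completions",
    }

    AUDIT_LOG = []  # a real deployment would use an append-only store


    def forward(url, payload):
        # Transport detail deliberately omitted in this sketch.
        return {"status": "stub", "url": url}


    def route_call(agent_id, swarm_id, provider, payload):
        """Broker one agent call: refuse unknown providers, log before forwarding."""
        if provider not in PROVIDERS:
            raise PermissionError("provider %r is not approved" % provider)
        AUDIT_LOG.append({
            "id": str(uuid.uuid4()),
            "ts": time.time(),
            "agent": agent_id,   # which agent asked
            "swarm": swarm_id,   # in which swarm
            "provider": provider,
            "bytes": len(json.dumps(payload)),
        })                       # recorded before execution
        return forward(PROVIDERS[provider], payload)

The design point is that the log entry exists whether or not the forwarded call succeeds, so visibility does not depend on the provider cooperating.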

Instrumenting the agentic layer with enterprise-grade controls, like detection and logging, is the obvious next evolution. In the hypervisor era it was a matter of exposing what was running, what was talking to what, and who had touched it. Visibility came back even better than before, because the hypervisor saw more at lower cost than the physical layer could. The layer that removed observability evolved into the layer that enhanced it.

The agentic flows are familiar when you look at the cycles of IT. One user adopting a few agents is a curiosity, like each user carrying a mobile device or three (phone, laptop, tablet). An agentic swarm, however, becomes an entirely different attack surface because it’s barely constrained by physical controls. Enterprises that were talking with me last quarter about piloting single-agent workflows are already watching multi-agent swarms grow. The workflows are evolving to spawn a planner that spawns five workers, some of it pushed by vendors who measure success in tokens consumed. Each worker calls tools, queries models, holds credentials, talks to other agents, and then repeats and scales larger. This is the real danger to those making huge DLP investments yet seeing none of their agentic layer: Purview, Forcepoint, Netskope, and Zscaler see traffic to known SaaS endpoints and miss the long tail of where we are headed. Ontario’s 244 unsafe sites would have been far better handled with a switchboard.

Swarm Visibility for Existing Infrastructure

A single agent has one identity, one credential set, one log. The AI swarm has dozens of agents per workflow, each with its own identity, its own borrowed credentials, its own tool calls, its own decisions. The attribution question stops being “which user did this.” It becomes “which ephemeral agents in which swarm ran under which parent acting on whose behalf, with what authority, and leaving what evidence.”
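One way to picture the record that answers those questions is a typed event per agent action. This is a hedged sketch with illustrative field names, not any product’s schema:

    from dataclasses import dataclass
    from typing import Optional


    @dataclass(frozen=True)
    class AgentEvent:
        """One attribution record per agent action (illustrative fields)."""
        event_id: str
        swarm_id: str              # which swarm
        agent_id: str              # which ephemeral agent
        parent_id: Optional[str]   # which parent spawned it (None for the root)
        principal: str             # on whose behalf: human or service account
        authority: str             # what authorized it: policy or approval reference
        action: str                # what it did: spawn, tool call, model call
        target: str                # what it touched: endpoint, file, credential name

Every clause of the attribution question maps to a field; nothing in the existing per-user stack carries the parent_id and authority columns.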

Swarm elements to track, and the gaps in existing infrastructure:

  - Identity multiplies: EDR tracks processes, not delegated agent identities.
  - Permission inherits: DLP rules are per-user, not per-spawn-chain.
  - Cascading peer failures: network tools see the LLM call, not the agent-to-agent calls.
  - Audit threads: no single log captures the swarm, only fragments per service.
  - Tool access: CASB tracks SaaS access, not which tool a child agent invoked through a parent.

Every agent, no matter how short-lived, becomes a new endpoint. The swarm becomes the new fleet, also short-lived. Without instrumentation at the layer where it all runs, the CISO has no answers to the questions regulators will start demanding: who acted, with what authority, on whose behalf, and what did they touch.

Swarm Control

After decades of working within PDCA and OODA, it seemed we needed a way to handle the “autonomy” of agents. The MOCA loop, documented at length in Gebrüder Ottenheimer Brief №7, illustrates how verification has to live outside the system being verified to prevent integrity breaches.

The MOCA loop: architecture-enforced verification from outside the actor’s chain. Detailed in Gebrüder Ottenheimer Brief №7.

Every agent runs through the switchboard, just as traffic has run through switches since the beginning of networking. Every spawn is a recorded event. The parent declares the child’s capability ceiling, tool allowlist, maximum permission tier, maximum rounds, and maximum runtime, and the runtime enforces it. The LLM cannot widen a child’s permissions. The harness intersects, clamps, and refuses. Each child runs headless with its own isolated session log. The depth of the spawn tree is capped. The full graph of who-spawned-whom is reconstructable from the logs alone.
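The intersect-clamp-refuse rule is small enough to state in code. A minimal sketch, assuming a Ceiling type with the fields named above; this is the shape of the rule, not Wirken’s implementation:

    from dataclasses import dataclass


    @dataclass(frozen=True)
    class Ceiling:
        tools: frozenset       # tool allowlist
        permission_tier: int   # lower is less privileged
        max_rounds: int
        max_runtime_s: int
        max_depth: int         # remaining spawn-tree depth


    def clamp(parent, requested):
        """Child capability is the intersection of what the parent holds and
        what was requested. A request can only narrow, never widen."""
        if parent.max_depth <= 0:
            raise PermissionError("spawn refused: depth cap reached")
        return Ceiling(
            tools=parent.tools & requested.tools,  # intersect allowlists
            permission_tier=min(parent.permission_tier, requested.permission_tier),
            max_rounds=min(parent.max_rounds, requested.max_rounds),
            max_runtime_s=min(parent.max_runtime_s, requested.max_runtime_s),
            max_depth=parent.max_depth - 1,        # every spawn burns one level
        )

A child that asks for a tool outside the parent’s allowlist simply never receives it; no model output can add capability back.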

Every call inside the swarm is captured: agent-to-model, agent-to-tool, agent-to-credential, agent-to-agent. Each call is recorded before execution as a typed event in a hash-chained log, signed with an Ed25519 key after every turn. An offline verifier replays the chain and breaks on tampering. The credential vault runs in a separate process, so the agents never see plaintext tokens. Prompt injection is scanned for at every inbound boundary, including between agents, then logged and forwarded to the SIEM alongside phishing.
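A minimal sketch of the hash-chain-and-sign pattern, using the pyca/cryptography package; the event layout here is an assumption for illustration, not Wirken’s log format:

    import hashlib
    import json

    # pip install cryptography
    # usage: key = Ed25519PrivateKey.generate(); pub = key.public_key()
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


    def append_event(log, key, event):
        """Chain each event to the previous hash, then sign the new head."""
        prev = log[-1]["hash"] if log else "0" * 64
        body = json.dumps({"prev": prev, "event": event}, sort_keys=True).encode()
        digest = hashlib.sha256(body).hexdigest()
        log.append({
            "prev": prev,
            "event": event,
            "hash": digest,
            "sig": key.sign(digest.encode()).hex(),  # signed after every turn
        })


    def verify_chain(log, pub):
        """Offline replay: recompute every hash, check every signature.
        Raises on the first tampered entry."""
        prev = "0" * 64
        for i, entry in enumerate(log):
            body = json.dumps({"prev": prev, "event": entry["event"]},
                              sort_keys=True).encode()
            if entry["prev"] != prev or hashlib.sha256(body).hexdigest() != entry["hash"]:
                raise ValueError("hash chain broken at entry %d" % i)
            pub.verify(bytes.fromhex(entry["sig"]), entry["hash"].encode())
            prev = entry["hash"]

Editing any earlier event changes its hash and breaks every later prev link, and forging a replacement chain requires the signing key, which the agents never hold.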

This is the visibility that existing infrastructure tools cannot yet reach. Network controls such as DNS logs, IDS, and DLP know a packet went to api.openai.com. An agent switchboard knows which agents in which swarms asked, what parents authorized, what tools were called, what was sent back, and where in the spawn tree the call originated. That is better visibility than before agents existed, because the swarm layer sees relationships the network layer never had.
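The spawn tree itself falls out of the same event stream. A sketch assuming the illustrative AgentEvent records above, where a spawn event’s target names the child:

    from collections import defaultdict


    def spawn_tree(events):
        """Rebuild who-spawned-whom from spawn events alone."""
        children = defaultdict(list)
        for e in events:
            if e.action == "spawn":
                children[e.agent_id].append(e.target)  # parent -> child
        return dict(children)


    def lineage(events, agent_id):
        """Walk an agent back to the root: where in the tree a call originated."""
        parent = {e.target: e.agent_id for e in events if e.action == "spawn"}
        chain = [agent_id]
        while chain[-1] in parent:
            chain.append(parent[chain[-1]])
        return chain  # [agent, its parent, ..., root]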

The Auditors are Coming

Expect questions like these:

  1. A tamper-evident record of every agent interaction, including agent-to-agent?
  2. What crossed the boundary, between agents and outside the swarm?
  3. Evidence the approved provider is the actual provider, for every agent in the swarm?
  4. A control that survives an agent spawning another agent?
  5. Cryptographic evidence of where each agent’s inference ran?
  6. Proof a compromised agent cannot escalate through its parent’s authority?

None of these are answered yet by the standard suite of infrastructure controls. They are answered by carefully instrumenting the swarm layer, so the existing stack keeps working and integrates with a switchboard. The agent usage described in the Ontario report follows the same adoption cycle IT has seen before. It’s time to update tools and procedures to better control the modern agentic era.

The CISO’s job today is to get control of the AI flooding their environments. They will be blocking the 244 sites and locking down unmanaged browsers. They will be converting consumer LLM accounts to enterprise editions. But we need more. The stick approach of locking everything down crashes users into a wall when they still need access to AI. The carrot has to be deployed too, and Wirken is meant to show why and how it can be done.

Ontario finding, and the agent swarm control that answers it:

  - 244 unsafe AI sites visited by 12,000 staff: IT blocks all AI sites at the network layer using endpoint and network controls. Staff who need AI get it through the managed AI switchboard. Staff who try to reach other sites are detected and blocked.
  - 3% of staff completed AI training: training sits on top of an actual control. IT does not have to maintain allow/block lists to train 55,000 people (impossible). A safe path is deployed as the only path that resolves, allowing dynamic allow/block management.
  - 6% of usage on the approved tool, 94% on unapproved: the approved path wins because it remains open, with tools easy to approve and update. Unapproved tools resolve to nothing. There is no approved-versus-unapproved race.
  - EDP bypassed by switching to Chrome or Firefox: IT manages every browser and prevents endpoint drift. Unmanaged browsers cannot reach any LLM, approved or otherwise. The managed browser is wired to the agent switchboard, and the switchboard brokers the models.
  - DVS facial recognition tested on 214 people: every decision is logged before execution. The data to test against any population sits in the log, available to the operator. IT does not need to trust any vendor’s test report because IT runs its own.
  - 11 of 20 AI scribe vendors approved with no third-party audit: the provider is a deployment choice. IT can offer Ollama on-prem, NVIDIA NIM on-prem and in the datacenter, Privatemode and Tinfoil with hardware attestation, or any major named vendor. The vendor’s audit report is no longer the only evidence.
  - 45% scribe hallucination, 60% wrong drug at procurement: read is allowed; write or act is gated. The model can still hallucinate, but the hallucination does not become an action without an approval the agent does not control (see the commit-gate sketch after this list).
  - Doctors not required to attest to reviewing AI output: the commit gate is held outside the chain that proposed the action. The agent proposes. The human commits, or a pre-authorized policy commits. The agent does not commit its own output.
  - Vendors not required to demonstrate systems live: every action is recorded before execution in a hash-chained log that is signed and replayable offline. The system demonstrates itself every time it runs, to whoever holds the log.
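The commit gate in that list is small enough to sketch. A minimal illustration with hypothetical perform and approve helpers; the approve callable stands in for whatever human or pre-authorized policy sits outside the proposing agent’s chain:

    def perform(action, approval=None):
        # Tool or transport detail, omitted in this sketch.
        return {"done": action["kind"], "approval": approval}


    def execute(action, approve):
        """Reads pass; writes and acts need an approval the agent cannot mint."""
        if action["kind"] == "read":
            return perform(action)   # read path: open
        token = approve(action)      # gate held outside the agent's chain
        if token is None:
            raise PermissionError("write/act refused without external approval")
        return perform(action, approval=token)

A hallucinated drug order is still proposed by the model, but it only becomes an action when approve returns a token the agent itself cannot produce.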

Get Wirken

The switchboard for the agent era. Open source, MIT licensed, single binary. Every agent call routes through it. Every spawn is recorded. Every child runs under a capability ceiling the parent cannot widen. Runs on Linux, macOS, Windows. Model agnostic (Ollama, Anthropic, OpenAI, Gemini, Bedrock, NVIDIA NIM, Tinfoil, Privatemode).

Wealthy Men Least Likely to Fight Climate Change

The reason why wealthy men do the least work on climate change seems simple.

…Amanda Clayton, a University of California political scientist, found during her research on the topic that “the gender gap grows as a function of country wealth.”

As countries get richer, it is more likely that women will be the ones expressing greater concern about climate change. But not because they are suddenly more concerned.

“It’s actually that men tend to decrease their concern about climate change as countries become wealthier,” Clayton said. “The growing gender gap is actually men’s growing skepticism.”

The wealth creates male disassociation from their environment. The more money they have, the more their upbringing kicks in to distance themselves and care less about others. If they were raised differently, to give as well as to take, they wouldn’t confuse extraction and isolation with success.

To be fair, Clayton argued wealthy men see climate change work as having no benefit to them, but lots of risks (e.g. their jobs and investments in big oil going away). She said they disengage as a power move, an inversion of the fascist “if you don’t like it leave” mantra. Cara Daggett in 2018 warned that this could be understood as the authoritarian desires of petro-masculinity.

Paul Piff is perhaps another useful reference, since he generalized that individuals who perceived themselves as having privilege and power were more likely to moralize greed and self-interest as favorable, less likely to be prosocial, and more likely to cheat and break laws when it suited them. They think they are above the law. He linked that to money, but let’s be honest: when drivers of luxury cars were four times less likely to stop for pedestrians at a crosswalk than drivers of inexpensive cars, that’s not really money. The car may have been a “gift” or even grift.