The AI Crisis Is the Governance

Three serious AI governance reports landed this month from the Centre for International Governance Innovation. One maps Russia’s generative AI disinformation evolution. One surveys AI’s role in the future of war. One lays out national security scenarios (Stall, Precarious Precipice, Hypercompetition, Hyperpower, Rogue ASI) with careful attention to what happens when a single entity controls superintelligence without adequate checks.

All three still treat AI governance as something to build before crisis hits. It’s like saying the barn really ought to consider installing doors before a horse gets out, without noticing how many horses have already left.

None grapple with the possibility that the crisis is the governance.

The Canada-CIGI scenario workshop described the Hyperpower risk this way: a system where “ultimate control would be by one company’s CEO,” where that company “might start a process of disempowering competitors and preparing for long-term plans” before the public understands what’s happening. Participants flagged this as a future requiring urgent preparation.

That’s a description of March 2026.

Anthropic, Google, and xAI each received $200 million Pentagon contracts for agentic AI last July. The agencies that were supposed to provide oversight — CISA, the State Department’s Global Engagement Center, the AI Safety Institute — have been gutted or captured. The Biden-Xi agreement that humans should control nuclear weapons decisions has no institutional successor. The companies writing safety frameworks are the same companies winning the military contracts.

The scenario planners ask: what if a small faction gains control of the most powerful AI systems and uses that position to shape government policy? The answer isn’t hypothetical. The question is whether anyone with standing to respond recognizes it as the situation they’re already in. Notably, nobody asks the inverse question: what happens when broad control over powerful AI fails to materialize, leaving a small faction as the sole beneficiary?

What the reports miss isn’t technical. It’s political. Governance capture doesn’t announce itself. It performs accountability (safety cards, responsible AI pledges, congressional testimony) while producing none, as the structural consolidation continues underneath. The Hyperpower scenario doesn’t require AGI. It requires the right contracts, the right regulatory vacuums, and enough institutional inertia to mistake motion for oversight.

We’re long past the point of alarm. The question is whether the people writing the scenario plans notice.
