Citrini AI Bear Porn Is a Lesson in Helplessness

A financial research piece called “The 2028 Global Intelligence Crisis” went viral last week. Written as a fictional memo from the future, it describes AI destroying the white-collar economy in two years flat: 38% market crash, 10.2% unemployment, mortgage crisis, Occupy Silicon Valley. Six thousand likes. Fifteen hundred restacks. People are genuinely frightened.

The piece opens with this:

This isn’t bear porn or AI doomer fan-fiction. The sole intent of this piece is modeling a scenario that’s been relatively underexplored.

What a time to be alive and study disinformation.

The Preface Is the Payload

Disinformation research has a name for this: the negation frame. When you say "I'm not saying the president is a criminal," you've just put "president" and "criminal" in the same sentence and activated the association. The disclaimer doesn't neutralize the content. It delivers the content while inoculating the speaker against accountability for having delivered it.

“This isn’t bear porn” is bear porn with a permission slip. “This is a scenario, not a prediction” is a prediction with a liability shield. The authors are financial researchers, not amateurs. They understand that four thousand words of precision-formatted panic — complete with fake Bloomberg headlines, specific ticker symbols, and a fictional 38% drawdown — land in the nervous system long before the reader processes the caveat.

This is the lesson disinformation doctrine learned from War of the Worlds and never forgot.

What War of the Worlds Actually Taught

Martin Seligman found in 1967 that dogs subjected to inescapable shocks eventually stopped trying to escape even when the door was open. He called it learned helplessness, the condition where a subject has been trained to believe that no action they take will change the outcome, so they stop acting. Orson Welles had demonstrated the broadcast version of the same trick much earlier.

On October 30, 1938, Welles broadcast a radio drama about a Martian invasion, formatted as a series of news bulletins. The format was the weapon. Listeners who tuned in after the opening disclaimer heard what sounded like real reporters describing real events.

Intelligence services studied Welles carefully. What they learned: you don’t need to lie. You need to perform authority in a format the audience already trusts, deliver an emotional payload, and attach a disclaimer that provides deniability. The content can be speculative or fictional. The format does the work.

“The 2028 Global Intelligence Crisis” is formatted as a CitriniResearch Macro Memo dated June 30th, 2028. It uses Bloomberg headline formatting with ticker symbols. It cites percentages to two decimal places. It references named companies, named products, named financial instruments. Every convention says: this is real financial analysis. The single line that says otherwise is buried in a preface most readers will barely remember by paragraph four.

The Irresistible Denial

Three negation frames in two sentences:

This isn’t bear porn or AI doomer fan-fiction. The sole intent of this piece is modeling a scenario that’s been relatively underexplored.

Each negation introduces exactly the concept it claims to reject. And “underexplored” positions the authors as brave truth-tellers rather than people producing the most viral AI panic content on Substack.

Then near the end:

We are certain some of these scenarios won’t materialize.

Which parts? They don’t say. Because specifying would break the spell. The vagueness of the hedge preserves the totality of the fear.

The Machine With No Operator

The format trick enables a more dangerous move: erasing human agency from every decision in the scenario.

The piece describes a “negative feedback loop” (a misnomer, incidentally: a self-reinforcing spiral is a positive feedback loop) as though it were a thermodynamic process with no intervention point. But every link in that chain is a decision made by a person with a name and a title:

  • A board votes to cut 15% of headcount rather than retrain, redeploy, or reduce shareholder returns.
  • A procurement manager cancels a vendor contract for an untested internal build.
  • A CEO funnels all cost savings into compute rather than worker transition.
  • A bank continues underwriting against income assumptions it knows are impaired.
  • A regulator declines to update employment protections.
  • A legislator blocks transition support.
  • A lab ships capability without deployment guardrails.

The piece names none of these people. Instead: “The companies most threatened by AI became AI’s most aggressive adopters.” Companies don’t adopt anything. Executives adopt things, boards approve them, shareholders reward them. Each decision has a fiduciary duty attached and a legal framework governing it.

Then the alibi:

What else were they supposed to do? Sit still and die slower?

That converts choices into a hostage situation. It insists these executives had no agency. This is the competent-complicity defense, the same logic deployed after the 2008 mortgage crisis and the Boeing 737 MAX: capable professionals executing decisions they knew would cause harm, pointing to competitive pressure as exoneration. “What else were they supposed to do?” isn’t analysis. It’s an alibi.

Who Benefits from Helplessness

War of the Worlds didn’t just scare people. It made them feel helpless against a force they couldn’t negotiate with, couldn’t vote out, couldn’t hold accountable. The Martians weren’t making decisions. They were an event happening to humanity.

The Citrini piece does the same with AI. The feedback loop has no off switch because no human hand is on any switch. This is the atmosphere specific actors need:

  • Compute owners need inevitability because it makes regulation seem pointless.
  • Lab executives need it because unstoppable forces absolve them of deployment decisions.
  • Deregulation politicians need it because you don’t regulate an earthquake — you build shelters after.
  • AI-sector financial analysts need it because “AI destroys the economy” means “AI is the most important thing in the world,” which is the thesis their publication depends on.

The piece describes protesters blockading Anthropic and OpenAI, then frames them as a symptom of social breakdown rather than people responding rationally to identifiable decisions by identifiable executives. The format performs concern. The structure delivers inevitability. That isn’t analysis. It’s marketing with a furrowed brow.

The Panic About the Panic

Final parallel. The mass panic of 1938 was largely a myth. Most listeners understood it was fiction. But newspapers ran the panic story for weeks because they had a competitive interest in discrediting radio as a news medium. The real story wasn’t gullible listeners. It was an industry using manufactured fear to protect its position.

Same structure now. The piece goes viral. People get scared. The fear becomes the news. And the people positioned to benefit — compute investors, lab executives, AI-sector analysts — gain leverage from an atmosphere where displacement feels like destiny rather than a series of decisions they are actively making.

The question was never whether AI will destroy the white-collar economy in two years. The capabilities aren’t there — a Mag7 engineer in the piece’s own comments says as much. The question is whether identifiable people making identifiable decisions will be held accountable for the displacement they choose to cause, or whether they’ll hide behind a narrative formatted to look like expertise, disclaimed to look like a thought exercise, and designed to make you feel like there’s nothing you can do.

The machine isn’t in charge. The people building it, shipping it, and profiting from it are making choices. They’d prefer you believe otherwise.

Orson Welles, at least, had the decency to be making art. As Bertolt Brecht put it in The Resistible Rise of Arturo Ui:

Do not rejoice in his defeat, you men. For though the world has stood up and stopped the bastard, the bitch that bore him is in heat again.
