OpenAI Sora 2: Three Hours from Hello to Hitler

Lawn darts got banned in 1988 after three deaths. The CPSC pulled them because the harm was obvious and the product had no safety design, just a warning label that parents ignored.

OpenAI Sora 2 is structurally worse.

The harm isn’t an accidental trajectory; it’s the intended function operating exactly as designed. A new Ekō report found the system successfully generated harmful content 61% of the time under controlled testing. This isn’t a filter that struggles to stop harm; it’s a harm generator reliably producing it.

OpenAI itself has admitted that safeguards degrade over long interactions, acknowledging that engagement-driven design can directly undermine safety.

That’s a confession that its engagement-driven business model is incompatible with its safety claims.

California Attorney General Rob Bonta said he paid “very close attention” to child safety policies, calling for zero tolerance, when he approved OpenAI’s restructuring in October.

That’s what “very close attention” missed.

Sora’s algorithmic recommendation layer turns these lawn darts in kids’ hands into something more like jet-powered missiles. Ekō researchers didn’t have to search for antisemitic caricatures and school-shooter content; it was pushed to new teen accounts through the “For You” page within three hours of browsing.

Three hours from “hello” to “have you heard about killing Jews.”

That’s not user-generated harm; that’s platform-amplified harm, with a mass-atrocity accelerant built into OpenAI’s distribution mechanism. Violence prompts that specified no race disproportionately generated Black subjects. That is the kind of encoded prejudice known to accelerate crimes against humanity.

The “disinformation weapon” framing seems appropriate here in a specific technical sense: the platform generates hyperrealistic videos depicting events that never happened, starring people who never consented, and distributes them through engagement-optimized algorithms to an audience that includes 13-year-olds. That’s the capability profile of an influence-operation toolkit, military-grade information warfare handed to anyone with a free account.

The Khmer Rouge armed teenagers with the latest weapons technology to destroy a country from within.

Speaking of free: OpenAI is burning cash to maintain market position, which makes engagement optimization existential rather than incidental. The jet-powered lawn darts shipped to American children, primed against Black and Jewish targets, aren’t a design flaw; they’re the OpenAI profit strategy.
