This news should end OpenAI. Their management has officially stated that they not only buried danger warnings, they did so intentionally, choosing not to warn of a coming mass murder.
According to the Wall Street Journal, which first reported the story, “about a dozen staffers debated whether to take action on Van Rootselaar’s posts.”
Some had identified the suspect’s usage of the AI tool as an indication of real world violence and encouraged leaders to alert authorities, the US outlet reported.
But, it said, leaders of the company decided not to do so.
In a statement, a spokesperson for OpenAI said: “In June 2025, we proactively identified an account associated with this individual [Jesse Van Rootselaar] via our abuse detection and enforcement efforts, which include automated tools and human investigations to identify misuses of our models in furtherance of violent activities.”
[…]
OpenAI has said it will uphold its policy of alerting authorities only in cases of imminent risk because alerting them too broadly could cause unintended harm.
On February 10, 2026, Van Rootselaar shot two people at the family home, then went to Tumbler Ridge Secondary School and killed six more people, including five children, before committing suicide. Twenty-five others were injured.
To be clear, OpenAI claims to sell intelligence. It claims to predict accurately and wants to be a defense tool. And yet here we see exactly the opposite: children allowed to be murdered. The shooter here is incidental. We are reading that every future flagged user gets the same cost-benefit analysis, run at the expense of potential victims.
First, they claim their detection system works. They tout that they proactively identified the account, flagged it, had human reviewers examine it, and banned it. They present this as responsible behavior.
Second, what they say is concern for families; what they mean is concern only for themselves and for user retention. Actually alerting anyone who could have helped prevent the tragedy is positioned as harmful to… the perception of OpenAI! The company argues that preventing mass death could be distressing for the young people and their families.
How?
This is the terrifying logic of a doctor who says they didn’t want to alarm a patient with a cancer diagnosis, so they watched a preventable death unfold as a silent show of concern.
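To make the policy fork concrete, here is a minimal hypothetical sketch in Python. None of these names or thresholds come from OpenAI; it simply models the pipeline their own statement describes: automated detection feeds human review, the account gets banned, and alerting authorities sits behind an “imminent risk” gate that management alone controls.

```python
# Hypothetical sketch only, not OpenAI code. It models the pipeline
# described in their statement: automated detection, human review,
# an account ban, and a policy gate on alerting authorities.
from dataclasses import dataclass


@dataclass
class Flag:
    account_id: str
    severity: str                 # assumed tiers: "low", "elevated", "imminent"
    reviewers_urged_report: bool  # staff consensus to disclose


def enforce(flag: Flag) -> list[str]:
    actions = ["ban_account"]  # the step OpenAI says it took
    # Per their stated policy, only "imminent" risk is ever reported.
    # Note the reviewer consensus is never consulted: that is the point.
    if flag.severity == "imminent":
        actions.append("alert_authorities")
    return actions


# The case as reported: reviewers urged disclosure, management
# classified the risk below the reporting threshold anyway.
print(enforce(Flag("flagged_user", "elevated", True)))
# -> ['ban_account']  (no alert, despite human consensus)
```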

OpenAI built a surveillance system that scans private conversations. They bank on being able to predict. Yet after a mass murder they admitted that nobody was warned of a known threat, on purpose, because such warnings would change how users perceive being constantly under surveillance. But, and this is a huge one, they could have quietly cooperated with the RCMP and said nothing publicly.
This is canon in Big Tech. You warn. You save lives. I’ve built these sausage factories as Head of Trust for the largest data storage products, and I know from decades on the inside. OpenAI failed a basic duty.
The system detected a significant threat. Employees recognized it as serious enough to debate reporting and urged disclosure. OpenAI management overruled them to protect market value, effectively arguing that burying the intelligence report on a probable mass shooter outweighed protecting the eventual victims.
That’s a cruel policy choice that prioritized company power and selfish gain over public safety. The same cruelty explains why they are now announcing that they detected the shooter long ago: to defend their arrogant “intelligence” reputation and monetize credit for detection despite NOT helping with prevention.
They are literally upselling detection power after the fact, emphasizing that OpenAI management controls public fate, while families grieve the preventable dead.
I cannot emphasize enough how this should make OpenAI directly responsible for enabling mass murder. They assumed a duty by constructing the system. The legal term is “voluntary assumption of duty”: once you undertake to act, you can be held liable for acting negligently.
OpenAI goes far beyond negligence. Either they didn’t check whether authorities already knew about this person, which is itself crossing the line into negligence, or they did check, knew, and still declined to report, which is far worse.
The people who actually suffered unintended harm were the eight dead and the twenty-five wounded. OpenAI’s framing cruelly inverts the duty of care by positioning a user’s discomfort at being reported as equivalent to, or greater than, the risk of mass death.
Authorities already had multiple contacts with Van Rootselaar before the shooting, had apprehended him under the Mental Health Act more than once, and had previously removed guns from the residence. A report from OpenAI would have fit within an existing law enforcement file. There was ZERO risk of a cold call; the report would have been corroborating evidence for an active concern. OpenAI is effectively lying in its calculated framing of who is at risk and why.
OpenAI’s entire “over-enforcement” defense collapses against the fact that the RCMP already had an active file. OpenAI’s entire “we knew eight months early” admission should be used to shut them down.
They knew, they had the power to act, they chose not to, and now they want to be rewarded for knowing. That’s the architecture of impunity for mass murder.
Related: Judges are unable to seat a jury to put OpenAI co-founder Elon Musk on trial because he is so hated for being unaccountable for crimes against humanity.
I believe it would be to the benefit of the human race for Mr. Musk to be sent to prison.

