You really have to wonder why nobody is reporting the news as it is.
Neither Anthropic’s blog post nor SpaceXAI’s mentions anything about data isolation, customer-managed keys, or an operator threat model in their new arrangement.
Anthropic running on Musk infrastructure puts all of its users at risk. Their own announcement admits Colossus 1 capacity will serve Claude Pro and Max subscribers, so user prompts and model outputs will run on Nazi metal. Inference will not be cryptographically blinded from the operator. Weights and activations sit in plaintext on the GPUs by necessity. Hardware confidential compute on H100/H200 provides some attestation, but the host operator controls firmware, physical access, side channels, and the trust chain itself.
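To make the trust-chain point concrete, here is a minimal sketch. The layer names and party labels are mine, not taken from either company's announcement or from NVIDIA's documentation; the point is only that attestation narrows what the operator can silently modify without removing the operator from the trusted computing base.

```python
# Minimal sketch (hypothetical layers and party names, not Anthropic's or
# NVIDIA's actual deployment): even with GPU confidential compute enabled,
# the question is whether the infrastructure operator drops out of the
# trusted computing base. Here it does not.

TRUST_LAYERS = {
    # layer                      parties that can influence it
    "gpu_firmware_and_vbios":    {"nvidia", "site_operator"},    # operator applies firmware updates
    "host_hypervisor":           {"site_operator"},              # operator owns the host stack
    "physical_access":           {"site_operator"},              # racks, cabling, interposers
    "side_channels":             {"site_operator", "co_tenants"},
    "attestation_verification":  {"nvidia", "model_owner"},      # device-signed report, checked off-site
}

def layers_operator_can_touch(operator: str) -> list[str]:
    """Return the layers the given party can still influence despite attestation."""
    return [layer for layer, parties in TRUST_LAYERS.items() if operator in parties]

if __name__ == "__main__":
    exposed = layers_operator_can_touch("site_operator")
    print("Layers the site operator still controls:", exposed)
    # Attestation shrinks the set of things the operator can change without
    # detection, but it does not remove them from firmware, physical access,
    # or side channels.
```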
Musk wrote on his Swastika platform that SpaceX reserves the right to take back the compute if Claude “engages in actions that harm humanity.” He defines harm. He decides when it applies. Anthropic has not publicly rebutted this. The custody arrangement hands him surveillance over inference. The clause hands him a shutdown lever on top of it. Naming what counts as harm puts him in the seat that decides when to pull either one.
The Musk AI deal is the industry's worst on every axis because he overtly signals Nazism and political alignment with movements that target named populations, he runs an underperforming competing model, and he put literal unilateral reclaim authority in writing. Anthropic’s training decisions, refusal patterns, and publication choices now depend on what one infamously bad guy defines as harm.


Anthropic says it’s only concerned about its ability to grow larger. That suggests to me that greed blinded the company to the obvious threats. Inference telemetry alone is commercial intelligence. Weight exfiltration would be catastrophic. SpaceX is weeks from a roadshow that needs a hyperscaler narrative, and that narrative now rests on the competitor whose traffic it physically sees.
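To put a shape on “inference telemetry alone is commercial intelligence,” here is a minimal sketch with invented numbers and field names. Nothing below reads a prompt or a weight; the competitive picture comes entirely from counters the infrastructure owner cannot avoid seeing.

```python
# Minimal sketch (invented numbers and field names) of why host-visible
# telemetry alone is commercial intelligence: request counts, GPU-hours,
# and batch timing reveal a competitor's growth rate and unit economics
# without inspecting a single payload.
from dataclasses import dataclass

@dataclass
class HostSample:
    week: int
    requests: int          # visible from network / load-balancer counters
    gpu_hours: float       # visible from the operator's own billing meters
    avg_batch_ms: float    # visible from utilization traces

samples = [
    HostSample(week=1, requests=9_000_000,  gpu_hours=40_000, avg_batch_ms=220),
    HostSample(week=2, requests=10_800_000, gpu_hours=44_000, avg_batch_ms=215),
    HostSample(week=3, requests=13_000_000, gpu_hours=50_000, avg_batch_ms=210),
]

growth = samples[-1].requests / samples[0].requests - 1
gpu_seconds_per_request = samples[-1].gpu_hours / samples[-1].requests * 3600

print(f"Observed demand growth over the window: {growth:.0%}")
print(f"Compute burned per request: ~{gpu_seconds_per_request:.1f} GPU-seconds")
# No payloads were inspected; metadata alone yields the growth curve and a
# cost-per-request proxy.
```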
Anthropic management has revealed it cares only about capacity and rate limits, which should terrify any of its customers. Availability is worthless when it comes at the cost of confidentiality and integrity. Better to be out of compute than out of privacy. Anthropic did not address the structural questions: what Musk can observe, what he can shut off, and what he will do with a literal kill clause.
You should be on a 60-day plan now to exit Anthropic. You do not want your data, let alone your processing, anywhere near Musk.
The Pentagon blacklisted Anthropic in March as a supply chain risk over disagreement about how its models could be used. Anthropic sued. Then Anthropic moved its inference onto infrastructure directly controlled by the Trump administration’s largest political ally, a Nazi, who now holds reclaim authority over those same models. The external block switched to internal corruption. The control Trump failed to deploy from the Pentagon, Anthropic accepted contractually from Musk.