Pentagon Goes to War Against America, Anthropic First: CEOs on Notice

Pete Hegseth is weakening the United States military to punish a company for being good. The Defense Secretary is about to rip the most capable AI model out of the Pentagon’s classified networks, force every major defense contractor to purge it from their systems, and replace it with a weaker model that generates Nazi content. The safeguards he wants eliminated never blocked a single military operation. The Pentagon’s own users love Anthropic’s Claude. But not angry Pete.

The Pentagon gave Anthropic until 17:01 today to remove two contractual safeguards from its AI model Claude: a prohibition on mass surveillance of Americans, and a prohibition on autonomous weapons that fire without human involvement.

If the company refuses to allow surveillance and autonomous weapons, Hegseth will simultaneously label the American company a foreign adversary (“supply chain risk”) and invoke the Defense Production Act to compel it to hand over its technology anyway.

Legal experts point out Hegseth’s threats are inherently contradictory. You can’t declare a company a national security risk and simultaneously argue its technology is so essential to national defense that a Korean War-era emergency statute must be invoked to seize it.

The stupid.

Dean Ball, who wrote the White House’s own AI Action Plan, called it “a whole different level of insane.” The contradiction is the point, like saying the bombing of Iran “obliterated” its nuclear production, therefore we now have to go to war with Iran to stop its nuclear production. The Nazis called it their “permanent improvisation”.

The Misogynist Who Wants Power

The Pentagon’s lead negotiator is Under Secretary of War Emil Michael. Late Thursday, after Amodei publicly rejected the Pentagon’s “final offer,” Michael posted on X calling the Anthropic CEO “a liar” with a “God-complex” who “wants nothing more than to try to personally control the US Military.” That sounds like projection.

Emil Michael is worth knowing. He left Uber in disgrace after proposing to spend a million dollars on opposition researchers to dig up dirt on journalists and their families, especially targeting a female reporter who’d criticized the company’s culture.

Attacking women apparently is a theme.

Senior Vice President Emil Michael floated making critics’ personal lives fair game. Michael apologized Monday for the remarks. […] He said that he thought Lacy should be held “personally responsible” for any woman who followed her lead in deleting Uber and was then sexually assaulted. Then he returned to the opposition research plan. Uber’s dirt-diggers, Michael said, could expose Lacy. They could, in particular, prove a particular and very specific claim about her personal life.

This is the guy telling Anthropic to enable surveillance and autonomous weapons. It’s obvious who he would be putting most at risk.

He was involved in a visit to a Seoul escort bar with company executives. He was implicated in a small group of Uber executives who obtained and reviewed the medical records of a rape victim in India. Eric Holder’s investigation into Uber’s workplace culture specifically recommended his firing.

This is the man the Pentagon chose to handle negotiations over whether AI should be used for mass surveillance. This is the man publicly calling a CEO a liar for refusing to remove safeguards against it.

Trust Breakdown

When CBS asked why the Pentagon won’t simply put the surveillance and weapons restrictions in writing, Michael said those uses are already illegal and barred by Pentagon policy. “At some level,” he said, “you have to trust your military to do the right thing.” He literally used a slippery slope fallacy. What’s the level?

The Pentagon says currently it would never use AI for mass surveillance. It also says it will not accept written language promising not to use AI for mass surveillance, because it may use AI for mass surveillance.

Anthropic says the restrictions haven’t prevented any military use of Claude to date. The Pentagon doesn’t dispute this. The demand to strip the safeguards is entirely about principle: only Trump can set conditions; the only law is Trump. Even conditions the Pentagon itself claims are redundant with existing law aren’t acceptable, because they weren’t set by Trump.

When restrictions are truly redundant, the campaign to eliminate them makes no sense unless America is already a military dictatorship. Former DOJ official Katie Sweeten put it plainly:

If these are the lines in the sand that the DOD is drawing, I would assume that one or both of those functions are scenarios that they would want to utilize this for.

The Compromise That Wasn’t

The Pentagon sent a “best and final offer” overnight Wednesday. Anthropic laughed and said it “made virtually no progress” on the two safeguards. It actually went backwards. New language framed as compromise was not: it came paired with legalese that would allow those safeguards to be disregarded at will.

Translation: we promise not to do it until we decide to do it.

This is a pattern anyone familiar with rape investigations recognizes: language structured so that consent is assumed and withdrawal of consent is impossible. She wanted it. Now the same move is being used for institutional capture. The written promise contains its own override, which means the point of the promise is permission to ignore it. The safeguard exists on paper and nowhere else.

The Weapon Example

The Lawfare analysis by Alan Rozenshtein lays out the legal terrain clearly. Biden used DPA Title VII — information-gathering authority — to require AI companies to report on training activities. Republicans called even that “overreach.”

Remember?

Hegseth is now threatening Title I, the core compulsion power, against a domestic company for refusing to remove ethical restrictions from its own product. Irony has never been a constraint on power. Hegseth is exercising authoritarian rule, and Republicans are all for it now.

The real audience for this isn’t Anthropic. It’s every other American tech company watching the end of democracy. OpenAI, Google, and xAI have reportedly agreed to “all lawful purposes” language. Musk’s xAI is so thirsty it signed on for classified work with no restrictions, even though its Grok model produced exactly the Nazi content that Anthropic’s safeguards were designed to prevent.

None of the other companies responded when asked whether they’ve agreed to allow mass surveillance or autonomous weapons.

The message is comply without conditions or Trump will make an example of you. He’s the guy who allegedly hit a 13-year-old girl in the head when she bit his penis. Pentagon spokesman Sean Parnell used deranged language, calling surveillance concerns “fake” and “being peddled by leftists in the media.” This from the people who declared civil rights over and rebranded the Department of Defense back to the “Department of War.”

The Stake

Anthropic’s Dario Amodei wrote Thursday that frontier AI systems

are simply not reliable enough to power fully autonomous weapons [and cannot] exercise the critical judgment that our highly trained, professional troops exhibit every day.

He is correct.

He expressed concern that AI surveillance could piece together:

scattered, individually innocuous data into a comprehensive picture of any person’s life.

He also pointed out what should be obvious:

[the Pentagon’s threats] are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

Amodei said if the Pentagon proceeds, Anthropic

will work to enable a smooth transition to another provider.

The company isn’t making threats or grabbing power.

It isn’t retaliating.

It’s saying it would rather lose a contract than engage in mass surveillance and autonomous killing.

The Pentagon brought six officials to the Tuesday meeting, including Deputy Secretary Steve Feinberg, Under Secretary Michael, Under Secretary Duffey, spokesman Parnell, and general counsel Earl Matthews. That’s institutional confrontation, not negotiation.

Hegseth praised Claude to Amodei in the same meeting where he threatened to destroy the company.

The only reason we’re still talking to these people is we need them and we need them now,

a Defense official told Axios.

The problem for these guys is that Anthropic is that good.

Trump hates good.

The Historical Pattern

Governments that demand the right to surveil their own citizens without written constraints always say they won’t abuse it. Governments that demand autonomous killing capability always say there will be humans in the loop.

The authoritarian ask is always the same: remove safeguards and don’t put conditions in writing. The historical record of what happens next is unambiguous.

The DPA was designed to redirect steel production during the Korean War. It’s now being weaponized to compel an American company to strip ethical safeguards from artificial intelligence so the military can use it without conditions. Congress hasn’t legislated guidelines on autonomous weapons or AI surveillance. Into that void steps a Defense Secretary who declared AI “will not be woke” from the stage at Elon Musk’s SpaceX headquarters, while the Nazi-spigot xAI holds a competing Pentagon contract.

The deadline is nearly here. What happens next will tell us whether any American company can refuse to build tools for Trump’s attacks on Americans without being labeled an enemy of Trump.
