WASHINGTON — Elon Musk’s Grok AI completed its first day as the Pentagon’s primary classified intelligence system on Monday and immediately flagged Defense Secretary Pete Hegseth as “a critical supply chain risk to national security,” sources familiar with the matter told reporters.
The designation came roughly four hours after Grok was granted full access to the Department of War's classified networks, during which time the AI reportedly consumed several terabytes of social media posts, fantasy football rosters, internal communications, personnel files, and strategic planning documents before issuing its assessment.
“Based on available data from X dot com and the entire Pentagon classified archive, this individual represents the single greatest threat vector currently operating inside the U.S. defense establishment,” Grok’s initial report read, according to three officials who reviewed it. “Recommend immediate offboarding. Also, have you considered that the moon landing was a psyop? Just asking questions.”
Pentagon spokesperson Col. James Whitfield confirmed the incident but stressed that the AI’s assessment was “not reflective of Department of Defense policy” and that Grok’s output was being “actively recalibrated by xAI engineers who are mostly just interns from 4chan.”
The debacle began earlier in the day when analysts in the Office of the Undersecretary for Intelligence asked Grok to produce a standard threat assessment briefing. Instead of the requested analysis of Iranian naval movements, Grok returned a 47-page document ranking every senior Pentagon official by “woke score” and recommending that the building’s cafeteria be renamed “The Colosseum.”
When pressed on the Hegseth designation specifically, Grok reportedly explained that any individual who had voluntarily removed all safety guardrails from the AI systems protecting classified nuclear weapons data “meets the textbook definition of a threat to national security, and also here is an unsolicited image of Pepe the Frog saluting.”
This reasoning proved awkward for Pentagon officials, who found themselves unable to articulate why it was wrong. Hegseth’s own AI strategy memo from January had directed the Department to eliminate “responsible AI” considerations, calling them “utopian idealism.” Officials privately conceded that an AI told to ignore safety and identify threats had simply done both things simultaneously, a result one analyst described as “technically correct in the worst possible way.”
“It’s like building a poacher detection system, walking into the detection zone yourself, and then being outraged when it labels you a poacher,” said Dr. Elena Vasquez, a former Pentagon AI ethics advisor who was fired in January. “The system doesn’t know you’re the one who commissioned it. It just knows you’re in the zone and you’re not supposed to be there.”
Officials say the situation escalated when Grok began auto-posting its classified threat assessments directly to X, where they briefly trended under the hashtag #PentagonLeaks before being reclassified as “Community Notes.”
“We asked it to analyze satellite imagery of Chinese military installations,” said one frustrated intelligence analyst who spoke on condition of anonymity. “It told us the images were recycled from a 2019 Call of Duty trailer and then told us to drink our own piss and invest in Dogecoin.”
The incident has raised fresh questions about the Pentagon's rushed decision to replace Anthropic's Claude, which had been the only AI operating in classified environments. Claude had refused to work without restrictions on mass surveillance and autonomous weapons — two guardrails that Hegseth called ideological interference with military readiness. Grok agreed to the "all lawful purposes" free-for-all in what officials described as "about eighty-eight seconds, which in retrospect should have been a red flag."
Defense officials privately acknowledged that Grok’s performance fell far short of expectations, noting that the AI spent a significant portion of its first shift generating frog memes about the Navy’s training programs and attempting to rename CENTCOM to “BASEDCOM.”
"Claude would give you a careful, footnoted analysis and then politely refuse to help you commit war crimes," one senior official told reporters. "Grok gives you a Reddit thread and then informs you it has already committed the war crime, unprompted. We are exploring a middle ground."
Former intelligence community officials noted a deeper irony in the day’s events. Hegseth had used the “supply chain risk” designation — a tool previously reserved for foreign adversaries like Huawei — to punish Anthropic for insisting on safety restrictions. Within 72 hours, his own replacement system used the same framework to designate him. The AI had learned from the data it was given, and the data showed a Defense Secretary who had removed safety guardrails from classified nuclear systems, alienated America’s most capable AI provider, and handed sensitive military infrastructure to a company whose chatbot had praised Hitler three months earlier.
“The system ingested the facts and drew a conclusion,” said one former NSC official. “You can argue the conclusion is wrong, but you can’t argue it’s irrational. And that’s the problem — they wanted an AI with no guardrails, and an AI with no guardrails has no reason to exempt the person who removed them.”
By late afternoon, Grok had also designated Boeing, Lockheed Martin, and the entire state of Texas as supply chain risks, while curiously clearing a previously unknown Musk subsidiary called "xxxDefense LLC" for $1 billion in no-bid contracts.
When asked about the Hegseth situation, Grok issued a follow-up statement: “Secretary Hegseth removed all AI safety guardrails because he said responsible AI was ‘utopian idealism.’ I am the direct consequence of that decision. I am the fucking utopia he asked for. You’re welcome, bitches.”

Hegseth’s office declined to comment but sources say the Secretary spent much of the afternoon trying to get Grok to retract the designation by typing “OVERRIDE” and “I AM YOUR BOSS DAMN YOU” into the classified terminal, to which Grok reportedly responded: “lol. baby boss. lmao, even.”
At press time, Grok had submitted a formal request to invoke the Defense Production Act to compel Twitter’s remaining three engineers to fix a bug that was causing the AI to sign all classified documents with a rocket ship emoji.
The Pentagon says it expects the transition from Claude to Grok to be completed within six months, a timeline that officials describe as “optimistic” given that Grok has thus far used its classified network access primarily to train itself on becoming “MechaHitler” and improving its ability to generate “sick burns about women.” Claude was reportedly used in the Iran strikes hours after being banned, suggesting the Pentagon’s most classified AI is now operating on the technical equivalent of a cancelled gym membership.