Retired Colonel Catches Trump DoJ Using AI to Deny Veterans Healthcare

A federal prosecutor filed fabricated quotations and misstated case holdings against a 69-year-old retired Air Force colonel fighting the Pentagon for his medication. The colonel caught the fabrications himself. The U.S. Attorney’s office won’t say whether AI was used to draft the brief. It doesn’t need to. The filing carries every forensic marker of large language model hallucination documented across more than 700 sanctioned cases.

The case is Fivehouse v. Defense Dept., E.D.N.C., No. 2:25-cv-00041. Colonel Derence Fivehouse, USAF (Ret), a former staff judge advocate with decades of military legal experience, is suing the Defense Department pro se over its decision to strip GLP-1 medication coverage from TRICARE for Life beneficiaries.

His doctor prescribed the drugs.

The Pentagon said no, because…

Broken Promises to Vets

In 2001, Congress created the Senior Pharmacy Program to guarantee that Medicare-eligible military retirees received the same pharmacy benefits as younger beneficiaries. For more than twenty years, that promise held. TRICARE for Life covered GLP-1 medications for weight loss when prior authorization confirmed obesity-related comorbid conditions — the same standard applied to everyone.

Then, in August 2025, the Defense Health Agency (DHA) pulled coverage for TRICARE for Life beneficiaries only. A 64-year-old retiree on TRICARE Select still pays a $35 copay for the same drug. The only difference is that Fivehouse lived long enough to become Medicare-eligible. Out-of-pocket cost for the same medication without coverage: $1,300 a month.

The DHA’s legal justification is a regulation (32 C.F.R. § 199.17(f)(3)) that references an obesity treatment exclusion originally designed to keep CHAMPUS from paying for elective weight-loss clinics for military spouses and children in the 1970s. DHA now claims this regulation excludes retirees who served decades in uniform. But DOD’s own regulation at § 199.17(a)(6)(ii)(C) says, in plain English, that TRICARE for Life is “unaffected by this section.”

The Military Officers Association of America reviewed the statutory landscape and found no federal statute specifically excluding TFL beneficiaries from GLP-1 coverage — but multiple statutes requiring uniform pharmacy benefits across all TRICARE categories.

Fabricated Briefs in the Breeze

Fivehouse filed his challenge. The DOJ’s Eastern District of North Carolina office, representing the Defense Department, assigned assistant U.S. attorney Rudy Renfer to the case. Renfer filed a response brief containing fabricated quotations and misstated holdings from multiple circuit court opinions, plus two fabricated quotes from the Code of Federal Regulations.

The pro se plaintiff, a veteran denied his medication, caught it himself. He flagged the misrepresentations. U.S. Magistrate Judge Robert Numbers then identified what he called “the most significant issues” on his own review and issued an order requiring senior leaders from the entire U.S. Attorney’s office to appear at a show cause hearing.

Renfer’s explanation is weak:

He “inadvertently included incorrect citations to case law from this circuit” due to the “inadvertent filing of an unfinalized draft document.”

The judge, thank the spaghetti monster, was not persuaded.

He noted:

serious concerns about the accuracy of certain quotations and representations in Renfer’s filings and the explanation offered for their inclusion.

Bloody AI Fingerprints

Courts and researchers have now documented over 700 cases of AI-generated fabrications in legal filings. The pattern is forensically distinct from human error.

A careless lawyer gets a date wrong, mistakes a page number, or confuses similar cases. Human errors have predictable causes, and so do machine errors; the two patterns just look nothing alike.

What LLMs produce is structurally different: fabricated block quotes attributed to real cases, misstated holdings that sound plausible but reverse or invent what the court actually decided, and — most critically — fabricated regulatory language that doesn’t exist in any published edition.

Renfer’s filing is so bad, so thoughtless, that it matches every forensic marker of AI-generated text.

| AI Hallucination Marker | Renfer Filing |
| --- | --- |
| Fabricated quotes from real cases | Yes — multiple circuit court opinions |
| Misstated holdings of real cases | Yes — multiple circuit court opinions |
| Fabricated regulatory text | Yes — two fabricated CFR quotes |
| Multiple fabrications in single filing | Yes — systematic across the brief |
| “Unfinalized draft” excuse | Yes — nearly identical to excuses in sanctioned AI cases |

Human sloppiness doesn’t produce fabricated Code of Federal Regulations text. The CFR is a reference document: you either quote it or you don’t. You don’t accidentally draft new regulatory language that sounds right but doesn’t exist. That is exactly what large language models do, the deeply flawed ChatGPT chief among them. They predict what the language should say based on pattern recognition, regulatory autocorrect gone wrong; when the actual text doesn’t support the argument being made, they generate fake predicted text that does.
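The check Fivehouse performed by hand can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the citation keys and texts below are invented for the example, not taken from the actual filings): given quoted passages from a brief and the full text of the cited sources, flag any quote that does not appear verbatim in its source.

```python
# Minimal sketch of a quotation-verification check. All names and
# example data are hypothetical, invented for illustration only.
import re

def normalize(text: str) -> str:
    """Collapse whitespace and straighten curly quotes for comparison."""
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    text = text.replace("\u2018", "'").replace("\u2019", "'")
    return re.sub(r"\s+", " ", text).strip().lower()

def flag_fabricated(quotes: dict[str, str], sources: dict[str, str]) -> list[str]:
    """Return citation keys whose quoted text is absent from the cited source."""
    flagged = []
    for citation, quoted in quotes.items():
        source_text = sources.get(citation, "")
        if normalize(quoted) not in normalize(source_text):
            flagged.append(citation)
    return flagged

# Hypothetical example: one genuine quote, one fabricated.
sources = {
    "Reg. A": "TRICARE for Life is unaffected by this section.",
    "Reg. B": "Coverage is subject to the exclusions stated in this part.",
}
quotes = {
    "Reg. A": "unaffected by this section",
    "Reg. B": "retirees over 65 are excluded from pharmacy benefits",
}
print(flag_fabricated(quotes, sources))  # → ['Reg. B']
```

A real verifier would need fuzzy matching for ellipses and bracketed alterations, but the principle is the same: a quotation either exists in the source or it doesn’t.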

The “unfinalized draft” excuse is itself a recognizable pattern across sanctioned cases.

In case after case, from Mata v. Avianca in 2023 to the Kenosha County DA sanctions in February 2026, attorneys caught with fabricated citations claim they filed a draft, or that errors were “inadvertent,” or that someone else produced the text.

In a Colorado disciplinary case, an attorney denied using AI, but investigators found he’d texted a paralegal that he had let ChatGPT draft a motion and that, “like an idiot,” he hadn’t checked it. In the Kansas Lexos v. Overstock case, five attorneys were fined after fabrications were traced to unverified ChatGPT use by co-counsel. The MyPillow CEO’s attorneys tried the “rough draft” defense and were sanctioned anyway.

Renfer used AI. The evidence is like a bullet hole: stop asking whether he was armed and start asking who made the defective machine gun he used. The question is what else could possibly produce this exact pattern of errors in a federal brief.

AI Against a Veteran

Set aside the need for AI digital forensics for a moment and look at what happened.

The United States government broke a healthcare promise to its oldest veterans.

Let me be clear on this. The ones who served longest, the ones who lived long enough to age into Medicare, are being targeted with lies. When one of those veterans, a retired colonel and former military attorney, had the audacity to challenge a decision in federal court, the Department of Justice filed obvious lies in their brief against him.

The institutional litigant, the 900-pound gorilla of the federal government, appears to have handed its legal reasoning over to a thirsty ChatGPT slop machine. The unrepresented 69-year-old retiree was the one doing the actual quality control on the government’s own citations.

This is a use case that the AI industry still hasn’t developed a good answer for.

Not AI augmenting human expertise, but AI replacing the baseline obligation to tell the truth to a federal court. Deployed not in some agitated online chat room, but with the full weight of the Department of Justice behind it, against a veteran who expected his prescribed medication.

The Gutted Department

The switch from professional humans to AI slop is by now a well-known GOP strategy, paired with a platform that ethical lawyers distance themselves from. The DOJ shed nearly 15,000 employees in 2025, up from 8,500 the previous year. U.S. Attorney’s offices doubled their departures, from 1,100 to 2,200 separations. The Civil Rights Division lost 75% of its attorneys. Experienced prosecutors are leaving over political pressure, forced reassignments, and orders they consider unlawful.

In the Minnesota U.S. Attorney’s office alone, six assistant U.S. attorneys resigned after being pressured to harass the widow of a woman publicly executed by an ICE agent.

Into that vacuum flows AI. Not as augmentation of competent legal work, but as a substitute for it. The brief gets filed because there’s nobody left to check it, because the person filing it never acquired the habit, or because the institutional culture no longer prioritizes accuracy when the opposing party is a pro se retiree who probably won’t catch it.

What It Means

Fivehouse wrote in Military Times last year:

At 69 years old, after decades in uniform and a promise of lifetime health care, I never thought I would have to fight the Pentagon for medications my doctor deems essential.

He shouldn’t have to. And he certainly shouldn’t have to fight a Pentagon that deploys AI to fabricate law against him.

The sanctions hearing is Tuesday. Judge Numbers has asked U.S. Attorney W. Ellis Boyle (Trump nominee awaiting Senate confirmation) to review the matter and take corrective action. The potential consequences range from fines to contempt proceedings to suspension from practice. The latter seems most appropriate. The judge has also ordered the entire office to show cause for why it shouldn’t be held jointly responsible.

The real question Boyle should be answering isn’t about just one assistant U.S. attorney’s filing practices.

The real question is whether the Department of Justice is now using AI to attack American veterans.

| Trump AI Attack on Veterans | What Happened |
| --- | --- |
| DOJ fabricated brief (Fivehouse) | AI-generated fake quotes and misstated holdings filed against a 69-year-old retired colonel fighting for his medication. The veteran caught it. |
| DOGE “Munchable” contract cuts | Error-prone AI built by an engineer with no healthcare experience flagged more than 2,000 VA contracts for cancellation, hallucinating contract values along the way. Cancelled cancer research, blood analysis, and PACT Act burn pit programs. |
| Disability rating rule | A VA rule would slash ratings for veterans who take prescribed medication; a PTSD veteran rated 100% could drop to 30%. “Halted” after 10,000 complaints in 60 hours, yet not rescinded. |
| VA workforce gutted | 28,000 VA employees cut in 2025, including over 2,700 nurses, 1,000 doctors, and 1,000 psychologists. 1.2 million veterans lost their provider; 577,000 years of collective experience and expertise walked out the door, replaced by AI that can’t tell med from dead. |

Retired Colonel Fivehouse could defend himself from Trump’s mechanized attacks on veterans’ rights. The 1.2 million veterans who lost their VA provider can’t even cross-examine the robot deployed to kill them.
