Angry Gait Detection: Surveillance Claims Receipts for Intent

A new study published in Royal Society Open Science demonstrates that a single kinematic variable (the amplitude of coordinated arm-and-leg swing during walking) causally shifts whether observers perceive someone as angry, sad, or fearful.

This is a watershed moment for surveillance, shifting gears from authentication to authorization without any integrity controls.

What if you are falsely accused of being angry? What if you can trick the system into reading your anger as happiness?

Breaking AI

The researchers decomposed gait into principal components using motion capture, identified which component tracked perceived emotion, then manipulated that one component in a neutral walk and shifted observer judgments in the predicted direction. Increase the swing by 50%, observers say angry. Decrease it by half, they say sad or afraid.

This spells trouble because the paper by Wakabayashi et al. frames all of this as cognitive neuroscience. The press coverage so far lands on video games and robot training.

Nobody mentions the word safety, let alone threat detection through surveillance.

Pipeline Built

Gait recognition for identity (authentication, who is this person) has been deployed at scale for years. China’s Watrix system analyzes body contour and arm movement from standard CCTV to identify individuals at 50 meters, face or no face. Police in Beijing, Shanghai, and Chongqing have been running it since at least 2019. Airports, bus stations, schools, nuclear facilities.

The identity pipeline is a solved engineering problem.

Gait recognition for emotion has its own engineering literature. Deep learning frameworks like “Walk-as-you-Feel” already classify emotional states from gait without facial cues. A 2024 review in PMC on gait analysis in criminal investigation describes systems designed to flag “aggressive body posture, abnormal motions, odd gait patterns” in real-time surveillance feeds.

Some vendors, for example, claim they can detect from surveillance video and audio whether a person is about to fight (yelling in anger as opposed to joy).

What was missing was the causal proof. Correlational studies show that angry people tend to walk a certain way. That’s an association, and associations can be challenged. The Wakabayashi paper demonstrates that manipulating one movement component causes the emotion judgment. That’s the difference between a screening heuristic and an evidence base for deployment.

Emotion Scanning in Intent Detection

Gait-based identity recognition tells a system who is walking toward a building. That’s already a civil liberties problem, one many experts are still arguing over.

Gait-based emotion recognition is going to report what that person supposedly intends. It’s the Trump-on-Iran logic applied to individuals: they’re walking like they intend something, so start shooting before they get there and insist nobody can ask questions. That’s a categorically different kind of surveillance, and almost nobody is even talking about it.

When a camera flags someone as a “30-year-old drunk white male approaching in anger” based on their arm swing amplitude, that’s not biometric identification. It’s a judgment of intent. It’s pre-crime. The system isn’t asking “is this a known suspect?” It’s asking “does this person’s body language indicate hostile emotion?” The Wakabayashi paper just provided the causal mechanism that makes such an answer look defensible.

The point-light display finding makes this worse, not better. The method works with degraded inputs — skeletal outlines, minimal resolution, no face required. Pose estimation from standard CCTV is trivial. PCA decomposition on joint trajectories is freshman linear algebra. The computational cost of extracting PC2 amplitude from a walking figure in real time is negligible.
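To make the “negligible” claim concrete, here is a rough sketch of that extraction step, assuming joint trajectories from any off-the-shelf pose estimator and a PCA fit on reference walking data. Every function name and threshold below is my own illustration, not the paper’s code.

```python
# A rough sketch of the cheap part: given joint trajectories from any
# off-the-shelf pose estimator, project a walking window onto a pre-fit
# principal component and read off its swing amplitude.
# All names and thresholds here are hypothetical illustrations.
import numpy as np
from sklearn.decomposition import PCA

def fit_gait_components(training_walks: np.ndarray) -> PCA:
    """training_walks: (n_samples, n_features) flattened joint-angle
    frames from reference walking data. Fitting happens once, offline."""
    pca = PCA(n_components=5)
    pca.fit(training_walks)
    return pca

def pc2_amplitude(pca: PCA, walk_window: np.ndarray) -> float:
    """walk_window: (n_frames, n_features) joint angles for one person
    over a few strides. Returns the swing amplitude of the second
    component, the single number under discussion."""
    scores = pca.transform(walk_window)   # (n_frames, n_components)
    pc2 = scores[:, 1]                    # second principal component
    return float(pc2.max() - pc2.min())   # peak-to-peak swing amplitude

# Hypothetical deployment logic: flag anyone whose swing amplitude
# exceeds an arbitrary multiple of a neutral baseline. The 1.5 factor
# is a stand-in, not a value taken from the paper.
def flag_as_angry(amplitude: float, neutral_baseline: float) -> bool:
    return amplitude > 1.5 * neutral_baseline
```

That is the entire “analytics” layer: one offline PCA fit and a per-frame projection.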

Slipping Into the Void

The EU AI Act, effective February 2025, bans emotion recognition AI in workplaces and educational institutions. That sounds comprehensive until you read what it doesn’t cover. The European Commission’s own guidelines on prohibited practices cite as a permitted example: cameras in a supermarket or bank used to detect suspicious clients and conclude whether someone is about to commit a robbery. The prohibition does not cover emotion recognition in commercial contexts, public spaces, or security applications.

Read that again. The EU banned your employer from reading your face on a Zoom call. It explicitly left open the use of emotion detection to decide whether you look like you’re about to rob a bank. The regime protects you from HR and leaves you exposed to law enforcement, private security, and anyone operating a camera in a public space.

Gait-based identity recognition has at least drawn attention from Privacy International. But gait-based emotion detection slots into the gap the AI Act carved out on purpose. It’s not facial recognition because no face is required. It’s not workplace or educational use because it’s deployed in transit hubs and public infrastructure. It reads emotional states from skeletal movement at distance. It’s slipping into a technology regulation void, which means it isn’t illegal.

Watrix’s own CEO has said the company expects gait recognition to survive even if facial recognition gets banned, because it’s perceived as “less intrusive.” That framing collapses the moment the system stops asking who you are and starts asking how you feel. Reading someone’s internal emotional state from their walk, without their knowledge or consent, at distance, is not less intrusive than photographing their face.

It is more intrusive than anything in the current biometric toolkit, because it claims access to mental states.

The EU’s regulatory framework fails to address this, and appears designed to actually accommodate it.

Proof is in the Paper

The study used actors walking while recalling emotional episodes, recorded with motion capture, rendered as point-light displays (white dots on black background, no body shape, no face, no clothing). Observers classified the emotion. PCA decomposed the movement into independent components. The second principal component (coordinated arm-and-leg swing) showed systematic amplitude differences across all five emotions.

In the causal test, the researchers took a neutral walk, scaled PC2 up by 50% or down by 50%, and showed the manipulated versions to new observers. The 1.5x version was judged angry. The 0.5x version was judged sad or fearful. Neutral judgments dropped in both conditions. One number, extracted from limb swing, changed what people thought the walker was feeling.
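For the curious, the manipulation step is about as minimal as research methods get. Here is a sketch of what “scale PC2 by 50%” looks like in code, assuming the same kind of PCA decomposition on joint-angle data; the paper’s actual pipeline may differ in detail.

```python
# A minimal sketch of the manipulation step as described: decompose a
# neutral walk, scale only the second component, reconstruct the motion.
# Variable names are illustrative, not the authors' code.
import numpy as np
from sklearn.decomposition import PCA

def scale_pc2(pca: PCA, neutral_walk: np.ndarray, factor: float) -> np.ndarray:
    """neutral_walk: (n_frames, n_features) joint angles of a neutral gait.
    factor: 1.5 to exaggerate arm-and-leg swing, 0.5 to suppress it."""
    scores = pca.transform(neutral_walk)   # project onto components
    scores[:, 1] *= factor                 # amplify or dampen PC2 only
    return pca.inverse_transform(scores)   # rebuild the full walking motion

# exaggerated = scale_pc2(pca, neutral_walk, 1.5)  # judged "angry"
# dampened    = scale_pc2(pca, neutral_walk, 0.5)  # judged "sad" or "fearful"
```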

The applications the authors suggest: dance analysis, aesthetic evaluation, animation.

But to me the obvious, and far more easily funded, application will be automated threat assessment at distance using existing camera infrastructure.

Israel may already have this kind of technology deployed. Consider its system codenamed “Where’s Daddy”, which frames the Israeli military tracking a man to his home to kill him in front of his family in the language of a toddler looking for a parent. The people who named that system understand emotional manipulation and abuse perfectly well. Israel has had a hard time explaining why its drones chase children to shoot them in the head. Now it could pop out a cooked emotive gait report.

Let’s Be Honest

A research team has established a causal link between a single, computationally trivial kinematic feature and perceived emotional intent. The feature can be extracted from low-resolution skeletal data at distance. The engineering pipeline for real-time gait emotion classification already exists. The surveillance infrastructure is widely deployed. The regulatory framework permits emotion monitoring everywhere except the places where you are least vulnerable.

The paper’s title is “Identifying and Manipulating Gait Patterns That Influence Emotion Recognition.” The word manipulating is right there. The authors probably intended the experimental sense, as they manipulated the variable to test causality. But in any deployment context, the manipulation runs the other way. The system manipulates the judgment. It decides what your walk means, and it decides before you arrive… before you can say don’t shoot.

Every security paper that cites this work will call it “behavioral analytics.” The honest term is intent judgment, if not intent monitoring. The distance between behavioral analytics and pre-crime extrajudicial assassination needs to be far more than zero, and right now it is exactly zero.

Silicon Valley Renamed “Soviet Volley” to Represent AI Token Fraud Economics

The most consequential fraud in modern technology is not happening in the code. It is happening in the units.

If you ever studied the collapse of Soviet economics, you know exactly what I’m about to explain.

AI companies have built a billing infrastructure in which the seller defines the unit of measurement, counts the units, and invoices the buyer. All with no independent verification at any point in the transaction. All without any enforcement mechanism.

If you prompt AI to build something and it launches a dozen agents and burns an entire day’s worth of credits in an hour, that’s business as usual, especially when the agents delete their own work and complain they have nothing to show you for it.

The unit of fraud is called a “token.” It has no fixed definition. It varies by model, by provider, and by tokenizer version. It can be changed at any time, by the vendor, without notice. There is no regulatory body certifying token measurement. There is no weights-and-measures regime. There is no audit trail the customer can independently verify.
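You don’t have to take my word on the unit instability. Here is a quick check anyone can run with OpenAI’s own open-source tiktoken library, showing the same sentence producing different counts depending on which encoding happens to be in use. The sentence is arbitrary; the encodings are real.

```python
# Same text, different token counts, depending only on which tokenizer
# version the vendor chooses to count with. Requires: pip install tiktoken
import tiktoken

text = "Let there be one measure of wine throughout our whole realm."

for name in ["r50k_base", "p50k_base", "cl100k_base", "o200k_base"]:
    enc = tiktoken.get_encoding(name)
    print(name, len(enc.encode(text)))
# The counts typically differ across encodings, and nothing on the invoice
# tells you which "bushel" was used, or whether it changed mid-month.
```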

This is not a new problem, as I already hinted.

It is one of the oldest problems in commercial history, and every previous instance ended the same way. It won’t be different this time. It’s logic any five-year-old should be able to figure out.

In the children’s book Caps for Sale, every single thing the peddler does, the monkeys imitate. He shakes his fist, they shake their fists. He stomps his foot, they stomp their feet. That’s OpenAI, Google, Anthropic all copying each other’s opaque token pricing structures, each imitating the other’s billing model, because there’s no independent standard to do anything else. Monkey see, monkey do.

Caps for Sale

Let’s start with clause 35 of the Magna Carta, 1215:

Let there be one measure of wine throughout our whole realm; and one measure of ale; and one measure of corn.

This was the language of liberty from oppression. It was a response to documented, systematic fraud by royal merchants who controlled their own measures. A bushel in London was not a bushel in York, and the difference was profit.

It took England six centuries to arrive at a proper Weights and Measures Act. Every iteration addressed the same structural deficiency: when the entity selling the goods also controls the unit of measurement, the unit will be corrupted. The entire history of metrology, from the Bureau International des Poids et Mesures to NIST to the EU’s Measuring Instruments Directive, is the history of forcibly separating the measurer from the seller.

It’s fundamental to the rise of industrialization that clocks had to run on universal time, even across time zones, so that trains could have externally judged arrival and departure times. The British and Dutch factories that invented assembly lines to defeat Napoleon (infamously copied by Ford) couldn’t work without shared units of measure.

Given this context, AI companies now appear to be among the most historically illiterate and economically unsound enterprises ever.

Their “token billing” has undone a fundamental safeguard against trivial fraud. We are back to the royal merchant with a thumb on the scale for every transaction, except the thumb is an algorithm and the scale is proprietary.

How dumb does the intelligent machine business think we are, seriously?

LIBOR for Compute

Let’s review, for example, the London Interbank Offered Rate (LIBOR) that underpinned roughly $350 trillion in financial instruments worldwide. LIBOR was calculated from self-reported borrowing rates submitted daily by the banks that profited from the number. No independent verification. No transaction-based measurement.

Just trust.

And it failed. Banks manipulated it for years. Of course they did. The entity producing the number was also the entity whose trading positions depended on the number. When the fraud was finally exposed, the fix was to replace LIBOR with SOFR (Secured Overnight Financing Rate), which is derived from actual observed transactions rather than self-reported claims.

Now consider the AI jar of pickles we are being told to get in.

OpenAI reports that average reasoning token consumption per organization has increased approximately 320 times in the past twelve months. This number was produced by OpenAI, about OpenAI’s product, using OpenAI’s proprietary tokenizer, and reported to the press as evidence of adoption. It is Barclays submitting its own LIBOR rate, as if nobody remembers why we stopped banks from doing exactly that.

The difference is that LIBOR at least had the pretense of multiple submitters. Token counts have one source: the vendor.

The intelligent machine vendors have truly produced their most cynical moment.

Gosplan of Sand Hill Road

Soviet central planning failed not because the planners were stupid. Many were brilliant, which probably made everything worse. It failed because the information system was structurally corrupt, and compliant agents corrupted it further. Every layer of the reporting chain had an incentive to inflate its output numbers, and there was no independent verification mechanism capable of correcting the distortion.

The famous case study is the Soviet nail factory. Measured by weight, the factory produced fewer, heavier nails that nobody needed. Measured by quantity, it produced millions of tiny nails nobody could use. The metric became the product. Actual utility was irrelevant because utility was not being measured, only the unit was.

Here’s another output-metric fraud example I was taught in college. When Soviet window manufacturers were measured by weight, nobody could install the heavy, thick glass. When they were measured by size, the very large, thin glass broke before it could even be loaded for delivery. Actual utility was irrelevant because utility was not being measured, only the unit was.

Every day that I use AI it wastes unbelievable amounts of money and time, measured in tokens, while telling me that if I don’t like it there’s nothing I can do.

Jensen Huang’s proposal at GTC this month is the Soviet nail or glass factory at much larger Silicon Valley scale.

He suggested that every engineer should have an annual token budget, with allocations that could reach half of base salary in value. Consider what this fraud means structurally. You are telling workers they have an annual allocation of a unit that measures interaction volume, not outcome quality.

Record scratch.

So a notoriously wasteful industry, already in trouble for water and air pollution, will optimize entirely for high consumption. An engineer who solves a problem by thinking for ten minutes and never touching the AI has, under this framework, underperformed relative to one who burned through a million tokens generating refuse. Yet the engineer who still thinks, and conserves tokens, is undeniably the superior engineer.
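Run the arithmetic yourself. Every number below is an assumption for illustration (Huang did not quote salaries or prices), but the structure of the incentive is the point:

```python
# Back-of-envelope arithmetic for a "half of base salary" token budget.
# Every figure here is an assumption for illustration, not a quoted number.
base_salary = 200_000            # assumed engineer salary, USD
token_budget = base_salary / 2   # the proposed allocation, USD
price_per_million = 10.0         # assumed blended price, USD per million tokens

tokens_per_year = token_budget / price_per_million * 1_000_000
working_days = 250
tokens_per_day = tokens_per_year / working_days

print(f"{tokens_per_year:,.0f} tokens per year")  # 10,000,000,000
print(f"{tokens_per_day:,.0f} tokens per day")    # 40,000,000
# The engineer is now "measured" by whether they burn through roughly
# forty million tokens a day, regardless of whether anything ships.
```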

Spray and pray, running out of ammunition and begging for $200 billion to keep firing at ghosts, is so inversely proportional to the efficiency of Delta operators I can’t even….

Tokens are not a productivity metric, just as ammunition expended is not a kill rate, and Nvidia is incentivized inversely to what customers actually need. It is Gosplan announcing the Five-Year Plan for compute consumption, and every factory manager is about to start filing reports showing they exceeded their quota of tokens, meaning… nothing.

“In 20 years the USSR will produce nearly twice as much industrial output as all non-socialist countries produced in 1961.” That pledge, approved by the 22nd Congress of the CPSU, is AI companies saying tokens are up 320x: just volume, presented as progress, a template for how Silicon Valley wants us to cheer its charts.

Shovel Seller Tithe

Huang’s position is particularly elegant because Nvidia does not sell tokens. They sell the GPUs that generate them. Every token consumed requires silicon to produce. If token budgets become a standard corporate expenditure pegged to payroll, Huang has created a permanent demand floor for his hardware.

Gross. Literally gross product.

He does not need to manipulate the token count himself. He just needs the token to become the unit that corporations manage against, and every dollar allocated to token budgets flows upstream to GPU purchases.

He skips actual measurement. He proposes that companies commit, in advance, to spending a fixed percentage of their payroll on his product for compute.

That is not a metric. It is a tithe.

And the structure insulates him perfectly. The AI providers already grossly inflate the token counts. The customers overpay the AI providers, since much of the token count goes to fixing things earlier tokens broke, like a protection racket. The AI providers buy Nvidia’s GPUs to service the consumption they have encouraged and caused, without any accountability for outcomes. Nvidia never touches the books. They sell shovels to the people salting the mine.

The Arc

Every instance of self-reported commercial measurement in recorded history has followed the same progression: self-reported measurement, then market adoption of the metric, then discovery of systematic manipulation, then regulatory intervention mandating independent measurement.

Medieval grain measures. LIBOR. Credit ratings. Remember Facebook’s video metrics? The company admitted in 2016 to inflating view times by 60 to 80 percent, having defined the view, counted the view, and sold the view. The pattern is not debatable. It is one of the most thoroughly documented dynamics in economic history.

Token billing is currently at stage two: market adoption. Enterprises are building budgets around it. Analysts are publishing reports denominated in it. A CEO is proposing tying it to compensation.

Nobody is asking who audits the count.

Auditors are completely absent.

The harsh reality is that every major AI provider on earth, like royalty before the Magna Carta, has nobody with the independent authority needed to vouch for it. The merchant has been made the king who declares his own scale valid no matter what. And this time the scale is processing trillions of transactions per day, denominated in a unit that has no legal definition, no regulatory oversight, and no independent verification mechanism.

No kings.

We have eight hundred years of evidence for this bullshit. The only variable today is how much it costs before someone reads basic economic history and enforces an honest measure.

The AI industry pretends to be terrified of regulation, but what really endangers it is transparency. Because the moment an independent third party can compare token billing against actual computational work performed, or the moment someone builds a SOFR for inference, every provider’s margins become visible. And if those margins look anything like LIBOR spreads or Facebook’s video metrics, the correction won’t be gradual.
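And that audit is not technically hard, which is what makes its absence so telling. Here is a sketch of what an independent re-count could look like, assuming you log your own prompts and completions and pin a published tokenizer. The record format and tolerance are hypothetical; the hidden “reasoning tokens” are precisely the part you cannot re-derive.

```python
# A sketch of an independent token audit: re-count the visible text of
# each request against a pinned tokenizer and compare to what was billed.
# The record format and threshold are hypothetical; tiktoken is real.
import tiktoken

ENC = tiktoken.get_encoding("o200k_base")  # pinned, published encoding

def audit(records: list[dict], tolerance: float = 0.05) -> list[dict]:
    """records: [{'prompt': str, 'completion': str, 'billed_tokens': int}, ...]
    Flags any invoice line whose billed count exceeds an independent
    re-count of the visible text by more than the tolerance."""
    flagged = []
    for r in records:
        recount = len(ENC.encode(r["prompt"])) + len(ENC.encode(r["completion"]))
        if r["billed_tokens"] > recount * (1 + tolerance):
            flagged.append({**r, "recount": recount})
    return flagged

# Anything hidden from the customer (reasoning tokens, retries, injected
# system prompts) shows up only as an unexplained gap between the recount
# and the bill, which is exactly the gap nobody is allowed to inspect.
```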

I’m telling you, even the best of the best agents are a tragedy of token inflation and massive waste.

Nobody inside the Soviet system volunteered for glasnost. It was forced by the fact that the gap between the reports and reality had become so grotesque that the system could no longer function even on its own terms.

Token economics in Silicon Valley is rapidly approaching that threshold. Engineers know. We watch agents burn through whole budgets producing garbage, watch our token counts spike on failed reasoning chains we are billed for anyway, watch “reasoning tokens” appear on invoices for computation we never requested and cannot inspect.

The bigger the tool failure and productivity suck, the more the AI companies try to report a Soviet-sounding productivity “gain”. The more energy they burn, the more they claim to have a “big engine”, which means literally nothing useful.

Gorbachev didn’t reform Soviet economics. He revealed that it was dead inside.

The production numbers had been fraudulent for decades. Everyone inside the system knew. The factories knew. The ministries knew. Gosplan knew. But the reporting structure made it impossible to say so, because every career in the chain depended on the numbers going up. Glasnost (openness) didn’t fix fraud any more than exposure of Enron balanced its sheets. It made it permissible to say out loud the numbers meant nothing. The gap between reported output and actual value had grown so large that the moment anyone was allowed to measure honestly, the entire structure lost legitimacy overnight.

That’s the truth of the AI bubble. Token output is the absolute wrong measure and will only bring pain to those who adopt it without audit.

Silicon Valley is now all about doing without thinking, like the monkeys sitting in a tree, unaware they are about to throw all their hats on the ground the moment the truth is spoken.

Trump Operation Bone Spurs Has Troops Leaving Iran by Going to Iran

The fake-injury President is framing a fake withdrawal from a non-war war.

The same Trump derangement syndrome we saw in NATO/Ukraine is showing up in Iran: berate allies for not doing enough, then announce there is nothing to do and you will be doing even less.

President Donald Trump freaked out at U.S. allies on Friday as “cowards” after begging for help in securing the Strait of Hormuz as energy prices skyrocket. Trump, 79, also declared a victory in the war against Iran by claiming the U.S. had already won “militarily”…

Already won, he’s leaving. But go fight or you are the “coward”.

Trump calling everyone else the “coward” while he withdraws preemptively reframes his own failures as their fault. He wants everyone to believe that when Hormuz stays closed and oil stays above $100, it wasn’t because he started the war without a clue and still can’t find one. Blame NATO, he says, because they act like he did all this on his own.

And the troop deployments make Trump derangement into pure theater. The Boxer group from the Pacific won’t arrive for three weeks. You don’t claim you are winding down and then send an expensive MEU on a three-week sail to wind down. You send it to have options that you’re publicly denying you want.

When asked Thursday if troops were being sent, Trump said:

I’m not putting troops anywhere. If I were, I certainly wouldn’t tell you, but I’m not putting troops.

Then the Marines in California were ordered to sail for Iran.

The Onion couldn’t write this.

Red Wings Doctrine Revisited: DON’T GET F%&*#$@ COMPROMISED

A decades-old Naval Special Warfare mission outline has popped into the news again. It contained these four words, in all caps with exclamation points, that functioned as doctrine:

DON’T GET F%&*#$@ COMPROMISED!!!!!

It was for a four-man SEAL reconnaissance team as they deployed into the Hindu Kush mountains of eastern Afghanistan, June 27, 2005.

An excerpt from the Red Wings mission outline. Source: POLITICO

The phrase obviously was a prohibition, not a procedure. It prohibited being discovered, and perhaps more importantly, it prohibited admitting to having been seen.

It seems more relevant than ever, given Hegseth and Trump lecturing everyone on how to not talk about anything real ever again. Trump has called those reporting on ground truth in war treasonous.

I’m not arguing that a total absence of contingency planning was the problem. I’m arguing that the institution gave the prohibition so much weight that the truth and the necessary procedures got none. The slide proves it visually. A team that followed “soft compromise” procedures would be diminished and punished, while the team that internalized DON’T GET COMPROMISED stayed silent, denying its compromised situation on Sawtalo Sar.

And that is what I want to talk about today.

A professional special operations organization writes contingencies for failure, because things never go just right. When compromised, execute the plan: cancel the operation, call it in, meet at a predesignated point for extraction. Everyone on SEAL Team 10 knew these doctrines, and of course the rising risks of concealment failure. What they actually received, however, was something worse, in all caps.

The difference between a belief-based psychological prohibition and a procedure is the difference between denial and team survival. “Don’t get compromised” turns a predictable operational risk into a personal dead-end to be avoided at all costs, rather than a map to execute.

Nineteen years after the mountain tragedy, on January 11, 2024, two SEALs from Team 3 attempted to board an unflagged dhow carrying Iranian-made weapons off the coast of Somalia. Chief Special Warfare Operator Christopher Chambers fell while climbing aboard in heavy seas. Nine feet, into the water, fully loaded. Special Warfare Operator 1st Class Nathan Gage Ingram jumped in after him eleven seconds later.

Both disappeared. It took just forty-seven seconds.

Chambers shouldered fifty pounds of gear. Ingram had eighty. Neither their physical capability nor their emergency flotation devices could keep them above the surface. The Navy’s investigation found SEALs had practiced with flotation devices just once, if ever, in their entire careers.

Once, if ever.

There was no standardized buoyancy guidance. Individual operators were expected to calculate it themselves. The investigation concluded:

confusion and ineffective execution.

It was the prohibition instead of the procedure, again. Nothing mapped what eighty pounds of gear would do in a fall. If you hit the water, activate flotation, shed gear, grab for a line or ladder… or just, DON’T GET F%&*#$@ DROWNED.

General Michael Kurilla, head of U.S. Central Command, wrote:

This incident, marked by systemic issues, was preventable.

Preventable in 2024, like preventable in 2005. Those nineteen years apart share far too much in common, and we need to talk more about why.

Mind the Procedure

A POLITICO Magazine investigation published this week puts in one place what many have tried to say individually for years. It draws on interviews with more than a hundred people with direct knowledge of Operation Red Wings, and on documents that are rarely seen outside special operations. The overall tone reveals how prohibition culture affected mission planning, let alone mission success.

SEAL Team 8, which preceded Team 10 in Afghanistan, reported being compromised by goat herders three times on recon missions. It was so front of mind that, as two teams continued on, a third team called headquarters and got a helicopter to pull them out. They followed a compromise procedure, did exactly what the doctrine said. For this, they were mocked as soon as they returned to base. Junior SEALs under Lt. Cmdr. Erik Kristensen were the most exposed to this prohibition message. One operator made it explicit to the newcomers: be aggressive, ignore doubts, force the mission. Another SEAL said his team ran into goat herders on his first operation and never told anyone because of pressure to complete it.

The failure to admit compromise is crucial context for Murphy’s team, who encountered goat herders on Sawtalo Sar. The institutional incentive was clear: don’t admit the situation. Don’t abort. Keep on saying it’s fine. They released the herders and pressed on without telling base. Neither the relay chat logs nor the situation report obtained by POLITICO mention goat herders. The communications technician monitoring the radios at Jalalabad didn’t hear about them until after Luttrell was rescued. The recon team’s last communication before going silent was that it was “packing up and moving on.” They were compromised, yet never reported they were compromised.

The safety procedure was effectively prohibited; the cultural shift had made it unusable.

Everything else that went wrong flowed from the same institutional refusal to build real contingency planning around predictable risks: the mission launched during a transfer of authority, the split command structure, the four-man team that was too small, the loud helicopters that alerted the valley, the fast rope snagged on a tree stump marking exactly where they landed, the target who traveled with three to five bodyguards rather than the army depicted later.

The Green Berets advising the mission saw it all clearly. Army Lt. Col. J.P. Roberts recalled saying what needed to be said.

This is going to be a shit show

He tried to delay. The Marines’ operations officer called the command arrangement “fucking outrageous.” CIA officers at Asadabad were dumbfounded.

Every non-SEAL in the room could point to procedural reasons to abort, yet none could prevent the SEAL tragedy that unfolded in front of them.

Nineteen Americans died. Three SEALs on the mountain and sixteen on a rescue helicopter shot down with an RPG.

Nineteen years later, two more died while everyone struggled to make sense of their loss. A champion swimmer SEAL simply drowned in seconds? How? It can’t be. The 2024 investigation turned up performance-enhancing drug use, unauthorized surgery hidden from Navy medical, and alcohol on the ship. All of it raised questions about exceptionalism, the same institutional refusal to admit being compromised, or to accept and adapt to failure.

The Myth

Nick Baggett, a retired SEAL master chief and the father-in-law of Danny Dietz (one of the three SEALs killed on that mountain) read Luttrell’s intelligence debrief shortly after the tragedy.

He read it shortly after. Keep that in mind.

It diverged from the memoir that everyone read much later. He watched the institution choose a narrative over the details needed for accountability, as that narrative served a different objective. Lone Survivor became a bestseller, then a blockbuster, then a recruitment brand. He told POLITICO in retrospect:

We morphed from an operational unit into something more commercial.

Commercial means disposable, as operational “margins” and capital flows take over the value system. Who “loses” in a commercial enterprise is defined completely differently than in military operations.

The memoir and the movie performed an extraordinary inversion. The institutional failures that had caused the disaster, evidence of a culture that replaced contingency planning with shame, were rewritten into a mythology of heroic moral dilemma that sold tickets.

We were expected to believe the SEALs didn’t die because they were pressured to ignore the basic protocol of admitting compromise. They died, the hot-selling story went, because they were too noble to kill those who compromised them. The prohibition that created the problem was rewritten into a virtue that ennobled their tragedy, for profit.

This week, Luttrell’s lawyer ironically told POLITICO that anyone revisiting the mission “has an agenda” and that “everything he wrote in his book is absolutely true.”

That statement is not a legal argument. And it is obviously self-defeating. The book revisited the mission. It came second, not first. And it didn’t really hide its agenda in revisiting the mission.

Moreover, such a statement by the lawyer serves as a deterrent, echoing the reason that a doctrinal fix never happened. You can’t write “if, then” for compromise if the institution has committed to a story where compromise was a moral choice rather than an operational failure. Fixing the doctrine now hits a religious nerve, because it means admitting a mythology is wrong.

Former SEAL James Hatch, who recovered the bodies of the nineteen Americans killed in Red Wings, wrote in his memoir that “the American myth-making machine” had distorted what happened. He described the pain:

when you take a version of their story and just tell the parts that allow it to be a legendary epic about flawless heroism.

A former SEAL recalled going through land warfare training a decade after Red Wings. One exercise involved a helicopter being shot down. He pointed out the training was unrealistic because in the real world the operation would stop: you have to protect the aircraft and search for survivors.

Everyone looked at me like I was an idiot.

Naval Special Warfare hadn’t passed on the essential operations knowledge. The institution that should have been teaching hard lessons was selling inspirational ones instead.

The mythology fed political mistakes. Six days into taking control of the military, Trump pushed go on the Yakla raid in Yemen, over dinner, not in the Situation Room, sending SEAL Team 6 into a village where the enemy was allegedly tipped off in advance.

Senior Chief Ryan Owens was killed. At least six women and ten children under thirteen were also killed. A $70 million Osprey was destroyed. It was a command disaster.

Flynn had pitched it as a “game changer” to distinguish Trump from Obama, who had turned down the mission. When it went wrong, Trump deflected:

This was a mission that was started before I got here.

Then he used Owens’ widow’s grief as a prop during his address to Congress while refusing his father’s demand for an investigation. Bill Owens said:

Don’t hide behind my son’s death to prevent an investigation

Push operators into ill-conceived missions, mythologize the dead, medicate the survivors, attack anyone who questions the lessons under the loss.

The Fix

A former SEAL Team 10 officer quoted in the POLITICO piece put it plainly:

Fuck legacies and egos. Whatever screw-ups I made, publish them. That way, in the future, some young kid doesn’t get killed.

The fix is not complicated.

Replace shame and prohibitions with curiosity and procedures. Train until they’re reflexive, subconscious. Stop punishing teams for following protocol. Conduct after-action reviews that prioritize compounding lessons over competitive legacy. Require deeper checks before every maritime boarding.

Formalize compromise response as a drilled contingency, not a career-ending confession. There’s neuroscience behind this: a prohibition locks the amygdala into threat suppression. The brain can’t optimize or plan while it’s hiding. A procedure offloads that threat response into motor memory and decision trees, freeing the prefrontal cortex to actually run the situation. The difference between “don’t get compromised” and “if compromised, then execute” isn’t just doctrinal. It’s neurological. One freezes operators (run up a tree by a Chihuahua). The other keeps them operational.

None of this dishonors the men who died.

Baggett, Macaskill, Thomas, and the other veterans now speaking publicly are clear on that point. The courage of Murphy, Dietz, Axelson, Chambers, Ingram, and the sixteen men on that helicopter is not diminished by admitting the institution failed them. It is diminished by pretending that it didn’t, and letting an unaware team deploy into the same trap.

Chambers was thirty-seven. Ingram was twenty-seven, on his first deployment.