Ganeni by Elyanna

In WWI, just outside the trenches of Gaza, a British officer deliberately drew Ottoman fire and then dropped a haversack containing false battle plans, smeared in (horse) blood, for their scouts to “find”. This 1917 ruse not only worked—deceiving the entrenched Ottomans and routing their positions—it led to Britain seizing the whole Middle East, carving out new states we know to this day.

Elyanna’s big new hit “Ganeni” reminded me of this history in the video’s opening scenes. The catchy Arabic pop song is all about relationship drama, or is it? Perhaps she also offers us old historians of information war a commentary on colonial manipulation and authoritarianism that reaches beyond gender dynamics.

“Ganeni” means “drive me crazy.” The song describes control resurfacing just when you think you’re free—like a fish on a line, reeled in close and then let run, an exhausting cycle. This mirrors both dangerous relationship patterns and colonialism: engagement followed by abandonment, promises followed by betrayals.

Harmless relationship advice doubles as a resistance anthem that can be sung openly. The genius of encoded music (e.g. General Tubman) is the deniability—universal enough to chart internationally, specific enough to speak directly to those who understand.

The progression from confusion (“What brought you back?”) to decisive refusal (“I don’t want this anymore”) maps both personal healing and political awakening. The recognition that inconsistent treatment (e.g. Nazi permanent improvisation doctrine) is control, and that saying “enough” is de oppresso liber.

Translation

جارا إيه؟ مش كنت ناسيني؟ What happened? Hadn’t you forgotten me?
إيه اللي جابك تاني يا عيني؟ What brought you back again, my dear?
أيوة زمان كنت حبيبي Yes, once upon a time you were my love
بس خلاص دانا مش عايزة But enough—I don’t want this anymore
أرجع لأ، فكر آه، سيبك لأ Go back? No. Think? Yes. Let you be? No.
مش عارفة دماغي لفة I don’t know, my mind is spinning
وأنا دايماً يا يا يا يا And I’m always going ya ya ya ya
جنني، جنني Drive me crazy, drive me crazy
على دقة ونص رقصني You made me dance to your rhythm and a half
جنني، جنني Drive me crazy, drive me crazy
قربه وبعده بيتعبني Your closeness and distance exhaust me
يا حبيبي أغنيتي خلصانة My darling, my song is finished
ما تعرفش تلعب معانا You don’t know how to play with us
الموزيكا بقى تمشي إزاي How does the music even flow now?
بس بتلعب طبلة ونای You just play drums and flute
آه جنني Ah, drive me crazy
يا معذبني Oh, you who torment me
إيه ده اللي إنت بتعمل بيا What is this that you’re doing to me?
يوم جايبني يوم ماخدني One day bringing me close, one day taking me away
بس خلاص دانا مش عايزة But enough—I don’t want this anymore
Australia’s ambassador to Israel Chris Cannan (left) and Aboriginal Light Horseman Jack Pollard’s grandson Mark Pollard unveil a Haversack Ruse statue in Israel

Like the Beersheba Ruse carrying a carefully curated message to the Ottomans in 1917, followed by a daring charge of Aboriginal horsemen to overwhelm machine guns and artillery, “Ganeni” calls upon reason with its rhymes. The British couldn’t directly confront the Gaza fortifications, so they crafted a clever ruse to go around them.

The Beersheba campaign in WWI has been called “Australia’s first big achievement on the world stage”. Source: SMH

In regions where direct political expression is dangerous, art continues as the vehicle for truth. The song charts internationally in Arabic, promoting messages of resistance to oppression.

Related: هواجيس – Worries

FSD Blood Money: Why Tesla’s Online Army Spins Every Fatality as Fantasy

Another Tesla “Full Self-Driving” (FSD) crash. Another death. Another round of online social media denials that follow a script as predictable as a Pinto’s design failures.

Take for example a Tesla Model Y that struck and killed a motorcyclist in Washington state while operating in FSD mode. Within hours of the news breaking, Tesla’s online community had deployed its standard “no true Scotsman” fallacy army: this wasn’t real FSD, the investigation is biased, and anyone reporting on it is part of a coordinated attack.

Scripted Donkey Deployments

When an Arizona FSD fatality started getting attention a couple days ago, Reddit’s r/TeslaFSD immediately spun into emergency response:

Source: Bloomberg

  • This was running 11 which had known issues. We’re on 12 now which completely fixes this problem. Misleading to call this an FSD issue when it’s ancient software.
  • Convenient how all these ‘investigations’ happen right before earnings calls. Follow the money – who benefits from Tesla’s stock price dropping?
  • The NHTSA has been gunning for Tesla since day one. They never investigate Ford or GM crashes this aggressively. Pure bias.

The pattern of information warfare is consistent across every Tesla FSD fatality: minimize the death, question the timing, attack the investigators. What’s missing is any acknowledgment that experimental software killed someone.

Investigations Branded “Conspiracies”

The National Highway Traffic Safety Administration (NHTSA) failed to investigate Tesla from 2016 to 2020, a gap attributable to political corruption.

Then, once Trump could no longer press his thumb on regulators, it opened an investigation into Tesla’s FSD technology and started documenting multiple crashes involving the system. This is standard procedure – when a particular technology shows up repeatedly in fatal crashes, safety regulators investigate.

Tesla’s online defenders see coordination where none exists:

  • Weird how the Washington Post, Reuters, and CNN all covered this crash within 24 hours. Almost like they’re all reading from the same script 🤔
  • Amazing how NHTSA always finds ‘new evidence’ right when Tesla stock is doing well. These people are compromised.

The fact that multiple commenters are saying “it’s coordinated” while following an identical, predictable response pattern is pure irony. They’re the ones who are coordinated – coordinated in their denial.

Their response also treats routine journalism as unusual or suspicious. When a Tesla crashes and kills someone, that’s news. When multiple outlets report on it, that’s how news works. The alternative would be a media blackout on Tesla fatalities, which is apparently what Tesla would prefer. If you report, you violate their implied censorship. It’s kind of like how lynchings worked in 1830s “cancel culture” America.

Source: My presentation at MindTheSec 2021

The Dumb Theseus Game

Perhaps the most cynical aspect of the Tesla fan response is “version” gaming by dismissing every crash as irrelevant because software is updated constantly. This is basically the ship of Theseus, but weaponized: as if every nail pounded into a plank makes the entire boat so different it can no longer be compared to any other boat.

  • Literally nobody is running v11 anymore. This crash tells us nothing about current FSD capabilities. Yellow journalism at its finest.
  • By the time this investigation concludes, we’ll be 3 versions ahead. NHTSA investigating 2-year-old software like it’s current. Completely useless.

This logic renders every Tesla crash investigation meaningless by the dumbest design in history. Since Tesla continuously updates software, no crash can ever be considered relevant evidence of the technology’s safety problems. It’s a perfect unfalsifiability shield.

Most critically, this approach demonstrates that the company is learning nothing from its data. Genuine learning becomes impossible when each software version is treated as a completely fresh start, with no connection to previous iterations.

Source: FSD Tracker

While Tesla’s defenders seize on the legitimate point that new releases include bug fixes, they distort this into a narrative where each update absolves the company of responsibility for past failures.

This fundamentally misunderstands how software safety works and actively undermines the improvement process. Rather than assuming progressive enhancement, we should expect Tesla’s later versions to perform worse, not better, a prediction that appears validated by the company’s accelerating fatality rates.

Deflection Addiction

The online response follows a clear hierarchy in the Tesla playbook of deflection and deception. It should be recognized as a planned and coordinated private intelligence group attack pattern on government regulators and public sentiment:

Level 1: Blame the victim

  • “Motorcycle was probably speeding”
  • “Driver should have taken over”
  • “Darwin Award winner”

Level 2: Dismiss prior software version as unrelated (invalidate learning process)

  • “That’s yesterday’s FSD”
  • “Tomorrow’s version fixes everything”
  • “Misleading headline”

Level 3: Attack independent assessors

  • “Government bias”
  • “Media conspiracy”
  • “Coordinated FUD campaign”

Level 4: Claim victimhood

  • “They’re trying to kill our beliefs”
  • “Legacy media hates us for disrupting”
  • “Short sellers jealously spreading lies”

Notice what’s absent: any substantive discussion of why FSD beta software failed to detect and avoid a car, a pole, a person, a motorcycle… while more people are being killed in “veered” Tesla crashes than ever.

Human Cost Gets Tossed

Fundamental to the beliefs of Peter Thiel and Elon Musk, both raised under southern African white supremacist doctrine by pro-Nazi families hiding out after Hitler was defeated, is a callous disregard for human lives in a quest for white men to hold absolute power.

They couldn’t become engineers, lawyers, or bankers because of ethical boundaries, yet in the computer industry they found zero resistance to murder. Both ruthlessly manipulate technological enthusiasm into a moral blind spot (Palantir and Tesla) where victims are made comfortable with a level of harm that would be absolutely unconscionable in any other context. Each Palantir missile, each Tesla missile, represents real people whose lives have been permanently altered, not some kind of acceptable loss in service of some greater good.

Palantir has likely murdered hundreds of innocent people using extrajudicial assassinations (“self licking ISIS-cream cone“) without any transparency or accountability. Tesla similarly has hundreds of deaths in its careless wake, yet insufficiently examined for predictable causes.

While Tesla fans debate software versions and media bias, the human impact disappears from the conversation. The Washington state crash left a motorcyclist dead and a family grieving. The Tesla driver will likely carry psychological trauma from the incident. These consequences don’t get undone by a software update.

This is just another hit piece trying to slow down progress. How many lives will be saved when FSD is perfected? The media never mentions that.

This utilitarian argument – that current deaths are acceptable in service of future safety – would be remarkable coming from any other industry. Imagine if pharmaceutical companies dismissed drug deaths as necessary sacrifices for medical progress, or if airlines shrugged off crashes as the price of innovation.

Do I need to go on? As an applied-disinformation expert and historian, allow me to explain how Tesla social media techniques reveal the tip of a very large and dangerous iceberg in information warfare. The typical response above reveals several problematic patterns of thinking:

  • False dilemma: framing this as either “accept current deaths” or “reject all progress,” when there are clearly middle paths involving more rigorous testing, better safety protocols, or different deployment strategies.
  • Whataboutism: deflecting from the specific incident by pointing to hypothetical future benefits or comparing to other risks, rather than addressing the actual harm that occurred.
  • Sunk cost reasoning: the implication that because development has progressed this far, we must continue regardless of current costs.
  • Abstraction bias: treating real human deaths as statistics or necessary data points rather than acknowledging the irreversible loss of actual lives and the trauma inflicted on families and communities.

This is not random; much of what I see being spread by Palantir and Tesla propaganda reads to me like the South African apartheid military intelligence tactics of the 1970s. Palantir appears to be set up as South Africa’s Bureau of State Security (BOSS) updated to the digital age, with Silicon Valley’s “disruption” rhetoric providing the moral cover that anti-communism provided for apartheid-era operations.

BOSS operated from 1968 to 1980, carrying out the systematic torture and murder of Blacks around the world

Whereas BOSS required extensive human networks to track and eliminate targets, Palantir has evolved to process vast datasets to identify patterns and individuals for elimination with far fewer human operators. Tesla fits into this as a domestic terror tactic to dehumanize and even kill people in transit, which should be familiar to those who study the rise and abrupt fall of AWB terror campaigns in South Africa.

A single police officer in 1994 killed Nazi domestic terrorists (AWB) who had been driving around shooting at Black people. It was headline news at the time, because AWB promised civil war to forcibly remove all Blacks from government and instead ended up dead on the side of a road.

The same mentality that drove around South Africa shooting at Black people has now been refined into “self-driving” cars that statistically target vulnerable road users. The technology evolves but the underlying worldview remains: certain lives are expendable.

Real-Time Regulatory Capture

The Tesla fan community has effectively created a form of regulatory capture through social media pressure. Any investigation into Tesla crashes gets immediately branded as biased, coordinated, or motivated by anti-innovation sentiment.

  • The same people investigating Tesla crashes drive Ford and GM cars. Conflict of interest much? These investigations are jokes.
  • NHTSA investigators probably have shorts positions in Tesla. Someone should check their portfolios.

This creates an environment where egregiously unsafe lies are allowed for years, yet legitimate safety concerns can’t be raised for a minute without facing hordes of coordinated accusations of conspiracy or bias.

The CEO of Tesla has boasted very publicly every year since at least 2016 that driverless is all but solved, and that his products will eliminate all crashes within a few years.

The result is that Tesla’s experimental software gets treated as beyond criticism, even when it kills people. The disinformation apparatus serves the same function as apartheid-era propaganda – creating a false parallel reality where obvious and deadly violence either didn’t happen, was justified, or was someone else’s fault.

Broken Pattern Continues

BOSS set the model for Palantir, hiding behind ironic national security claims (generating the very terrorists it sponges up massive federal funding to find). Tesla similarly inverts logic, spinning bogus intellectual property claims about a need to bury its safety data even as people are killed by its lack of transparency.

…Tesla’s motion argues that crash data collected through its advanced driver-assistance systems (ADAS) is confidential business information. The automaker contends that the release of this data, which includes detailed logs on vehicle performance before and during crashes, could reveal patterns in its Autopilot and Full Self-Driving (FSD) systems.

The patterns Tesla doesn’t want revealed are related to FSD causing deaths. There’s unfortunately a well-documented history of companies in America trying to withhold data under claims of trade secrets or proprietary information, particularly when it could expose dangerous patterns affecting public safety. The most notorious precedent, of course, is the tobacco industry hiding data for 50 years that allegedly caused over 16 million deaths in America alone.

All too often in the choice between the physical health of consumers and the financial well-being of business, concealment is chosen over disclosure, sales over safety, and money over morality. (US Judge Sarokin, Haines v Liggett Group, 1992)

Palantir and Tesla thus follow a playbook to avoid any accountability for unjust deaths through a simple loophole: weaponizing technological enthusiasm and huge corporate legal teams. Each court case iteration drives their extremist political violence apparatus deeper into state security control while keeping it invisible. Tesla, despite failing at basic safety for over a decade, still pushes FSD onto customers who do not understand they’re participating in a public beta test of authoritarian white supremacist warfare, where the stakes are measured in human lives rather than software bugs. In 2025 Tesla FSD was reportedly still unable to recognize a school bus and small children directly in front of it on a clear day.

Lawn darts were banned in 1988 after a single child died. They didn’t get software updates. They didn’t have online defenders explaining why each injury was actually the fault of an outdated dart version. They just got pulled from the market, because toys aren’t supposed to kill people.

Tesla’s FSD is suspected of killing dozens if not hundreds of people, based on the two proven cases. The response however has been deflection, software updates and social media disinformation. Elon Musk bought Twitter and rebranded it with a swastika, overtly messaging that normal ethical standards don’t apply as long as he can digitally mediate horrible societal harms he’s promoting and even causing himself.

…have you noticed a certain pattern used in Tesla marketing?

  • Charge Plugs: 88
  • Model Cost: 88
  • Average Speed: 88
  • Engine Power: 88
  • Voice commands: 88
  • Taxi launch date: 8/8

A South African Afrikaner Weerstandsbeweging (AWB) member in 2010 (left) and a South African-born member of MAGA in the U.S. on 20 January 2025 (right). Source: The Guardian. Photograph: AFP via Getty Images, Reuters

The pattern will continue until the regulatory response changes, or until Tesla’s online disinformation army can no longer hide the body count in their unmarked digital mass grave like it’s 1921 in Tulsa again.

Tulsa officials immediately moved to completely erase a racist massacre of Blacks from records, going so far as to build a new white supremacist meeting center (“Klavern”) directly on top of Black businesses and homes that had been napalmed.

Massive US/Russian Election Big Tech Spy Operation Exposed by EU

European security researchers have exposed a cross-platform scheme by US and Russian tech companies to secretly spy on billions of Android users worldwide.

In the disclosure released this week called “Covert Web-to-App Tracking via Localhost on Android“, researchers from European institutions revealed that Facebook/Instagram/Yandex operated covert tracking to completely bypass Android’s privacy protections.

The Smoking Port

The evidence is damning: both companies were using hidden “localhost” connections to link users’ anonymous web browsing to their real identities in mobile apps.

That means when users visited websites—even in private or anonymous browsing mode—Meta Pixel or Yandex tracking scripts would secretly communicate with the Facebook, Instagram, and Yandex apps running in the background on their phones like American/Russian spies, bypassing all privacy protections.

Meta’s WebRTC technique would send the `_fbp` cookie from websites to Facebook/Instagram apps listening on UDP ports 12580-12585. Yandex used HTTP/HTTPS requests to send data to Yandex apps listening on ports 29009, 29010, 30102, and 30103. This was possible because Android doesn’t restrict localhost access, which acts as a bridge between the web and native apps.
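As a rough sketch (my own illustration, not code from the researchers’ disclosure), anyone curious can probe whether something on a device is accepting TCP connections on the Yandex localhost ports reported above. The port numbers come from the disclosure; everything else here is assumed:

```python
import socket

# TCP ports the disclosure says Yandex apps listened on. Meta's variant
# used UDP 12580-12585 via WebRTC, which a TCP probe like this can't see.
YANDEX_TCP_PORTS = [29009, 29010, 30102, 30103]

def tcp_listener_open(port: int, host: str = "127.0.0.1", timeout: float = 0.2) -> bool:
    """Return True if something accepts a TCP connection on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(ports=YANDEX_TCP_PORTS):
    """Map each port to whether a local listener answered."""
    return {port: tcp_listener_open(port) for port in ports}

if __name__ == "__main__":
    for port, listening in scan().items():
        print(f"127.0.0.1:{port} -> {'LISTENING' if listening else 'closed'}")
```

Note the asymmetry: a TCP connect attempt either completes a handshake or fails, so it reliably reveals a listener, whereas UDP sends receive no acknowledgment, which is one reason Meta’s WebRTC channel was harder to spot.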

Scale of the Operation

  • Meta’s tracking on 5.8+ million websites
  • Yandex tracking on nearly 3 million sites
  • Likely harming billions of Android users
  • Worked even when users cleared cookies or used incognito mode

Same Violation by America and Russia

While there’s no evidence yet that the American and Russian methods were coordinated, both developed exploits for the same Android vulnerability. Here’s the technical evolution that shows the linked progression:

Yandex (Russia) used HTTP-based tracking since February 2017—running for 8 years undetected using obfuscated domains that resolve to localhost (yandexmetrica.com → 127.0.0.1)
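The obfuscated-domain trick can be checked with an equally small sketch (again my own illustration; `yandexmetrica.com` is the example hostname from the disclosure): resolve a name and flag it when its records point back at loopback:

```python
import ipaddress
import socket

def resolves_to_loopback(hostname: str) -> bool:
    """True when a hostname's DNS records point back at the device itself,
    the obfuscation reportedly used (e.g. yandexmetrica.com -> 127.0.0.1)."""
    try:
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        return False  # name doesn't resolve at all
    return any(ipaddress.ip_address(info[4][0]).is_loopback for info in infos)

# A tracker script talking to such a domain looks like ordinary third-party
# traffic in a network log, while the data actually never leaves the device:
# it goes straight to whatever app is listening on the localhost port.
```

The point of the trick is exactly this mismatch between appearance and destination, which is why a plain domain blocklist misses it while an address-based check does not.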

Meta (US) then went through a sudden rapid evolution starting September 2024

  • HTTP requests (Sep-Oct 2024) — Same as Yandex
  • WebSocket connections (Nov 2024-Jan 2025)
  • WebRTC STUN with SDP Munging (Nov 2024-present)
  • WebRTC TURN without SDP Munging (May 2025-present)

The fact that Meta started last September with the exact same HTTP method that Russia had been using since 2017 raises obvious questions about a knowledge transfer, shared intelligence, or reverse engineering of Android vulnerabilities.

More to the point, the fact that this surveillance infrastructure was deployed just 2 months before the US Presidential election, using Russian methods, certainly raises questions about whether Meta was once again implicated in election-related surveillance and interference.

When the European researchers went public with these findings both companies immediately ceased the spying operation.

Never Leave a Meta App on Your Phone

This goes beyond privacy rights and into issues of digital sovereignty. Two countries were using private companies for surveillance operations on domestic and foreign citizens’ devices, willfully circumventing consent or disclosure during crucial US elections.

The tracking defeated every privacy protection users thought they had. Given WhatsApp’s massive European user base and end-to-end encryption promises, its omission from this operation raises questions about whether Meta was trying to maintain plausible deniability for their messaging platform while using their social media apps for covert tracking. More likely is that WhatsApp is already so compromised, it doesn’t need another backdoor.

The EU’s researchers didn’t just expose spies—both US and Russian tech giants immediately stopped covert operations after initial public exposure. A business-level privacy violation would have had a completely different footprint and reaction, further suggesting this was digital espionage by private tech companies for state control or capture.

“We are in discussions with Google to address a potential miscommunication regarding the application of their policies,” a Meta spokesperson told The Register. “Upon becoming aware of the concerns, we decided to pause the feature while we work with Google to resolve the issue.”

Of course Meta, a company founded on the principle of unaccountable abuse, would try to get reporters to blame Google instead of documenting that criminals were committing a clear crime.

When you’re secretly listening on localhost ports to harvest browsing data, there’s no “miscommunication” about whether that violates user expectations. Localhost tracking required deliberate technical implementation through apps developed to listen on specific ports, scripts deployed to send data through those channels, and evolving the methods when detection risks increased. There was no policy misunderstanding; only intentional infrastructure for spying.

Most companies fight disclosure or defend practices as legitimate, whereas the instant shutdown suggests they knew this crossed lines… and that the US presidential elections are over.

Europe continues to prove global leadership in digital rights, where advocates and regulators protect and enhance innovation. Independent researchers forcing transparency remain the best allies to regulators, holding Big Tech accountable, because they do not fear whatever flag these corporations fly.

Related:

New America’s Maginot Line of Military Deception

Why a Think Tank Report on Deception Misses the Point—And Makes States More Vulnerable

I was excited to watch the presentation yesterday of a recent New America report on “The Future of Deception in War“. It seemed throughout the talk, however, that a certain perspective (the “operator”, the quiet professional) was missing. I even asked what the presenters thought about Soviet use of disinformation that was so excessive it hurt the Soviets.

They didn’t answer the question, but I asked because cultural corruption is a serious problem, like accounting for radiation when dealing with nuclear weapons. When deception is unregulated and institutionalized, it dangerously corrodes internal culture. Soviet officers learned that career advancement came through convincing lies rather than operational competence. This created military leadership that was excellent at bureaucratic maneuvering but terrible at actual warfare, as evidenced in Afghanistan and later Chechnya. Worse, their over-compartmentalization meant different parts of their centralized government couldn’t coordinate—creating the opposite of effective deception.

This isn’t the first time I’ve seen academic approaches miss the operational realities of information warfare. As I wrote in 2019 about the CIA’s origins, effective information operations have always required understanding that “America cannot afford to resume its prewar indifference” to the dangerous handling of deception.

What’s invisible, cumulative, and potentially catastrophic if not carefully managed by experts with hands on experience? Deception.

Then I read the report and, with much disappointment, found that it exemplifies everything wrong with how military institutions approach deception. Like French generals building elaborate fortifications while German tanks rolled through the Ardennes, the analysis comes through as theoretical frameworks for warfare that no longer exists.

As much as Mr. Singer loves to pull historical references, even citing the Bible and Mossad in the same breath, he seems to have completely missed Toffler, let alone Heraclitus: the river he wants to paint us a picture of was already gone the moment he took out his brush.

The report’s fundamental flaw isn’t in its details—it’s in treating deception as a problem that can be solved through systematic analysis rather than understood through practice. This is dangerous because it creates the illusion of preparation while actually making us more vulnerable.

Academia is a Hallucination

The authors approach deception like engineers design bridges: detailed planning, formal integration processes, measurable outcomes, systematic rollout procedures. They propose “dedicated doctrine,” “standardized approaches,” and “strategic deception staffs.” This is waterfall methodology applied to a domain that requires agile thinking.

Real deception practitioners—poker players, con artists, intelligence officers who’ve operated in denied areas—know something the report authors don’t: deception dies the moment you systematize it.

Every successful military deception in history shared common characteristics the report ignores:

  • They were improvisational responses to immediate opportunities
  • They exploited enemy assumptions rather than following friendly doctrine
  • They succeeded because they violated expectations, including their own side’s expectations
  • They were abandoned the moment they stopped working

Consider four deceptions separated by nuance yet united by genius: the Haversack Ruse at Beersheba (1917), Ethiopia Mission 101 (1940), Operation Bertram (1942) and Operation Mincemeat (1943). Each succeeded through what I warned over a decade ago is the Big Data vulnerability of “seed set theory” – an unshakeable core of truth, dropped by a relative influencer, with improvised lies spreading around it.

The haversack was covered in real (horse) blood with convincing photos, military maps and orders. Mission 101 took a proven WWI artillery fuse design and used 20,000 irregular African troops with a bottle of the finest whiskey to rout 300,000 heavily armed and armored fascists. Mincemeat was an actual corpse with meticulously authentic personal effects.

None of these could have emerged from systematic planning processes. Each required someone to intuitively grasp what truth would be most convincing to a particular enemy in a unique moment, then place the right seed with human creativity into the right soil, that no doctrine could capture.

It’s no coincidence that Orde Wingate, founder of Commando doctrine, considered Lawrence of Arabia a flamboyant self-important bureaucrat. One of them delivered an operations guideline that we use to this day around the world and in every conflict; the other created Saudi Arabia.

The Emperor of Abyssinia (modern day Ethiopia) with Brigadier Daniel Arthur Sandford on his left and Colonel Wingate on his right, in Dambacha Fort after it had been captured, 15 April 1941

The Wealthy Bureaucrat Trap

The report’s emphasis on “integrating deception planning into normal tactical planning processes” reveals profound misunderstanding. You cannot bureaucratize deception any more than you can bureaucratize jazz improvisation. The qualities that make effective military officers—following doctrine, systematic thinking, institutional loyalty—are precisely opposite to the qualities that make effective deceivers.

Consider the report’s proposed “principles for military deception”:

  • “Ensure approaches are credible, verifiable, executable, and measurable”
  • “Make security a priority” with “strictest need-to-know criteria”
  • “Integrate planning and control”

This is exactly how NOT to do deception. Real deception is:

  • Incredible until it suddenly makes perfect sense
  • Unverifiable by design
  • Unmeasurable in traditional metrics
  • Shared widely enough to seem authentic
  • Chaotic and loosely coordinated

Tech Silver Bullets are for Mythological Enemies

The report’s fascination with AI-powered deception systems reveals another blind spot. Complex technological solutions create single points of catastrophic failure. When your sophisticated deepfake system gets compromised, your entire deception capability dies. When your simple human lies get exposed, you adapt and try different simple human lies.

Historical successful deceptions—from D-Day’s Operation Fortitude to Midway’s intelligence breakthrough—succeeded through human insight, not technological sophistication. They worked because someone understood their enemy’s psychology well enough to feed them convincing lies.

The Meta-Deception Problem

Perhaps worth noting also is how the authors seem unaware, or make no mention of the risk, that they might be targets of deception themselves. They cite Ukrainian and Russian examples without considering the caveat that some of those “successful” deceptions might actually be deceptions aimed at Western analysts like them.

Publishing detailed sharp analysis of deception techniques demonstrates the authors don’t fully appreciate their messy and fuzzy subject. Real practitioners know that explaining your methods kills them. This report essentially advocates for the kind of capabilities that its own existence undermines. Think about that for a minute.

Alternative Agility

What would effective military deception actually look like? Take lessons from domains that really understand deception:

  • Stay Always Hot: Maintain multiple small deception operations continuously rather than launching elaborate schemes. Like DevOps systems, deception should be running constantly, not activated for special occasions.
  • Fail Fast: Better to have small lies exposed quickly than catastrophic ones discovered later. Build feedback loops that tell you immediately when deceptions stop working.
  • Test in Production: You cannot really test deception except against actual adversaries. Wargames and simulations create false confidence.
  • Embrace Uncertainty: The goal isn’t perfect deception—it’s maintaining operational effectiveness while operating in environments where truth and falsehood become indistinguishable.
  • Microservices Over Monoliths: Distributed, loosely-coupled deception efforts are more resilient than grand unified schemes that fail catastrophically.

Tea Leaves from Ukraine

The report celebrates Ukraine’s “rapid adaptation cycles” in deception, but misses the deeper lesson. Ukrainian success comes not from sophisticated planning but from cultural comfort with improvisation and institutional tolerance for failure.

Some of the best jazz and rock clubs of the Cold War were in musty basements of Prague, fundamentally undermining faith in Soviet controls. West Berlin’s military occupation during the Cold War removed all curfews just to force the kinds of “bebop” freedom of thought believed to destroy Soviet narratives.

Ukrainian tank commanders don’t follow deception doctrine—they lie constantly, creatively, and without asking permission. When lies stop working, they try different lies. This isn’t systematizable because it depends on human judgment operating faster than institutional processes.

Important Strategic Warning

China and Russia aren’t beating us at deception because they have better doctrine or technology. They’re succeeding because their institutions are culturally comfortable with dishonesty and operationally comfortable with uncertainty.

Western military institutions trying to compete through systematic approaches to deception are like French generals in 1940—building elaborate defenses against the last war while their enemies drive around them.

Country Boy Cow Path Techniques

Instead of trying to bureaucratize deception, military institutions should focus on what actually matters:

  • Cultural Adaptation: Create institutional tolerance for failure, improvisation, and calculated dishonesty. This requires changing personnel systems that punish risk-taking.
  • Human Networks: Invest in education of people to curiously understand foreign cultures well enough to craft believable lies, not technologies that automate deception.
  • Rapid Feedback Systems: Build capabilities that tell you immediately when your deceptions are working or failing, not elaborate planning systems.
  • Operational Security Through Simplicity: Use simple, hard-to-detect deceptions rather than sophisticated, fragile technological solutions.
  • Embrace the Unknown: Accept that effective deception cannot be measured, systematized, or fully controlled. This is a feature, not a bug.

A Newer America

The New America report represents the militarization of management consulting—sophisticated-sounding solutions that miss fundamental realities. By treating deception as an engineering problem rather than a human art, it creates dangerous overconfidence while actually making us more vulnerable.

Real military advantage comes not from better deception doctrine but from institutional agility that lets you operate effectively when everyone is lying to everyone else—including themselves.

The authors end with: “We should not deceive ourselves into thinking that change is not needed.” They’re right about change being needed. They’re wrong about what kind of change.

Instead of building a Maginot Line of deception doctrine (the report’s recommendations are dangerously counterproductive), we need the institutional equivalent of Orde Wingate’s Chindits: fast, flexible, and comfortable with uncertainty. Because in a world where everyone can deceive, the advantage goes to whoever can adapt fastest when their lies inevitably fail.

Wingate’s fleet of Waco “Hadrian” Gliders in 1944 Operation Thursday were deployed to do the “impossible”.