In WWI, just outside the trenches of Gaza, a British officer deliberately fired at Ottomans and then dropped a haversack containing false battle plans, covered in (horse) blood, for their scouts to “find”. This 1917 ruse not only worked, deceiving the entrenched Ottomans and routing their positions, it led to Britain seizing the whole Middle East, carving out new states we know to this day.
The opening scenes of the video for Elyanna’s big new hit “Ganeni” reminded me of this. The catchy Arabic pop song is all about relationship drama, or is it? She perhaps also offers us old historians of information war a commentary on colonial manipulation and authoritarianism beyond gender dynamics.
“Ganeni” means “drive me crazy.” The song describes control resurfacing just when you think you’re free, like reeling in a fish: drawn close, then let run, an exhausting cycle. This mirrors both dangerous relationship patterns and colonialism: engagement followed by abandonment, promises followed by betrayals.
Framed as harmless relationship advice, it is also a resistance anthem that can be sung openly. The genius of encoded music (e.g. General Tubman) is the deniability: universal enough to chart internationally, specific enough to speak directly to those who understand.
The progression from confusion (“What brought you back?”) to decisive refusal (“I don’t want this anymore”) maps both personal healing and political awakening. The recognition that inconsistent treatment (e.g. Nazi permanent improvisation doctrine) is control, and that saying “enough” is de oppresso liber.
Translation
جارا إيه؟ مش كنت ناسيني؟
What happened? Hadn’t you forgotten me?
إيه اللي جابك تاني يا عيني؟
What brought you back again, my dear?
أيوة زمان كنت حبيبي
Yes, once upon a time you were my love
بس خلاص دانا مش عايزة
But enough—I don’t want this anymore
أرجع لأ، فكر آه، سيبك لأ
Go back? No. Think? Yes. Let you be? No.
مش عارفة دماغي لفة
I don’t know, my mind is spinning
وأنا دايماً يا يا يا يا
And I’m always going ya ya ya ya
جنني، جنني
Drive me crazy, drive me crazy
على دقة ونص رقصني
You made me dance to your rhythm and a half
جنني، جنني
Drive me crazy, drive me crazy
قربه وبعده بيتعبني
Your closeness and distance exhaust me
يا حبيبي أغنيتي خلصانة
My darling, my song is finished
ما تعرفش تلعب معانا
You don’t know how to play with us
الموزيكا بقى تمشي إزاي
How does the music even flow now?
بس بتلعب طبلة وناي
You just play drums and flute
آه جنني
Ah, drive me crazy
يا معذبني
Oh, you who torment me
إيه ده اللي إنت بتعمل بيا
What is this that you’re doing to me?
يوم جايبني يوم ماخدني
One day bringing me close, one day taking me away
بس خلاص دانا مش عايزة
But enough—I don’t want this anymore
Australia’s ambassador to Israel Chris Cannan (left) and Aboriginal Light Horseman Jack Pollard’s grandson Mark Pollard unveil a Haversack Ruse statue in Israel
Like the Beersheba Ruse carrying a carefully curated message to the Ottomans in 1917, followed by a daring charge of Aboriginal horsemen to overwhelm machine guns and artillery, “Ganeni” calls upon reason with its rhymes. The British couldn’t directly confront the Gaza fortifications, so they crafted a clever ruse to go around them.
The Beersheba campaign in WWI has been called “Australia’s first big achievement on the world stage”. Source: SMH
In regions where direct political expression is dangerous, art continues as the vehicle for truth. The song charts internationally in Arabic, promoting messages of resistance to oppression.
Another Tesla “Full Self-Driving” (FSD) crash. Another death. Another round of social media denials that follow a script as predictable as a Pinto’s design failures.
Take for example a Tesla Model Y that struck and killed a motorcyclist in Washington state while operating in FSD mode. Within hours of the news breaking, Tesla’s online community had deployed its standard “no true Scotsman” fallacy army: this wasn’t real FSD, the investigation is biased, and anyone reporting on it is part of a coordinated attack.
This was running 11 which had known issues. We’re on 12 now which completely fixes this problem. Misleading to call this an FSD issue when it’s ancient software.
Convenient how all these ‘investigations’ happen right before earnings calls. Follow the money – who benefits from Tesla’s stock price dropping?
The NHTSA has been gunning for Tesla since day one. They never investigate Ford or GM crashes this aggressively. Pure bias.
The pattern of information warfare is consistent across every Tesla FSD fatality: minimize the death, question the timing, attack the investigators. What’s missing is any acknowledgment that experimental software killed someone.
Investigations Branded “Conspiracies”
From 2016 to 2020, the National Highway Traffic Safety Administration (NHTSA) didn’t investigate Tesla because of political corruption.
Then, after Trump could no longer press his thumb on regulators, it opened an investigation into Tesla’s FSD technology and started documenting multiple crashes involving the system. This is standard procedure: when a particular technology shows up repeatedly in fatal crashes, safety regulators investigate.
Tesla’s online defenders see coordination where none exists:
Weird how the Washington Post, Reuters, and CNN all covered this crash within 24 hours. Almost like they’re all reading from the same script 🤔
Amazing how NHTSA always finds ‘new evidence’ right when Tesla stock is doing well. These people are compromised.
The fact that multiple commenters are saying “it’s coordinated” while following an identical, predictable response pattern is pure irony. They’re the ones who are coordinated – coordinated in their denial.
Their response also treats routine journalism as unusual or suspicious. When a Tesla crashes and kills someone, that’s news. When multiple outlets report on it, that’s how news works. The alternative would be a media blackout on Tesla fatalities, which is apparently what Tesla would prefer. If you report at all, you violate their implied demand for censorship. It’s kind of like how lynchings worked in 1830s “cancel culture” America.
Source: My presentation at MindTheSec 2021
The Dumb Theseus Game
Perhaps the most cynical aspect of the Tesla fan response is “version” gaming: dismissing every crash as irrelevant because the software is updated constantly. This is basically the ship of Theseus, but weaponized. Every nail pounded into a plank means the entire boat is now a different boat, so it can no longer be compared to the boat it was before.
Literally nobody is running v11 anymore. This crash tells us nothing about current FSD capabilities. Yellow journalism at its finest.
By the time this investigation concludes, we’ll be 3 versions ahead. NHTSA investigating 2-year-old software like it’s current. Completely useless.
This logic renders every Tesla crash investigation meaningless by design, and it is the dumbest design in history. Since Tesla continuously updates software, no crash can ever be considered relevant evidence of the technology’s safety problems. It’s a perfect unfalsifiability shield.
Most critically, this approach demonstrates that the company is learning nothing from its data. Genuine learning becomes impossible when each software version is treated as a completely fresh start, with no connection to previous iterations.
While Tesla’s defenders seize on the legitimate point that new releases include bug fixes, they distort this into a narrative where each update absolves the company of responsibility for past failures.
This fundamentally misunderstands how software safety works and actively undermines the improvement process. Rather than assuming progressive enhancement, we should expect Tesla’s later versions to perform worse, not better, a prediction that appears validated by the company’s accelerating fatality rates.
Deflection Addiction
The online response follows a clear hierarchy in the Tesla playbook of deflection and deception. It should be recognized as a planned and coordinated private intelligence group attack pattern on government regulators and public sentiment:
Level 1: Blame the victim
“Motorcycle was probably speeding”
“Driver should have taken over”
“Darwin Award winner”
Level 2: Dismiss prior software version as unrelated (invalidate learning process)
“That’s yesterday’s FSD”
“Tomorrow’s version fixes everything”
“Misleading headline”
Level 3: Attack independent assessors
“Government bias”
“Media conspiracy”
“Coordinated FUD campaign”
Level 4: Claim victimhood
“They’re trying to kill our beliefs”
“Legacy media hates us for disrupting”
“Short sellers jealously spreading lies”
Notice what’s absent: any substantive discussion of why FSD beta software failed to detect and avoid a car, a pole, a person, a motorcycle… and more people are being killed in “veered” Tesla crashes than ever.
Human Cost Gets Tossed
Fundamental to the beliefs of Peter Thiel and Elon Musk, both raised under southern African white supremacist doctrines by pro-Nazi families in hiding after Hitler was defeated, is a callous disregard for human lives in a quest for white men to hold absolute power.
They couldn’t become engineers, lawyers, or bankers because of ethical boundaries, yet in the computer industry they found zero resistance to murder. Technological enthusiasm is ruthlessly manipulated by both of them into a moral blind spot (Palantir and Tesla) where victims are made comfortable with a level of harm that would be absolutely unconscionable in any other context. Each Palantir missile, each Tesla missile, represents real people whose lives have been permanently altered, not some kind of acceptable loss in service of some greater good.
Palantir has likely murdered hundreds of innocent people using extrajudicial assassinations (“self-licking ISIS-cream cone”) without any transparency or accountability. Tesla similarly has hundreds of deaths in its careless wake, yet they remain insufficiently examined for predictable causes.
While Tesla fans debate software versions and media bias, the human impact disappears from the conversation. The Washington state crash left a motorcyclist dead and a family grieving. The Tesla driver will likely carry psychological trauma from the incident. These consequences don’t get undone by a software update.
This is just another hit piece trying to slow down progress. How many lives will be saved when FSD is perfected? The media never mentions that.
This utilitarian argument – that current deaths are acceptable in service of future safety – would be remarkable coming from any other industry. Imagine if pharmaceutical companies dismissed drug deaths as necessary sacrifices for medical progress, or if airlines shrugged off crashes as the price of innovation.
Do I need to go on? As an applied-disinformation expert and historian, allow me to explain how Tesla social media techniques reveal the tip of a very large and dangerous iceberg in information warfare. The typical response above reveals several problematic patterns of thinking:
False dilemma: framing this as either “accept current deaths” or “reject all progress,” when there are clearly middle paths involving more rigorous testing, better safety protocols, or different deployment strategies.
Whataboutism: deflecting from the specific incident by pointing to hypothetical future benefits or comparing to other risks, rather than addressing the actual harm that occurred.
Sunk cost reasoning: the implication that because development has progressed this far, we must continue regardless of current costs.
Abstraction bias: treating real human deaths as statistics or necessary data points rather than acknowledging the irreversible loss of actual lives and the trauma inflicted on families and communities.
This is not random, as much of what I see being spread by Palantir and Tesla propaganda reads to me like South African apartheid military intelligence tactics of the 1970s. Palantir appears to be set up as South Africa’s Bureau of State Security (BOSS) updated for the digital age, with Silicon Valley’s “disruption” rhetoric providing the same moral cover that anti-communism provided for apartheid-era operations.
BOSS operated from 1968 to 1980 for the systematic torture and murder of Blacks around the world
Whereas BOSS required extensive human networks to track and eliminate targets, Palantir has evolved to process vast datasets to identify patterns and individuals for elimination with far fewer human operators. Tesla fits into this as a domestic terror tactic to dehumanize and even kill people in transit, which should be familiar to those who study the rise and abrupt fall of AWB terror campaigns in South Africa.
A single police officer in 1994 killed Nazi domestic terrorists (AWB) who had been driving around shooting at Black people. It was headline news at the time, because the AWB had promised civil war to forcibly remove all Blacks from government, and instead its men ended up dead on the side of a road.
The same mentality that drove around South Africa shooting at Black people, now refined into “self-driving” cars that statistically target vulnerable road users. The technology evolves but the underlying worldview remains: certain lives are expendable.
Real-Time Regulatory Capture
The Tesla fan community has effectively created a form of regulatory capture through social media pressure. Any investigation into Tesla crashes gets immediately branded as biased, coordinated, or motivated by anti-innovation sentiment.
The same people investigating Tesla crashes drive Ford and GM cars. Conflict of interest much? These investigations are jokes.
NHTSA investigators probably have shorts positions in Tesla. Someone should check their portfolios.
This creates an environment where egregious, unsafe lies are allowed to stand for years, yet legitimate safety concerns can’t be raised for a minute without coordinated hordes hurling accusations of conspiracy or bias.
The CEO of Tesla has boasted very publicly, every year since at least 2016, that driverless is solved and his products will eliminate all crashes within the next few years.
The result is that Tesla’s experimental software gets treated as beyond criticism, even when it kills people. The disinformation apparatus serves the same function as apartheid-era propaganda – creating a false parallel reality where obvious and deadly violence either didn’t happen, was justified, or was someone else’s fault.
Broken Pattern Continues
BOSS set the model for Palantir, hiding behind ironic national security claims (generating the very terrorists it sponges up massive federal funding to find). Tesla similarly inverts logic, spinning bogus intellectual property claims about a need to bury its safety data, while people are killed by its lack of transparency.
…Tesla’s motion argues that crash data collected through its advanced driver-assistance systems (ADAS) is confidential business information. The automaker contends that the release of this data, which includes detailed logs on vehicle performance before and during crashes, could reveal patterns in its Autopilot and Full Self-Driving (FSD) systems.
The patterns Tesla doesn’t want revealed are the ones related to FSD causing deaths. There’s unfortunately a well-documented history of companies in America trying to withhold data under claims of trade secrets or proprietary information, particularly when it could expose dangerous patterns affecting public safety. The most notorious precedent, of course, is the tobacco industry hiding data for 50 years while its products allegedly caused over 16 million deaths in America alone.
All too often in the choice between the physical health of consumers and the financial well-being of business, concealment is chosen over disclosure, sales over safety, and money over morality. (US Judge Sarokin, Haines v Liggett Group, 1992)
Palantir and Tesla thus follow a playbook to avoid any accountability for unjust deaths, exploiting a simple loophole by weaponizing technological enthusiasm and huge corporate legal teams. Each court case drives their extremist political violence apparatus further out of sight, yet embeds it deeper into state security control. Tesla, despite failing at basic safety for over a decade, still pushes FSD onto customers who do not understand they’re participating in a public beta test of authoritarian white supremacist warfare where the stakes are measured in human lives rather than software bugs. In 2025 Tesla FSD was reportedly still unable to recognize a school bus and small children directly in front of it on a clear day.
Lawn darts were banned in 1988 after a single child died. They didn’t get software updates. They didn’t have online defenders explaining why each injury was actually the fault of an outdated dart version. They just got pulled from the market because toys aren’t supposed to kill people.
Tesla’s FSD is suspected of killing dozens if not hundreds of people, based on the two proven cases. The response, however, has been deflection, software updates, and social media disinformation. Elon Musk bought Twitter and rebranded it with a swastika, overtly messaging that normal ethical standards don’t apply as long as he can digitally mediate the horrible societal harms he’s promoting and even causing himself.
…have you noticed a certain pattern used in Tesla marketing?
Charge Plugs: 88
Model Cost: 88
Average Speed: 88
Engine Power: 88
Voice commands: 88
Taxi launch date: 8/8
A South African Afrikaner Weerstandsbeweging (AWB) member in 2010 (left) and a South African-born member of MAGA in the U.S. on 20 January 2025 (right). Source: The Guardian. Photograph: AFP via Getty Images, Reuters
The pattern will continue until the regulatory response changes, or until Tesla’s online disinformation army can no longer hide the body count in their unmarked digital mass grave like it’s 1921 in Tulsa again.
Tulsa officials immediately moved to completely erase a racist massacre of Blacks from the records, going so far as to build a new white supremacist meeting center (“Klavern”) directly on top of the Black businesses and homes that had been napalmed.
Why a Think Tank Report on Deception Misses the Point—And Makes States More Vulnerable
I was excited to watch the presentation yesterday of a recent New America report on “The Future of Deception in War“. It seemed throughout the talk, however, that a certain perspective (the “operator”, the quiet professional) was missing. I even asked what the presenters thought about Soviet use of disinformation that was so excessive it hurt the Soviets.
They didn’t answer the question, but I asked because cultural corruption is a serious problem, like accounting for radiation when dealing with nuclear weapons. When deception is unregulated and institutionalized, it dangerously corrodes internal culture. Soviet officers learned that career advancement came through convincing lies rather than operational competence. This created military leadership that was excellent at bureaucratic maneuvering but terrible at actual warfare, as evidenced in Afghanistan and later Chechnya. Worse, their over-compartmentalization meant different parts of their centralized government couldn’t coordinate—creating the opposite of effective deception.
This isn’t the first time I’ve seen academic approaches miss the operational realities of information warfare. As I wrote in 2019 about the CIA’s origins, effective information operations have always required understanding that “America cannot afford to resume its prewar indifference” to the dangerous handling of deception.
What’s invisible, cumulative, and potentially catastrophic if not carefully managed by experts with hands on experience? Deception.
Then I read the report and, with much disappointment, found that it exemplifies everything wrong with how military institutions approach deception. Like French generals building elaborate fortifications while German tanks rolled through the Ardennes, the analysis comes across as a theoretical framework for warfare that no longer exists.
As much as Mr. Singer loves to pull historical references, even citing the Bible and Mossad in the same breath, he seems to have completely missed Toffler, let alone Heraclitus: the river he wants to paint us a picture of was already gone the moment he took out his brush.
The report’s fundamental flaw isn’t in its details—it’s in treating deception as a problem that can be solved through systematic analysis rather than understood through practice. This is dangerous because it creates the illusion of preparation while actually making us more vulnerable.
Academia is a Hallucination
The authors approach deception like engineers design bridges: detailed planning, formal integration processes, measurable outcomes, systematic rollout procedures. They propose “dedicated doctrine,” “standardized approaches,” and “strategic deception staffs.” This is waterfall methodology applied to a domain that requires agile thinking.
Real deception practitioners—poker players, con artists, intelligence officers who’ve operated in denied areas—know something the report authors don’t: deception dies the moment you systematize it.
Every successful military deception in history shared common characteristics the report ignores:
They were improvisational responses to immediate opportunities
They exploited enemy assumptions rather than following friendly doctrine
They succeeded because they violated expectations, including their own side’s expectations
They were abandoned the moment they stopped working
Consider four deceptions separated by nuance yet united by genius: the Haversack Ruse at Beersheba (1917), Mission 101 in Ethiopia (1940), Operation Bertram (1942) and Operation Mincemeat (1943). Each succeeded through what I warned over a decade ago is a Big Data vulnerability, “seed set theory”: an unshakeable core of truth, dropped by a relative influencer, with improvised lies spreading around it.
The haversack was covered in real (horse) blood with convincing photos, military maps and orders. Mission 101 took a proven WWI artillery fuse design and used 20,000 irregular African troops with a bottle of the finest whiskey to rout 300,000 heavily armed and armored fascists. Mincemeat was an actual corpse with meticulously authentic personal effects.
None of these could have emerged from systematic planning processes. Each required someone to intuitively grasp what truth would be most convincing to a particular enemy in a unique moment, then place the right seed with human creativity into the right soil, that no doctrine could capture.
It’s no coincidence that Orde Wingate, founder of Commando doctrine, considered Lawrence of Arabia a flamboyant, self-important bureaucrat. One of them delivered an operations guideline that we use to this day around the world and in every conflict; the other created Saudi Arabia.
The Emperor of Abyssinia (modern day Ethiopia) with Brigadier Daniel Arthur Sandford on his left and Colonel Wingate on his right, in Dambacha Fort after it had been captured, 15 April 1941
The Wealthy Bureaucrat Trap
The report’s emphasis on “integrating deception planning into normal tactical planning processes” reveals a profound misunderstanding. You cannot bureaucratize deception any more than you can bureaucratize jazz improvisation. The qualities that make effective military officers—following doctrine, systematic thinking, institutional loyalty—are precisely the opposite of the qualities that make effective deceivers.
Consider the report’s proposed “principles for military deception”:
“Ensure approaches are credible, verifiable, executable, and measurable”
“Make security a priority” with “strictest need-to-know criteria”
“Integrate planning and control”
This is exactly how NOT to do deception. Real deception is:
Incredible until it suddenly makes perfect sense
Unverifiable by design
Unmeasurable in traditional metrics
Shared widely enough to seem authentic
Chaotic and loosely coordinated
Tech Silver Bullets are for Mythological Enemies
The report’s fascination with AI-powered deception systems reveals another blind spot. Complex technological solutions create single points of catastrophic failure. When your sophisticated deepfake system gets compromised, your entire deception capability dies. When your simple human lies get exposed, you adapt and try different simple human lies.
Historical successful deceptions—from D-Day’s Operation Fortitude to Midway’s intelligence breakthrough—succeeded through human insight, not technological sophistication. They worked because someone understood their enemy’s psychology well enough to feed them convincing lies.
The Meta-Deception Problem
Perhaps also worth noting is that the authors seem unaware of, or at least never mention, the risk that they might be targets of deception themselves. They cite Ukrainian and Russian examples without any caveat that some of those “successful” deceptions might actually be deceptions aimed at Western analysts like them.
Publishing a detailed, sharp analysis of deception techniques demonstrates the authors don’t fully appreciate their messy and fuzzy subject. Real practitioners know that explaining your methods kills them. This report essentially advocates for the kind of capabilities that its own existence undermines. Think about that for a minute.
Alternative Agility
What would effective military deception actually look like? Take lessons from domains that really understand deception:
Stay Always Hot: Maintain multiple small deception operations continuously rather than launching elaborate schemes. Like DevOps systems, deception should be running constantly, not activated for special occasions.
Fail Fast: Better to have small lies exposed quickly than catastrophic ones discovered later. Build feedback loops that tell you immediately when deceptions stop working.
Test in Production: You cannot really test deception except against actual adversaries. Wargames and simulations create false confidence.
Embrace Uncertainty: The goal isn’t perfect deception—it’s maintaining operational effectiveness while operating in environments where truth and falsehood become indistinguishable.
Microservices Over Monoliths: Distributed, loosely-coupled deception efforts are more resilient than grand unified schemes that fail catastrophically.
Tea Leaves from Ukraine
The report celebrates Ukraine’s “rapid adaptation cycles” in deception, but misses the deeper lesson. Ukrainian success comes not from sophisticated planning but from cultural comfort with improvisation and institutional tolerance for failure.
Some of the best jazz and rock clubs of the Cold War were in the musty basements of Prague, fundamentally undermining faith in Soviet controls. West Berlin’s military occupation during the Cold War removed all curfews just to foster the kind of “bebop” freedom of thought believed to destroy Soviet narratives.
Ukrainian tank commanders don’t follow deception doctrine—they lie constantly, creatively, and without asking permission. When lies stop working, they try different lies. This isn’t systematizable because it depends on human judgment operating faster than institutional processes.
Important Strategic Warning
China and Russia aren’t beating us at deception because they have better doctrine or technology. They’re succeeding because their institutions are culturally comfortable with dishonesty and operationally comfortable with uncertainty.
Western military institutions trying to compete through systematic approaches to deception are like French generals in 1940—building elaborate defenses against the last war while their enemies drive around them.
Country Boy Cow Path Techniques
Instead of trying to bureaucratize deception, military institutions should focus on what actually matters:
Cultural Adaptation: Create institutional tolerance for failure, improvisation, and calculated dishonesty. This requires changing personnel systems that punish risk-taking.
Human Networks: Invest in education of people to curiously understand foreign cultures well enough to craft believable lies, not technologies that automate deception.
Rapid Feedback Systems: Build capabilities that tell you immediately when your deceptions are working or failing, not elaborate planning systems.
Operational Security Through Simplicity: Use simple, hard-to-detect deceptions rather than sophisticated, fragile technological solutions.
Embrace the Unknown: Accept that effective deception cannot be measured, systematized, or fully controlled. This is a feature, not a bug.
A Newer America
The New America report represents the militarization of management consulting—sophisticated-sounding solutions that miss fundamental realities. By treating deception as an engineering problem rather than a human art, it creates dangerous overconfidence while actually making us more vulnerable.
Real military advantage comes not from better deception doctrine but from institutional agility that lets you operate effectively when everyone is lying to everyone else—including themselves.
The authors end with: “We should not deceive ourselves into thinking that change is not needed.” They’re right about change being needed. They’re wrong about what kind of change.
Instead of building a Maginot Line of deception doctrine (the report’s recommendations are dangerously counterproductive), we need the institutional equivalent of Orde Wingate’s Chindits: fast, flexible, and comfortable with uncertainty. Because in a world where everyone can deceive, the advantage goes to whoever can adapt fastest when their lies inevitably fail.
Wingate’s fleet of Waco “Hadrian” gliders in the 1944 Operation Thursday was deployed to do the “impossible”.
Disinformation was originally a World War I term, having been first applied to the Disinformation Service of the German General Staff. The Russian Bolshevik Cheka adopted the term (as dezinformatsiya) and the technique in the early 1920’s, and it has been in use by the Soviet state security (OGPU, NKVD, KGB, etc.) and military intelligence (GRU) services ever since. Current Soviet Russian intelligence parlance uses this term in a sense so broad that U.S. Government translators sometimes translate it as “deception,” although the Russians are careful to distinguish it from physical camouflage (maskirovka). The term, as borrowed from the Russian, is now also common in U.S. intelligence parlance, but is used in a less comprehensive sense.
false information deliberately and often covertly spread (as by the planting of rumors) in order to influence public opinion or obscure the truth. […]
Etymology: dis- + information, after Russian dezinformácija
Note: Russian dezinformácija and the adjective derivative dezinformaciónnyj can be found in Soviet military science journals published during the 1930’s. The Malaja Sovetskaja Ènciklopedija (1930-38) defines the word as “information known to be false that is surreptitiously passed to an enemy” (“dezinformacija, t.e., zavedomo lživaja informacija podkidyvaemaja protivniku”; vol. 3, p. 585). The verb dezinformírovat’ “to knowingly misinform” is attested earlier, no later than 1925, and may have been the basis for the noun. In more recent decades claims have been made about the origin of the word that are dubious and cannot be substantiated. […] First Known Use: 1939, in the meaning defined above
And, as an example of why that matters, Cyber Defense Review (quoting Merriam-Webster) then says this:
The word disinformation did not appear in English dictionaries until the 1980s. Its origins, however, can be traced back as early as the 1920s when Russia began using the word in connection with a special disinformation office whose purpose was to disseminate “false information with the intention to deceive public opinion.”
“The word disinformation did not appear in English dictionaries until the 1980s…“?
Hold that thought. With this dubious claim in mind, and given we know WWI German methods were copied by the Soviets, the most interesting version of all comes from an LSE blog post by Manchester University scholars.
Contrary to claims that the term disinformation entered English via Russian, conceived deceptively to sound like a word derived from a West European language to camouflage its Soviet origin, it had been in use in English from the turn of the twentieth century. For example, US press outlets accused their rivals of disinformation back in the 1880s and a British MP accused local authorities of using disinformation to justify their improper implementation of a parliamentary bill in 1901.
While not inventing the term ‘disinformation’, the Soviet authorities did pioneer its rather unusual usage. In 1923, the Bolshevik Party Politburo approved the establishment of the Disinformation Bureau (Dezinformburo) within the Soviet security service. The initiative, including its title, was suggested by an officer with close ties to German-speaking European Marxist revolutionaries (and this connection probably explains the Russian transliteration of the term in Russian from the German, rather than the English, spelling.)
Russians copied the Germans who copied the… British and Americans.
Or not? Could the origins of disinformation be disinformation itself? Let’s pull this thread a bit more and see if we can find the ugly sweater it came from.
It’s plausible there was a knowledge transfer from German WWI intelligence practices to early Soviet operations, even if there doesn’t seem to have been any formal “Disinformation Service” within the German General Staff structure (as claimed by Whaley).
German military intelligence during WWI ran under Abteilung IIIb (formerly Sektion IIIb, established 1889, achieving departmental status in June 1915). Colonel Walter Nicolai led it from 1913 to 1918, which is crucial to tracing origins. His comprehensive intelligence service conducted foreign espionage, counterintelligence, media censorship, and propaganda coordination, which included disinformation. The German War Press Office (Kriegspresseamt) was established in October 1915 to coordinate civilian agencies like the Military Section of the Foreign Office (established July 1916), which clearly focused on disinformation.
In the case of Germany, the press maintained a triumphalist approach, suppressing stories about the military disasters of the summer of 1918 and running uninterrupted editorials that victory was near. Throughout the war, troops who had just suffered massive losses of men and territory were dismayed to read optimistic accounts of battles unrecognizable to those who had participated in them. As the saying went, in portraying wars in the press, truth was the first casualty.
As much as that sounds like a coordinated effort run by the state, multiple German agencies worked at cross-purposes, lacking effective centralization until late in the war. Distributed and legacy structural problems limited the effectiveness of German information operations compared to Allied efforts (especially President Wilson’s Office of Propaganda, driven by his America First platform rooted in the KKK’s well-honed methods of racist disinformation).
Fritz Schönpflug: “D’Annunzio über Wien” (“D’Annunzio over Vienna”), caricature from Die Muskete, issue of 29 August 1918. Copyright: Wienbibliothek im Rathaus
The paramilitary wing of “America First” in 1921 used bi-planes to firebomb Black neighborhoods and businesses in Tulsa, OK. They also dropped racist propaganda leaflets across America. Note the swastika was their symbol as well as the X.
Notably, Nicolai’s own wartime diaries and correspondence, recently published after being strategically hidden in Moscow’s archives since 1945, do not seem to contain the exact word Desinformation.
Nicolai’s personal records were hidden in 1945 by Moscow’s “Special Archive”.
His post-war memoir “Nachrichtendienst, Presse und Volksstimmung im Weltkrieg” (1920) also doesn’t seem to use the word when describing the propaganda run by “Aufklärung” (intelligence) and “Nachrichtendienst” (intelligence services).
I’ve written before about the “dumb as rocks” German agent networks that infiltrated America, especially San Francisco (the Preparedness Day bombing, heavily laced with federal disinformation). The evidence is unmistakable that Wilson’s administration, which restarted the KKK and was sympathetic to Germany in WWI, was fundamentally on the side of German espionage as a means of ruthlessly suppressing domestic American dissent. This undermines any and all claims that Wilson’s wartime propaganda and surveillance were security measures; he established them primarily as tools of racist political control that set dangerous precedents for future administrations. Calculated use of fabricated external threats to justify real domestic repression has since become a mainstay of American government communications during conflicts.
The targeting was systematic and coordinated by groups operating clandestinely as domestic paramilitary terrorists under President Wilson’s hand. Federal prosecutors routinely argued that opposing the war equated to aiding Germany, without requiring evidence of actual German connections, while Wilson himself was aligned with German objectives. The administration set up “hyphenated Americans” rhetoric to justify surveillance of non-whites and political leaders while actual German agents continued unimpeded operations through established diplomatic channels.
Wilson used explicitly nativist rhetoric while simultaneously enabling foreign spy operations, linked to domestic paramilitary terror groups, that crushed domestic opposition. His “America First” campaigns make the targeting of “hyphenated Americans” (e.g. calling non-whites Asian American or Black American to emphasize that being born non-white prevents America being First) even more sinister in context.
Woodrow Wilson adopted the 1880s nativist slogan “America First” and soon after began promoting paramilitary domestic terrorism in costumes based on the film “Birth of a Nation”.
Wilson’s 1915 selective enforcement (like Trump and ICE today ignoring actual foreign spies while crushing American political opposition through paramilitary terror campaigns) provides crucial context for understanding how propaganda techniques were really developed and refined.
We can easily see how Wilson’s 1917 official government-run propaganda apparatus could directly influence the 1923 Soviet Dezinformburo through the German-speaking Marxist networks (the same ones Wilson used to terrorize America). This makes knowledge transfer much more plausible than Whaley’s phantom “Disinformation Service”, which lacks any evidence.
The entire WWI propaganda period is best understood not as developing intelligence for national defense but rather pioneering techniques for domestic political control.
1914-1917: German operations under Nicolai
1917: Wilson’s CPI established
1918-1923: Post-war period with German-Marxist networks
1923: Soviet Dezinformburo creation
The Mata Hari case is perhaps the best documented example of Nicolai’s methods, where agent H-21 was deliberately exposed to French authorities through radio messages transmitted in codes the Germans knew had been broken, a sophisticated termination operation designed to protect German intelligence methods. For what it’s worth, this is the kind of historical knowledge that gives crucial context for the 1980s CIA disinformation operation that blew up Soviet gas pipelines.
“In order to disrupt the Soviet gas supply, its hard currency earnings from the West, and the internal Russian economy, the pipeline software that was to run the pumps, turbines, and valves was programmed to go haywire, after a decent interval, to reset pump speeds and valve settings to produce pressures far beyond those acceptable to pipeline joints and welds,” [Thomas C. Reed, a former Air Force secretary who was serving in the National Security Council at the time] writes. “The result was the most monumental non-nuclear explosion and fire ever seen from space,” he recalls, adding that U.S. satellites picked up the explosion. Reed said in an interview that the blast occurred in the summer of 1982. […] In January 1982, Weiss said he proposed to Casey a program to slip the Soviets technology that would work for a while, then fail. Reed said the CIA “would add ‘extra ingredients’ to the software and hardware on the KGB’s shopping list.”
The sophisticated deception operation around agent H-21 was designed to protect real capabilities while feeding the enemy (at home or elsewhere) information that serves strategic political purposes. From the paper and radio deception of 1917 to the software sabotage of 1982, the technology changed, and yet American operatives maintained the same fundamental principles.
Perhaps now we see the real reason English dictionaries in the 1980s would publish a claim that the Soviets invented “disinformation”. This was likely yet another CIA disinformation operation.
During the height of the Cold War, when American intelligence agencies were perfecting the art of feeding false narratives into academic and media channels, what better way to obscure the true American origins of modern propaganda techniques than to credit them to the enemy?
The irony is sickly sweet: the CIA, having inherited and refined Wilson’s domestic control methods and Nicolai’s sophisticated deception techniques, then deployed those same methods to rewrite the historical record. By the 1980s, American intelligence had become so adept at manipulating information flows that they could plant false etymologies in authoritative reference works, ensuring that future researchers would trace “disinformation” back to Soviet Russia rather than to America’s own pioneering propaganda apparatus.
The fact that a false origin story has persisted unchallenged for decades demonstrates just how effective these techniques are—the ultimate disinformation campaign was convincing the world that America learned disinformation from the Soviets, when in reality the Soviets had learned it from techniques pioneered and perfected by German spies deployed to suppress political dissent under “America First”.
We’re not just talking about historical artifacts when we do crucial history; we’re looking at the foundations of techniques being actively deployed today. The progression from Wilson’s “America First” domestic terror campaigns through Cold War disinformation to current “America First” domestic terror campaigns shows the through-line that explains the true meaning of present-day disinformation.