Category Archives: History

Can You Spot AI? The Redneck Problem in Synthetic Face Detection

When an outsider gets off a plane in Nepal for the first time, all the faces in the airport crowd blur together. A month later, they see Tibetans, Indians, Chinese, Nepalese. Mountain faces, valley faces. Nobody teaches the outsider what to look for. They just experience exposure and the human perceptual system builds the categories.

When a mountain village Maoist teenager points an AK-47 at that outsider, the out-of-place hostile appearance becomes obvious, yet is identified far too late. The outsider arrived with a collapsed face-space for South Asians. A month later, the axes develop to distinguish Sherpa from Tamang from Newar, friendly from hostile. Perceptual learning creates differentiation: statistical exposure builds out reliable dimensions.

Boy with automation technology wows the ladies in Butwal, Nepal. Look at his face, and what do you see? Source: AP

As someone who grew up in the most rural prairie in Kansas, I can tell you this is the redneck problem: someone whose environment didn’t provide the data to build certain distinctions is vulnerable.

We knew as kids we weren’t supposed to shoot signs. Wasn’t that the whole point of shooting the signs? Our red neck was a physical marker of the environmental conditions that predicted an isolation leading to perceptual poverty.

The person who “can’t tell them apart” isn’t lazy or hostile as much as they are a product of their (often fear-based) isolation. They’re accurately reporting a self-imposed, limited perceptual reality. Their pattern recognition system, stuck out in the fields alone, never benefited from human training data. They could identify lug nuts and clay soil yet not a single tribe of Celts.

The same problem is crippling Western synthetic face detection research related to deepfakes. And it’s a problem I’ve seen before.

Layer Problem

My mother, a linguistic anthropologist, and I published research on Nigerian 419 scams starting nearly twenty years ago. We argued that intelligence brings vulnerability and published papers to that effect. We even presented it at the RSA Conference in 2010 under the title “There’s No Patch for Social Engineering.”

One of our key findings: intelligence is not a reliable defense against social engineering.

The victims of advance fee fraud weren’t stupid. They were, disproportionately, well-educated professionals such as university professors, doctors, lawyers, financial planners. People who trusted their reasoning.

I remember one day, while training law enforcement investigators in a windowless square room of white men bathed in drab colors and cold fluorescent lighting, being told by them that this concept of wider exposure would be indispensable to their fraud cases.

A 2012 study in the Journal of Personality and Social Psychology then corroborated our work, finding the same pattern more broadly: “smarter people are more vulnerable to these thinking errors.”

These researchers, without reference to our prior work, found that higher SAT scores correlated with greater susceptibility to certain cognitive biases, partly because intelligent people trust their own reasoning on the strength of past success and don’t notice when it’s being disrupted, being bypassed.

The attack works because it targets a layer below conscious analysis. You can’t defend against bias attacks with intelligence, because intelligence operates at the wrong layer. The defense has to match the attack surface.

I’m watching the synthetic face detection literature make the same mistake again.

Puzzle This

A paper published last month in Royal Society Open Science tested whether people could learn to spot AI-generated faces.

The results were striking but confusing.

Without training, typical observers performed below chance: they actually rated synthetic faces as more real than the real ones.

This isn’t incompetence. It’s a known phenomenon called AI hyperrealism: GAN-generated faces are statistically too average, too centered in face-space, and human perception reads that as trustworthy and familiar.
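The centrality claim can be made concrete. Here is a toy sketch, assuming a generator that simply blends pairs of training faces; the 16-dimensional feature space and the blending model are illustrative assumptions, not anything measured in the study:

```python
import math
import random

random.seed(0)

DIM = 16  # hypothetical face-space dimensionality

def rand_face():
    # A "real" face: a point scattered around the population mean (origin).
    return [random.gauss(0, 1) for _ in range(DIM)]

def blend(a, b, w=0.5):
    # A mode-seeking generator tends to emit blends of training faces,
    # which pulls its samples toward the center of face-space.
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

def dist_to_mean(v):
    return math.sqrt(sum(x * x for x in v))

reals = [rand_face() for _ in range(500)]
fakes = [blend(random.choice(reals), random.choice(reals)) for _ in range(500)]

avg_real = sum(map(dist_to_mean, reals)) / len(reals)
avg_fake = sum(map(dist_to_mean, fakes)) / len(fakes)
print(f"mean distance to center: real {avg_real:.2f}, synthetic {avg_fake:.2f}")
```

Blending halves the per-dimension variance, so the synthetic samples land measurably closer to the center of the space: the same “too average” signature the hyperrealism effect describes.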

Super-recognizers, the top percentile on face recognition tests, performed at chance without training. Not good, but at least not fooled by the hyperrealism effect.

Then both groups got five minutes of training on rendering artifacts: misaligned teeth, weird hairlines, asymmetric ears. The kind of glitches GANs sometimes leave behind.

Training helped, unlike in the study I examined back in 2022. Trained super-recognizers hit 64% accuracy. So here’s the puzzle: the training effect was identical in both groups. Super-recognizers didn’t benefit more or less than typical observers.

The researchers’ conclusion:

SRs are using cues unrelated to rendering artefacts to detect and discriminate synthetic faces.

Super-recognizers are detecting something the researchers could not identify and therefore can’t train. The artifact training adds a second detection channel on top of whatever super-recognizers are already doing. But what they’re already doing is presented as a black box.

Wrong Layer, Again

The researchers are trying to solve a perceptual problem with instruction. “Look for the misaligned teeth” is asking the cognitive layer to do work that needs to happen at the perceptual layer.

It’s why eugenics is fraud: selecting at the genetic layer for traits that develop at the exposure layer.

It’s also the same structural error as trying to defend against social engineering with awareness training. Watch out for urgency tactics. Be suspicious of unsolicited requests. Except, of course, you still need to allow urgent unsolicited communication. Helpful, yet not helpful enough.

The instruction targets conscious reasoning. The attack targets intuition and bias. The defense operates at the wrong layer, so it fails easily, especially where attackers hit hidden bias such as racism.

The banker who never went to Africa is immensely more vulnerable to fraud with an origination story from Africa. Intelligence without diverse exposure opens the vulnerability, and also explains the xenophobic defense mechanism.

Radiologists don’t learn to read X-rays by memorizing a checklist of tumor features. They look at thousands of X-rays with feedback. The pattern recognition becomes implicit. Ask an expert radiologist what they’re seeing and they’ll often say “it just looks wrong” before they can articulate the specific features.

A surgeon with training will look at hundreds of image slices of the brain on a light board in one room and know where to cut in another room down the hallway.

Japanese speakers learning English don’t acquire the /r/-/l/ distinction by being told where to put their tongue. They acquire it through exposure. Hundreds of hours of hearing the sounds in context, and their perceptual system eventually carves a boundary where none existed before.

Chicken “sexers” are the canonical example in the perceptual learning literature. They can’t tell you how they distinguish the sex of day-old chicks. They just do it accurately, after enough supervised practice.

This is the pattern everywhere that humans develop perceptual expertise: data first, implicit learning, explicit understanding (maybe) later.

Five minutes of “look for the weird teeth” gets you artifact-spotting as a conscious strategy. It doesn’t build the underlying statistical model that makes synthetic faces feel wrong before you can say why. And just like with social engineering, the people who think they’re protected because they learned what to look for may be the most confidently wrong.

But the dependence on artifact-spotting also tells you something about the people who believe in it. They seek refuge in easy, routine, minimal-judgement fixes for a world that requires identification, storage, evaluation and analysis. The former without the latter is just snake oil, like placebos during a pandemic.

Compounding Vulnerability

The other-race effect is well-documented: people are worse at distinguishing faces from racial groups they haven’t had exposure to. The paper even found it in its own data: participants were better at detecting synthetic faces when those faces were non-white, likely because the GANs were trained primarily on white faces and rendered other ethnicities less convincingly.

“My friend is not a gorilla”. Google trained only on Asian and white faces to prevent confusion with animals, with disastrous results. Don’t you want to know who discovered the bias in their engineering and when?

But here’s where it gets dark, like the racism of a 1930s Kodak photograph, let alone the radioactive corn Kodak secretly detected in 1946.

If you have less exposure to faces from other groups, you’re worse at distinguishing individuals within those groups. And if you’re worse at distinguishing real faces, you’re certainly worse at detecting synthetic ones.

Deepfakes may be a racism canary.

The populations most susceptible to disinformation using AI-generated faces are precisely the populations with the least perceptual defense. Isolated communities. Homogeneous environments. Places where “they all look alike” is an accurate description of perceptual reality.

An adversary running a disinformation attack campaign knows this. Target the isolationists, because of their isolation. “America First”, historically a nativist racist platform of hate, signals perception poverty.

If you’re generating fake faces to manipulate a target population, you generate faces from groups the target population has the least exposure to. The attack surface is largest where perceptual poverty runs deepest.

The redneck who “can’t tell them apart” isn’t just failing a social sensitivity test. They’re a soft target. Their impoverished face-space makes them maximally vulnerable to synthetic faces from unfamiliar groups. They can’t detect the fakes because they never learned to see the reals.

This compounds with the social engineering vulnerability. The same isolated populations are targets for both perceptual attacks (fake faces they can’t distinguish) and cognitive bias attacks (scams that bypass reasoning). The defenses being offered like artifact instruction and awareness training both fail because they target the wrong layer.

Prejudice is Perceptual Poverty

The foundation of certain kinds of hate is ignorance. Not ignorance as moral failing – ignorance as literal absence of data.

The perceptual system builds categories from exposure. Dense exposure creates fine-grained distinctions. Sparse exposure leaves regions of perceptual space undifferentiated. The person who grew up in a homogeneous environment doesn’t choose to see other groups as undifferentiated. Their visual system never got the training data to do otherwise.

This reframes prejudice, or at least a big component of it. Not attitude to be argued with. Not moral failure to be condemned. Perceptual poverty to be remediated.

And here’s the hope: the human system is plastic.

A month in Nepal fixes the Nepal problem. A year in a diverse environment builds cross-racial perceptual richness. The same neural architecture that fails to distinguish unfamiliar faces can learn to distinguish them. It just needs data.

Diversity training programs typically target attitudes. “Stereotyping is wrong and here’s why.” But you can’t lecture someone into seeing distinctions their visual system isn’t configured to make, or maybe even is damaged by years of America First rhetoric. The intervention is at the wrong layer.

What if you could train the perceptual layer directly?

The Experiment Nobody Has Run

The synthetic face detection literature keeps asking “what should we tell people to look for?” The question they should be asking is “how much exposure produces implicit detection?”

Here are the study designs, for those looking to leap ahead:

For AI detection:

  • Recruit typical observers (not super-recognizers)
  • Expose them to 500+ synthetic and real faces per day, randomly intermixed
  • Provide only real/fake feedback after each trial, no instruction on features
  • Continue for 4-6 weeks
  • Test detection accuracy at baseline, weekly during training, and post-training
  • Compare to control group receiving standard artifact instruction
  • Test whether training transfers to faces from a new GAN architecture

For deeper questions of safety:

  • Use stimuli that include faces from multiple racial/ethnic groups
  • Test whether exposure-based training improves detection equally across groups
  • Test whether it also improves cross-racial face discrimination (telling individuals apart) as a side effect
  • Measure implicit bias before and after
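A toy simulation of the feedback-only protocol above can show why it is plausible. This sketch assumes a one-dimensional “statistical regularity” cue and a simple logistic learner; both are hypothetical stand-ins for the proposed stimuli and for whatever a human observer actually does:

```python
import math
import random

random.seed(1)

def make_trial():
    # Hypothetical 1-D cue: synthetic faces sit closer to the face-space
    # average (x near 0), real faces vary more widely (x near 1).
    is_fake = random.random() < 0.5
    x = random.gauss(0.0 if is_fake else 1.0, 1.0)
    return x, is_fake

def final_accuracy(n_trials, learn, lr=0.05):
    # Implicit criterion the observer never articulates; starts uninformative.
    w, b = 0.0, 0.0
    correct = []
    for t in range(n_trials):
        x, is_fake = make_trial()
        p_fake = 1.0 / (1.0 + math.exp(-(w * x + b)))
        guess = p_fake > 0.5
        if t >= n_trials - 200:  # score only the final block of trials
            correct.append(guess == is_fake)
        if learn:
            # Feedback-only update: a real/fake label after each trial,
            # no instruction about which features matter (logistic rule).
            err = (1.0 if is_fake else 0.0) - p_fake
            w += lr * err * x
            b += lr * err
    return sum(correct) / len(correct)

no_feedback = final_accuracy(3000, learn=False)
with_feedback = final_accuracy(3000, learn=True)
print(f"no feedback: {no_feedback:.2f}, with feedback: {with_feedback:.2f}")
```

The point of the sketch: the learner is never told which feature matters, yet trial-by-trial real/fake feedback alone pushes accuracy well above chance. That is the exposure-first, instruction-later outcome the prediction depends on.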

My prediction: exposure-based training will work for synthetic face detection, producing super-recognizer-like implicit expertise in typical observers. And as a side effect, it will build cross-racial perceptual richness.

The transfer test matters. If exposure-trained observers can detect synthetic faces from novel generators they’ve never seen, they’ve learned something general about real versus synthetic faces. If they can only detect faces similar to their training set, they’ve just memorized one architecture’s failure modes.

The cross-racial test matters more. If diverse exposure simultaneously improves AI detection and reduces perceptual other-race effects, you’ve found an intervention that works at the right layer.

Yoo Hoo Over Here

I’ve been watching security research make this mistake for twenty years.

Social engineering attacks bias. The defense offered: awareness training. Wrong layer.

Synthetic faces attack perception. The defense offered: artifact instruction. Wrong layer.

Prejudice operates partly at the perceptual level. The defense offered: diversity lectures about attitudes. Wrong layer.

In each case, the intervention appeals to conscious reasoning to solve a problem that operates below conscious reasoning. In each case, smart people are not protected – and may be more vulnerable because they trust their analysis.

The defense has to match the attack surface. You can’t patch social engineering with intelligence. You can’t patch perceptual poverty with instruction.

You patch it with data. Structured, extended, high-volume exposure that trains the layer actually under attack.

The redneck problem isn’t moral failure. It’s data deprivation.

The fix isn’t instruction. It’s exposure. The term redneck describes a remediable data deprivation, not a moral defect.

So, come on inside from the deadly heat of fieldwork that killed Catherine Greene’s husband, choose three scoops out of 31 ice cream flavors, and let’s have a chat about how American pineapples and bananas really got to Kansas.

Somebody should run the study.

And hopefully this time, we will see better citations.

The Nazis Wore Red: A Curious Case of Color-Correction in Contemporary Fascist Cinema

One does not typically expect to find oneself arguing with a film’s color palette for Nazis. Yet here we are. A new Italian film isn’t just making a palette mistake, however; it’s systematically reconstructing fascism as its exact opposite.

Silvio Soldini’s Le assaggiatrici (2025) is based on Rosella Postorino’s bestselling 2018 Italian novel of the same name about Hitler’s food tasters at the Wolfsschanze. In German it’s titled Die Vorkosterinnen.

The book cover features a seductive red butterfly that obscures an Aryan model, as imposed red lipstick defines her identity. The red of Nazi ideology appears to be consuming her, in a book about forced consumption or death.

It has arrived to generally favourable notices. The performances are creditable. The tension is effectively sustained. The director has stated, in interviews with Deutsche Welle and elsewhere, that he prioritises “emotional truth” over historical precision, which seems like a defensible artistic position, and one that accounts for certain liberties taken with the source material.

What it does not account for is the film’s extraordinary disinformation decision to wash the entire Nazi apparatus in petrol (teal).

Chromatic History of National Socialism

Adolf Hitler was many things. Indifferent to visual propaganda was definitely not among them.

His very particular selection of red, white, and black for the visual identity of a Nazi was not accidental. Hitler addressed the question directly in Mein Kampf, explaining that Imperial German red was deliberately chosen for psychological impact. He wanted its association with revolution, its capacity to command attention, its physiological effect on the blood and nerves. The Nuremberg rallies were intentionally seas of red. The swastika banner was designed, by Hitler’s own account, to be impossible to ignore.

This was, one must acknowledge, a propaganda achievement built on the lessons of WWI (e.g. Woodrow Wilson’s belief in spectacle as a weapon, leading to Edward Bernays’s publication of a propaganda bible). The Nazis understood from the last war, if not many before them, that militant power and rapid disruption come not merely through argument but through aesthetic experience. The red was aggressive, confident, seductive. It promised antithesis, rupture, transformation. It stirred.

Historians have documented this extensively, leaving zero doubt. The visual architecture of fascism was Albert Speer’s Cathedral of Light, Leni Riefenstahl’s geometric masses of uniformed bodies, and most of all the omnipresent crimson banners.

1939 Nazi red banners contrasted sharply and covered everything, like the MAGA hat today. Source: Hugo Jaeger/Life Pictures/Shutterstock

The threat of burgundy covering Europe was not incidental to National Socialism but constitutive of it.

The Fiction of a Teal Reich

In Soldini’s film, none of this exists.

The SS uniforms, which on set were presumably some variant of field grey, have been color-graded into a cold greenish blue. This is what Europeans might call petrol, or an American teal. The train carriages are teal. The Wolfsschanze shadows are teal. The very air of occupied Poland appears to have been filtered through Caribbean seawater.

Americans thinking of azure blue vacations of peace and tranquility will be shocked to find this movie painting SS officers in the wrong palette.

Meanwhile, the women who are the victims, unwilling food tasters conscripted into service under threat of death, are dressed almost uniformly in burgundy and brown.

Warm tones. The color family of the swastika banner is applied to the victims, as if to invoke and rehydrate the Hitler propaganda of young beautiful Aryan women in danger. Even the protagonist’s name is Rose!

The shallow symbolic intention seems transparent: teal is meant to convey cold machinery of death versus flushed cheeks of red as a warm human vulnerability. Petroleum versus blood. It is the sort of color theory one encounters in undergraduate film studies seminars, and it is executed competently enough.

The difficulty is that it ends up ironically being fascist propaganda because it is precisely backwards.

Hitler Was an Inversion Artist

Consider what the audience is being taught.

A viewer encountering this film, especially the younger viewer for whom the Second World War is ancient history, absorbs the following visual grammar: Fascism is cold. Fascism is teal and grey and clinical. Fascism looks like a hospital corridor, or a Baltic winter, or an industrial refrigeration unit.

Die Vorkosterinnen depicts Nazi uniforms and machinery only in hues of teal. The SA were literally called “Brownshirts” when they seized power and destroyed democracy alongside the black-clad SS. The shift to earth grey (erdgrau) came later, during the war.

False.

This is not what fascism looked like. It rose, in fact, as the exact opposite.

Source: “Hitler and the Germans” exhibit at the German Historical Museum, Berlin.

Fascism in Germany was always meant by Hitler to be red hot. It was his vision of Imperial red, white and black for stirring reactions and emotive attachment. It was torchlight and drums and the intoxication of abrupt mass belonging and sudden purpose. It was institutional drug and drink abuse to dispense rapid highs.

The Nazis did not present themselves as slow and precise, bureaucrats of byzantine rules. That was how they aspired to operate, but not how they recruited or actually functioned. They presented themselves as easy vitality, as rapid revolution, as blood and fire and national resurrection.

They were the cheap promise and marketing of Red Bull, Monster drinks and 5-hour energy shots, not bowls of slow-cooked hearty soup and vegetables with cream. “Fanta” was the Nazi division of Coca-Cola, marketed like a Genozid Fantasie in a bottle.

Fanta was created by Coca-Cola to profit from Nazi Germany, avoiding sanctions. It was industrial food byproducts (apple waste, milk waste), marketed as a health drink using a word short for “fantasy”, because it was all about swallowing lies.

The women, meanwhile, would not have dressed in coordinated burgundy. They were rural conscripts and Berlin refugees. They wore what they had. But even setting aside questions of costume accuracy, there is something perverse about rendering victims in the color palette of the perpetrator’s own propaganda. Notably, the women are also portrayed as the smoking, drinking and promiscuous ones, while the Nazis are falsely depicted as teetotalers.

This reversal is painful to see: the film plays the Nazis as the complete inversion of what makes Nazism so dangerous.

“Emotional Truth” and Its Discontents

Director Soldini has explained that historical precision matters less to him than achieving an emotional resonance. One sympathises with the artistic impulse to generate ticket sales. The film is definitely not a documentary, and accuracy is a burden that can produce its own distortions that don’t translate well to audience growth.

But “emotional truth” is not a free pass to rehydrate Nazism. If your emotional symbolism teaches audiences to look for the wrong visual signatures, if it trains them to associate fascism with cold clinical teal rather than seductive aggressive red, then your emotional truth is propagating a functional falsehood that is dangerous.

This disinformation risk matters far more today than it might have in 1995 or 2005. We are presently surrounded by political movements that borrow freely from the fascist playbook whilst their critics struggle to name what they are seeing. A large part of that struggle is visual.

People have been taught, through decades of erroneously toxic films like this one, that fascism is ugly, grey uniforms and clinical efficiency and cold industrial murder. It was not.

They have not been taught that it looks like rallies of red hats and the intoxication of belonging to something larger than oneself.

Every member of the Huntington Beach City Council poses for a photo wearing red “Make Huntington Beach Great Again” hats at a swearing-in ceremony on 3 Dec 2024.

They have not been taught to recognize the aesthetic of hot, rapid seduction and “day one” promises of disruption.

Hollywood Teal

One must also note that Soldini is operating within a system. The teal-and-orange color grade has become so pervasive in contemporary cinema that it functions as a kind of default reference.

He pulled the visual equivalent of scoring every emotional beat with swelling orchestral strings. Teal is what films lean on for tension, ignoring the fact that many people dream of holidays in a typical Caribbean blue scene like a Corona ad.

This creates a particular problem for historical cinema. When every thriller, every dystopia, every prestige drama reaches for the same cool teal palette to signal “this is danger,” the color loses its actual meaning.

It becomes mere convention.

And when that convention is misleadingly applied to the Third Reich, it overwrites the actual chromatic signature of the period with a contemporary aesthetic that signifies nothing more than “this film is a color-by-number for cinematic bad things.”

The Nazis were not teal.

But teal is the reduced palette of what serious films dip into, so the Nazis get rehydrated as such. And viewers start embracing Nazism again while thinking the cool, calm drab good guys are the enemy (as targeted by hot-headed attention seeking rage lords).

White nationalist Nick Fuentes has said repeatedly that racist MAGA is racist America First, and that is exactly what he wants.

We Train Eyes to See the Train

One of the most annoying aspects of the film (SPOILER ALERT) is that the director abruptly kills the Jewish woman for trying to board the train of freedom. Of course in history the Nazi trains symbolize transport to the concentration camps, where anyone boarding faced almost certain death. Yet here’s a film that shows the inversion, with trains as the freedom trail for the idealized Aryan woman working for Hitler, while the Jew is denied the ride.

The inspiration for the love story between Rosa and [SS leader] Ziegler stems from Woelk’s statement that an officer put her on a train to Berlin in 1944 to save her from the advancing Red Army, the armed forces of the Soviet Union. She later learned that all the other food tasters had been shot by Soviet soldiers.

That’s Nazi propaganda pulled forward, pure and unadulterated.

The love story in the film frames the SS leader as a kind-hearted savior: he shoots a Jewish woman in the back so she can’t be liberated by approaching Allied soldiers, yet “saves” the Aryan girl by gifting her a rare spot on a Nazi train.

The film covers the protagonist’s hands in the blood of the Jewish woman murdered by her SS lover, blood she stares at on the train, perhaps to emphasize how the Swastika was believed to be a symbol of being lucky at birth. She lived to be 91 thanks to the SS, who made sure that a Jewish woman didn’t get a spot on that train, just a bullet in the back.

And just to be clear, Judenhilfe (hiding or even befriending a Jew) was a capital crime for years, eliminating all doubt by killing anyone who doubted. An Aryan woman caught running beside the Jewish woman she was helping and defending would not have been spared when an SS officer opened fire. As Nazi logic worsened over time, and thus especially by 1945, it would be like a policeman shooting the passenger in a criminal getaway car and then offering the driver a can of gas.

There is a reason disinformation historians care about such visual culture. Political movements are recognised, and hidden, partly through their weaponization of aesthetics. The person who knows that fascism comes wrapped in red flags of instant vitality and promises of national greatness is better equipped to identify it than the person who has been taught to feel disgust for cool grey of law and order, to hate calm bureaucrats in clinical blue corridors.

Soldini’s film, whatever its other merits, trains eyes to see the exact wrong thing. The good-guy palette of reality is flipped to evil, and audiences are pushed to embrace the palette of Hitler’s violent hate.

  • Chromatic inversion (blueish Nazis, reddish victims)
  • Behavioral inversion (abstemious Nazis, hedonistic women)
  • Logical inversion (Murderous SS as loving saviors)

Soldini color-corrects and codifies fascism into something unrecognisable, antithetical. In doing so, he makes the real thing far harder to recognize correctly today when it flashes all around us, signaling as it always has.

The Nazis wore red for a reason.

Red was how they poisoned power.

It would be useful if we remembered this.

The Spanish edition’s cover designer understood something Soldini didn’t. The RED APPLE is the focal point as the danger, the temptation, the poison risk. It sits against cool grey tones. The red is what threatens. The grey is the safety and institutional backdrop.

Historiography of Operation Cowboy, April 1945

Operation Cowboy timing in WWII is damning, when you look at the calendar:

  • April 23, 1945: Flossenbürg concentration camp liberated by Patton’s Third Army. Most of the prisoners were sent out on death marches throughout Bavaria as Allied troops approached. American soldiers found only 1,500 survivors amid mass graves; 30,000 had died there and many more in the marches to prevent their liberation. Dietrich Bonhoeffer was executed April 9, 1945, just two weeks before Patton rolled up.
  • April 28, 1945: Operation Cowboy launches for Patton to rush ahead and liberate a Nazi veterinary center with 600 captured Russian horses, alongside aristocratic breeds, before the Soviets could.
  • April 29, 1945: The 42nd and 45th Infantry Divisions and the 20th Armored Division of the US Army liberate approximately 32,000 prisoners at Dachau, far fewer than the 67,000 registered there just three days before.

The same Army, the same week.

One order under Patton got a romantic mission name, a special task force authorization, all the artillery barrages it needed to clear a path, and the unprecedented decision to arm surrendered Wehrmacht soldiers. It’s been repackaged ever since as a feel-good story of liberation from Nazism.

The other efforts were… happening.

Why the delta? Patton’s postwar diary entries about Jewish displaced persons are notorious. He described humans in subhuman terms, complained about having to allocate resources to concentration camps, and compared survivors unfavorably to the Germans he was occupying. If only they had been aristocratic horses instead. Patton even came away from death camps wanting to rearm Germany almost immediately, which is damning on its own.

Not a show horse. Eugen Plappert, ca. 1930, with his many athletic medals. Imprisoned in Flossenbürg under 1938 Nazi “preventive detention,” he was told in 1942 by the SS they were sending him to a country estate for his health. It was a lie; they killed him with gas on May 12.

Operation Cowboy wasn’t an aberration or exception; it was perfectly consistent with Patton’s worldview of who and what mattered.

  • Aristocratic horses? European civilization worth a rush to preserve.
  • Wehrmacht officers? Professionals to work with.
  • Death camp survivors? A logistics problem.

The tell is in what gets reported by Military.com as “beautiful”:

We were so tired of death and destruction, we wanted to do something beautiful.

They were surrounded by death and destruction that week. They chose which rescue operation would be counted as “beautiful.”

Historiography in Plain Sight

This story gets retold as heartwarming.

Military.com now runs it as holiday content. Disney made a bizarre movie called Miracle of the White Stallions (1963).

The feel-good framing launders what the episode reveals about genocide and about what Patton considered worth saving; command attention and priority signaling matter.

Stills from Disney’s 1963 movie: the Habsburg aristocratic pageantry worth preserving (top) and the “sympathetic” enemy general in uniform with Nazi eagle (bottom). The film is not currently available on Disney+.

Military.com notably tells us Patton approved horse rescue “immediately” with “Get them. Make it fast.” The urgency for hundreds of aristocratic show horses is documented. The urgency for tens of thousands of human prisoners was not.

“Special prisoner barracks. Drawing from memory by Colonel Hans M. Lunding, head of the Danish intelligence service and cellmate of Admiral Canaris.” Source: dietrich-bonhoeffer.net

2025 Blackout: Waymo Shit on San Francisco Because That Was Always the Plan

Digital Manure Is the Point of Technocratic Debauchery

On Saturday, December 20, the infamously unreliable PG&E infrastructure again failed to prevent a fire, and a third of San Francisco was left in darkness.

Traffic lights didn’t light. The city’s Department of Emergency Management naturally urged restraint, asking everyone to stay home so emergency responders could work unimpeded.

Instead, across the city, Waymo’s fleet of 800 to 1,000 robotaxis blocked streets by doing the exact opposite: stopping in intersections, impeding buses, and according to city officials, delaying fire department response to the substation blaze itself and a second fire in Chinatown.

Waymo’s logic was to make the disaster worse, becoming another disaster itself, as if massive PG&E failures weren’t a known and expected annual pain in California.

…Waymo vehicles didn’t pull over to the side of the road or seek out a parking space. Nor did they treat intersections as four-way stops, as a human would have. Instead, they just … sat there with their hazard lights on, like a student driver freezing up before their big parallel-parking test. Several Waymo vehicles got stuck in the middle of a busy intersection, causing a traffic jam. Another robotaxi blocked a city bus.

The company then dropped a disinformation bomb from the corporate blog three days later.

Waymo described itself as “the world’s most trusted driver” in the aftermath of being completely untrustworthy. It tried to distract readers by boasting that it “successfully traversed more than 7,000 dark signals.” That would be like asking you to look only at all the cats and dogs it hasn’t murdered yet. Finally, Waymo promised that it was “undaunted” in a post explaining why it had just manually pulled its entire fleet off the streets. Sounds daunted to me.

The ironies are irresistible, as evidence of cognitive dissonance driving their groundless rhetoric, but the deeper story is why their technology exists, who it serves, and what its presence on public streets reveals about political corruption in California.

Big Tech’s Prancing Princes

In the eighteenth and nineteenth centuries, aristocratic carriages were an unfortunate and toxic fixture of European cities. The overly large and heavy boxes moved unaccountably through streets built and maintained by public taxation and peasant labor. They spread manure for public workers or the conscripted poor to chase and clean. When the elites ran over pedestrians, which happened regularly, there was no legal recourse. Commoners stepped in fear and bore costs, as the privileged passed without paying.

The carriage was far more than transportation. It was the physical assertion of hierarchy in public spaces. Its externalities of the manure, the danger, the congestion, and the noise were all pushed out to be absorbed by everyone else. Its benefits accrued to one among many.

The term “car” is rooted in this ideology of inconvenience to others as status, where overpriced cages for the few consume public space with wasteful and harmful outcomes to the many.

The urban carriage (car) concept, of exclusive wasted public space, is as absurd as it looks and operates.

Waymo’s cars are a mistake in history pulled forward, to function identically to the worst carriages. They move through streets built and maintained by public funds. When they fail, when they kill, emergency responders and traffic police manage the consequences. When they impede ambulances and fire trucks, the costs are measured in delayed response times and lives lost. The company operates under regulatory protection from a captured state agency, the California Public Utilities Commission (CPUC), which forced corporate control and expansion into public spaces over the unanimous objection of public servants including San Francisco’s fire chief, city attorney, and transportation officials.

The parallel is not metaphorical.

It is structural.

Algorithms That Can’t Predict Probable Disasters Should Be Accountable for Causing Disasters

Waymo’s blog post falsely tries to spin the PG&E disaster as “a unique challenge for autonomous technology.”

Unique? A power outage in California?

This is false.

It’s so false, it makes Waymo look like they can’t be trusted with automation, let alone operations.

Imagine Waymo saying they see stop signs and moisture as unique challenges. Yeah, Tesla has literally failed at both of those “challenges”, to give you an idea of how clueless the tech “elites” are these days.

Mountain View Police stopped a driverless car in 2015 for being too slow. Google engineers publicly excused themselves by announcing they had not read the state traffic laws. A year later the same car became stuck in a roundabout. Again, the best and brightest engineers at Google, fresh out of Berkeley, claimed they were simply ignorant of traffic laws and history.

Power outages in California are not unique or rare. They are a heavily documented, recurring feature of the state’s privatized infrastructure. Enron, as you may remember, very cruelly and intentionally caused brownouts in California to artificially raise prices and spike profits. Silicon Valley veterans build disaster recovery plans around outages precisely because recurring events like these forecast PG&E failures. It’s engineering 101.

In October 2019, PG&E cut power to over three million Californians in a cynically titled “Public Safety Power Shutoff”. This is the equivalent of someone telling you to pull the plug on your computer if you are worried about malware. The utility has conducted such shutoffs every year since, as a means of delaying and avoiding basic safety upgrades to its infrastructure. A federal judge overseeing PG&E’s criminal probation explained the situation with candor that isn’t even shocking anymore:

…cheated on maintenance of its grid—to the point that the grid became unsafe to operate during our annual high winds.

A year later, with this environmental reality of annual outages in full view, Waymo opened operations in San Francisco. It received approval for commercial service in August 2023. In that time, the company boasted that it accumulated what it calls “100 million miles of fully autonomous driving experience.”

Uh-huh. It created a calculator that could do 2+2=4 around 100 million times, but apparently the engineers never thought about subtraction. Outcomes are different from outputs.

When the lights went out, in an event that happens so regularly in California it’s become normal, the vehicles froze, requiring human remote confirmation to navigate dark traffic signals.

Dare I ask if Waymo ever considered an earthquake as a possibility in the state famous for its earthquakes and… power outages?

The company describes its baseline confirmation requirements as reflecting “an abundance of caution during our early deployment.”

Anyone can now plainly see that, after seven years, such a disinformation phrase strains all credulity. What it reveals is that Waymo is deploying an ivory-tower system designed for a few gilded elites and for idealized infrastructure that doesn’t exist in the state where it operates.

Waymo Benefits, But Who Pays

Waymo is as unprofitable as a San Francisco cheese store in December.

“PG&E fucked us,” Lovett said. “We’re not talking about 120 bucks worth of cheese, dude. We’re talking about anywhere between 12 and 15 grand in sales that I’m not getting back.”

Bank of America estimates Waymo burned $1.5 billion in 2024 against a revenue stream of $50 to $75 million. Alphabet’s “Other Bets” segment reported losses of $1.12 billion in Q3 2024 alone across all its gambling with public safety. The company is valued at $45 billion because of investor confidence in the privatization and monopolization of public mobility (rent-seeking streets, taxation without representation), not current earnings.

The beneficiaries are no mystery: Alphabet shareholders, venture capital, the 2,500 employees at Waymo, and well-heeled lazy riders seeking exclusion and novelty.

The payers are equally clear: emergency responders forced to deal with stalled vehicles, taxpayers who fund infrastructure and regulatory agencies, other road users stuck in gridlock, and potential victims of delayed emergency response.

This is like the days of the giant, empty, idling double-decker Google Buses parked at public bus stops, blocking transit and delaying city workers, because… Google execs DGAF about the public.

Google buses throughout the 2010s regularly blocked public stops and bike lanes, creating transit denial of service. The company pitched the system to staff as a velvet roped shelter from participation in community, similar to their infamous cafeterias.

Research is unambiguous that Big Tech privatization of streets harms the public. Fire damage doubles every 30-60 seconds per NFPA research on active suppression delay. For cardiac emergencies, each minute of delayed response translates to significant additional hospital costs and measurable increases in mortality.
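The compounding is easy to underestimate. Here is a minimal back-of-envelope sketch, assuming only the NFPA 30-60 second doubling range cited above; the function name and the five-minute delay are illustrative, not from any official model:

```python
# Back-of-envelope: how much larger an unchecked fire grows during a
# response delay, assuming size doubles every `doubling_s` seconds
# (the 30-60 second NFPA range cited above).

def fire_growth_multiplier(delay_s: float, doubling_s: float) -> float:
    """Relative fire size after delay_s seconds of unchecked growth."""
    return 2 ** (delay_s / doubling_s)

# A five-minute delay, e.g. a fire truck stuck behind a stalled robotaxi:
for doubling_s in (30, 60):
    mult = fire_growth_multiplier(300, doubling_s)
    print(f"doubling every {doubling_s}s -> {mult:.0f}x larger after 5 minutes")
```

Even at the slow end of the range, a five-minute blockage means a fire roughly 32 times larger on arrival; at the fast end, three orders of magnitude.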

During Saturday’s blackout, Waymo vehicles contributed to delays at two active fires and created gridlock that impeded bus transit for hours. It’s the Google Bus denial of service disaster all over again, yet hundreds of times worse because it is a distributed denial of service.

No penalty has been assessed yet. The CPUC, which regulates both PG&E and Waymo, responded by saying it has “staff looking into both incidents.” Water is apparently wet. The agency has a history of “looking” that doesn’t look good. It is best known for unanimously waiving a $200 million fine against PG&E in 2020 for electrical equipment failures linked to fires that killed more than 100 people.

Waymo could murder a hundred people in the streets and no one would be surprised if the CPUC said hopes and prayers, looking into it, ad nauseam.

Reverse Exclusion

Traditional exclusion takes the form of barriers. It would be a control that says you cannot enter a space, access a service, or participate in an opportunity. Reverse exclusion operates, well, in reverse. It is the imposition of externalities onto public spaces that cannot be escaped.

Before California’s 1998 bar smoking ban, 74% of San Francisco bartenders had respiratory symptoms of wheezing, coughing, and shortness of breath from being forced to inhale secondhand smoke for 28 hours a week. They couldn’t opt out. The smoke wasn’t their choice. The JAMA study that documented this found symptoms resolved in 59% of workers within weeks of the ban taking effect. The externality was measurable, the harm was real, and the workers had no escape until the government intervened.

Waymo operates the same structure of harm. You can decline to patronize an elite restaurant. You cannot opt out of the surveillance conducted by Waymo mapping every neighborhood, and every object in it, for private profit. You cannot choose to have a fire truck not blocked by a Waymo. At this point, expect Waymo radar to start looking inside homes, moving beyond the cameras already recording everything outside 24 hours a day.

Waymo claims there is a mobility problem it wants to fix. This is misdirection.

San Francisco is tiny and has BART, Muni, buses, bike lanes, and walkable neighborhoods. A walk across the entire city is a reality, honored for decades by events that do exactly that. The population most likely to use Waymo are outsiders who work in tech, visiting tourists who don’t walk where they come from, and affluent residents who want the exclusivity of a carriage experience. The city already has abundant transportation options that would deliver far more than Waymo can for less cost. What Waymo provides is not mobility but class conflict, imposing an ability to move through the city without employing human labor, without negotiating social interaction, without participating in the inherently shared experience of urban spaces.

The royal carriage served a similar function of class conflict. It was never an efficient or safe way to move through a city. It was the physical demonstration of superiority, displaying who mattered and who didn’t.

CPUC Legalized Negligent Homicide

The truth is the technology is not a transportation product. It is a class technology that externalizes harms for its users by privatizing spaces and removing accountability.

Its function was never to move people efficiently or safely, since trams and buses do that far better at lower cost. Its function is to assert elitism into physical spaces: to redistribute power, extract data from the public commons, establish a monopoly position for future rent extraction, normalize corporate sovereignty over democratic governance, and transfer huge risk from private capital onto public infrastructure.

It’s a ruse. Waymo is murder.

The CPUC approved Waymo’s expansion over explicit warnings from San Francisco officials that the vehicles “drove over fire hoses, interfered with active emergency scenes, and otherwise impeded first responders more than 50 times” as of August 2023. The agency imposed no requirements to track, report, avoid, or limit such incidents.

The captured corrupted CPUC simply approved unsafe expansion against the public interest and moved on.

The structure is the same one that allowed nobles to deposit loads of manure on public streets while commoners cleaned, if they didn’t die from being run over. Privatized benefits, socialized costs, and institutional arrangements that prevent accountability.

In the 1890s, major cities faced a “horse manure crisis” as the predictable consequence of a transportation system designed to serve elite mobility at public expense. The crisis could have easily been solved by regulating carriages and building public waste infrastructure (e.g. Golden Gate Park is literally horse manure from San Francisco dumped onto the huge empty Ocean Beach neighborhood). Instead, society switched to automobiles, which created far more unsafe externalities: highways intentionally built through Black neighborhoods to destroy prosperity, pedestrian deaths, pollution, and congestion from suburban isolation.

Waymo is marketed falsely against human error, when it is the biggest example of human error. It delivers opaque surveillance, regulatory capture, emergency response interference, and reduced quality of life due to physical assertion of tech supremacy (externality) into public space.

The royal carriage existed because it demonstrated hierarchy. The for-hire “hackney” system (source of the term “hack”) that evolved from it by the 1800s was so toxic it forced the evolution of street lights to maintain public safety. The frozen Waymo blocking a fire truck isn’t a bug. It’s the mistake in history rising again, a flawed system returning when it shouldn’t: elite mobility, public costs, zero accountability.

Notice to Public Carriages

On Monday, Supervisor Bilal Mahmood called for a hearing into Waymo’s blackout failures. Mayor Daniel Lurie disclosed that he had personally called a Waymo executive on Saturday demanding the company remove its vehicles from the streets. That’s the right call.

Waymo’s roadmap for 2026 includes expansion of harms to more freeways, airports, and cities across the United States.

This post was written during the 5:00AM widespread PG&E power outages of Thursday, December 25, 2025.

…we are now estimating to have power on by Dec 26 11:00PM.

Source: PG&E