Category Archives: History

In 2023 Ai Weiwei Called Out Elon Musk as a Nazi

In July 2023, an artist famous for political commentary dropped new work on social media.

Source: Twitter

Ai Weiwei’s artwork hit different from most – this was someone who had already been jailed and banned from Twitter for speaking truth to power.

Yet when he called out clear Nazi connections, there was no denial and barely a whisper of restriction (Elon Musk censored Ai Weiwei’s animated X by quietly deleting it). The silence spoke volumes.

He’s particularly scathing about Elon Musk, who received multiple favours from the CCP to set up his Tesla factory in Shanghai and sings the praises of the Chinese government. Musk owns X, the platform that used to be Twitter, and Ai has on his phone an animation he created, the X spinning and turning into a swastika. It was deleted from X but was still available on Instagram. ‘It’s so creepy. I mean it looks so ugly,’ he said.

This artistic rendering of the X brand was deleted by self-promoting “free speech extremist” Elon Musk. Source: Ai Weiwei

Fast forward to 2025 and the pattern is painfully clear. While some still debate whether to call a spade a spade, Musk has moved from dog whistles to bullhorns, now openly making Hitler salutes at political rallies that spark “we’re back” celebrations across social media.

“Maybe woke really is dead,” white nationalist Keith Woods posted on X.

“Did Elon Musk just Heil Hitler …” right-wing commentator Evan Kilgore posted on X. “We are so back.”

Today’s Nazi groups aren’t hiding anymore – they’re celebrating how their messages have gone mainstream, just as Ai Weiwei warned us through his art years ago. The path from his Twitter critique to American political rallies is as straight as it is terrifying.

And here’s where history rhymes with a vengeance. Our language “experts” stand in rising floodwaters, watching the dam crack, telling us to wait for “concrete evidence” of danger. By the time they admit the obvious, the flood will have already swept us all away.

“I’m skeptical it was on purpose,” said Jared Holt, a senior research analyst at the Institute for Strategic Dialogue, which tracks online hate. “It would be an act of self-sabotage that wouldn’t really make much sense at all.”

Self-sabotage doesn’t make sense? My dear Jared, it’s the very definition of Nazism. Do you not know history?

To understand Nazis is to understand self-destruction, because it’s their entire endgame. Every time. The proof is found in history, as the artists painting Musk’s own factory have depicted so simply:

Elon Musk has been a frequent promoter of the AfD (Nazi) party in Germany, sparking protests like this graffiti outside the Tesla factory.

This isn’t about academic caution – it’s about the deadly paralysis of overthinking while fascists build real power. They don’t need your perfect analysis. They just need your hesitation.

The march of fascism through Europe, leaving millions dead in its wake before World War II finally stopped it.

Remember: While scholars polish their dissertations on “the nature of rising authoritarianism,” extremists are seizing actual power. They don’t play by academic rules. They never have.

We’ve seen this playbook before. We know how it ends. When people say “we’re not Germany in the ’30s,” they miss that what is unfolding today is potentially even more dangerous: classic racist “show me your papers” tactics mixed with modern technology, which we need to call out as immoral and illegal. Millions of explosive racist killer robots descending on cities are not out of the question right now.

Teslas are known for sudden, unexplained “veered” crashes into people and infrastructure, causing widespread suffering from intense chemical fires.

After all, the Nazis themselves studied and borrowed from American systemic and industrialized racism to build the European genocide machine. History doesn’t repeat, but it echoes – and right now, those echoes are extremely clear.

The only question is who among us will act on the warning signs this time.

Ai Weiwei was right.

Related: Two days after Ai Weiwei’s tweet, I coincidentally wrote on this blog about how the Twitter rebrand is a swastika, on top of Tesla already being steeped in Nazi messaging.

Trump Repeals AI Innovation Rules, Declares No Limits for Big Tech to Hurt Americans

The Great AI Safety Rollback:
When History Rhymes with Catastrophe

The immediate and short-sighted repeal of AI oversight regulations threatens America with a return to some of the most costly historical mistakes: prioritizing quick profits over sustainable innovation.

Like the introduction of leaded gasoline in the 1920s, we’re watching in real time as industry leaders push for unsafe deregulation that normalizes reckless behavior under the banner of innovation. What happens when AI systems analyzing sensitive data are no longer required to log their activities? When ‘proprietary algorithms’ become a shield for manipulation? When the same companies selling AI tools also control critical infrastructure?

The leaded gasoline parallel is stark because industry leaders actively suppressed research showing devastating health impacts for decades, all while claiming regulations would ‘stifle innovation.’ Now we face potentially graver risks with AI systems that could be deployed to influence everything from financial markets to allegedly rigged voting systems, with even less transparency. Are we prepared to detect large-scale coordination between supposedly independent AI systems? Can we afford to wait decades to discover what damage was done while oversight was dismantled?

Deregulation Kills Innovation

Want proof? Look no further than SpaceX – the poster child of deregulated “innovation.” In 2016, Elon Musk promised Mars colonies by 2022. In 2017, he promised Moon tourism by 2018. In 2019, he promised Tesla robotaxis by 2020. In 2020, he promised Mars cargo missions by 2022. Now it’s 2025 and Musk hasn’t delivered on any of these promises – not even close. Instead of Mars colonies, we got exploding rockets, failed launches, and orbital debris fields that threaten functioning satellites.

This isn’t innovation – it’s marketing masquerading as engineering. Reportedly SpaceX took proven 1960s rocket technology, rebranded it with flashy CGI videos and bold promises, then used public money and regulatory shortcuts to build an inferior version of what NASA achieved decades ago. Their much-hyped reusable rockets? They’re still being lost at an alarming rate. Their promised Mars missions? The hardware apparently hasn’t even reached orbit without creating hazardous space debris and being grounded. Their “breakthrough” Starship? It’s years behind schedule and still exploding on launch.

Yet because deregulation has lowered the bar so far, SpaceX gets celebrated for results that would have been considered failures by 1960s standards. This same pattern of substituting marketing for engineering produced Cybertrucks that cannot be exposed to water, increasingly in the news for unexplained deadly crashes.

Boeing’s 737 MAX disaster stands as another stark warning. As oversight weakened, Boeing didn’t innovate – they took deadly shortcuts that killed hundreds and vaporized billions in value. When marketing trumps engineering and systems get a similar free pass, we read about unmistakable tragedy far more often than any real triumph.

History teaches us that true innovation thrives not in the absence of oversight, but in the presence of clear, meaningful, measured standards – especially those protecting against harm.

Consider how American scientific innovation operated under intense practical pressure for results in WWII. Early radar systems like the SCR-270 (which detected the incoming Japanese attack at Pearl Harbor but was ignored) and MIT’s Rad Lab developments faced complex challenges with false echoes, ground clutter, and atmospheric interference.

The MIT Radiation Laboratory, established in October 1940, marked a crucial decision point – Vannevar Bush and Karl Compton insisted on civilian scientific oversight rather than pure military control, believing innovation required both rigorous standards and academic freedom. The Rad Lab built on the February 1940 cavity magnetron breakthrough by John Randall and Harry Boot at the University of Birmingham, which revolutionized radar capabilities. Innovations like the cavity magnetron and H2X ground-mapping radar demonstrated remarkable progress through regulations that enforced rigorous testing and iteration.

Contrast the success of heavily regulated outcomes in WWII with the vague approaches of the Vietnam War, such as Operation Igloo White (1967-1972) – burning $1.7 billion yearly on an opaque ‘electronic battlefield’ of seismic sensors (ADSID), acoustic detectors (ACOUSID), and infrared cameras monitored from Nakhon Phanom, Thailand. The system’s sophisticated IBM 360/65 computers processed thousands of sensor readings but couldn’t reliably distinguish between North Vietnamese supply convoys and local farming activity along the Ho Chi Minh Trail, leading to massive waste in random bombing missions. It was such a failure that President Nixon ordered the same system installed around the White House and on American borders. Why? Because he opposed the kind of oversight that would have made clear the system didn’t work.

This mirrors today’s AI companies selling us a new generation of ‘automated intelligence’ – expensive systems making bold claims while struggling with basic contextual understanding, their limitations obscured behind proprietary metrics and classification barriers rather than being subjected to transparent, real-world validation.

Critics have said nothing proves this point better than the horrible results from Palantir – just as Igloo White generated endless bombing missions based on misidentified targets, Palantir’s systems have perpetuated endless cycles of conflict by generating flawed intelligence that creates more adversaries than it eliminates. Their algorithms, shielded from oversight by claims of national security, have reportedly misidentified targets and communities, creating the very threats they promised to prevent – a self-perpetuating cycle of algorithmic failure marketed as success: the self-licking ISIS-cream cone.

The sudden, rushed push for AI deregulation is most likely to accelerate failures like Palantir’s and lower the bar so far that anything can be rebranded as success. By removing basic oversight requirements, we’re not unleashing innovation – we’re creating an environment where “breakthrough developments” require no real capability or safety, and may even be demonstrably worse than what came before.

Might as well legalize snake oil.

The Real Cost of an American Leadfoot

The parallels with the tragic leaded gasoline saga are particularly alarming. In the 1920s, General Motors marketed tetraethyl lead as an innovative solution for engine knock. In reality, it was an extremely toxic shortcut and cover-up that avoided addressing fundamental engine design issues. The result? Fifty years of widespread lead pollution and untold human and animal suffering that we’re still cleaning up today.

When GM pushed leaded gasoline, they funded fake studies, attacked critics as ‘anti-innovation,’ and claimed regulation would ‘kill the auto industry.’ It took scientists like Patterson and Needleman 50 years of blood samples, soil tests, and statistical evidence before executive orders could mature into meaningful enforcement – and by then, massive and nearly irreversible damage was done. Now AI companies run the same playbook, with a crucial difference: lead left physical traces in blood and soil, while AI influence operations leave none. We need to scientifically define ‘AI manipulation’ before we can regulate it, and we need updated ways to measure evolving influence operations that leave no physical evidence. Without executive-level regulation requiring transparent logging and testing standards now, we’re not just delaying accountability – we’re ensuring manipulation will be undetectable by design.
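To make “transparent logging” concrete, here is a minimal sketch of what a tamper-evident audit trail for an AI system could look like. This is an illustration only, not any regulator’s or vendor’s actual specification – the AuditLog class and its record and verify methods are hypothetical names. The idea is simple: each log entry commits to the hash of the one before it, so nobody can quietly rewrite history after the fact.

```python
# A minimal sketch (hypothetical names) of tamper-evident audit logging
# for an AI system's decisions, using a hash chain: altering any past
# entry invalidates every hash that follows it.
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only, hash-chained record of model decisions."""

    def __init__(self):
        self.entries = []
        self.last_hash = "0" * 64  # genesis value

    def record(self, model_id: str, query: str, output: str) -> dict:
        # Each entry embeds the previous entry's hash before being hashed itself.
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_id": model_id,
            "query": query,
            "output": output,
            "prev_hash": self.last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """An auditor replays the chain; any retroactive edit is detected."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("model-x", "loan application #42", "denied")
assert log.verify()  # flipping any past record makes this check fail
```

A design along these lines is what makes the detectability argument above concrete: with hash-chained logs deposited with an independent auditor, manipulation leaves a trace by construction; without any logging requirement, it is undetectable by design.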

Clair Patterson’s initial discoveries about lead contamination came in 1965, but it took until 1975 for the EPA to announce the phase-out, and until 1996 for the full ban. That is a 31-year gap between scientific evidence and regulatory action, deliberately prolonged by industry. The counter-campaign by the Ethyl Corporation (created by GM and Standard Oil) included attacking Patterson’s funding and trying to get him fired from Caltech.

While it took 31 years to ban leaded gasoline despite clear scientific evidence, today’s AI deregulation is happening virtually overnight – removing safeguards before we’ve even finished designing them. This isn’t just regression; it’s willful blindness to history.

Removing AI safety regulations doesn’t solve any of the fundamental challenges of developing reliable, useful and beneficial AI systems. Instead, it allows companies to regress towards shortcuts and crimes, potentially building fundamentally flawed systems unleashing harms that we’ll spend decades trying to recover from.

When we mistake the absence of standards for freedom to innovate, we enable our own decline – just as Japanese automakers came to dominate by focusing on quality (enforced under the anti-fascist post-WWII Allied occupation) while American manufacturers oriented themselves around marketing and took engineering shortcuts. Countries that maintain rigorous AI development standards will ultimately leap ahead of those that don’t.

W. Edwards Deming’s statistical quality control methods, introduced to Japan in 1950 through JUSE (Japanese Union of Scientists and Engineers), became mandatory under occupation reforms. Toyota’s implementation through the Toyota Production System (TPS) starting in 1948 under Taiichi Ohno proved how regulation could drive rather than stifle innovation – creating manufacturing processes so superior that American companies spent decades trying to catch up.

For AI to develop sustainably, like any technology in history, we need to maintain safety standards that can’t be gamed or spun away from measurable indicators. Proper regulatory frameworks reward genuine innovation rather than hype, the same way a good CEO rewards productive staff who achieve goals. Our development processes should be incentivized to build in safety from the ground up, with international standards and cooperation establishing meaningful benchmarks for progress.

False Choice is False

The choice between regulation and innovation is a false one. It’s like saying: choose between having a manager and figuring out what to work on. The real choice is between sustainable progress and shortcuts that cost us dearly in the long run – penny wise, pound foolish. As we watch basic AI oversight being dismantled, we must ask ourselves: are we willing to repeat the known mistakes of the past, or will we finally learn from them?

The elimination of basic oversight requirements creates an environment where:

  • Companies can claim “AI breakthroughs” based on vague, probably misleading marketing rather than measurable results
  • Critical safety issues can be downplayed or ignored until they cause major problems and get treated as fait accompli
  • Technical debt accumulates as systems are deployed without proper safety architecture, ballooning maintenance overhead that slows or even stops innovation
  • America’s competitive position weakens as other nations develop more regulated and therefore sustainable approaches

True innovation doesn’t fear oversight – it thrives on it. The kind of breakthrough development that put America at the forefront of aviation, computing, and space exploration came from environments with clear standards and undeniable metrics of success.

The cost of getting this wrong isn’t just economic – it’s existential. We spent decades cleaning up the incredibly difficult aftermath of leaded gasoline that easily could have been avoided. We might spend far longer dealing with the privacy and integrity consequences of unsafe AI systems deployed in the current unhealthy rush for quick extraction of value.

The time to prevent this is now, before we create a mess that future generations will bear.

Today Elon Musk Gave the Hitler Salute: Twittler of the Digital Reich

Elon Musk gave the unmistakable Hitlergruß, the “Sieg Heil” Nazi salute, today at a political rally.

This Nazi salute is banned as a criminal offense in many countries, including Germany, Austria, Slovakia, and the Czech Republic. The gesture remains inextricably linked to the Holocaust, genocide, and the crimes of the Nazis. Use or mimicry of Nazi gestures remains a serious matter that can result in criminal charges because of its connection to hate speech and extremist ideologies.

Elon Musk’s calculated public display of Nazi symbolism has been a long road, culminating in this “Sieg Heil” gesture on a political stage. It represents a disturbing parallel to historical patterns of media manipulation and democratic erosion. The following analysis, built on years of warnings on this blog about Musk’s growing displays of Nazism, examines his unmistakable Nazi salute through the lens of historical scholarship on propaganda techniques and media control.

As noted by Ian Kershaw in “Hitler: A Biography” (2008), the Nazi seizure of control over German media infrastructure occurred with remarkable speed.

Within three months of Hitler’s appointment as Chancellor, the Reich Ministry of Public Enlightenment and Propaganda under Joseph Goebbels had established near-complete control over radio broadcasting. This mirrors the rapid transformation of Twitter following Musk’s acquisition, where content moderation policies were dramatically altered within a similar timeframe to promote Nazism.

Many people were baffled that American and Russian oligarchs would give Elon Musk so much money to buy an unprofitable platform and drive it toward extremist hate speech. Today we can see it was simply a political campaign tactic to destroy democracy. Of course it lost money. Of course it was a business disaster. Does anyone really think Russia values a bomb dropped on democracy only in terms of the explosive materials expended on impact?

Copious reporting informs us how the Reich Broadcasting Corporation achieved dominance through both technological and editorial control:

To maximize influence, formerly independent broadcasters were combined under the policy of Gleichschaltung, or synchronization, which brought institutions in line with official policy points. Goebbels made no secret that “radio belongs to us.” The only two programs were national and local information. They began with the standard “Heil Hitler” greeting and gave plenty of airtime to Adolf Hitler.

This parallels the documented surge in hate speech on Twitter post-acquisition. Under Elon Musk’s thumb, the platform exploded with Nazism, with researchers noting the increase even in the first months. His response to those who cite evidence of this has been to angrily threaten the researchers and erect velvet ropes and paywalls. Staff at Twitter who moderated speech or otherwise respected human life were quickly fired and replaced with vulnerable sycophants, the few remaining roles designed to be mere cogs in a digital reich.

The Nazis understood that controlling the dominant communication technology of their era was crucial to reshaping public discourse, as Jeffrey Herf argues in “The Jewish Enemy” (2006). Radio represented a centralized broadcast medium that could reach millions simultaneously. Herf notes:

The radio became the voice of national unity, carefully orchestrated to create an impression of spontaneous popular consensus.

The parallel with social media platform control is striking. However, as media historian Victoria Carty observes in “Social Movements and New Technology” (2018), modern platforms present even greater risks due to:

  1. Algorithmic amplification capabilities
  2. Two-way interaction enabling coordinated harassment
  3. Global reach beyond national boundaries
  4. Data collection enabling targeted manipulation

The normalization of extremist imagery often arrives within a shrewd pattern of “plausible deniability”, introduced through supposedly accidental or naive usage.

The 2018 incident of Melania Trump wearing a pith helmet – a potent symbol of colonial oppression – in Kenya provides an instructive parallel. Just as colonial symbols can be deployed with claims of ignorance about their historical significance, modern extremist gestures and symbols are often introduced through claims of misunderstanding or innocent intent.

So too does Elon Musk deny understanding any symbolism or meaning in his words and actions, while regularly signaling that he is the smartest man in any room. This contradiction is not accidental: it supercharges normalization, because he uses his false authority to promote Nazism.

Martin M. Winkler’s seminal work “The Roman Salute: Cinema, History, Ideology” (2009) provides crucial insight into how fascist gestures became normalized through media and entertainment. The “Roman salute,” which would later become the Nazi salute, was actually a modern invention popularized through theatrical productions and early cinema, demonstrating how mass media can legitimize and normalize extremist symbols by connecting them to an imagined historical tradition.

Winkler’s research shows how early films about ancient Rome created a fictional gesture that was later appropriated by fascist movements precisely because it had been pre-legitimized through popular culture. This historical precedent is particularly relevant when examining how social media can similarly normalize extremist symbols through repeated exposure and false claims of historical or cultural legitimacy.

Perhaps most concerning is how Elon Musk’s pattern of normalization emerges, right on cue. Richard Evans’ seminal work “The Coming of the Third Reich” (2003) details how public displays of extremist symbols followed a predictable progression:

  1. Initial testing of boundaries
  2. Claims of misunderstanding or innocent intent
  3. Gradual escalation
  4. Open displays once sufficient power is consolidated

The progression from Musk’s initial “jokes” and coded references (Tesla opens 88 charging stations, Tesla makes an 88 kWh battery, Tesla recommends an 88 km/h speed, Tesla offers 88 screen functions, Tesla promotes 88 ml shot cups, lightning bolt imagery… did you hear the dog whistles?) to rebranding Twitter with a swastika and giving open Nazi salutes follows this pattern with remarkable fidelity.

Modern democratic institutions face unique challenges in responding to these threats.

Unlike 1930s Germany, today’s media landscape is dominated by transnational corporations operating beyond traditional state control. As Hannah Arendt presciently noted in “The Origins of Totalitarianism” (1951), the vulnerability of democratic systems often lies in their inability to respond to threats that exploit their own mechanisms of openness and free discourse.

The key difference between historical radio control and modern social media manipulation lies in the speed and scale of impact, much as radio rapidly eclipsed the media that came before it. Hitler poured state money into making radios as cheap as possible, collapsing the barriers to the rapid spread of his hateful, violent incitement propaganda.

Yet radio still had reach constraints in physical infrastructure that state authorities could manage and counter. Social media platforms run on an Internet designed to route around such obstacles, which Russia bemoans as it clocks over 120 “national security” takedown notices sent to YouTube every day. Internet platforms can be transformed almost instantly through policy changes and algorithm adjustments, both for and against democracy. This makes the current extreme course change potentially even more dangerous than historical precedents. Information warfare long ago shifted from the musket to the cluster bomb, but defensive measures for democratic governments have been slow to emerge.

Source: Twitter

The parallel between Hitler’s exploitation of radio and Musk’s control of Twitter raises crucial questions about platform governance and democratic resilience. As political scientist Larry Diamond argues in “Democracy in Decline” (2016), social media platforms have become fundamental infrastructure for democratic discourse, making their governance a matter of urgent public concern.

The progression from platform acquisition to public displays of extremist symbols suggests that current regulatory frameworks are inadequate for protecting democratic institutions from technological manipulation. This indicates a need for new approaches to platform governance that can respond more effectively to rapid changes in ownership and policy.

But maybe it is too late for America, just as Hearst realized on Kristallnacht in 1938 that it was too late for Germany, and that he never should have promoted Nazism in his papers.

The historical parallels between 1930s media manipulation and current events are both striking and concerning. While the technological context has changed, the fundamental pattern of using media control to erode democratic norms remains consistent. The speed with which Twitter was transformed following its acquisition, culminating in its owner’s public display of Nazi gestures, suggests that modern democratic institutions may be even more vulnerable to such manipulation than their historical counterparts.

Of particular concern is how social media’s visual nature accelerates the normalization process that Winkler documented in early cinema. Just as early films helped legitimize what would become fascist gestures by presenting them as historical traditions, social media platforms can rapidly normalize extremist symbols through viral sharing and algorithmic amplification, often stripped of critical context or warnings.

Future research should focus on developing frameworks for platform governance (analogous to the DoJ for laws or the FCC for wireless) that can better protect democratic discourse while respecting fundamental rights. As history demonstrates, the window for effective response to such threats may be remarkably brief.

Rome Feared Female Leaders of Britain: Ancient DNA Reveals Why

Boudica was an Iceni queen who led a Celtic rebellion against invading Romans in AD 60

An interesting new dig suggests matrilocality was widespread in Britain around the time that Romans complained about women having too much authority.

Roman writers found the relative empowerment of Celtic women remarkable. In southern Britain, the Late Iron Age Durotriges tribe often buried women with substantial grave goods. Here we analyse 57 ancient genomes from Durotrigian burial sites and find an extended kin group centred around a single maternal lineage, with unrelated (presumably inward migrating) burials being predominantly male.

The report essentially says that wealth and power centered on women. Men married into the extended families of these women. Romans characterized this matrilocal system as “barbaric”, in one of history’s great ironies. It’s a clear case of propaganda serving political ends rather than any objective assessment of societal sophistication.

Archaeological evidence now suggests the powerful women of Celtic societies presided over sophisticated social systems that Rome actually lacked, leaving Rome jealous and fearful.

Consider how these two societies handled wealth and power. Rome’s system was brutally simple: the eldest male (paterfamilias) held absolute power over family, property and even life itself. By contrast, the genetic evidence from Durotrigian graves reveals something far more sophisticated: extended families built around powerful maternal lineages, with complex networks distributing wealth and influence through daughters and granddaughters while strategically incorporating talented male newcomers through marriage.

This differed from Rome’s oppressive patriarchy in its remarkable stability. While Roman families regularly battled and tore themselves apart in inheritance disputes, the archaeological record tells a different story for Celtic Britain: generations of wealthy female burials in the same locations, with consistent grave goods suggesting unbroken lines of power and influence. These Celtic “matriarchies” achieved this stability through thoughtful power-sharing between blood relatives and married-in males, avoiding the messy, bloody succession crises that plagued Rome’s male-dominated system.

Rome’s dismissal of these sophisticated systems as “barbaric” served multiple ends. At a basic level, painting conquered peoples as uncivilized made conquest easier to justify to their own population. But there was likely a deeper fear at work: the Durotrigian system represented a sophisticated competing model of social organization that directly threatened Rome’s patriarchal power structure. Rather than acknowledge or learn from it, they chose to deliberately mischaracterize it as primitive. It’s a strategy that would be repeated countless times in later colonial encounters, as advanced indigenous systems were painted as “savage” to justify ruthless extraction before destruction.

The archaeological evidence from Britain forces us to confront an uncomfortable truth: Rome’s accusations of barbarism often masked their own limitations and insecurities when faced with more sophisticated social systems.

These ancient DNA findings both rewrite our understanding of Celtic Britain and invite us to question how many other advanced social structures throughout history were deliberately mischaracterized and destroyed, taking with them valuable lessons in human organization that we’re only now beginning to rediscover.