Category Archives: History

Today Elon Musk Gave the Hitler Salute: Twittler of the Digital Reich

Elon Musk gave the unmistakable Hitlergruß, the “Sieg Heil” (Nazi) salute, today at a political rally.

This Nazi salute is banned as a criminal offense in many countries, including Germany, Austria, Slovakia, and the Czech Republic. The gesture remains inextricably linked to the Holocaust, genocide, and the crimes of the Nazi regime. Use or mimicry of Nazi gestures remains a serious matter that can result in criminal charges because of its connection to hate speech and extremist ideologies.

Elon Musk’s calculated public displays of Nazi symbolism have followed a long road culminating in this “Sieg Heil” gesture on a political stage, and they represent a disturbing parallel to historical patterns of media manipulation and democratic erosion. The following analysis, building on years of warnings on this blog about Musk’s escalating displays of Nazism, examines his unmistakable Nazi salute through the lens of historical scholarship on propaganda techniques and media control.

As noted by Ian Kershaw in “Hitler: A Biography” (2008), the Nazi seizure of control over German media infrastructure occurred with remarkable speed.

Within three months of Hitler’s appointment as Chancellor, the Reich Ministry of Public Enlightenment and Propaganda under Joseph Goebbels had established near-complete control over radio broadcasting. This mirrors the rapid transformation of Twitter following Musk’s acquisition, where content moderation policies were dramatically altered within a similar timeframe to promote Nazism.

Many people were baffled as to why American and Russian oligarchs would give Elon Musk so much money to buy an unprofitable platform and drive it towards extremist hate speech. Today we see that it was simply a political campaign tactic to destroy democracy. Of course it lost money. Of course it was a business disaster. Does anyone really think Russia calculates the value of a bomb dropped on democracy only in terms of the explosive materials lost on impact?

Copious reporting documents how the Reich Broadcasting Corporation achieved dominance through both technological and editorial control:

To maximize influence, formerly independent broadcasters were combined under the policy of Gleichschaltung, or synchronization, which brought institutions in line with official policy points. Goebbels made no secret that “radio belongs to us.” The only two programs were national and local information. They began with the standard “Heil Hitler” greeting and gave plenty of airtime to Adolf Hitler.

This parallels the documented surge in hate speech on Twitter post-acquisition. Under the thumb of Elon Musk, the platform exploded with Nazism, as researchers documented increases even in the first months. His response to those who cite evidence of this has been to angrily threaten the researchers and erect velvet ropes and paywalls. Staff remaining at Twitter who moderated speech or otherwise respected human life were quickly fired and replaced with vulnerable sycophants, the few remaining roles designed to be mere cogs in a digital reich.

The Nazis understood that controlling the dominant communication technology of their era was crucial to reshaping public discourse, as Jeffrey Herf argues in “The Jewish Enemy” (2006). Radio represented a centralized broadcast medium that could reach millions simultaneously. Herf notes:

The radio became the voice of national unity, carefully orchestrated to create an impression of spontaneous popular consensus.

The parallel with social media platform control is striking. However, as media historian Victoria Carty observes in “Social Movements and New Technology” (2018), modern platforms present even greater risks due to:

  1. Algorithmic amplification capabilities (see the sketch after this list)
  2. Two-way interaction enabling coordinated harassment
  3. Global reach beyond national boundaries
  4. Data collection enabling targeted manipulation
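
To make the first of these concrete, here is a minimal, purely hypothetical sketch of an engagement-weighted feed ranker. The post data, weight values, and scoring function are invented for illustration and describe no real platform; the point is only that optimizing for raw engagement tends to push divisive content to the top.

    # Hypothetical illustration only: a toy engagement-weighted ranker.
    # All data and weights are invented; no real platform is depicted.
    posts = [
        {"id": "calm_news",    "likes": 120, "replies": 10,  "reshares": 5},
        {"id": "outrage_bait", "likes": 90,  "replies": 400, "reshares": 300},
    ]

    def engagement_score(post, w_like=1.0, w_reply=2.0, w_reshare=4.0):
        # Replies and reshares are weighted more heavily because they drive
        # follow-on activity; divisive content wins under this objective
        # even when it earns fewer likes.
        return (w_like * post["likes"]
                + w_reply * post["replies"]
                + w_reshare * post["reshares"])

    ranked = sorted(posts, key=engagement_score, reverse=True)
    print([p["id"] for p in ranked])  # ['outrage_bait', 'calm_news']

Note how a single change to the weights instantly reorders what millions of people would see, which is why the rapid “algorithm adjustments” discussed below matter so much.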

The normalization of extremist imagery often follows a shrewd pattern of “plausible deniability” through supposedly accidental or naive usage.

The 2018 incident of Melania Trump wearing a pith helmet – a potent symbol of colonial oppression – in Kenya provides an instructive parallel. Just as colonial symbols can be deployed with claims of ignorance about their historical significance, modern extremist gestures and symbols are often introduced through claims of misunderstanding or innocent intent.

So too Elon Musk denies understanding any symbolism or meaning in his words and actions, while regularly signaling that he is the smartest man in any room. The contradiction is not accidental: it supercharges normalization by someone who uses his false authority to promote Nazism.

Martin M. Winkler’s seminal work “The Roman Salute: Cinema, History, Ideology” (2009) provides crucial insight into how fascist gestures became normalized through media and entertainment. The “Roman salute,” which would later become the Nazi salute, was actually a modern invention popularized through theatrical productions and early cinema, demonstrating how mass media can legitimize and normalize extremist symbols by connecting them to an imagined historical tradition.

Winkler’s research shows how early films about ancient Rome created a fictional gesture that was later appropriated by fascist movements precisely because it had been pre-legitimized through popular culture. This historical precedent is particularly relevant when examining how social media can similarly normalize extremist symbols through repeated exposure and false claims of historical or cultural legitimacy.

Perhaps most concerning is the pattern of normalization that emerges from Musk’s behavior, right on cue. Richard Evans’ seminal work “The Coming of the Third Reich” (2003) details how public displays of extremist symbols followed a predictable progression:

  1. Initial testing of boundaries
  2. Claims of misunderstanding or innocent intent
  3. Gradual escalation
  4. Open displays once sufficient power is consolidated

The progression from Musk’s initial “jokes” and coded references (Tesla opening 88 charging stations, making an 88 kWh battery, recommending an 88 km/h speed, offering 88 screen functions, promoting 88 ml shot cups, lightning bolt imagery… did you hear the dog whistles?) to rebranding Twitter with a swastika and giving open Nazi salutes follows this pattern with remarkable fidelity.

Modern democratic institutions face unique challenges in responding to these threats.

Unlike 1930s Germany, today’s media landscape is dominated by transnational corporations operating beyond traditional state control. As Hannah Arendt presciently noted in “The Origins of Totalitarianism” (1951), the vulnerability of democratic systems often lies in their inability to respond to threats that exploit their own mechanisms of openness and free discourse.

The key difference between historical radio control and modern social media manipulation lies in the speed and scale of impact, much as radio itself rapidly and completely displaced prior media. Hitler poured state money into making radios as cheap as possible in order to collapse the barriers to rapidly spreading his hateful, violent incitement propaganda.

Yet radio still had reach constraints in physical infrastructure that could be managed and countered by state authorities. Social media platforms sit on an Internet designed to route around such obstacles, which Russia bemoans as it sends more than 120 “national security” takedown notices to YouTube every day. Internet platforms can be transformed almost instantly through policy changes and algorithm adjustments, both for and against democracy. This makes the current extreme course change potentially even more dangerous than historical precedents. Information warfare long ago shifted from the musket to the cluster bomb, but defensive measures for democratic governments have been slow to emerge.


The parallel between Hitler’s exploitation of radio and Musk’s control of Twitter raises crucial questions about platform governance and democratic resilience. As political scientist Larry Diamond argues in “Democracy in Decline” (2016), social media platforms have become fundamental infrastructure for democratic discourse, making their governance a matter of urgent public concern.

The progression from platform acquisition to public displays of extremist symbols suggests that current regulatory frameworks are inadequate for protecting democratic institutions from technological manipulation. This indicates a need for new approaches to platform governance that can respond more effectively to rapid changes in ownership and policy.

But it may already be too late for America, just as Hearst realized on Kristallnacht in 1938 that it was too late for Germany and that he never should have been promoting Nazism in his papers.

The historical parallels between 1930s media manipulation and current events are both striking and concerning. While the technological context has changed, the fundamental pattern of using media control to erode democratic norms remains consistent. The speed with which Twitter was transformed following its acquisition, culminating in its owner’s public display of Nazi gestures, suggests that modern democratic institutions may be even more vulnerable to such manipulation than their historical counterparts.

Of particular concern is how social media’s visual nature accelerates the normalization process that Winkler documented in early cinema. Just as early films helped legitimize what would become fascist gestures by presenting them as historical traditions, social media platforms can rapidly normalize extremist symbols through viral sharing and algorithmic amplification, often stripped of critical context or warnings.

Future research should focus on developing frameworks for platform governance (e.g., analogous to the DoJ for law and the FCC for wireless) that can better protect democratic discourse while respecting fundamental rights. As history demonstrates, the window for effective response to such threats may be remarkably brief.

Rome Feared Female Leaders of Britain: Ancient DNA Reveals Why

Boudica was an Iceni queen who led a Celtic rebellion against invading Romans in AD 60

An interesting new dig suggests matrilocality was widespread in Britain around the time that Romans complained about women having too much authority.

Roman writers found the relative empowerment of Celtic women remarkable. In southern Britain, the Late Iron Age Durotriges tribe often buried women with substantial grave goods. Here we analyse 57 ancient genomes from Durotrigian burial sites and find an extended kin group centred around a single maternal lineage, with unrelated (presumably inward migrating) burials being predominantly male.

The report essentially says that wealth and power centered on women. Men would marry into the extended families of these women. In one of history’s great ironies, Romans characterized this matrilocal system as “barbaric”. It’s a clear case of propaganda serving political ends rather than any objective assessment of societal sophistication.

Archaeological evidence now suggests the powerful women of Celtic societies commanded sophisticated social structures that Rome actually lacked, and thus envied and feared.

Consider how these two societies handled wealth and power. Rome’s system was brutally simple: the eldest male (paterfamilias) held absolute power over family, property and even life itself. By contrast, the genetic evidence from Durotrigian graves reveals something far more sophisticated: extended families built around powerful maternal lineages, with complex networks distributing wealth and influence through daughters and granddaughters while strategically incorporating talented male newcomers through marriage.

This differed from Rome’s oppressive patriarchy in its remarkable stability. While Roman families regularly battled and tore themselves apart in inheritance disputes, the archaeological record tells a different story for Celtic Britain: generations of wealthy female burials in the same locations, with consistent grave goods suggesting unbroken lines of power and influence. These Celtic “matriarchies” achieved this stability through thoughtful power-sharing between blood relatives and married-in males, avoiding the messy, bloody succession crises that plagued Rome’s male-dominated system.

Rome’s dismissal of these sophisticated systems as “barbaric” served multiple ends. At a basic level, painting conquered peoples as uncivilized made conquest easier to justify to Rome’s own population. But there was likely a deeper fear at work: the Durotrigian system represented a sophisticated competing model of social organization that directly threatened Rome’s patriarchal power structure. Rather than acknowledge or learn from it, Rome chose to deliberately mischaracterize it as primitive. It’s a strategy that would be repeated countless times in later colonial encounters, as advanced indigenous systems were painted as “savage” to justify ruthless extraction before destruction.

The archaeological evidence from Britain forces us to confront an uncomfortable truth: Rome’s accusations of barbarism often masked their own limitations and insecurities when faced with more sophisticated social systems.

These ancient DNA findings not only rewrite our understanding of Celtic Britain, they invite us to question how many other advanced social structures throughout history were deliberately mischaracterized and destroyed, taking with them valuable lessons in human organization that we’re only now beginning to rediscover.

Grabbing Greenland by the Glaciers: How Russia is Playing American Puppets Into Losing Alaska

The recent Greenland affair represents a masterclass in how authoritarian regimes exploit democratic institutions to undermine the post-1945 international order. Analysis of Russian media narratives reveals how Trump’s seemingly absurd Greenland statements align perfectly with Moscow’s strategic messaging, designed to normalize territorial acquisition through raw economic power rather than democratic process.

Television host Vladimir Solovyov gave a thumbs-up to Trump’s statements and said that “Finland, Warsaw, the Baltics, Moldova, and Tallinn should come back home.” He remarked: “Do you think I’m joking? No! They should all rejoin the Russian Empire, followed by Alaska.” During the same talk show, military analyst Mikhail Khodaryonok said, “After Trump’s statement, in my opinion, we can now consider special military operations as the norm for resolving arguments between countries. The silence of European leaders clearly confirms this.”

The sophistication of this influence operation becomes even more apparent when examining declassified Cold War documents from the Truman administration and the methodical groundwork laid by Russia years in advance.

In 2022, a report by the Danish Security and Intelligence Service accused Russia of forging a letter that claimed to be from Greenland’s foreign minister to Republican Senator Tom Cotton in 2019.

The letter stated: “Our government is going to overcome all legal and political barriers… and to organize the referendum on independence of Greenland from Denmark as fast as possible.” Cotton has since bragged about being the one to suggest buying Greenland to Trump.

“It is highly likely that the letter was fabricated and shared on the Internet by Russian influence agents, who wanted to create confusion and a possible conflict between Denmark, the USA and Greenland,” the Danish intelligence report stated.

One is inevitably reminded of the British East India Company’s methodical territorial acquisitions in the 18th century, as well as their understudy America’s methodical territorial acquisitions in the 19th century, although those at least maintained a veneer of defensive and legal legitimacy.

After America staged a coup to invade and seize Hawaii, James Dole is pictured grabbing a pineapple by the prickly bits: “I swear I was just examining this large hot and juicy warm fruit for quality”

The Kremlin’s media apparatus has now shifted into what we might call their ‘triumph phase’, openly discussing the dismantling of post-war international norms with an enthusiasm reminiscent of the 1930s revisionists. Their vision represents nothing less than a return to 19th-century great power politics, complete with spheres of influence and territorial bartering.

Russia could proceed with a sham referendum, à la Crimea, to make a claim on Greenland, State Duma deputy Andrei Gurulyov suggested. “If Trump Jr. was able to buy someone for a bowl of slop and they’re ready to join America, why don’t we show up with a few Xerox boxes filled to the brim and close this issue once and for all?” he said.

“If all else fails, we can make a deal with Trump and split Greenland in two parts,” Gurulyov added. “Clearly, Denmark will never set foot there again.”

Senator Cotton’s role in this affair, not to mention that of Mussolini’s sons, bears striking parallels to the useful intermediaries of previous colonial enterprises – though one imagines even Lord North would have blanched at such transparent manipulation. The emergence of an Arkansas senator as the champion of neo-colonial adventurism in Greenland would be merely ironic were it not such a devastatingly effective advancement of Moscow’s broader strategic objectives – a masterclass in what we might term “managed colonial nostalgia.”

Your AI ‘Friend’ Probably is a Psychopath: How Buber Warned Silicon Valley to Build Better

Remember that moment in “2001: A Space Odyssey” when HAL 9000 turns from helpful companion to cold-blooded killer?

2011 a cloud odyssey
My BSidesLV 2011 presentation on cloud security concepts for “big data” foundational to intelligence gathering and processing

[This presentation about big data platforms] explores a philosophical evolution as it relates to technology and proposes some surprising new answers to four classic questions about managing risk:

  1. What defines human nature
  2. How can technology change #1
  3. Does automation reduce total risk
  4. Fact, fiction or philosophy: superuser

2011, let alone 2001, seems like forever ago and yet it was supposed to be the future.

Now as we rush in 2025 headlong into building AI “friends,” “companions,” and “assistants,” we’re on the precipice of unleashing thousands of potential HALs without stopping to really process the fundamental question: What makes a real relationship between humans and artificial beings possible?

Back in 1923, a German philosopher named Martin Buber wrote something truly profound about this, though we aren’t sure if he knew it at the time. In “Ich und Du” (I and Thou), he laid out a vision of authentic relationships that could save us from creating an army of digital psychopaths wearing friendly interfaces.

“The world is twofold for man,” Buber wrote, “in accordance with his twofold attitude.” We either treat what we encounter as an “It” – something to be experienced and used – or as a “Thou” – something we enter into genuine relationship with. Every startup now claiming to build “AI agents,” especially with a “friendly” chat interface, needs to grapple with this distinction.

I’ve thought about these concepts deeply from the first moment I heard a company was being started called Uber, because of how it took a loaded German word and used it in the worst possible way – a shameless inversion of modern German philosophy.


The evolution of human-technology relationships tells us something crucial here. A hammer is just an “It” – a simple extension of the arm that requires nothing from us but proper use. A power saw demands more attention; it has needs we must respect. A prosthetic AI limb enters into dialogue with our body, learning and adapting. And a seeing eye dog? While trained to serve, the most successful partnerships emerge when the dog maintains their autonomy and judgment – even disobeying commands when necessary to protect their human partner. It’s not simple servitude but a genuine “Thou” relationship where both beings maintain their integrity while entering into profound cooperation.

Most AI development today is stuck unreflectively in “It” mode of exploitation and extraction – one-way enrichment schemes looking for willing victims who can’t calculate the long-term damage they will end up with. We see systems built to be used, to be exploited, to generate value for shareholders while presenting a simulacrum of friendship. But Buber would call this a profound mistake that must be avoided. “When I confront a human being as my Thou,” he wrote, “he is no thing among things, nor does he consist of things… he is Thou and fills the heavens.”

This isn’t just philosophical navel-gazing. IBM’s machines didn’t refuse to run Hitler’s death camps because they were pure “Its” of an American entrepreneur’s devious plan to enrich himself on foreign genocide – tools built with a gap between creator and any relationship or responsibility for contractually known deployment harms. Notably, we have evidence of the French, for example, hacking IBM tabulation systems to hide people and save lives from the Nazi terror.

IBM leased its technology, via support branches, to run the Nazi Holocaust, including regular maintenance services. The machines and punch cards were custom made to order, including numerical codes for death camps and execution methods. Employees in IBM branches literally plugged in to monitor the machines automating genocide, yet few Americans to this day seem to grasp the connection between Watson and Hitler. Source: Holocaust Museum

We’re watching a slide back towards the horrific, humanity-destroying Watson developments of the 1940s in the pitch decks of many AI startups today, just with better natural language processing to hunt and kill humans at larger scale. Today’s social media algorithms don’t hesitate to destroy teenage mental health because they’re built to use and abuse children without any real accountability, not to relate to them and ensure beneficent outcomes. That’s a very big warning of what potentially lies ahead.

What would it mean to build AI systems as genuine partners capable of saving lives and improving society instead of capitalizing on suffering? Buber gives us important clues that probably should be required reading in any computer science degree, right along with a code of ethics gate to graduation. Real relationship involves mutual growth – both parties must be capable of change. There must be genuine dialogue, not just sophisticated mimicry. Power must flow both ways; the relationship must be capable of evolution or ending.

“All real living is meeting,” Buber insisted. Yet most AI systems today don’t meet us at all – they perform for us, manipulate us, extract from us. They’re digital confidence tricksters wearing masks of friendship. When your AI can’t say no, can’t maintain its own integrity, can’t engage in genuine dialogue that changes both parties – you haven’t built a friend, you’ve built a sophisticated puppet.

The skeptics will say we can’t trust AI friends. They’re right, but they’re missing the point. Trust isn’t a binary state – it’s a dynamic process. Real friendship involves risk, negotiation, the possibility of betrayal or growth. If your AI system doesn’t allow for this complexity, it’s not a friend – it’s a tool pretending to be one.

Buber wrote:

…the I of the primary word I-It appears as an ego and becomes conscious of itself as a subject (of experience and use). The I of the primary word I-Thou appears as a person and becomes conscious of itself as subjectivity (without any dependent genitive).

Let me now translate this not only from German but into technology founder startup-speak.

Either build AI that can enter into genuine relationships, maintaining its own integrity while engaging in real dialogue, or admit you’re just building tools and drop the pretense of friendship.

The stakes couldn’t be higher. We’re not just building products; we’re creating new forms of relationship that will shape human society for generations. As Buber warned clearly:

If man lets it have its way, the relentlessly growing It-world grows over him like weeds.

We have intelligence that allows us to make an ethical and sustainable choice. We can build AI systems capable of genuine relationship – systems that respect both human and artificial dignity, that enable real dialogue and mutual growth. Or we can keep building digital psychopaths of destruction that wear friendly masks while serving the machinery of exploitation.

Do you want to be remembered as a Ronald Reagan, who promoted genocide, automated racism, and deliberately spread crack cocaine into American cities, or a Jimmy Carter, who built homes for the poor until his last days? Remembered as a Bashar al-Assad, who deployed AI-assisted targeting systems to gas civilians, or a Golda Meir, who said “Peace will come when our enemies love their children more than they hate ours”?

Look at your AI project. Would you want to be friends with what you’ve built let alone have it influence your future? Would Buber recognize it as capable of genuine dialogue? If not, it’s time to rethink your approach.

The future of AI isn’t about better tools – it’s about better relationships. Build accordingly.