Peter Thiel’s “Antichrist” Struggle: Nazism Dressed as Religion

People are asking me questions about the recent unhinged rants of political extremist Peter Thiel, set up by the Acts 17 Collective.

The Acts 17 Collective apparently promotes Nazism; it is not to be confused with The Acts 17 Apologetics, which publishes anti-Islam content and inflammatory rhetoric.

What do I think?

Thiel is clearly backwards and ignorant, inverting history.

The real Luddites, for example, were technology experts who opposed undemocratic deployment that stripped workers of power and dignity. His depiction of them was completely wrong. Thiel himself opposes democratic oversight of technology deployment because elites like himself gain from being above the law.

The reasons for his false telling of history should be obvious.

When he calls critics “legionnaires of the Antichrist,” he’s not making an economic argument. It’s a worldview shaped by his father’s lifelong flight from democratic accountability. Dressing up in religious language is a simple trick to seem profound and to hide authoritarian desires.

Once again his rants sound like the son of a Nazi still denying history, on the run from accountability, because that’s exactly who Thiel is.

Consider Peter’s life story: his father fled denazification in 1967 Germany for apartheid Namibia, then fled approaching majority rule in 1977 for Reagan’s vision of white rule over California. Peter himself bragged at Stanford that apartheid “works” and is “economically sound,” referring to his father’s illegal uranium mining operation, where Black workers died from radiation exposure while white managers enjoyed country club privileges.

Most revealing is Thiel’s resentment toward the very accountability mechanisms designed to document horrible crimes and prevent their return. When directly asked about Nazi accountability he openly expressed nostalgia for summary executions over due process:

I think there was certainly a lot of different perspectives on what should be done with the Nuremberg trials. It was sort of the US that pushed for the Nuremberg trials. The Soviet Union just wanted to have show trials. I think Churchill just wanted summary executions of 50,000 top Nazis without a trial. And I don’t like the Soviet approach, but I wonder if the Churchill one would have actually been healthier than the American one.

False. Wrong.

Stalin wanted summary executions. Churchill vehemently opposed them.

The Tehran Conference dinner on November 29, 1943 is famous for how clearly it separated the three positions.

Stalin said: “At least 50,000 and perhaps 100,000 of the German Commanding Staff must be physically liquidated.”

Roosevelt then “jokingly said that he would put the figure of the German Commanding Staff which should be executed at 49,000 or more.”

Churchill objected strongly to them both and “took strong exception to what he termed the cold blooded execution of soldiers who had fought for their country.” He argued that while “war criminals must pay for their crimes and individuals who had committed barbarous acts…they must stand trial at the places where the crimes were committed,” he still “objected vigorously…to executions for political purposes.”

Thiel couldn’t be more wrong about history.

The Nuremberg trials created an undeniable historical record, established international law for crimes against humanity, and built the framework for holding authoritarian regimes accountable.

Thiel, sounding like Stalin, argues it would have been “healthier” to execute Nazis quickly without trials. That would avoid creating exactly the documentary evidence and legal precedents that prevent authoritarian ideology from being laundered across generations.

Given his father’s 1967 flight from denazification, this isn’t abstract philosophy. It’s personal resentment of the accountability framework that threatens to expose what he represents.

The illegal apartheid uranium mining background is especially chilling given the current push for unfettered AI development. Same pattern: extract maximum value, externalize the risks onto vulnerable populations, resist any democratic input.

This isn’t intellectual contrarianism. It’s simply Nazism.

Once you see the multigenerational fascist ideology underneath his Silicon Valley success, you see exactly why someone who studied Nazi legal theorist Carl Schmitt explicitly rejects democracy as incompatible with his vision of the future. Continuing his father’s laundering of Nazism, he wields enormous influence over American politics while the genealogy remains conveniently masked, even as he maneuvers pawns like JD Vance into position to end democracy.

Ralph Forbes campaigning in Christian garb for the American Nazi Party, before becoming the official “America First” candidate for President in 1996

Floppy Copy Party at Cambridge

Floppy “experts” are standing by at Cambridge to copy your floppy.

On the afternoon of Thursday 9 October, Cambridge University Library are hosting a floppy disk workshop where experts will attempt to copy the data from your floppy disk onto a modern format.

We can handle most common floppy disk types and systems, but there are a few limitations, and we require you to book a slot in advance. This ensures we have the right equipment ready for your disk and can give each one the best possible chance of being read.

The experts will aim to look at one floppy disk per person.

They make magnetic media sound so exciting and exotic. I have a lot of them, so the idea of a single slot that has to be booked in advance sounds very… retro. You would think that by 2025 someone could show up to a copy party with more than a single floppy at a time. They also describe the formats as if varied or extensive, when only 5.25″ and 3.5″ disks are expected, because nobody is going to show up with an 8″.

GA Tesla Kills One Motorcyclist

The report indicates the Tesla cut off the rider by turning left in front of him.

According to the Georgia State Patrol, the motorcyclist, identified as Elijah Andrew Espy, was traveling north on Dry Creek Road when a southbound Tesla slowed to turn left into a driveway. The motorcycle struck the right passenger door of the Tesla, ejecting the rider. Espy, a student at Armuchee High School, was pronounced dead at the scene.

Anthropic’s Security Theater Makes Us All Less Safe: 4 Data Points Ain’t Universal Law

Imagine claiming you’ve discovered a universal law of gravity after testing between 5’6″ and 5’8″. That’s essentially what Anthropic and collaborators just published about AI poisoning attacks.

A small number of samples can poison LLMs of any size

Whether these authors, pumped up by Anthropic, consciously meant to deceive or were just caught up in publish-or-perish culture is irrelevant. Their output is false extrapolation.

I would say misinformation, but I know that understanding information warfare history and theory distracts from the battle, so let’s stick to technical language.

The paper promoted by Anthropic makes false claims based on insufficient evidence. That’s what matters.

If someone publishes “gravity works differently above 5 feet” after testing only two inches, we don’t debate the study’s intentions; we reject the claim as unsupported nonsense.

The paper seems so embarrassing to Anthropic that it should be retracted, or at minimum retitled to reflect what actually was found: Limited testing of a trivial backdoor attack on small models shows similar poisoning thresholds across a narrow range of model sizes.

The test observed a phenomenon at 4 points in a narrow range, yet declared from that tiny sample a “constant” that applies universally. Worse, the “discovery” is being promoted as if it had broad implications, without sufficient reason.

The whole PR push therefore veers into active damage to AI safety discourse, creating false precision (“250 documents”) that will be cited out of context and diverting attention from actual threats. I said I wouldn’t invoke information warfare doctrine, so I’ll leave it at that.

This uses appeals to authority (prestigious brands and institutions) with huge amounts of money to undermine trust in security research, wasting resources on non-problems. Ok, now I’ll leave it at that.

The core claim is that poisoning requires a “near-constant number” of documents across all model sizes. This is clearly and logically false based on their own evidence:

  1. They tested exactly 4 model sizes
  2. The largest was 13B parameters (tiny by modern standards)
  3. They tested ONE trivial attack type
  4. They literally confess “it remains unclear how far this trend will hold”

How far, indeed. More like how gullible are readers.

They buried their true confession in the middle. Here it is, translated and exposed more clearly: they have no idea whether this applies to actual AI systems, yet published under a fear-based fictional headline to fill security theater seats.

They also buried the fact that existing defenses already work. Their own experiments show post-training alignment largely eliminates these backdoors.

And on top of those two crucial flaws, the model size issues can’t be overlooked. GPT-3 has 175B parameters (13x larger), while GPT-4, Claude 3 Opus, and Gemini Ultra are estimated near 1.7 trillion parameters (130x larger). They tested only up to 13B parameters, compared to production models 13-130x larger, making claims of “constants”… well… bullshit.
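To see why four points in a narrow range can’t pin down a universal law, here’s a minimal sketch. All numbers are invented for illustration (the `constant_law` and `linear_tail_law` functions and the specific sizes are my assumptions, not the paper’s data); only the rough ~250-sample figure and the 13B / 175B / 1.7T parameter scales echo the discussion above. The point: two scaling laws can agree to within measurement noise across every tested size, yet disagree wildly at production scale.

```python
# Hypothetical illustration: two candidate scaling laws for "poison
# samples needed" vs. model size. Every number here is made up.

def constant_law(n_params):
    # The promoted claim: a near-constant ~250 poison samples.
    return 250.0

def linear_tail_law(n_params):
    # Same ~250 baseline plus a term far too small to detect below 13B.
    return 250.0 + 1e-10 * n_params

tested = [6e8, 2e9, 7e9, 13e9]     # four small model sizes (assumed)
production = [175e9, 1.7e12]       # GPT-3 scale, ~frontier-scale estimate

# Across the tested range the two laws differ by at most ~1.3 samples,
# well inside any plausible experimental noise...
max_gap_tested = max(abs(constant_law(n) - linear_tail_law(n)) for n in tested)

# ...yet at production scale they diverge: ~17.5 samples at 175B,
# ~170 samples at 1.7T. Four points below 13B cannot tell them apart.
gaps_at_scale = [abs(constant_law(n) - linear_tail_law(n)) for n in production]

print(max_gap_tested, gaps_at_scale)
```

Any number of other curves (logarithmic, step functions, power laws) would fit the tested points equally well, which is exactly why declaring a “constant” from this range is unsupported.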

This paper announcing a “universal constant” is based on testing LESS THAN 1% of real model scales. It doesn’t deserve the alarmist title “POISONING ATTACKS ON LLMS REQUIRE A NEAR-CONSTANT NUMBER OF POISON SAMPLES” as if it’s a ground-breaking discovery of universal law.

All these prestigious institutions partnered with Anthropic – UK AISI, Oxford, and the Alan Turing Institute – give no justification for extrapolating from 4 data points into infinity. Slippery slope is supposed to be a fallacy. This reads as a very suspicious failure to maintain basic research standards. Who benefits?

In related threat research news, a drop of rain just fell, so you can maybe expect ALIENS LANDING.

It’s not science. It’s not security. Saying “it’s unclear how far the trend holds” warrants curiosity, not alarm. Cherry-picking data to manufacture threat headlines can’t be ignored.

Anthropic claims to prioritize AI safety. Yet publishing junk science that manufactures false threats while ignoring real ones is remarkably unintelligent, the opposite of safety.

This is the stuff of security theater that makes us all less safe.