
Run Ollama on AMD GPU ROCm with TuxedoOS

If you’re like me, you might end up with an AMD machine wondering how to squeeze as many agents onto it as possible with the least hassle. Fortunately, AMD ships amdgpu-install as the official way to put ROCm on Linux. Unfortunately, their handy script reads /etc/os-release, checks the ID field against a supported list, and if the distro is not there it…exits. I say unfortunately because it is a lazy cop, far too strict for reality.

Take TuxedoOS 24.04 for example. It’s Ubuntu 24.04 (Noble) with a modified kernel and a few Tuxedo packages on top. Every AMD apt repository works. Every library installs cleanly. Nothing gets in the way until this amdgpu-install OS check shows up and falls over.

Challenge accepted. Here’s the happy path to a GPU-accelerated Ollama on a new TuxedoOS laptop that has the Kraken Point APU (Radeon 860M, gfx1152). You may find the same method works for other AMD APUs and dGPUs, and for other Ubuntu-derived distros.

It turned out not to be a problem at all, so I hope AMD reconsiders their lazy cop.

Step 1: Present a “clean” credential

I know this is stupid, but it’s really the trick. You just bind-mount a temporary os-release that says you are running Ubuntu Noble. The mount is temporary, reverts on unmount, and does not touch the real file on disk.

sudo tee /tmp/os-release-ubuntu >/dev/null <<'EOF'
PRETTY_NAME="Ubuntu 24.04 LTS"
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION="24.04 LTS (Noble Numbat)"
VERSION_CODENAME=noble
ID=ubuntu
ID_LIKE=debian
UBUNTU_CODENAME=noble
EOF
 
sudo mount --bind /tmp/os-release-ubuntu /etc/os-release

The real file on disk never changes. When you unmount, the original os-release is back. A reboot also clears it.

AMD’s installer doesn’t check anything else, which goes to my point about how poorly their support process is run right now. I would have expected them to check their own hardware first and the distribution last. That framing would make people want to use the hardware.

Step 2: Install AMD repository and ROCm

Run the AMD installer script as documented.

ROCm 7.2.1 supports the latest Radeon 9000 Series (RDNA 4) and select 7000 Series (RDNA 3) GPUs, and introduces support for Ryzen APUs

sudo apt update
wget https://repo.radeon.com/amdgpu-install/7.2.1/ubuntu/noble/amdgpu-install_7.2.1.70201-1_all.deb
sudo apt install ./amdgpu-install_7.2.1.70201-1_all.deb
amdgpu-install -y --usecase=rocm --no-dkms

The --no-dkms flag has to be there. AMD ships ROCm for Ryzen APUs on top of the inbox amdgpu kernel driver. Installing their DKMS module on a non-Ubuntu kernel leads to mismatches. The inbox driver in any recent kernel (6.14 or later) works.

When the install completes, unmount the bind, since we don’t need to fool them anymore:

sudo umount /etc/os-release

Step 3: Join GPU group and reboot

ROCm requires the current user to be in the render and video groups. Without these, rocminfo will not see the GPU.

sudo usermod -aG render,video $USER
sudo reboot

Step 4: Verify GPU is recognized

After the reboot, confirm three things: group membership, GPU enumeration, and OpenCL platform.

groups
rocminfo | grep -A2 "Agent 2"
/opt/rocm/bin/clinfo | grep -E "Device Name|Platform Name"

Expected output for the Kraken Point system I am testing with:

Name: gfx1152
Marketing Name: AMD Radeon 860M Graphics
Platform Name: AMD Accelerated Parallel Processing

Step 5: Prove HIP compiles and runs

The ROCm 6+ API dropped gcnArch in favor of gcnArchName, so I used this test:

cat > /tmp/hip_test.cpp <<'EOF'
#include <hip/hip_runtime.h>
#include <cstdio>
int main() {
  int n = 0;
  (void)hipGetDeviceCount(&n);
  printf("HIP devices: %d\n", n);
  for (int i = 0; i < n; i++) {
    hipDeviceProp_t p;
    (void)hipGetDeviceProperties(&p, i);
    printf("  %d: %s (%s)\n", i, p.name, p.gcnArchName);
  }
}
EOF
/opt/rocm/bin/hipcc /tmp/hip_test.cpp -o /tmp/hip_test
/tmp/hip_test

Successful output will look like this:

HIP devices: 1
  0: AMD Radeon Graphics (gfx1152)

At this point ROCm itself is complete. Every application that links against the system ROCm libraries will find the GPU.

WE’RE DONE! But wait, there’s more

Ollama now supports AMD graphics cards

Step 6: Strap Ollama to the GPU

Ollama bundles its own ROCm runtime in /usr/local/lib/ollama/rocm. The system ROCm install does not affect it. Ollama’s precompiled kernels target a specific list of GPU architectures, and gfx1152 is not currently on that list. Maybe it will be. But in the meantime the easy solution is to use HSA_OVERRIDE_GFX_VERSION, which tells the HSA runtime to treat the installed GPU as a different architecture. For RDNA 3.5 APUs (gfx1150, gfx1151, gfx1152), setting it to 11.0.0 loads gfx1100 kernels. RDNA 3 and RDNA 3.5 are close enough that gfx1100 code runs on RDNA 3.5 silicon for every op Ollama uses.
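The override string is just the gfx number spelled out as major.minor.stepping, where the last two characters of the gfx name are hex digits. A tiny helper, my own illustration and not part of Ollama or ROCm, makes the mapping concrete:

```python
def gfx_to_override(gfx: str) -> str:
    """Spell an LLVM target like 'gfx1152' in the dotted form that
    HSA_OVERRIDE_GFX_VERSION uses. The last two characters are hex
    digits for minor and stepping (e.g. gfx90a -> 9.0.10)."""
    num = gfx.removeprefix("gfx")
    major, minor, stepping = num[:-2], num[-2], num[-1]
    return f"{int(major)}.{int(minor, 16)}.{int(stepping, 16)}"

# Native version of the Radeon 860M, and the version we pretend to be:
print(gfx_to_override("gfx1152"))  # 11.5.2
print(gfx_to_override("gfx1100"))  # 11.0.0
```

So setting HSA_OVERRIDE_GFX_VERSION=11.0.0 is literally claiming to be gfx1100, which is why the gfx1100 kernels load.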

Create a systemd drop-in so the override persists across restarts:

sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf >/dev/null <<'EOF'
[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=11.0.0"
Environment="HIP_VISIBLE_DEVICES=0"
Environment="ROCR_VISIBLE_DEVICES=0"
EOF
 
sudo systemctl daemon-reload
sudo systemctl restart ollama

Confirm the environment actually reached the process:

sudo cat /proc/$(pgrep -f 'ollama serve')/environ | tr '\0' '\n' | grep -iE "hsa|hip|rocr"

Check the Ollama logs:

sudo journalctl -u ollama -n 80 --no-pager | grep -iE "rocm|gpu|inference compute"

Success will look something like this:

library=ROCm compute=gfx1100 name=ROCm0 description="AMD Radeon 860M Graphics" total="15.7 GiB" type=iGPU

Ollama reports the 860M as gfx1100 and is ready to offload model layers to it instead of soaking up your CPU cores. Before I wired up the GPU, my 16 cores were pegged at 100% for five minutes or more. After, the CPU sat at 5% while the GPU was pegged.

Step 7: GPU spotting during inference

Open the system monitor (preferred if you like cool visuals) or just run rocm-smi in a loop in one terminal:

watch -n 0.5 rocm-smi

Then in another terminal run inference:

ollama run llama3.2:3b "explain the Bauhaus movement in detail"

GPU utilization shoots above 90% during generation. VRAM used jumps to roughly the model size.

Once you see jumps, it’s tuning time

Figuring out what is fast and stable under real workloads is a bigger post. To get started quickly, there are four AMD APU knobs to turn:

  1. Shared memory ceiling
  2. Ollama runtime flags
  3. CPU governor
  4. Power profile

First, the shared memory ceiling. AMD APUs have no dedicated VRAM; it’s kind of their cost-saving thing. The kernel caps how much system RAM the GPU can address via a Translation Table Manager (TTM) pages limit. The default is half of system RAM. Raising it costs nothing when the GPU is idle. On a 32 GB system, I figure just below 24 GB is a reasonable target.

sudo apt install -y pipx
pipx ensurepath
pipx install amd-debug-tools
 
amd-ttm           # show current
amd-ttm --set 22  # raise to 22GB

The 22 GiB leaves enough headroom for the OS, a browser, and KDE, as absurd as that sounds. I remember back in the day… never mind. On 64 GB, 48 GB would be my starting point. On 128 GB you can use AMD’s own recommendation of 96 GB, which is kind of like saying the people who have the most money and least need for tuning get the AMD team’s attention.

The setting persists in /etc/modprobe.d/ttm.conf and takes effect after reboot.
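If you want to sanity-check the value amd-ttm wrote, the arithmetic is simple. This sketch assumes the limit is counted in 4 KiB pages (the standard x86-64 page size); the function name is mine, not part of any tool:

```python
PAGE_SIZE = 4096  # bytes; assumes the standard x86-64 page size

def gib_to_pages(gib: int) -> int:
    """Convert a GPU-addressable ceiling in GiB to a TTM page count."""
    return gib * 1024**3 // PAGE_SIZE

# A 22 GiB ceiling on a 32 GiB machine:
print(gib_to_pages(22))  # 5767168
```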

Second, Ollama has five flags that affect iGPU inference:

sudo tee /etc/systemd/system/ollama.service.d/override.conf >/dev/null <<'EOF'
[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=11.0.0"
Environment="HIP_VISIBLE_DEVICES=0"
Environment="ROCR_VISIBLE_DEVICES=0"
Environment="OLLAMA_FLASH_ATTENTION=1"
Environment="OLLAMA_KV_CACHE_TYPE=q8_0"
Environment="OLLAMA_NUM_PARALLEL=1"
Environment="OLLAMA_MAX_LOADED_MODELS=1"
Environment="OLLAMA_KEEP_ALIVE=30m"
EOF
sudo systemctl daemon-reload
sudo systemctl restart ollama

OLLAMA_FLASH_ATTENTION=1 cuts KV cache memory by roughly half on most modern models. OLLAMA_KV_CACHE_TYPE=q8_0 quantizes the KV cache to 8-bit, which saves significant memory for long contexts with negligible quality cost. OLLAMA_NUM_PARALLEL=1 and OLLAMA_MAX_LOADED_MODELS=1 prevent Ollama from thrashing the shared memory pool with concurrent requests, which can be truly painful to the user experience on an iGPU. OLLAMA_KEEP_ALIVE=30m holds the model in GPU memory for half an hour instead of the default five minutes, because cold-starts are the slowest part of inference when using memory that isn’t dedicated.

Third, the CPU governor. Are you on a laptop? I sure am. For obvious reasons a laptop’s default is usually powersave or schedutil, both of which clock the CPU down during the token-decode phase that runs between GPU kernels.

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
sudo cpupower frequency-set -g performance

Fourth, power profile. TuxedoOS is very proud of their widget and app for power management. It’s a bit annoying, really, but it is what it is and it can override governor decisions. Their Tuxedo Control Center (TCC) also handles fan curves and hardware-specific quirks. TCC masks power-profiles-daemon on purpose, and so we use TCC.

tuxedo-control-center &

I chose a performance-oriented profile in the GUI, which seems weird because it’s literally just a toggle. Why have a UI for a toggle? Maybe I’ll create a custom profile with the CPU governor set to performance and the fan curve ramped up for sustained load. On non-Tuxedo distros that use power-profiles-daemon, the equivalent is powerprofilesctl set performance. I will say this: when I was hammering the CPU before the GPU was recognized, the fans were so loud I couldn’t hear myself think, and my USB hub literally started screaming and shut down from the power conflicts. Anker, we need to have a word.

Reboot after the TTM change, and everything should be in place. Verify like this:

amd-ttm
sudo journalctl -u ollama --since "2 minutes ago" --no-pager | grep "inference compute"

The Ollama log line should show the new total VRAM ceiling matching your TTM setting.

Benchmark

When it’s good to go, you can send a generation through the API and check timing fields:

curl -s http://127.0.0.1:11434/api/generate -d '{
  "model": "qwen2.5:7b",
  "prompt": "Write a 300-word analysis of the Bauhaus Dessau period",
  "stream": false
}' > /tmp/ollama_result.json
 
python3 <<'EOF'
import json
d = json.load(open("/tmp/ollama_result.json"))
eval_s = d["eval_duration"] / 1e9
prompt_s = d["prompt_eval_duration"] / 1e9
print(f"prompt eval:  {d['prompt_eval_count']} tokens in {prompt_s:.2f}s = {d['prompt_eval_count']/prompt_s:.1f} tok/s")
print(f"generation:   {d['eval_count']} tokens in {eval_s:.2f}s = {d['eval_count']/eval_s:.1f} tok/s")
EOF

On my Radeon 860M (gfx1152, 8 CU RDNA 3.5) with 22GB TTM, performance governor, and flash attention enabled I posted these numbers:

llama3.2:3b Q4 → 31 tok/s generation, 360 tok/s prompt eval
qwen2.5:7b Q4 → 15 tok/s generation, 187 tok/s prompt eval

These numbers are bandwidth-bound. Kraken Point has a 128-bit LPDDR5X memory bus at roughly 120 GB/s, and each token streams the full weights through memory, so generation speed scales inversely with model size. The 2.1x speed ratio between 3B and 7B tracks the 2.4x size ratio, consistent with a memory-bandwidth ceiling.
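A quick back-of-envelope check of that ceiling. The Q4 model sizes below are my approximations, not measured values:

```python
# Each generated token streams the full quantized weights through
# memory, so tokens/sec is bounded by bandwidth / model_bytes.
BANDWIDTH_GB_S = 120.0  # Kraken Point 128-bit LPDDR5X, approximate

models_gib = {"llama3.2:3b Q4": 2.0, "qwen2.5:7b Q4": 4.7}  # approx sizes
for name, gib in models_gib.items():
    ceiling = BANDWIDTH_GB_S / (gib * 2**30 / 1e9)  # GiB -> GB
    print(f"{name}: <= {ceiling:.0f} tok/s theoretical")
```

The measured 31 and 15 tok/s land at roughly half to two-thirds of these theoretical ceilings for both models, which is the consistent pattern you would expect from a bandwidth-bound workload.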

Then we can confirm the model was being fully offloaded to GPU:

curl -s http://127.0.0.1:11434/api/ps | python3 -m json.tool | grep -E "size|vram"

size_vram equals size. The entire model is in GPU memory.
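A scripted version of the same check, run against an illustrative payload (the byte counts here are made up for the example, not captured output):

```python
import json

# Shape of the /api/ps response; the sizes are illustrative only.
sample = '{"models": [{"name": "qwen2.5:7b", "size": 5443376128, "size_vram": 5443376128}]}'

for m in json.loads(sample)["models"]:
    frac = m["size_vram"] / m["size"]
    print(f"{m['name']}: {frac:.0%} in GPU memory")  # 100% = fully offloaded
```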

Fiddle context length

Ollama defaults to a 4096-token context on every model, and I figure that’s worth changing. I tend to live in a world of longer files, and that means more memory. With the q8_0 KV cache, qwen2.5:7b at 8K adds roughly 500 MB over the 4K default, and 16K adds about 1 GB. Under our 22 GB ceiling this is still reasonable. Generation speed drops by about a quarter at 16K versus 4K because more KV cache streams through memory per token. There is no CLI flag, so you set it per request, per model, or globally.
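For a rough feel of where that memory goes, here is the standard KV-cache formula. The layer and head dimensions below are my assumptions for a 7B-class model with grouped-query attention, not values read out of Ollama, and real usage adds per-request overhead on top:

```python
def kv_cache_bytes(ctx, n_layers=28, n_kv_heads=4, head_dim=128, elem_bytes=1):
    """2x for K and V; elem_bytes=1 approximates q8_0, 2 would be f16."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx * elem_bytes

for ctx in (4096, 8192, 16384):
    print(f"{ctx:>6}-token context: {kv_cache_bytes(ctx) / 2**20:.0f} MiB")
```

The cache scales linearly with context, which is why doubling the window keeps doubling the memory; the absolute numbers shift with quantization and model dimensions.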

Per request via the API:

curl -s http://127.0.0.1:11434/api/generate -d '{
  "model": "qwen2.5:7b",
  "prompt": "summarize this long document ...",
  "stream": false,
  "options": { "num_ctx": 16384 }
}'

Per model via a Modelfile, creating a named variant:

cat > /tmp/qwen-16k.modelfile <<'EOF'
FROM qwen2.5:7b
PARAMETER num_ctx 16384
EOF
ollama create qwen2.5:7b-16k -f /tmp/qwen-16k.modelfile
ollama run qwen2.5:7b-16k

Globally for every model, add to the systemd drop-in:

Environment="OLLAMA_CONTEXT_LENGTH=8192"

What else can I tell you?

A new Tuxedo Computer running TuxedoOS on an AMD APU can feed Ollama the GPU. System ROCm 7.2.1 is available for any application that wants it. The HIP toolchain works on the actual architecture. With a tuned Ollama service, models fit GPU memory, flash attention gets used, and the KV cache gets “quantized” for comfortable context lengths.

Really this absurdly long post is a nothing-burger. There were two workarounds: a bind-mount for the installer’s OS check, and an HSA version override for Ollama’s bundled runtime. Neither touches the hardware, neither modifies any vendor code, and both revert cleanly.

Come on AMD, this post really doesn’t even need to exist, but you forced me to write it because of your lazy “are you on the list” cop.

:~$ ollama run qwen2.5:7b
>>> write a haiku
秋叶落无声,
风过知时节,
静待冬来临。
+-----------------------------+
|           prompt            |
+--------------+--------------+
               |
+--------------v--------------+
|           Ollama            |
+--------------+--------------+
               |
+--------------v--------------+
|  HSA_OVERRIDE_GFX_VERSION   |
|         = 11.0.0            |
|   (gfx1100 kernels load)    |
+--------------+--------------+
               |
+--------------v--------------+
|      ROCm 7.2.1 / HIP       |
+--------------+--------------+
               |
+--------------v--------------+
|   amdgpu (inbox driver)     |
|   Linux 6.17 (TuxedoOS)     |
+--------------+--------------+
               |
+--------------v--------------+
|        Radeon 860M          |
|          gfx1152            |
+-----------------------------+

America Prepares as Anthropic Mythos is 100X More Deadly Than Martian Death Ray

NBC News just ran a story called The Vulnpocalypse about Anthropic’s decision to withhold its Mythos model from the public. The tone is, well, you know.

The author, Kevin Collier, lined up well-known cybersecurity vendors to stoke fear that AI-powered hackers will crash financial systems, lock up hospitals, and shut down water treatment plants.

Sigh.

Anyone who has worked in security long enough will recognize this FUD genre immediately. Replace “AI” with “war dialer” and this is the exact same article the movie WarGames generated in 1983. At least back then we said the word war out loud instead of just implying it.

Captain Crunch Whistles for Everyone!

Back in 1983 some Milwaukee teenagers called the 414s (Milwaukee area code, yeah) waltzed into the unprotected computers at Los Alamos National Laboratory and Memorial Sloan-Kettering Cancer Center using nothing more exotic than a modem and a telephone line. The Newsweek cover on September 5, 1983 featured the word “hacker” for the first time on a major magazine cover.

The youngest of the 414s, therefore able to pose on the cover of Newsweek, September 5, 1983

Congress held hearings. Ronald Reagan was shown WarGames and asked the Joint Chiefs if the premise was real. Within a week the answer came back: yes, the premise was technically possible. Eighteen months later he signed NSDD-145, the first Presidential directive on computer security.

The actual legal consequences for the 414s were two years’ probation and a $500 fine for phone harassment. And even that seemed a bit much.

Time Magazine in 1983 with stern warning that network attacks on computers will kill someone.

Neal Patrick became a media star. John Draper, Captain Crunch himself, had been phreaking the phone system with a cereal box whistle and people talked about it as though he were going to bring down AT&T. The whistle found in kids’ cereal boxes exploited in-band signaling on the analog phone network (2600 Hz tone on the same channel as voice). The fix was to push for the long-overdue move to out-of-band signaling (SS7). It stands as proof of the harm from natural monopolies refusing to invest in baseline safety. Dare I say history tends to rhyme even when it doesn’t repeat?

The vulnerability landscape was real, the exploitation was incremental, and the apocalyptic framing served the companies selling defenses. McAfee built an entire empire on this dynamic, most memorably during the 1992 Michelangelo virus panic, when John McAfee personally stoked fear that millions of computers would be destroyed on March 6th. The press amplified, the public panicked, almost nothing happened, and McAfee’s sales went through the roof. Perhaps most bizarre was how he became a security industry celebrity for undermining trust in the security industry. The vendors and conference attendees at events like Black Hat or DEF CON acted as if Enron’s CEO should have been the toast of Wall Street.

The Same Article, Forty-Three Years Later

Collier’s piece follows the 1983 script with remarkable fidelity. The threat model is identical: hypothetical unsophisticated attackers gain access to powerful tools, critical infrastructure is vulnerable, and the proposed solution is withholding the tool from the public while sharing it with “partners.” By this logic we should be terrified of kids getting a hold of sophisticated string and precision percussion instruments. Jazz? Rock and Roll? Catastrophe.

The Soviet state both said “today he plays jazz, tomorrow he betrays his country” and also printed cheerful matchbox art of ФЕСТИВАЛЬ (Festival) when the political winds shifted. The threat level of the instrument depended entirely on who was in charge that year.

His expert sourcing follows a similar pattern. Quote a government official convening emergency meetings (Treasury Secretary Bessent gathering the banks). Quote a vendor whose business model depends on threats expanding (Casey Ellis, founder of Bugcrowd). Quote a former FBI official warning about “wannabes” (Cynthia Kaiser, now a senior vice president at Halcyon). Close with water treatment plants. Everyone drinks water, it’s life. That’s a strong FUD move. Every quoted source in this piece stands to gain from security industry services related to the scariest story possible. Bugcrowd, Halcyon, Luta Security, Scythe. Who needs advertising when the article is the ad?

The Atlantic’s Priority

The Atlantic’s Matteo Wong went even further than Collier. His hyperventilated lede described Mythos as “a tool potentially capable of commandeering most computer servers in the world” that could “hack into banks, exfiltrate state secrets, and fry crucial infrastructure.”

It’s the opposite of reporting. It is the language of a film trailer. Anyone deep inside AI at the operations level knows how fundamentally flawed it remains versus humans.

Wong’s most consequential move was positioning Anthropic as a peer to nation-state intelligence services: “This level of cyberattack is typically available only to elite, state-sponsored hacking cells.” This framing matters because once the press treats a private company as operating at nation-state capability, the company inherits the presumption of nation-state authority over disclosure, access, and classification. Which is precisely what Project Glasswing establishes.

The Atlantic in 2023 published my co-authored article on real, documented AI harm. Tesla’s vehicles have been crashing into trees, killing motorcyclists, and veering off roads for years. The body count is in the hundreds now and the design flaws are landing in court cases. No Treasury Secretary convenes an emergency meeting over it. No consortium of tech giants receives $100 million to address it.

Tesla AI notoriously “veers” uncontrollably and fatally crashes. Design defects (e.g. Pinto doors) trap occupants and burn them to death as horrified witnesses and emergency responders watch helplessly. Source: VoCoFM, Korea, 2024

But a company announces that its AI could hypothetically find software vulnerabilities faster than defenders could close them, and the entire press corps treats it like the fall of civilization.

WIRED Gets It

WIRED’s Lily Hay Newman was the exception. She included skeptics, named Anthropic’s financial incentive, and quoted Niels Provos saying the model “doesn’t intrinsically change the problem space.” She quoted me, and so I’m pointing right back at her. Cisco’s president is out there calling Mythos “a very, very big deal” and Anthropic’s own red team lead is describing how “the phone calls got shorter and shorter.” Well, ok, but the counterargument at least got a seat at the table and may be the least prone to hallucinations.

Water Tanks

In 1915, a battle-hardened and war-weary Winston Churchill funded development of armored tractors meant to break through trenches, barbed wire, and machine gun nests. The British War Office ordered hundreds built under strict secrecy. The project was initially disguised as “water tanks”, which denied German intelligence any insight into what was actually being manufactured. The codename stuck, which is why, ironically, we still call them tanks even though they hold no water.

The tank changed battlefield tactics, but it most certainly did not end battlefields. The immediate response was to dig better trenches and adapt doctrine. And, as always, the side that understood a new weapon’s limitations and integrated it into combined-arms operations won. The side that waxed on about mythical wonder weapons lost.

The history of the rifle tells the same story even more precisely. The bolt-action rifle gave way to the repeating rifle, which gave way to automatic fire. Each transition made a previous method more specialized. Each technology innovation demanded doctrinal adaptation. None of the innovations ended war. A rifle is not only still a rifle, the NRA whines constantly that you shouldn’t regulate an automatic rifle differently from a powder musket.

Vulnerability discovery has a similar question of progression. Manual research was bolt-action. Automated scanners were repeating. AI-assisted discovery is automatic. What Anthropic built with Mythos is a much faster fuzzer. And since they aren’t a security company at all, they probably are running around the office as if their hair is on fire yelling “what do we do, what do we do” instead of seeing it the way Churchill looked at a tank.

I say this from battle experience. When cloud computing arguably was first launched (e.g. Loudcloud, by Andreessen et al) I punched a massive hole right through claims about customer isolation. It was a normal finding, in my estimation. A service provider says customers are isolated, and my tool says nope. I handed the finding to the man sitting next to me and he literally jumped out of his chair, waved his hands in the air, ran out of the room and around the office yelling “OMG we’re in! We’re in!” He was, shall we say, less experienced.

Zero-day vulnerabilities have been found and disclosed continuously since the term was coined. Google’s Project Zero has been publishing them for a decade. The entire bug bounty industry exists because this is ordinary work. Finding two hundred exploits faster than the previous tool found two is an efficiency gain in the rate of fire. It is not a civilizational rupture. And here is what the coverage systematically omits: faster discovery means faster patching. A tool that finds vulnerabilities at scale is, by definition, a tool that enables remediation at scale. That makes it a patch accelerator. The question is who controls the framing.

I have spent over a decade working with AI and showing companies both how to break and how to secure it. What I can report from being deep in the field for so long is that the fundamentals have not changed. You still need someone who knows where to point the weapon, and you still need a trench to fight from. The obfuscation is in calling the automatic rifle a magic alien death ray.

Withholding as the Product

“Our model is so dangerous we can’t release it” is, of course, the same sentence as “our model is so valuable you need us.” Such product mystique reads to me more like another geturked presentation to those in power than a proper public threat modeling disclosure.

Copper engraving of a “Chess Turk” (the Mechanical Turk)

Rename “we built a better fuzzer” to “we possess a weapon too dangerous for the public” and you have a centuries-old trick in the defense contractor playbook.

Anthropic announced that Mythos produced 181 working exploits from a vulnerability set where the previous flagship model succeeded only twice. That is a real capability jump and should be taken seriously.

What should also be taken seriously is what happened next: Anthropic shared the model exclusively with twelve tech giants under Project Glasswing, backed by $100 million in usage credits. The withholding became the product launch. “Too dangerous to release” turned out to be the most effective marketing copy the industry has ever produced, and both Collier and Wong ran it as news.

The Treasury meeting completes a very shady picture. Bessent convenes the banks, Anthropic briefs the banks, and suddenly every major financial institution has a rather convenient public-private attachment to Anthropic’s vulnerability discovery capability. That is an undemocratic merger wrapped in false national security fearmongering.

Back Door

The timeline gives it away. On February 27, 2026, Defense Secretary Hegseth raged about making Anthropic a supply chain risk after the company refused his demands to strip safeguards against mass surveillance and autonomous weapons from Claude. Hegseth bloviated so hard, he made Anthropic the first American company ever given a designation normally reserved for foreign adversaries. Anthropic naturally sued, because common sense has to go to court. A judge blocked the designation.

Five weeks later, Anthropic announced Mythos and handed it directly to Microsoft, Google, Apple, Amazon, and the rest of the companies the Pentagon depends on for its entire technology stack. The front door closed and the back door opened wider. When the Secretary of Defense designates you a foreign adversary over a contract dispute, the direct route to military integration is blocked. But you can achieve the same position by making yourself the security backbone of every company the military depends on. No contract. No congressional testimony. No use restrictions. The money flows through the same channels. The brand stays “clean” of Hegseth.

The Doctrine, Not the Weapon

Grant and Sherman won the Civil War by combining coordinated force with the systematic destruction of the enemy’s capacity to produce war. The engagement mattered less than the doctrine. AI vulnerability discovery tools follow the same logic: they are force multipliers for whatever doctrine you already have. If your doctrine is “sell fear,” they push a LOT of fear. If your doctrine is “map the attack surface and hold the line,” they multiply that.

The question nobody in the Vulnpocalypse coverage has asked is whether zero-day resolution is now accelerating faster than zero-day discovery. If it is, then Mythos is a net defensive tool and the entire panic narrative collapses. Anthropic has the data to answer this. They have not published it, to my knowledge. My guess is they lack the security experience to frame it that way.

The 1983 version of this panic produced NSDD-145 and eventually the Computer Fraud and Abuse Act, real legislation born from manufactured urgency. The 2026 version is producing something structurally different: a private company functioning as a classification authority that decides who gets access to vulnerability discovery capabilities and on what terms. That is a larger institutional shift than the old Presidential directive, and it is happening while the press runs “Vulnpocalypse” headlines and quotes panic pill vendors.

The exhausted CISOs and security teams I talk to every day already know the AI tools are real and they know the rate of fire has changed. What they need is a defensible position against the flood of AI vendors who confuse a product launch with the end of the world.

Anthropic calls its patch accelerator Mythos for the same reason Churchill called his tractors tanks. The name disguises the use, preventing doctrinal analysis.

Churchill hid the function so the enemy couldn’t develop counterdoctrine. Anthropic hides the function so the market can’t judge how a defensive tool is being pitched as an offensive threat.

Why Tom Holland is Going to Hell

CNN ran an Easter feature on Tom Holland, the British pop historian who wrote Dominion and now tours the American evangelical circuit as their favorite secular validator. The headline promises a “brush with the supernatural.” The article delivers something more instructive: a case study in what happens when a thesis is tailored to its paying audience.

Holland went to Sinjar in 2016, where ISIS had massacred Yazidis by the hundreds. Men executed. Women sold into slavery. The stench of decomposing bodies so overpowering he doubled over on camera. His takeaway: a cross was still standing above the rubble. It moved him. Financially. The dead Yazidis didn’t get a second thought as he walked through them towards his personal savior plans.

I was convinced I was going to devote my life to the Yazidi cause – and I tried. But I don’t devote my life to it. There are whole weeks when I don’t think of them.

Well, the Yazidi aren’t Christian, which is perhaps what prevented him from thinking of them more.

Booze for the Alcoholics

Holland’s thesis in Dominion is that Western values like compassion, equality, and human rights are Christian inventions. Secular people hold Christian beliefs without knowing it. The argument has a problem and a function, and the function explains why nobody talks about the problem.

The problem is that the thesis is false. Buddhist ethics developed sophisticated frameworks for compassion and non-harm five centuries before Christ. Jewish law codified obligations to the poor, the stranger, and the vulnerable long before Paul wrote his first letter. Confucian reciprocity predates Christianity by the same margin. Islamic jurisprudence built an entire legal architecture around human dignity. The Yazidi faith he walked through in Sinjar teaches its followers to pray for other religions before their own. It traces to pre-Zoroastrian traditions over a thousand years before Christ. Holland ignores ALL of it. A historian who omits most of human civilization from his thesis about most of human civilization is not doing history. He is doing something else.

The function is flattery. The Southern Baptist Seminary president calls Holland’s premise “fairly unassailable.” American evangelicals get a credentialed British intellectual telling them their religion invented morality. Holland gets the audience, the debate invitations, the YouTube clips, the Easter profiles. Booze for the alcoholics. Delivered in a posh accent with a PBS shine.

The same CNN writer who profiled Holland for Easter published a piece three months ago that documents Christianity’s central role in the KKK, slavery, and colonial genocide. The Holland thesis requires amnesia from the people telling it.

The Content Creator in the Foxhole

Holland’s own faith statements reveal how thin the performance is. “There are times where I can feel that I believe it. There are times when I don’t feel it at all.” His mother tells CNN “he never quite acknowledges it.” He says belief makes “the universe more interesting.” This is not faith. It is aesthetic consumption.

He cites R.S. Thomas as his spiritual touchstone. Any reader of Thomas knows what that means. Thomas was the poet of God’s absence, unanswered prayer, the empty church. His life’s work was the theology of divine silence. Holland cited him as a branding reference in a CNN puff piece. If Holland understood what Thomas was actually writing about, he would not have brought him up.

Holland himself invokes the foxhole cliché. Diagnosed with bowel cancer in 2021, he prayed at midnight mass on Christmas Eve. The cancer hadn’t spread. His brother connected him with a specialist. He now calls it a possible “Marian miracle” while conceding he can’t “100 percent say it’s a coincidence.” His brother’s phone call saved him. He credited the Virgin Mary.

Serious people have examined what happens to faith in actual foxholes. Rabbi Richard Rubenstein published After Auschwitz in 1966 and founded an entire field of theology on one premise: after the Holocaust, belief in a God who acts in history is intellectually indefensible. Elie Wiesel, who survived Auschwitz as a teenager, wrote The Trial of God with God as the defendant. The people who endured genocide concluded God was absent or dead. Holland walked through a genocide site and saw a camera angle.

The Honest Version

Christianity did reshape Western moral frameworks. That much is defensible, and Holland deserves credit for stating it plainly. Where the argument collapses is in calling it revelation rather than what the historical record shows it to be: power technology.

After 1945, British occupation forces deployed church networks across Germany to deprogram a generation raised on Nazi ideology. Christianity was the available operating system that could overwrite the previous one. The British didn’t evangelize the Hitler Youth because they believed. They did it because it worked. Christianity spread through colonization for the same reason. Empires used it because it was effective, and its effectiveness is what Holland is actually documenting.

An honest version of Holland’s thesis would say: Christianity became the dominant moral framework of the West because it was backed by history’s cruel, militant empires, which sought to obliterate other faiths for profit. That is a serious historical argument. But it would empty his bleachers, so he wraps the same insight in a conversion narrative and sells it as mystery. The stench from Holland is almost too much to bear.

CNN calls this a story about faith. It’s a story about supply and demand. And if Holland actually believes what he now claims to believe, he should worry. He declared himself a Christian, then used the faith to sell snake oil to the faithful. By his own adopted theology, that’s the kind of thing they send you to hell for.

Decoding the Secret Dark Messaging of German Netflix

Way back in 1995, Bryan Singer gave us a special decoder key to video-based information.

He is supposed to be Turkish. Some say his father was German. Nobody believed he was real.

Keyser Söze was the invisible supervillain. The menace was the ethnic ambiguity itself. He was Turkish yet German. Dark yet light.

The devil’s trick is that he walks among us, nobody can see him.

Thirty years later, German streaming is moving this aesthetic logic mainstream and into the realm of direct statements. The sorting is the same, while the old dogwhistles are turning into fire alarms.

The Dog Show

Take Eat Pray Bark on Netflix for example. It is supposedly a lightweight comedy about eccentric dog owners attending a training camp in the Austrian Alps. The guru is framed as a mythological god: a tall, blond, blue-eyed, fair-skinned man named Rúrik Gíslason. Every character in the film regularly salivates over him. Literally, their tongues hang out and the screen pauses as they are struck by his blond-haired, blue-eyed godliness. He is framed as a kind of Nordic oracle. Wisdom flows from his hairless body and carved cheekbones.

And then, there is the character of Hakan.

Played by Kerim Waller, an Austrian actor with a Turkish first name, he has hazel eyes, brown hair, a beard, and a dark complexion. Hakan is quiet. Hakan is closed off. His line is literally “people are scared of me”. The other characters are regularly positioned as visibly uncomfortable around him. Even the mythical god who can do anything pauses, fails, and gives up trying to help Hakan.

Then, Hakan pulls out a police ID. And everyone relaxes. He’s welcomed, as if a magic token of acceptance was presented.

This is the bizarre scene that started me counting. In America, pulling out a police badge to reveal concealed authority only escalates tension. In this German comedy, it abruptly resolves all fears of Hakan. The badge obviously functions as a German whitening mechanism. The state vouches for a swarthy man. He must be OK, trusted now. You can stop being afraid of the beard because, police.

Just to be clear, the whole time, whenever this guy entered a scene, I couldn’t understand why people acted like he was the devil. In American terms, he looks like the typical, averagely dressed, calm, regular guy you’d see anywhere. Here’s what I’m talking about.

Source: Eat Pray Bark, Netflix

But the message being broadcast by German Netflix, apparently, is not that this is a normal friendly Joe. They emphasize the inversion using the hero of the story: a completely hairless body, scrubbed like a baby, topped with a wild blond mane and a beard so thin it could be a rat tail.

Source: Eat Pray Bark, Netflix

Think about the images in American terms: the top guy is almost invisible, he’s so regular. The bottom guy is attention-seeking: biker gang, drug dealer, human trafficker. To put it another way, speaking as a security professional in the Bay Area, the bottom guy’s aesthetic is nearly identical to that of one of the largest drug dealers in San Francisco, whom I ran into at a sushi bar one afternoon, not long before he was nearly stabbed to death.

Now for comparison, consider what seems to be the opposite in the German Netflix framing: the top guy is quiet and attention-avoidant, yet coded as street gang, drug dealer, human trafficker. The film even scripts him into talking about his crime-filled life, his security work on the edge, the death of his brother in a robbery gone bad. Meanwhile, the bottom guy becomes a superman, mythical and god-like, demanding everyone’s attention in his wet pants.

And to be fair, it might not be an American versus German cultural parsing. Imagery of hairless men with large breasts who wet their pants has been heavily promoted recently by RFK Jr, if you see what I mean here:

Source: YouTube

A friend then mentioned they were enjoying the new Netflix series called Unfamiliar. A quick look and I saw that a swarthy Jew was cast as the villain, while the “Nordic” German man was cast as the hero. The emojis my friend sent when I pointed out the encoding were notable. He couldn’t believe it as I explained how it worked. And once he could see, he said he could SEE. He even seemed a bit disappointed that he hadn’t seen it before I explained what to look for. That got me thinking. I wondered if we should test the decoder key more broadly with Netflix. Pulling one thread started to unravel a much larger issue.

The decoder works not because anything sophisticated is going on. The opposite. It’s just a method, like spotting animal camouflage in the wild. Do you see the praying mantis? First you don’t, then you do. Remember the fear of the devil who walks among us? Are you more or less comfortable knowing someone can train themselves to spot disinformation in video productions?

Simply put, I studied disinformation history and it trains the eyes and ears. Disinformation expertise is literally useful everywhere, all the time, because we are swimming in IT these days. Did I just show you my police badge? Did it work?

Quick Back-of-Napkin Count

I scanned through casting data for 28 German-language Netflix productions from 2017 to 2026. I read 93 named cast entries. I classified each actor by name origin and documented heritage, and each role by type: protagonist, antagonist, or supporting.

The results:

Actor Name Origin          Protagonist   Antagonist   Antagonist Rate
Germanic                   33            6            13%
Turkish/Arabic/Persian     6             7            27%
Jewish/Sephardic           0             1            50%
Slavic/Eastern European    2             1            14%
Romance/Western European   3             1            20%
All Non-Germanic           11            10           25%
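One plausible reading of the “antagonist rate” column is antagonist roles divided by a group’s total named appearances, supporting roles included. A minimal sketch of that arithmetic, using hypothetical supporting counts chosen only for illustration (they are not taken from the tallies above):

```python
from dataclasses import dataclass

@dataclass
class GroupCount:
    """Casting tallies for one name-origin group (illustrative only)."""
    protagonists: int
    antagonists: int
    supporting: int  # hypothetical filler value, not from the article's data

    @property
    def total(self) -> int:
        return self.protagonists + self.antagonists + self.supporting

    @property
    def antagonist_rate(self) -> float:
        # Reading: rate = antagonist roles / all named appearances in the group
        return self.antagonists / self.total

# Example: 0 protagonists and 1 antagonist yield a 50% rate under this
# reading if the group also had exactly 1 supporting appearance.
example = GroupCount(protagonists=0, antagonists=1, supporting=1)
print(f"{example.antagonist_rate:.0%}")  # 50%
```

The point of the sketch is only to make the denominator explicit: a rate computed over protagonist and antagonist roles alone would give different percentages.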

Germanic-named actors get protagonist roles at more than double the rate of Turkish/Arabic-named actors. When Turkish or Arabic actors do lead a show, their character is still a criminal. Kida Khodr Ramadan played the Arab clan boss in 4 Blocks. Then he played the Arab enforcer Rami in Netflix’s Crooks.

Same face, same purpose, different show.

Frederick Lau played the Germanic undercover cop in 4 Blocks. Then he played the Germanic safecracker hero in Crooks.

Marvin Kren directed both 4 Blocks and Crooks, while the 4 Blocks writing team went on to create Kleo. The fact that Ramadan moved from one show’s Arab boss to another show’s Arab enforcer while Lau moved from Germanic cop to Germanic hero is what we can call proof that this isn’t coincidence. There’s a repeating institutional practice across productions.

There’s a curated pipeline, an information doctrine.

Laundering Method

There is a secondary pattern in the character names. When non-Germanic actors are given protagonist roles, they receive maximally Germanic character names. The system scrubs the foreignness off them before it lets them lead.

Alexandra Maria Lara is Romanian. She plays “Ursula” in Eat Pray Bark. Jeanne Goursaud is French. She plays “Sara Wulf” in Exterritorial. Devrim Lingnau is German-Turkish. She plays Empress Elisabeth of Austria in The Empress. The most Germanic character imaginable.

When actors are cast as villains, the opposite happens. The character names stay ethnically marked. Hassan Al-Walid. Behzat Aygün. Rami. Josef Koleev. Hakan. The names signal foreignness. The audience is told who to trust and who to fear before a word of dialogue is spoken.

Unfamiliar All Too Familiar

When I was shown Netflix’s Unfamiliar, the biggest German-language spy thriller of 2026, I saw Finzi cast as Josef Koleev. The Russian mastermind. The high-ranking foreign threat. The antagonist.

Samuel Finzi is one of the most celebrated stage actors in the German-speaking world. Decades of awards. Deutsches Theater. Berliner Ensemble. Volksbühne. Critics’ polls have named him the favorite of the German-speaking scene. He is Jewish, and his father’s name is Itzhak Fintzi. Finzi is Bulgarian, born in Plovdiv.

Felix Kramer, born in East Berlin, plays opposite him as the German protagonist. The hero. This gets interesting because it shows the system isn’t sorting by actual complexion. It’s the thing that made my friend struggle to parse the information. Kramer and Finzi may be within a shade of each other. The system is sorting by name, by heritage signal, by who gets the Germanic wife and the Germanic surname and the protagonist arc, and then curating them with cinematography.

Source: Unfamiliar

Germany’s most decorated stage actor takes the villain role. The casting directors may not know or think about Finzi’s Jewishness. Finzi maybe doesn’t either. What viewers end up seeing is that he is the swarthy man. That is actively translated into German “foreignness”, making his Jewish-Balkan features a foundational aspect. Nobody had to articulate it for it to be real.

Source: Unfamiliar

Look at how they are portrayed. The villain is bathed in darkness. Shadows cutting across the face, low lighting, shot from slightly below. Classic villain framing. Meanwhile Kramer above is on the boat in daylight, next to a blonde, with the Oberbaumbrücke behind him. Berlin landmarks, natural light, open water. Hero framing.

The camera itself is swarthifying Finzi and lightening Kramer. The complexion difference is manufactured in post-production and cinematography, not just inherited from the actors’ faces. The mise-en-scène tells you who to fear before the script does.

What About a Control Case?

Dark, the most acclaimed German Netflix series ever made, ironically has no ethnic villain coding of darkness at all. The cast is almost entirely Germanic. The story is set in a homogeneous fictional town. There is no complexion on screen to sort, because the sorting already happened at casting, by removing all the possibilities.

The pattern appears only when non-Germanic actors enter the cast. German storytelling itself is fine; the context it carries may not be. The question is what happens when German casting frames a dark face into a particular role.

Systematic Aesthetic

We shouldn’t move from what’s observable into wondering whether someone overtly said “cast swarthy people as villains”. That is not how aesthetic systems work. They work most often by inheriting, and then emphasizing, the ugly yet easy defaults. Existing bias becomes a “feels right” moment, with nobody asking why that bias feels right, in a self-perpetuating, unchallenged environment. The blond guru is scripted to radiate wisdom, and when he turns out to be a fraud, he’s immediately redeemed, inherently absolved of guilt. The swarthy loner radiates threat. A police ID resolves his threat, because it’s externally applied validation. A Germanic character name resolves the foreignness.

These don’t have to be decisions, because they have been embedded as reflexes; the reflex is more convenient than the decision.

The word for what this system sorts against is not “race” in the American sense. That would make people racist, and they don’t want to be that. It is not “ethnicity” in the bureaucratic sense. That would mean ethnic groups have a complaint. This is a move into the integrity fog of complexion. Swarthy. Dark. The same word the show is named after, though the show itself never had to confront what it is conveying to audiences.

In 1995 the devil was played up as Turkish and German. In 2026 the German devil is the strong and silent type who appears… swarthy. The logic has not changed much. The casting system wants the audience to believe it is just watching light storytelling, when something much darker has been going on.