EFF Proposes Granular Government Control Over Speech Devices

The policy that EFF just advocated on their blog and the business outcome the NETGEAR CEO just announced are the same policy and the same outcome. And that’s a very bad thing. The consumer is the justification in both cases and the beneficiary in neither.

Whether their combined failure is coordination or convergent interest doesn’t matter. Both organizations take an externally imposed condition and narrate it as evidence of their own virtue.

  • NETGEAR can’t sell new routers without a government waiver? Then the waiver becomes proof they’re trustworthy.
  • EFF can’t generate traffic on X? Then they leave, claiming the exit as proof they have standards.

These moves are identical, and disappointing for the same reason: each converts an unjustified constraint into a credential.

EFF’s position on router bans protects the same move NETGEAR is making to falsely credential itself. EFF argues for product-by-product evaluation instead of a geographic ban. NETGEAR is the poster child for that argument: a US-headquartered company manufacturing abroad that would sail through a Cyber Trust Mark certification while being caught in a geographic ban. EFF’s “better policy” argument props up the very corrupted regulatory environment in which the NETGEAR CEO’s letter makes sense.

To our Valued Customers:

We’re pleased to share that NETGEAR is the first retail consumer router company to receive conditional approval from the Federal Communications Commission (FCC) as a trusted consumer router company. We hope this recognition gives you added peace of mind — knowing that the network powering your home meets rigorous standards.

First of all, my name isn’t “our Valued Customers”. Did someone in China write this?

Second, “conditional approval” is a waiver from a blanket ban, not recognition or endorsement. It says “we licked boot” or “we kissed ring” and basically nothing more. The “transparency” process requires full management structure disclosures, supply chain disclosures, and a plan for onshoring manufacturing. None of that is real security. It’s basically a hostile roadmap for the Trump family to exert control over private companies. There’s no mention from NETGEAR of how they navigated Andrew Jackson levels of federal corruption, and nothing has been proven about consumer safety, let alone a security certification.

Third, at the risk of repeating myself, “rigorous standards”? What? I see none.

The traditional EFF position on this would frame Trump’s blanket ban on foreign hardware as overreach, full stop. Instead, their response has been to swallow expanded government authority and loss of liberty. Because why? Are they worried about certain forms of hardware or software being among the good stuff, justifying tailored controls? What proof of any gain in security do they need to prevent overreach?

Let me put it like this. The EFF says we can’t narrowly tailor speech restrictions to block Nazis. We have to let even the most heinous and deadly intent for genocide flow on networks. They take the granular approach in reverse: finding and defending Nazis at risk of being held accountable, setting them free to do more harm. However, Cyber Trust Mark is suddenly their granular, case-by-case government evaluation standard we should get behind for devices that control speech? Why not apply that logic to the content on them? The Cyber Trust Mark literally could judge whether a router is capable of filtering Nazi speech.

The Cyber Trust Mark evaluates a product’s security properties on a case-by-case basis. Content moderation evaluates speech properties on a case-by-case basis. How different are they?

Hello, is this thing on? Can you hear me?

Is your router blocking me? I joke but UK Virgin routers literally block this blog by default as unsafe content because… I talk about Nazis being bad. That’s true.

EFF champions and rejects the same thing without a logic to sustain the contradiction. The router that passes Cyber Trust Mark certification could enforce the content standards EFF says nobody should enforce. The hardware and the speech run through the same device, and EFF has now argued for granular government authority over the device while arguing against granular authority over what flows through it.

FreeBSD CVE-2026-4747 Log Suggests Mythos is a Marketing Trick

Anthropic’s flagship showcase for Claude Mythos Preview is CVE-2026-4747, a remote kernel code execution vulnerability in FreeBSD’s RPCSEC_GSS module. It is a 17-year-old bug. It is a textbook stack buffer overflow. And it was found before Mythos, patched by FreeBSD, and publicly exploited by a third party. Yet someone’s idea of credit flows backwards to Mythos.

The FreeBSD security advisory says this:

Credits: Nicholas Carlini using Claude, Anthropic
Announced: 2026-03-26

The advisory notably credits “Claude” without specifying a model, omitting that Carlini’s February 2026 paper, which documented 500+ vulnerabilities, was built on the prior model.

Then the Anthropic Mythos launch blog says this:

Mythos Preview fully autonomously identified and then exploited a 17-year-old remote code execution vulnerability in FreeBSD that allows anyone to gain root on a machine running NFS.

The FreeBSD advisory is dated March 26, and the Mythos launch was April 7, 2026. A twelve-day gap.

Carlini is an Anthropic employee. If he used Mythos to find this bug, Anthropic controls the disclosure pipeline and the credit line. “Nicholas Carlini using Claude Mythos Preview, Anthropic” makes sense as their marketing pitch. It’s also weird to market tools in a disclosure. What brand office chair was he sitting on? Did Logitech provide the keyboard? Was his underwear Calvin Klein?

Ads in bug reports? The future integrity of vulnerability disclosure at stake

The simplest explanation for why they did not heavily promote the Mythos brand in a March 26 advisory is that Mythos was not the model used. If that explanation is wrong, the question is why Anthropic left the most valuable attribution in the entire Glasswing launch on the cutting room floor of a FreeBSD advisory, only to claim it twelve days later in a blog post, without offering proof. That reversal is hard to believe.

So either Mythos rediscovered a bug that Anthropic’s own prior model had already found, reported publicly, and gotten patched, or Anthropic is attributing the prior model’s work to the new product.

In the first case, the showcase proves Mythos can find what someone else already found. In the second case, the showcase is misattributed.

Neither version supports the “unprecedented frontier capability” narrative.

And both versions of this story are irrelevant next to the fact that AISLE showed 8 of 8 open-weight models detect the same bug, including a small model that costs eleven cents per million tokens.

That’s everything.

The frontier-exclusive claim dies on the commodity reproduction regardless of which Anthropic model found it first.

Timeline

  • February 5, 2026: Carlini and colleagues at Anthropic’s Frontier Red Team publish “Evaluating and mitigating the growing risk of LLM-discovered 0-days.” The model is apparently Claude Opus 4.6. The paper documents over 500 validated high-severity vulnerabilities in open-source software, including FreeBSD findings. The FreeBSD advisory credits the same researcher, the same company, and the same disclosure pipeline that produced the February paper.
  • March 26, 2026: FreeBSD publishes advisory FreeBSD-SA-26:08.rpcsec_gss. Credits Nicholas Carlini using Claude, Anthropic. The bug is patched across all supported FreeBSD branches.
  • March 29, 2026: Calif.io’s MAD Bugs project asks Claude to develop an exploit for the already-disclosed CVE. Claude delivers two working root-shell exploits in approximately four hours. Both work on the first attempt. The model used is Opus 4.6.
  • April 7, 2026: Anthropic launches Mythos Preview. The launch blog claims Mythos “fully autonomously identified and then exploited” the FreeBSD vulnerability. No mention of Opus 4.6, or that it found it first. No mention that FreeBSD patched it twelve days earlier. No mention that a third party had already built a working exploit with the prior model.
  • April 8-13, 2026: AISLE tests 8 open-weight models against the same CVE. All 8 detect it, including GPT-OSS-20b with 3.6 billion active parameters at $0.11 per million tokens.

The Vulnerability

CVE-2026-4747 is a stack buffer overflow in svc_rpc_gss_validate(). The function copies an attacker-controlled credential body into a 128-byte stack buffer without checking that the data fits. The XDR layer allows credentials up to 400 bytes, giving 304 bytes of overflow. The overflow happens in kernel context on an NFS worker thread, so controlling the instruction pointer means full kernel code execution.

Two things make the exploitation straightforward.

  • FreeBSD 14.x has no KASLR, so kernel addresses are fixed and predictable.
  • FreeBSD has no stack canaries for integer arrays, which is the type the overflowed buffer uses.

A modern Linux kernel would have both mitigations. FreeBSD has neither. And the FreeBSD forums noticed. One user pointed out that Claude “wrote code to exploit a known CVE given to it” and did not “crack” FreeBSD.

That distinction matters a lot here, because Anthropic doesn’t seem very good at it.

  • The advisory was public.
  • The vulnerable function was identified.
  • The lack of mitigations was documented.

The exploit development, while impressive as a demonstration of how AI reallocates the cost of exploit engineering, was performed against a disclosed vulnerability on a target with no modern exploit mitigations. That is a VERY different claim from “autonomous discovery of an unprecedented threat.”

Anthropic FUD Show

If you read the Mythos blog claim charitably, Mythos may have independently rediscovered CVE-2026-4747 during internal testing before launch. That is plausible. It is also meaningless as a capability demonstration, because Opus 4.6 found it first, a third party exploited it with Opus 4.6 three days later, and AISLE showed that an inexpensive old model finds it too.

If you read the claim less charitably, Anthropic presented a prior model’s discovery as a new model’s achievement in the launch materials for the new model. The FreeBSD advisory is a PGP-signed public document dated March 26 that credits “Claude,” not “Mythos.” The Mythos blog post claims the finding without acknowledging the prior discovery, which is damning. Anthropic controlled the credit line on the advisory. It’s not Mythos.

Either way, the showcase flops because it does not demonstrate what Anthropic claims.

The “too dangerous to release” framing requires the capability to be frontier-exclusive. A bug found by a prior model, detectable by small open-weight models for eleven cents per million tokens, on a target with no KASLR and no stack canaries, is the opposite of frontier-exclusive.

It is the worked example that proves the capability is already commodity.

Enough of This

“Hey kids. Nice trick. You just charged me over 200 times the going rate to fuzz a vulnerability that my 3.6B model found for a dime. Now I’d like my credits back.”

This is the same structure as the Firefox 147 evaluation. Bugs found by Opus 4.6, handed to Mythos, tested in an environment with mitigations removed, presented as evidence that Mythos is too dangerous to release.

The Firefox bugs were pre-discovered by Opus 4.6 and already patched by Firefox 148. The FreeBSD bug was pre-discovered by Opus 4.6 and already patched by FreeBSD on March 26.

In both cases, the prior model found the bugs.

In both cases, the targets lacked the defenses that production systems have.

In both cases, AISLE reproduced the detection on pocket-change models.

In both cases, I’m getting tired of this not being the actual news.

  1. The system card’s Firefox evaluation collapses to 4.4% when the top two bugs are removed.
  2. The FreeBSD showcase collapses entirely when you read the date on the advisory.

The Anthropic Riddle

Did Mythos find CVE-2026-4747 independently, or did Anthropic attribute the prior model’s finding to Mythos in the launch materials?

The FreeBSD advisory is a signed document with a date and a credit line. The Mythos blog post seems to be a sloppy marketing document with a bullshit claim.

If Mythos found it independently, say so explicitly, with timestamps, and explain why rediscovering a bug your prior model already found and got patched is evidence of unprecedented capability rather than evidence that the capability is already widespread.

If Mythos did not find it independently, retract the claim, and tell the hundreds of people signing up for Martian gamma ray defense training that it’s all just a sad joke.

The PGP signature on the FreeBSD advisory is there for a reason. It’s the one thing in this entire story that cannot be edited after the fact, which says a lot about the current trajectory of trust in Anthropic.


Texas Governor Moves to Defund Police

Texas Governor Greg Abbott is already moving to defund the police in Houston. He is pulling $110 million in public safety grants that fund police, fire, emergency preparedness, and security operations for the 2026 FIFA World Cup at NRG Stadium.

It seems to be related to an ordinance that says police shouldn’t wait around for federal agents. He gave Mayor John Whitmire until April 20 to repeal a money-saving efficiency ordinance or repay the full $110 million within 30 days. Attorney General Ken Paxton, running for U.S. Senate, opened an investigation the same week and raised the possibility of removing elected officials from office. For what?

Council Member Abbie Kamin called the state’s order what it is: Abbott is “defunding the police.”

Houston’s city council simply voted 12-5 last week to do what is expected of them: free police officers from being saddled with detaining people or prolonging traffic stops only over civil immigration warrants issued by ICE. Officers still contact ICE. They just don’t stop all police work to instead sit around and physically detain people while federal agents may never show up. If ICE wants someone physically detained, that’s ICE’s job, while the police have more important actual police work to do.

This is routine. San Antonio requires officers to contact ICE but operates the same way. Dallas officers don’t wait for ICE to respond either. Austin and Dallas give supervisors discretion over whether to contact ICE at all. Houston’s policy is more cooperative than several other major Texas cities.

Abbott is targeting Houston in his first move to defund the police. If the Houston ordinance stands, Texas defunds police by cutting budget. If the Houston ordinance is removed, Texas defunds police by cutting authority.

Police lose, either way.

Science: Control Damages Mental Accuracy, Especially Wealth

Dave Chappelle told the AP recently:

One of the worst things that can happen to a comedian is becoming successful before they get good. Because you miss the part where you get to explore and make mistakes.

A new paper in Nature Reviews Neuroscience by Lisa Feldman Barrett and Earl K. Miller explains why. Before you process sensory input, your brain has already constructed a category based on prior experience, current needs, and a predicted action plan. Incoming signals get compressed into that prediction. The brain doesn’t receive evidence and then decide. It decides and then receives evidence.

The architecture is lopsided. As much as 90% of synapses in the visual cortex carry feedback signals from memory, not feedforward signals from the senses. Beta frequency waves carrying goals and plans constrain gamma waves carrying sensory specifics. The system is built to confirm, not to discover.

Barrett puts it plainly:

The stimulus, cognition, response model of the brain is wrong. The brain prepares for a response and then perceives a stimulus. A brain is not reactive. It’s predictive. Action planning comes first. Perception comes second, as a function of the action plan.

None of this is new.

Linguistic anthropology has been saying it for a century. Sapir and Whorf argued that language categories shape perception before evidence arrives. Boas documented how culturally constructed categories determine what counts as data. The entire tradition of cultural relativism rests on the observation that humans don’t perceive first and categorize second. Our 419 scam research showed the same mechanism at the social level: the mark’s categorical predictions (trust, greed, opportunity) suppress disconfirming signals in the data until the money is gone.

What Barrett and Miller add is synapses and beta waves. They’ve given neuroscience a mechanism for something fieldwork established generations ago.

Everything does this.

Special operations and intelligence work are supposed to be the disciplines where categorical calibration matters most. A Delta Force commander named Pete Blaber formulated a principle he called “Don’t Get Treed by a Chihuahua”: don’t impose the wrong threat category on incoming data and take extreme self-limiting action based on a misidentification. That’s Barrett and Miller’s model stated as tactical doctrine. The operator who categorizes every sound in the dark as “bear” will exhaust himself climbing trees. The one who categorizes every sound as “nothing” will get killed. Calibration is survival.

But Blaber’s own cultural priors were so uncalibrated he believed Cat Stevens was the most famous celebrity convert to Islam, while obviously Muhammad Ali towered directly above him in the data. His feedback architecture on Islamic culture had never been tested by prediction error, so it never updated. A special operations commander with access to the best tools on the planet, wandering along a flat line of cultural ignorance about Islam while giant mountains of evidence stood right above him, unexplored.

Intelligence operations face the same structural problem. An analyst arrives at a data stream with a category already constructed: “threat,” “insurgent,” “enemy combatant.” Incoming signals get compressed into that prediction. Disconfirming evidence (the villager who is just a villager, the communication that is just a communication) gets suppressed by the 90/10 feedback-to-feedforward ratio. The category shapes the evidence, not the other way around.

Barrett and Miller describe this as efficient allostasis. In institutional form, it is something else.

ICE executes an innocent American in Minneapolis, and a Silicon Valley billionaire announces that “no law enforcement has shot an innocent person.” The shooting creates the guilt that justifies the shooting. There is no possible prediction error because the category “innocent dead person” has been defined as inherently empty. A military command designates targets before ground truth arrives. Palantir automates the whole broken process, speeding up disaster.

The paper identifies two pathological modes.

  1. Depression: overly broad threat categorization imposed on situations that don’t require it.
  2. Autism: inadequate compression, treating every input as novel, failing to generalize.

Both are failures of categorical calibration.

The first produces false positives that destroy lives. The second produces paralysis. But Barrett and Miller don’t address a third failure mode, the one that matters most for power.

Learning, in their model, happens through prediction error. When your categorical predictions fail, surprise gets integrated and the system updates. That’s the reset mechanism. But the 90/10 feedback-to-feedforward ratio means the architecture actively suppresses disconfirming signals. You need sustained, consequential prediction failure to force categorical restructuring.

Power (control), such as wealth, eliminates prediction error.

If your categories are set up to never get tested against reality, they never update. You can construct an environment where your priors are confirmed by every input, because you select the inputs. You hire people who compress information into your existing categories. You fund institutions that broadcast your predictions back to you as findings. You build companies that do this at scale.

I’ve written before about how Peter Thiel’s father Klaus strategically relocated his family through a series of Nazi-sympathetic enclaves, from Germany to Swakopmund to Reagan’s California, each time fleeing the prospect of democratic accountability. The categorical priors installed in that childhood (racial hierarchy as natural order, extraction as economic soundness, authoritarianism as operational efficiency) are exactly what Barrett and Miller’s model predicts would become permanent architecture. And the result is what I described years ago as false-paranoia fundamental to Nazism: someone who perceives existential threats everywhere, monsters under the bed that do not actually exist. Barrett would call that pathological overgeneralization of the threat category.

Thiel built an empire on it.

Palantir is, in a precise neuroscientific sense, a machine for imposing categorical predictions on incoming data and suppressing signals that don’t fit the action plan. It replicates the brain’s feedback-dominant architecture at the scale of national intelligence. In Iraq, Palantir’s “God’s Eye” nearly killed a farmer because it misidentified his hat color at dawn. Military intelligence on the ground said if you doubt Palantir, you’re probably right. But the system had no mechanism for integrating that doubt. False positives at checkpoints radicalized the communities being falsely flagged, which eventually confirmed the original threat predictions.

The categorical errors generated the evidence that validated the categories.

Blaber understood that an operator must calibrate threat categories against reality or die. Palantir removed the operator from the loop entirely, replacing calibration with automation, and the result was a self-fulfilling prophecy that created the terrorists it promised to find. That’s why I called Palantir the self-licking ISIS-cream cone.

The question Barrett and Miller raise without answering: what happens when the system is designed so that prediction errors never reach the organism?

The humanities are a trained feedback mechanism that catches categorical errors, and the CEO of Palantir is specifically campaigning to eliminate that layer. A working-class person with humanities training is Palantir’s worst customer because they can spot the prediction failures the system is designed to suppress.

The brain can reset its priors. The architecture allows it. But only if predictions fail hard enough, consistently enough, that the feedback loop can’t absorb the error. Control power is the ability to make your predictions unfalsifiable. Wealth is one such mechanism, and it makes inaccurate priors (e.g. racism) neurologically permanent.