Texas Tesla Robotaxi Failures “Scary as Hell”

The first videos shared from Austin of Tesla revealing its version of a robotaxi are filled with complaints and criticism. “Scary as hell” is how people describe the vehicle within minutes.

So far the only people allowed to put their lives in danger are Tesla staff or stock investors. In other words, only people who are captured or indentured are allowed to test, which conveniently suppresses free speech about the results.

This brings to mind, of course, that the Tesla CEO in 2019 raised over $2 billion in stock by fraudulently claiming he would have 1 million robotaxis on roads by 2020.

That was after he infamously faked videos in 2016 to claim he had solved driverless, predicting that nobody would need to touch the steering wheel in a Tesla by 2017.

Perhaps it’s obvious now in 2025, watching Tesla still fail at the most basic driving, how they have killed so many people.

MD Tesla Crashes Into Police Car

Once again a Tesla crashes around 2:30 in the morning, which seems to be one of the usual signs that driverless is at fault.

Troopers responded to the inner loop of I-495 at Greenbelt Road around 2:44 a.m. for reports of a crash in Prince George’s County on Saturday, according to Maryland State Police (MSP). The trooper was in his marked vehicle when a gray Tesla operated by Teo Kim, 34, of Hanover, Maryland, hit the end of the police SUV.

KKK and the Red Dragon of Canada

An odd footnote in history is how Canadian chapters of the KKK were recognized by their “Red Dragon” theme or even “Grand Ragon”, as explained at the University of Washington.

The Royal Riders were a Ku Klux Klan auxiliary for people who were “Anglo-Saxon” and English-speaking but not technically native-born American citizens.

While many Royal Riders chapters were in the United States, the KKK also organized chapters in Canada. Some of the earliest documented Klan organizing in Canada occurred in British Columbia in November 1921, at roughly the same time that organizers first began working in Washington state.

The Royal Riders of the Red Robe was only nominally a separate organization from the Klan. It was listed in the Klan’s Pacific Northwest Domain Directory, shared an office with Seattle Klan Local 4, and had its meetings with similar rituals in the same places as the Seattle Klan. Beginning in 1923, Klan events and propaganda came to regularly feature Royal Rider initiations and news.

The Grand Ragon (as opposed to the Klan’s Grand Dragon) of the Pacific Northwest Realm of the Royal Riders of the Red Robe was J. Arthur Herdon, and the King County Ragon was Walter L. Fowler. Naturalized but not native-born citizens in Seattle’s Royal Riders were organized into another Klan Auxiliary, the American Krusaders, on October 18, 1923.

Solid TEE is the Antidote to Simon’s Lethal AI Trifecta

Solid is the Fix Everyone Needs for AI Agent Security

Following up on my recent analysis of the overhyped GitHub MCP “vulnerability”, I was reading Simon Willison’s excellent breakdown of what he calls the AI agent security trifecta.

Cleaning bourbon off my keyboard after laughing at Invariant Labs’ awful “critical vulnerability” report made me a bit more cautious this time. While my immediate reaction was to roll my eyes at yet another case of basic misconfiguration dressed up as a zero-day, Simon is actually describing something fundamental, and more fixable, which opens the door for regulators to drop a hammer on the market and drive the right kind of innovation.

Configuration Theatre

Let’s be clear about the “AI security vulnerabilities” exploding in the news for clicks: they are configuration issues wrapped with intentionally scary marketing language.

The GitHub MCP exploit I debunked? Classic example:

  1. Give your AI org-wide GitHub access (stupid)
  2. Mix public and private repos (normal)
  3. Let untrusted public content reach your agent (predictable)
  4. Disable security prompts (suicidal)

The “fix”?

Use GitHub’s fine-grained personal access tokens that have existed for years. Scope your permissions.
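The scoping principle above can be sketched in a few lines. This is a toy illustration (not the GitHub API): enforce least privilege by checking every repo request against an explicit allowlist instead of handing an agent an org-wide token. The repo names are hypothetical.

```python
# Toy least-privilege check: deny by default, allow only what the
# user has explicitly scoped. Repo names are made up for illustration.

ALLOWED_REPOS = {"myorg/public-site"}  # hypothetical fine-grained grant

def agent_can_access(repo: str) -> bool:
    """An agent request succeeds only for explicitly scoped repos."""
    return repo in ALLOWED_REPOS

# Usage: the public repo is reachable, the private one simply is not.
assert agent_can_access("myorg/public-site")
assert not agent_can_access("myorg/secrets")
```

The point is that the deny-by-default posture lives in one place, rather than being scattered across every tool’s configuration.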

Dear Kings and Queens, don’t give the keys to your kingdom to a robotic fire-breathing dragon. Oh, the job of the court jester never gets old, unfortunately.

Ok, now let’s switch gears. Here’s where Simon gets it right: this isn’t really about any single vulnerability. It’s about the systemic impossibility, in an ocean of unregulated code, of maintaining perfect security hygiene across dozens of AI tools, each with its own configuration quirks, permission models, and interaction effects.

Security Theatre

The problem isn’t that we lack the tools to secure AI agents. No, no, we experts have plenty to sell you:

  • Fine-grained access tokens
  • Sandboxing and containerization
  • Network policies and firewalls
  • “AI guardrail” products (at premium prices, naturally)

The problem is another ancient and obvious one. We’re pushing liability onto users to make perfect security decisions about complex tool combinations, often without understanding the full attack surface. Even security professionals mess this up regularly, which creates a juicy market for attackers and defenders alike. This favors wealthy elites and undermines society.

Don’t underestimate the sociopolitical situation that disruptive technology puts everyone in. When you install a new MCP server, which is fast becoming an expectation, you’re making implicit trust decisions:

  • What data can it access?
  • What external services can it call?
  • How does it interact with your other tools?
  • What happens when malicious content reaches it?
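Those implicit trust decisions can be made explicit. Here is a hedged sketch, with invented names, of requiring every MCP-style tool to declare what it touches and checking that declaration against a user policy before the tool is ever loaded:

```python
# Hypothetical manifest check: a tool declares its data scopes and
# external hosts, and is loaded only if the user policy covers them all.
from dataclasses import dataclass, field

@dataclass
class ToolManifest:
    name: str
    data_scopes: set = field(default_factory=set)     # what data can it access?
    external_hosts: set = field(default_factory=set)  # what services can it call?

@dataclass
class UserPolicy:
    allowed_data: set
    allowed_hosts: set

def approve(tool: ToolManifest, policy: UserPolicy) -> bool:
    """Load a tool only if every declared scope is explicitly allowed."""
    return (tool.data_scopes <= policy.allowed_data
            and tool.external_hosts <= policy.allowed_hosts)
```

Nothing like this is standard in MCP today; the sketch just shows how small the policy surface could be if trust declarations were mandatory.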

Misconfiguration means leaving the castle gates open, and boom, you’ve just proved Simon’s lethal trifecta.

TEE Time With Solid: Secure-by-Default as the Actual AI Default

This is where the combination of a Trusted Execution Environment (TEE) and Solid data wallets gets interesting. Instead of relying on perfect human configuration, we can deploy a security model architecturally easy to configure safely and hard to misconfigure (fail safe). Imagine a process failing to run unless it has passed preflight checks, the same way a plane can’t take off without readiness assessments.
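The “plane can’t take off” idea can be sketched directly: the agent process refuses to start unless every preflight check passes. The check names below are illustrative, not from any real product.

```python
# Fail-safe preflight: if any check fails, the process never starts.
# Check names are hypothetical stand-ins for real readiness assessments.

def run_preflight(checks: dict) -> str:
    failures = [name for name, passed in checks.items() if not passed]
    if failures:
        # Fail safe: refuse to run rather than run misconfigured.
        raise RuntimeError(f"preflight failed: {failures}")
    return "ready"

checks = {
    "token_scoped": True,        # no org-wide credentials
    "sandbox_enabled": True,     # isolation boundary in place
    "egress_allowlisted": True,  # outbound hosts pinned
}
```

The inversion matters: instead of insecure-by-default with optional hardening, the default state is “does not run.”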

Cryptographic Access Control Slays the Configuration Dragon

Traditional approach: “Please remember to correctly scope all your tokens scattered everywhere, and never give your AI agent access to repos that could be sensitive in the past or future.”

Impossible.

TEE + Solid approach: Your AI agent cryptographically cannot access data you haven’t explicitly granted it permission to use. The permissions are enforced at the hardware level, not the “please accept all fault even for things you can’t see or understand” level.
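A toy model makes the “cryptographically cannot access” claim concrete: data items are encrypted with per-item keys, and the agent is handed keys only for the items the user has granted. The XOR keystream below is a throwaway illustration; a real TEE + Solid stack would use hardware-backed keys and a vetted cipher, not this.

```python
# Capability-style access sketch: no key, no data. Everything here is
# a toy construction for illustration only.
import hashlib, secrets

def keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream is symmetric

# The wallet holds all keys; the agent receives only the granted ones.
wallet_keys = {"calendar": secrets.token_bytes(32), "payroll": secrets.token_bytes(32)}
store = {name: encrypt(k, f"{name} data".encode()) for name, k in wallet_keys.items()}
agent_keys = {"calendar": wallet_keys["calendar"]}  # explicit, narrow grant

def agent_read(item: str) -> bytes:
    return decrypt(agent_keys[item], store[item])  # KeyError if not granted
```

The misconfiguration class disappears because there is no flag to flip: an ungranted item is not “forbidden,” it is unreadable.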

Traditional approach: Rely on application developers to implement proper sandboxing and hope they got it right, or that someone has enough money to start a lawsuit that might help them years from now.

TEE + Solid approach: The AI runs in a hardware-isolated environment where even the operating system can’t access its memory space. Malicious instructions can’t break out of a boundary that doesn’t rely on software for enforcement.

Traditional approach: Trust each vendor to implement security correctly and make a wish on a star that somehow someday their interests will align at least a little with yours.

TEE + Solid approach: Your data stays under your cryptographic control. The AI agent proves what it’s doing through attestation mechanisms. No vendor can access your data without your explicit, cryptographically verified consent.
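Attestation can be sketched in miniature: the enclave signs a measurement of the code it is running, and the user verifies both the signature and that the measurement matches the build they expect. Real TEEs (SGX, SEV-SNP, TrustZone) use hardware-fused keys and certificate chains; the shared HMAC key here is a stand-in for illustration only.

```python
# Simplified attestation: measure, sign, verify. The key is a toy
# stand-in for a hardware-fused attestation key.
import hashlib, hmac

HARDWARE_KEY = b"stand-in for a hardware-fused key"

def measure(code: bytes) -> bytes:
    return hashlib.sha256(code).digest()

def quote(code: bytes):
    """Inside the TEE: return (measurement, signature over measurement)."""
    m = measure(code)
    return m, hmac.new(HARDWARE_KEY, m, hashlib.sha256).digest()

def verify(expected_code: bytes, m: bytes, sig: bytes) -> bool:
    """User side: signature must be valid AND the measurement must
    match the exact build the user expects to be running."""
    good_sig = hmac.compare_digest(
        sig, hmac.new(HARDWARE_KEY, m, hashlib.sha256).digest())
    return good_sig and m == measure(expected_code)
```

A vendor swapping in different code fails verification even with a valid signature, which is what “proves what it’s doing” means in practice.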

Simon’s trifecta after TEE

The difference is remarkable when we’re talking about TEE time.

TEEs aren’t perfect, since everything has attack surfaces, yet they shift the security model from hoping users configure everything correctly to enforcing boundaries in more programmatic hardware layers.

Private Data Access: Instead of hoping developers scope permissions correctly, the TEE can only decrypt data you’ve explicitly authorized through your Solid wallet. Misconfiguration becomes cryptographically constrained.

Untrusted Content: Rather than trusting prompt filtering and “guardrails,” untrusted content gets processed in isolated contexts within the TEE. Even if malicious instructions succeed, they can’t access data from other contexts.

External Communication: Instead of hoping network policies are configured correctly, all outbound communication requires cryptographic signatures from your Solid identity. Data exfiltration requires explicit user consent that can’t be bypassed by clever prompts.
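The signed-egress idea can be sketched as a gateway that forwards an outbound request only if it carries a valid signature from the user’s identity key. The names are illustrative; a Solid deployment would use the wallet’s keypair rather than a shared HMAC key.

```python
# Sketch of consent-gated egress: a prompt injection can generate a
# request, but it cannot generate the user's signature over it.
import hashlib, hmac

USER_IDENTITY_KEY = b"stand-in for the user's Solid wallet key"

def user_consent(url: str, body: bytes) -> bytes:
    """Produced only when the user explicitly approves this exact request."""
    return hmac.new(USER_IDENTITY_KEY, url.encode() + body, hashlib.sha256).digest()

def egress(url: str, body: bytes, signature: bytes) -> bool:
    """Gateway: forward only requests signed by the user; drop the rest."""
    expected = hmac.new(USER_IDENTITY_KEY, url.encode() + body, hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected)
```

Because the signature binds the destination and the payload together, redirecting an approved payload to an attacker’s host also fails the check.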

Ending the Hype Cycle

I’ve spent decades watching vendors turn misconfiguration checks into pay-to-play profit for an elite few (e.g., how IBM used to charge for preflight checks and operations help).

Speaking of extractive business models, welcome to your robotaxi that, en route, asks you for five more dollars or its configuration will let you crash into a tree and be burned alive.

The AI space is in danger of falling into the extractive, toxic, anti-competitive market playbook: identify a real problem, propose a proprietary monitoring layer to lock in as many customers as possible, charge enterprise prices, and build a moat for RICO-like rent seeking.

But TEE + Solid represents something different: a fundamental, open-standards-based architectural shift that ends entire classes of vulnerabilities rather than turning them into profit centers for those selling steps that are merely marginally harder to exploit.

This isn’t about buying better guardrails or more sophisticated monitoring. It’s about building systems where:

  1. Security properties are enforced by physics (hardware isolation) and mathematics (cryptography)
  2. Users maintain sovereign control over their data and AI interactions
  3. Default configurations are secure configurations because insecure configurations are impossible

It’s not any more difficult than saying TLS (in transit) and AES (at rest) are needed for safe information management. Turn it on: one, two, three.

Practical Path Forward is Practical

We have the building blocks for TEE + Solid AI agents today. Why not try one? The hardware is already at scale, the protocols are standardized, and the user experience is all that needs serious work.

The foundation has been laid since around 2015 (MapReduce shout out), which means AMD, Intel, and ARM have all been pushing TEE capabilities into mainstream processors for almost a decade. The Solid protocol is the Web’s logical evolution for storage. Open-source AI models are good enough to run safely and locally in controlled environments.

When we put these pieces together, we won’t need a Hail Mary anymore. Expecting a perfect play of security hygiene from users or perfect implementation from vendors is a myth fit for Disney fantasy profits, not a daily-life strategy. The security needs to be built into the architecture itself.

So scope your damn cryptographic tokens like always, read your security prompts attached to reasonable boundaries, and maybe don’t give org-wide access to experimental AI tools. Then let’s start thinking about how we build systems that don’t require belief in an Übermensch.

Super vigilance from every user should be recognized as a form of being super villainous, an anti-pattern to building trusted systems.

My bourbon should be for celebrating solutions, not crying over fixing problems that never should have existed in the first place.