Solid TEE is the Antidote to Simon’s Lethal AI Trifecta

Solid + TEE is the Fix Everyone Needs for AI Agent Security

Following up on my recent analysis of the overhyped GitHub MCP “vulnerability”, I was reading Simon Willison’s excellent breakdown of what he calls the AI agent security trifecta.

Cleaning bourbon off my keyboard from laughing at Invariant Labs’ awful “critical vulnerability” report, made me a bit more cautious this time. While my immediate reaction was to roll my eyes at yet another case of basic misconfiguration being dressed up as a zero-day, Simon’s actually describing something fundamental – and more fixable – which opens the door to regulators to drop a hammer on the market to spike the right kind of innovation.

Configuration Theatre

Let’s be clear about the “AI security vulnerabilities” exploding in the news for clicks: they are configuration issues wrapped with intentionally scary marketing language.

The GitHub MCP exploit I debunked? Classic example:

  1. Give your AI org-wide GitHub access (stupid)
  2. Mix public and private repos (normal)
  3. Let untrusted public content reach your agent (predictable)
  4. Disable security prompts (suicidal)

The “fix”?

Use GitHub’s fine-grained personal access tokens that have existed for years. Scope your permissions.

Dear Kings and Queens, don’t give the keys to your kingdom to a robotic fire-breathing dragon. Oh, the job of the court jester never gets old, unfortunately.

Ok, now let’s switch gears. Here’s where Simon gets it right: this isn’t really about any single vulnerability. It’s about the systemic impossibility in an ocean of unregulated code of maintaining perfect security hygiene across dozens of AI tools, each with their own configuration quirks, permission models, and interaction effects.

Security Theatre

The problem isn’t that we lack the tools to secure AI agents. No, no, we experts have plenty to sell you:

  • Fine-grained access tokens
  • Sandboxing and containerization
  • Network policies and firewalls
  • “AI guardrail” products (at premium prices, naturally)

The problem is another ancient and obvious one. We’re pushing liability onto users to make perfect security decisions about complex tool combinations, often without understanding the full attack surface. Even security professionals mess this up regularly, which means a juicy market both for exploitation by attackers and defenders. This favors wealthy elites and undermines society.

Don’t underestimate the sociopolitical situation that disruptive technology puts everyone in. When you install a new MCP server, which is fast becoming an expectation, you’re making implicit trust decisions:

  • What data can it access?
  • What external services can it call?
  • How does it interact with your other tools?
  • What happens when malicious content reaches it?

Misconfiguration means leaving the castle gates open and boom you just proved Simon’s lethal trifecta.

TEE Time With Solid: Secure-by-Default as the Actual AI Default

This is where the combination of Trusted Execution Environments (TEEs) and Solid data wallets gets interesting. Instead of relying on perfect human configuration, we can deploy a security model architecturally easy to configure safely and hard to misconfigure (fail safe). Imagine a process failing to run unless it has passed preflight checks, the same way a plane can’t take off without readiness assessments.

Cryptographic Access Control Slays the Configuration Dragon

Traditional approach: “Please remember to scope all your tokens scattered everywhere correctly and never give your AI agent access to repos that could be sensitive in the past or future.”

Impossible.

TEE + Solid approach: Your AI agent cryptographically cannot access data you haven’t explicitly granted it permission to use. The permissions are enforced at the hardware level, not the “please accept all fault even for things you can’t see or understand” level.

Traditional approach: Rely on application developers to implement proper sandboxing and hope they got it right or someone has enough money to start a lawsuit that might help then years from now.

TEE + Solid approach: The AI runs in a hardware-isolated environment where even the operating system can’t access its memory space. Malicious instructions can’t break out of a boundary that doesn’t rely on software for enforcement.

Traditional approach: Trust each vendor to implement security correctly and make a wish on a star that somehow someday their interests will align at least a little with yours.

TEE + Solid approach: Your data stays under your cryptographic control. The AI agent proves what it’s doing through attestation mechanisms. No vendor can access your data without your explicit, cryptographically verified consent.

Simon’s trifecta after TEE

The difference is remarkable when we’re talking TEE time.

Private Data Access: Instead of hoping developers scope permissions correctly, the TEE can only decrypt data you’ve explicitly authorized through your Solid wallet. Misconfiguration becomes cryptographically impossible.

Untrusted Content: Rather than trusting prompt filtering and “guardrails,” untrusted content gets processed in isolated contexts within the TEE. Even if malicious instructions succeed, they can’t access data from other contexts.

External Communication: Instead of hoping network policies are configured correctly, all outbound communication requires cryptographic signatures from your Solid identity. Data exfiltration requires explicit user consent that can’t be bypassed by clever prompts.

Ending the Hype Cycle

I’ve spent decades watching vendors try to return misconfiguration checks into pay-to-play profit for an elite few (e.g. how IBM used to require IBM to charge preflight checks and operations help).

Welcome to your robot taxi that while you are en route asks you for five more dollars or it will crash into a tree and burn you alive.

The AI space is in danger of falling into an extractive, toxic anti-competitive market playbook: identify a real problem, propose a proprietary monitoring layer to hedge as many customers in as possible, charge enterprise prices and build a moat for RICO-like rent seeking.

But TEE + Solid represents something different: an open standards based fundamental architectural shift that ends entire classes of vulnerabilities rather than just making them profit centers for those offering steps marginally harder to exploit.

This isn’t about buying better guardrails or more sophisticated monitoring. It’s about building systems where:

  1. Security properties are enforced by physics (hardware isolation) and mathematics (cryptography)
  2. Users maintain sovereign control over their data and AI interactions
  3. Default configurations are secure configurations because insecure configurations are impossible

It’s not any more difficult than saying TLS (transit) and AES (storage) are needed for safe information management. Turn it on or GTFO.

Practical Path Forward is Practical

We have consumer-ready TEE + Solid AI agents today. Why not try one? The hardware is already at scale, the protocols have standardization, and the user experience is all that needs serious work.

The foundation has been laid 2015. AMD, Intel, and ARM have all been pushing TEE capabilities into mainstream processors for almost a decade. The Solid protocol is the Web’s logical evolution. Open-source AI models are good enough to run safely and locally in controlled environments.

When we put the pieces together, we won’t need to hail Mary. The perfect security hygiene from users or perfect implementation from vendors is a myth for Disney profits, not daily life progress. The security will be built into the architecture itself.

So scope your damn cryptographic tokens, read your efficient security prompts on reasonable boundaries, and maybe don’t give org-wide access to experimental AI tools. Let’s start thinking about how we build systems that don’t require the fictional ubermensch and superhuman.

Super vigilance from every user should be recognized as super villainous, the anti-pattern to trust.

My bourbon should be for celebrating solutions, not crying over fixing problems that never should have existed in the first place.

Leave a Reply

Your email address will not be published. Required fields are marked *

This site uses Akismet to reduce spam. Learn how your comment data is processed.