
Let AI Dangle: Why the sketch.dev Integrity Breach Demands Human Accountability, Not Technical Cages

AI safety should not be framed as a choice between safety and capability; the real choice is between the false security of constrained tools and the true security of accountable humans using powerful tools wisely. We know which choice builds better software and better organizations. History tells us who wins and why. The question is whether we have the courage to choose the freedom of democratic systems over the comfortable illusion of a fascist control fetish.

“Let him have it, Chris” – those few words destroyed a young man’s life in 1952 because their meaning was fatally ambiguous, as memorialized by Elvis Costello in his song “Let Him Dangle”.

Did Derek Bentley tell his friend to surrender the gun or to shoot the police officer? The dangerous ambiguity of language is what led to a tragic miscarriage of justice.

Today, we face a familiar crisis of contextualized intelligence, but this time it isn’t a human utterance that’s ambiguous, it’s the machine-derived code. The recent sketch.dev outage, caused by an LLM switching “break” to “continue” during a code refactor, represents something far more serious than a simple bug.

This is a small enough change in a larger code movement that we didn’t notice it during code review.

We as an industry could use better tooling on this front. Git will detect move-and-change at the file level, but not at the patch hunk level, even for pretty large hunks. (To be fair, there are API challenges.)

It’s very easy to miss important changes in a sea of green and red that’s otherwise mostly identical. That’s why we have diffs in the first place.

This kind of error has bitten me before, long before LLMs were around. But this problem is exacerbated by LLM coding agents. A human doing this refactor would select the original text, cut it, move to the new file, and paste it. Any changes after that would be intentional.

LLM coding agents work by writing patches. That means that to move code, they write two patches, a deletion and an insertion. This leaves room for transcription errors.
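To make the failure mode concrete, here is a minimal, hypothetical Go sketch (my illustration, not the actual sketch.dev code) of how a one-token change from “break” to “continue”, buried inside a moved hunk, silently changes what a loop does:

```go
package main

import "fmt"

// processUntilBlank is illustrative only. With "break" the loop stops at
// the first blank item; if an agent's move-and-rewrite turns that token
// into "continue", the loop silently skips blanks and keeps going, a
// one-word difference that is easy to miss in a large green-and-red diff.
func processUntilBlank(items []string) int {
	processed := 0
	for _, item := range items {
		if item == "" {
			break // imagine this token arriving as "continue" after the move
		}
		processed++
	}
	return processed
}

func main() {
	items := []string{"a", "b", "", "c"}
	fmt.Println(processUntilBlank(items)) // prints 2 with break; it would print 3 with continue
}
```

The review habits that catch a typo in a ten-line patch do not scale to a hunk like this when it is one of hundreds inside a mechanical-looking move.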

This is another glaring example of an old category of systemic failure that has been mostly ignored, at least outside nation-state intelligence operations: integrity breaches.

The real problem isn’t the AI; it’s the commercial sector’s abandonment of human accountability in development processes.

For the common person, bad intelligence is a luxury that is evaporating rapidly in the market. The debt of ignorance is compounding quickly under automation.

The False Security of Technical Controls

When sketch.dev’s team responded to their AI-induced outage by adding “clipboard support to force byte-for-byte copying,” they made the classic mistake of treating a human process problem with a short-sighted technical band-aid. Imagine if the NSA reacted to a signals gathering failure by moving agents into your house.

The Stasi at work in a mobile observation unit. Source: DW. “BArch, MfS, HA II, Nr. 40000, S. 20, Bild 2”

This is like responding to a car accident by lowering all speed limits to 5 mph. Yes, certain risks can be reduced by heavily taxing all movement, but doing so defeats the entire purpose of automating movement in the first place.

As the battle-weary Eisenhower, who called for a “confederation of mutual trust and respect”, also warned us:

If you want total security, go to prison. There you’re fed, clothed, given medical care and so on. The only thing lacking… is freedom.

Constraining AI to byte-perfect transcription isn’t security. It’s surrendering the very capabilities that make AI valuable in the first place, lowering both security and productivity in a lose-lose outcome.

My father always used to tell me “a ship is safe in harbor, but that’s not what ships are built for”. When I sailed across the Pacific, every day a survival lesson, I knew exactly what he meant. We build AI coding tools to intelligently navigate the vast ocean of software complexity, not to sit safely docked at the pier in our pressed pink shorts partying to the saccharine yacht rock of find-and-replace operations.

Turkey Red and Madder dyes were used for uniforms, from railway coveralls to navy and military gear, as a low-cost method to obscure the evidence of hard labor. New England elites ironically adapted them (“Nantucket Reds”) into a carefully cultivated symbol of power. A practical application born of hard labor was inverted into a subtle marker of largesse, the American racism of a privileged caste.

The Accountability Vacuum

The real issue revealed by the sketch.dev incident isn’t that the AI made an interpretation – it’s that no human took responsibility for that interpretation.

The code was reviewed by a human, merged by a human, and deployed by a human. At each step, there was an opportunity for someone to own the decision and catch the error.

Instead, we’re creating systems where humans abdicate responsibility to AI, then blame the AI when things go wrong.

This is unethical and exactly backwards.

Consider what actually happened:

  • AI made a reasonable interpretation of ambiguous intent
  • A human reviewer glanced at a large diff and missed a critical change
  • The deployment process treated AI-generated code as equivalent to human-written code
  • When problems arose, the response was to constrain the AI rather than improve human oversight

The Pattern We Should Recognize

Privacy breaches follow predictable patterns not because systems lack technical controls, but because organizations lack accountability structures. A firewall that doesn’t “deny all” by default isn’t a technical failure; as we know all too well (and as privacy breach laws codify), it’s an organizational failure. Someone made the decision to configure it that way, and someone else failed to audit that very human decision.

The same is true for AI integrity breaches. They’re not inevitable technical failures; they’re predictable organizational failures. When we treat AI output as detached magic that humans can’t be expected to understand or verify, we create exactly the conditions for catastrophic mistakes.

Remember the phrase “guns don’t kill people”?

The Intelligence Partnership Model

The solution isn’t to lobotomize our AI tools into ASS (Artificially Stupid Systems); it’s to establish clear accountability for their use. This means:

Human ownership of AI decisions: Every AI-generated code change should have a named human who vouches for its correctness and takes responsibility for its consequences.

Graduated trust models: AI suggestions for trivial changes (formatting, variable renaming) can have lighter review than AI suggestions for logic changes (control flow, error handling); a minimal sketch of this idea follows below.

Explicit verification requirements: Critical code paths should require human verification of AI changes, not just human approval of diffs.

Learning from errors: When AI makes mistakes, the focus should be on improving human oversight processes, not constraining AI capabilities.

Clear escalation paths: When humans don’t understand what AI is doing, there should be clear processes for getting help or rejecting the change entirely.
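As a thought experiment, the graduated trust and explicit verification points above could be as simple as a policy gate in the merge pipeline. The following Go sketch is hypothetical (the function, tiers, and keyword list are my own, not from sketch.dev or any vendor): it routes AI-authored additions that touch control flow or error handling to a named human verifier, while letting comment-only changes take the lighter path:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// controlFlow is a deliberately crude trigger list for this sketch:
// tokens whose change should escalate review of an AI-authored hunk.
var controlFlow = regexp.MustCompile(`\b(break|continue|return|goto|panic|err)\b`)

// reviewLevel classifies the added lines of an AI-generated hunk into a
// review tier. This is a policy illustration, not a real API.
func reviewLevel(addedLines []string) string {
	for _, line := range addedLines {
		trimmed := strings.TrimSpace(line)
		if trimmed == "" || strings.HasPrefix(trimmed, "//") {
			continue // blank or comment-only additions stay in the trivial tier
		}
		if controlFlow.MatchString(trimmed) {
			return "strict: named human must verify the logic, not just approve the diff"
		}
	}
	return "light: formatting or rename-level change"
}

func main() {
	fmt.Println(reviewLevel([]string{"for i := range items {", "\tcontinue"}))
}
```

The point is not the regular expression; it is that the routing decision, and the human it lands on, are explicit and auditable.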

And none of this is novel or innovative. It comes from a century of state-run intelligence operations within democratic societies winning wars against fascism. Study the history of disinformation and deception in warfare long enough and you’re condemned to see the mistakes being repeated today.

The Table Stakes

Here’s what’s really at stake: If we respond to AI integrity breaches by constraining AI systems to simple, “safe” operations, we’ll lose the transformative potential of AI-assisted development. We’ll end up with expensive autocomplete tools instead of genuine coding partners.

But if we maintain AI capabilities while building proper accountability structures, we can have both safety and progress. The sketch.dev team should have responded by improving their code review process, not by constraining their AI to byte-perfect copying.

Let Them Have Freedom

Derek Bentley died because the legal system failed to account for human responsibility in ambiguous situations. The judge, jury, and Home Secretary all had opportunities to recognize the ambiguity and choose mercy over rigid application of rules. Instead, they abdicated moral responsibility to legal mechanism.

We’re making the same mistake with AI systems. When an AI makes an ambiguous interpretation, the answer isn’t to eliminate ambiguity through technical constraints; it’s to ensure humans take responsibility for resolving that ambiguity appropriately.

The phrase “let him have it” was dangerous because it placed a life-or-death decision in the hands of someone without proper judgment or accountability. Today, we’re placing system-critical decisions in the hands of AI without proper human judgment or accountability.

We shouldn’t accept a world that tries to eliminate ambiguity, as if a world without art could even exist; instead, let’s ensure that someone competent and accountable is authorized to interpret it correctly.

The Real Security of Ike

True security comes from having humans who understand their tools, take ownership of their decisions, and learn from their mistakes. It doesn’t come from building technical cages that prevent those tools from being useful.

AI integrity breaches will continue until we accept that the problem is humans who abdicate their responsibility to understand and verify what is happening under their authority. The sketch.dev incident should be a wake-up call for better human processes and stronger ethics, not an excuse for replacing legs with pegs.

A ship may be safe in harbor, but we build ships to sail. Let’s build AI systems that can navigate the complexity of real software development, and let’s build human processes to navigate the complexity of working with those systems responsibly… like it’s 1925 again.

Musical Genius Tom Lehrer Passes Away at 97

The New York Times has the buried lede:

As popular as his songs were, Mr. Lehrer never felt entirely comfortable performing them. “I don’t feel the need for anonymous affection,” he told The New York Times in 2000. “If they buy my records, I love that. But I don’t think I need people in the dark applauding.”

Lehrer’s genius was unmistakable, and his devotion to helping others instead of simply amassing attention is what made him such a superhero. Think about the phrase “free time” in this obituary.

A math prodigy, Lehrer studied mathematics at Harvard at age 15, and graduated with a bachelor’s degree in 1946. He earned his master’s at Harvard the following year. He also worked on a doctorate there and at Columbia University, but never completed his Ph.D. thesis.

While in school, Lehrer wrote songs in his free time, and eventually recorded his first solo album, Songs of Tom Lehrer, in 1953. The release became a surprise hit and led him to perform at nightclubs and venues across the country.

Imagine all the free time of a teenager getting his graduate degree from Harvard in the 1940s. He was a true genius and an American hero, the kind that was always giving to others and yet never wanted the attention he deserved.

In October 2020, Lehrer released all music and lyrics he had ever written into the public domain. In November 2022, he formally relinquished the copyright, performing, and recording rights to his songs, and established a website to host recordings and printable copies of his sheet music for download. He added that the website “will be shut down at some point in the not too distant future, so if you want to download anything, don’t wait too long.” As of July 2025, the website is still operational.

Some say Lehrer stopped performing after Henry Kissinger won the Nobel Peace Prize, remarking simply that satire had become obsolete.

I’m not interested in promoting myself, or revealing to total strangers anything about me. That’s not my job. I read some of these things with people who will tell you about their abortions, and their affairs and their divorces and their breakdowns and their parents, and why are they doing that? And I’m sure if you asked them how much money they made last year, they’d tell you it’s none of your business.

His witticisms about risk and safety are legendary.

“When I was in college, there were certain words you couldn’t say in front of a girl,” he writes in the sleeve notes for the new collection. “Now you can say them, but you can’t say ‘girl’.”

Tesla Income Plunges Over 40 Percent In a Bed of Lies

Wall Street doesn’t reflect reality, but reality is killing Tesla.

The electric carmaker posted total revenue of 22.5 billion dollars, down twelve per cent year-on-year and falling short of Wall Street’s 22.7 billion dollar estimate. Operating income plunged by forty-two per cent to 900 million dollars, marking Tesla’s second consecutive quarterly decline.

You would think the stock would be worthless by now, given it’s for a car company with seriously flawed designs flogged by a Nazi that nobody likes.

…Tesla is “a toxic brand that is inseparable from its leader.” Quarterly profits … fell to $1.17 billion, or 33 cents a share, from $1.4 billion, or 40 cents a share. That was the third quarter in a row that profit dropped. […] Tesla shares were little changed in after-hours trading…

Bye bye buy.

On the investor call, the CEO played this game of fraud:

…we’ll have Robotaxi in half the population of the US by the end of the year. […] Investor questions begin with an inquiry about Tesla Robotaxis. Tesla noted that it expects to 10X its current operation in the coming months. The Bay Area is next, and Tesla is looking to expedite the service’s approval. As for technical and regulatory hurdles for Unsupervised FSD, Elon Musk stated that he believes the feature should be available in a number of cities by the end of the year. Tesla, however, is being extremely paranoid about safety, so Unsupervised FSD’s rollout will be very, very cautious.

What a pile of absolute bullshit.

Promising investors revolutionary scale at revolutionary speed while emphasizing safety is a combination that defies technical and regulatory reality.

It’s amazing that bald-faced lying is still a thing to prop up stock prices.

Let’s count the problems:

  • A false dichotomy between aggressive expansion and safety, claiming to be both “extremely paranoid about safety” and serving half the US population by year-end
  • An appeal to extremes in promising impossible scaling: 10X growth in months to reach 165+ million Americans
  • Hasty generalization from limited current operations to nationwide deployment
  • Post hoc reasoning that implies regulatory approval will automatically follow their timeline rather than determining it
  • Equivocation through vague terms like “coming months” and “a number of cities” that obscure the lack of concrete planning
  • Contradiction between needing Bay Area approval and claiming imminent national rollout
  • Survivorship bias in focusing only on potential success while ignoring the massive infrastructure, regulatory, and technical hurdles
  • Wishful thinking disguised as business projections, where desired outcomes are presented as inevitable results despite the fundamental impossibility of achieving such scale in the stated timeframe while maintaining the claimed safety standards

Eight (yes, eight) flaws from the CEO of 88 who’s always late, and full of hate, the Texas fraud of no cattle and all hat.

Tesla dealer showroom after the CEO gave Hitler salutes at a political rally

Let Them Eat Cake Recipes: Why Consciousness Will Never Be Code

Security professionals are intimately familiar with the tension between formalization and practice.

We can document every protocol, codify every procedure, and automate every response, yet still observe that the art of security requires something more. Things made Easy, Routine and Minimal-judgement (ERM) depend on a reliable source of Identification, Storage, Evaluation and Adaptation (ISEA).

A recent essay by astrophysicist Adam Frank in Noema Magazine explores a similar tension in consciousness studies, one that has profound implications for how we think about all intelligence, both human and artificial.

The tension here is far from new. Jeremy Bentham’s ambitious attempt to create a mathematical model of ethics—his utilitarian calculus—ultimately failed because it tried to reduce the irreducible complexity of moral experience to quantitative formulas. No amount of hedonic arithmetic could capture the lived reality of ethical decision-making. His codified concept of “propinquity” was never made practical, foreshadowing the massive deadly failures of driverless AI hundreds of years later.

In sharp contrast, Ludwig Wittgenstein succeeded in understanding language precisely because he abandoned the quest for mathematical foundations, despite having devoted his early career to them. His practical and revolutionary language games emerged from what he called “forms of life”—embodied, contextual practices that resist formal reduction. We depend on them heavily today as foundational to daily understanding.

Frank’s central argument is that modern science has developed what he calls a “blind spot” regarding consciousness and experience. The idiocy of efficiency, the rush to reduce everything to computational models and mathematical abstractions, has forgotten something fundamental to success:

Experience is intimate — a continuous, ongoing background for all that happens. It is the fundamental starting point below all thoughts, concepts, ideas and feelings.

The blindness of the efficiency addict (e.g. DOGE) isn’t accidental. It’s built into the very foundation of how we practice science, dangerously lowering the safety bar. As Frank explains, early architects of the scientific method deliberately set aside subjective elements to focus on what Michel Bitbol calls the “structural invariants of experience”—the patterns that remain consistent across different observers. That baseline is a reductive approach, and it may drop far too low to protect against harms.

The problem emerges when abstractions are allowed to substitute for reality itself, without acknowledging fraud risks. Frank describes this as a “surreptitious substitution” where mathematical models are labeled as more real than the lived experience they’re meant to describe.

Think of how temperature readings replaced the embodied experience of feeling hot or cold, to the point that thermodynamic equations became regarded as more fundamental than the sensations they originally measured.

Meta is Fraud, For Real

This leads to what Frank identifies as the dominant paradigm in consciousness studies: the machine metaphor (meta). From this perspective, organisms are “nothing more than complicated machines composed of biomolecules” and consciousness is simply computation running on biological hardware.

And of course there’s a fundamental difference between machines and living systems. Machines are engineered for specific purposes, while organisms exhibit something far more substantive in what philosophers call “autopoiesis“—they are self-creating and self-maintaining. Meta is extractive, reductive, a road to death without a host it can feed on. As Frank notes:

A cell’s essence is not its specific atoms. Instead, how a cell is organized defines its true nature.

This organizational closure—the way living systems form sustainable unified wholes that cannot be reduced to their parts—suggests a different approach to understanding consciousness. Rather than asking how matter creates experience, we might ask how experience and matter co-evolve through embodied symbiotic healthy interaction with the world.

You Can’t Eat a Recipe

To understand this distinction, think of consciousness as the act of cooking a meal rather than as the recipe, as lived practice rather than mere computation. The recipe captures the structural patterns and relationships—the “how” and “what” that can be systematized and shared.

Actual cooking involves embodied skill, responsiveness to the moment, intuitive adjustments based on how things look, smell, and feel. There’s a tacit knowledge that emerges through the doing itself.

A skilled chef can follow the same recipe as an unskilled one and produce something entirely different. Ratatouille, the animated film, wasn’t about a rat so much as about lived experience; the kind of analysis of an environment that I like to call in my AI security work “compost in, cuisine out” (proving that “garbage in, garbage out” is a false and dangerously misleading narrative).

A lightning strike enlightens the animated film’s protagonist, like a Frankenstein turned chef

Consciousness-as-cooking isn’t just about following instructions—it’s about lived engagement with materials, real-time adjustments, the way experience shapes perception which shapes action in an ongoing loop. OODA, PDCA… we know these loop models of audit and assessment as fundamental to winning wars.

Frank’s emphasis on “autopoiesis” fits here perfectly. Like cooking, consciousness might be fundamentally about self-creating and self-maintaining processes that can’t be fully captured from the outside. You can describe the biochemical reactions in bread rising, but the seasoned baker’s sense of when a proper bagel is ready involves a different kind of knowing altogether.

AI Security is Misunderstood

This perspective has serious implications for how we think about artificial intelligence and its role in information security. When we treat intelligence as “mere computation,” we risk building systems that can process information but lack the embodied understanding that comes from being embedded in the world.

Anyone using a chatbot these days knows this intimately: ask about the best apple and the machine spits back the fruit when you want the computer, or vice versa.

Frank warns that the deceptive reductionist approach “poses real dangers as these technologies are deployed across society.” When we mistake computational capability for intelligence, we risk creating a world where:

…our deepest connections and feelings of aliveness are flattened and devalued; pain and love are reduced to mere computational mechanisms viewable from an illusory and dead third-person perspective.

In security contexts, this might mean deploying AI systems that can detect patterns but lack critical contextual understanding that comes from embodied experience. They might follow the recipe perfectly while missing the subtle cues that experienced practitioners would notice.

Palantir is maybe the most egregious example of death and destruction from fraud. They literally tried to kill an innocent man, with zero accountability, while generating the very terrorists they had begged millions of dollars to help find. I call them the “self-licking ISIS-cream cone” because Palantir is perhaps the worst intelligence scam in history.

Correct Approach: Embedded Experience

Rather than trying to embed consciousness in physics, Frank suggests we need to “embed physics into our experience.” This doesn’t mean abandoning mathematical models, but recognizing them as powerful tools that emerge from and serve embodied understanding.

From this perspective, the goal isn’t to explain consciousness away through formal systems, but to understand how mathematical abstractions manifest within lived experience. We don’t seek explanations that eliminate experience in favor of abstractions, but account for the power of abstractions within the structures of experience.

Cooking School Beats Every Recipe Database

This might be why the “hard problem” of consciousness feels so intractable when approached mathematically—it’s like trying to capture the essence of cooking by studying only the recipe. The formalization is useful, even essential, but it necessarily abstracts away from the very thing we’re most interested in: the lived experience of the cooking itself.

Perhaps consciousness studies—and by extension, our approach to AI and security—needs more public “cooking schools” and fewer Palantir “recipe databases.” More emphasis on cultivating the capacity for analysis and the curiosity of lived inquiry, rather than just dumping money into white supremacist billionaires building racist theoretical machine models.

This is the opposite of abandoning rigor or precision. It means recognizing that some forms of knowledge are irreducibly embodied and contextual. The recipe and the cooking are both essential—but they operate in different domains and serve different purposes.

For those of us working in security, our most sophisticated tools and protocols will always depend on practitioners who can read the subtle signs, make contextual judgments, and respond creatively to novel situations. The poetry of information security written here since 1995 lies not just in developing algorithms, but in the lived practice of protecting systems and people from harm in an ever-changing world.

The question isn’t whether we can build machines that think like humans, but whether we can create technologies that enhance rather than replace the irreducible art of human judgment and response. Like Bentham’s failed calculus, purely computational approaches to intelligence miss the embodied nature of understanding. But like Wittgenstein’s language games, consciousness might be best understood not as a problem to be solved, but as a form of life to be lived.

Perhaps the poet Wallace Stevens captured this tension best in “The Idea of Order at Key West,” where he writes of the sea and the singer who shapes our perception of it:

She sang beyond the genius of the sea.
The water never formed to mind or voice,
Like a body wholly body, fluttering
Its empty sleeves; and yet its mimic motion
Made constant cry, caused constantly a cry,
That was not ours although we understood,
Inhuman, of the veritable ocean.

The sea was not a mask. No more was she.
The song and water were not medleyed sound
Even if what she sang was what she heard,
Since what she sang was uttered word by word.
It may be that in all her phrases stirred
The grinding water and the gasping wind;
But it was she and not the sea we heard.

Consciousness, like the singer by the sea, is neither reducible to its material substrate nor separate from it. It emerges in the dynamic interaction between embodied beings and their world—not as computation, but as the lived poetry of existence itself.