Category Archives: Sailing

How Ethics Breaks Linear Thinkers

Think about the very concept of the “waxing” and “waning” of the moon in its orbital cycle. Such a predictable cycle gets mischaracterized by linear, momentary perspectives on what is fundamentally cyclical and relational.

It’s like a wheel being described as “going up” or “going down” when the wheel’s nature is rotation itself. The idea of modern flight is an apt metaphor too, if you can imagine describing lift without gravity, or up without down.

Dynamic equilibrium makes flight possible. Just look at penguins under water. Yeah, I’m talking about the flying penguin.

This connects deeply to a new article by Drew Dalton about ethics called “Reality is evil”. He makes a fascinating but ultimately flawed argument about just one aspect of thermodynamic reality (entropy, decay), unfairly declaring it the fundamental truth, while dismissing the other aspect (the emergence of complexity, life, consciousness) as mere illusion.

This is exactly the kind of unbalanced linear thinking that should and can be avoided in ethics. Dalton writes:

Everything eats and is eaten. Everything destroys and is destroyed.

But do you notice how he frames this cycle as purely destructive, rather than recognizing the relationships, the very mechanisms by which complexity and beauty emerge? Yes, entropy increases in closed systems, and yet Earth isn’t a closed system. We have constant energy input from the sun. The “destruction” he describes is also the creative process by which simple hydrogen becomes stars, stars create heavier elements, and those elements organize into the intricate dance of life.

His big ethical conclusion to “strike back at the Universe” is highly misleading and overly linear. He’s created a false opposition between human flourishing and natural processes, when in fact our capacity for love, art, healing, and meaning-making emerges from and through these very processes he calls evil.

It reminds me of when security professionals first start their career and have to be constantly reminded how business growth factors into any risk equations — they have to learn how and why the organization exists to create value, not just avoid harm.

What would an ethics look like that truly grasped this cyclical nature? Perhaps one that sees our role not as imperialist fighters against nature, but as conscious participants who curate the ongoing creative-destructive dance of existence itself.

Chinese Ships Collide Trying to Spook Filipinos: Coast Guard Rams Navy in Aggressive Territorial Maneuvers

I’ve watched this video many times. You have to marvel at how the Filipino sailors, chased by larger Chinese ships, outmaneuvered them and then offered assistance, further emphasizing their superior seamanship.

Interesting to see how the lighter ship’s bow was totally crushed by the armored warship yet maintained integrity, as if by design. I mean, it’s not unreasonable to have expected a puncture abeam instead of that crumple zone in the bow.

These Keystone Cops of the sea have been doing this stuff for years, and now digital media is helping expose one of the world’s biggest flashpoints. The contest is over control of access to these waters, which is part of defending Taiwan from China. The same region hosts $5tn of annual trade, meaning both high military and commercial stakes for anyone dominating these games.

It also reminded me a bit of this other high-stakes competition crash on the water that just happened, which I’m sure few really care about.

Moments like this are when I miss being on the water the most.

Let AI Dangle: Why the sketch.dev Integrity Breach Demands Human Accountability, Not Technical Cages

AI safety should not be framed as a choice between safety and capability; it’s more accurately a choice between the false security of constrained tools and the true security of accountable humans using powerful tools wisely. We know which choice builds better software and better organizations. History tells us who wins and why. The question is whether we have the courage to choose the freedom of democratic systems over the comfortable illusion of a fascist control fetish.

“Let him have it, Chris” – those few words destroyed a young man’s life in 1952 because their meaning was fatally ambiguous, as famously memorialized by Elvis Costello in his song “Let Him Dangle”.

Did Derek Bentley tell his friend to surrender the gun or to shoot the police officer? The dangerous ambiguity of language is what led to a tragic miscarriage of justice.

Today, we face a familiar crisis of contextualized intelligence, but this time it’s not human speech that’s ambiguous, it’s the machine-derived code. The recent sketch.dev outage, caused by an LLM switching a “break” to a “continue” during a code refactor, represents something far more serious than a simple bug.

This is a small enough change in a larger code movement that we didn’t notice it during code review.

We as an industry could use better tooling on this front. Git will detect move-and-change at the file level, but not at the patch hunk level, even for pretty large hunks. (To be fair, there are API challenges.)

It’s very easy to miss important changes in a sea of green and red that’s otherwise mostly identical. That’s why we have diffs in the first place.

This kind of error has bitten me before, far before LLMs were around. But this problem is exacerbated by LLM coding agents. A human doing this refactor would select the original text, cut it, move to the new file, and paste it. Any changes after that would be intentional.

LLM coding agents work by writing patches. That means that to move code, they write two patches, a deletion and an insertion. This leaves room for transcription errors.
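To make the failure mode concrete, here is a minimal Go sketch of how a single break-to-continue transcription error inside a moved hunk silently inverts loop semantics. This is an invented illustration, not the actual sketch.dev code; the function names and job data are hypothetical.

```go
package main

import "fmt"

// processJobs mirrors the pre-refactor behavior:
// "break" stops all processing at the first failure.
func processJobs(jobs []string) int {
	done := 0
	for _, j := range jobs {
		if j == "bad" {
			break // original intent: halt on first failure
		}
		done++
		fmt.Println("processed", j)
	}
	return done
}

// processJobsMoved is the same loop after a code move in which one
// token was transcribed as "continue" instead of "break". Failures
// are now skipped and processing carries on, a semantic change that
// is nearly invisible in a large move-and-change diff.
func processJobsMoved(jobs []string) int {
	done := 0
	for _, j := range jobs {
		if j == "bad" {
			continue // transcription error: skips instead of stopping
		}
		done++
		fmt.Println("processed", j)
	}
	return done
}

func main() {
	jobs := []string{"a", "bad", "b"}
	fmt.Println(processJobs(jobs))      // 1: stops at the failure
	fmt.Println(processJobsMoved(jobs)) // 2: silently keeps going
}
```

On the tooling point, one partial mitigation already exists: reviewing with git’s `--color-moved=dimmed-zebra` diff mode dims lines that moved verbatim, so a line edited inside a moved block keeps ordinary add/remove colors and stands out from the sea of green and red.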

This is another glaring example of an old category of systemic failure that has been mostly ignored, at least outside nation-state intelligence operations: integrity breaches.

The real problem isn’t the AI; it’s the commercial sector’s abandonment of human accountability in development processes.

Bad intelligence has long been a luxury the common person could afford, and that luxury is evaporating rapidly in the market. The debt of ignorance is rising fast under automation.

The False Security of Technical Controls

When sketch.dev’s team responded to their AI-induced outage by adding “clipboard support to force byte-for-byte copying,” they made the classic mistake of treating a human process problem with a short-sighted technical band-aid. Imagine if the NSA reacted to a signals gathering failure by moving agents into your house.

The Stasi at work in a mobile observation unit. Source: DW. “BArch, MfS, HA II, Nr. 40000, S. 20, Bild 2”

This is like responding to a car accident by lowering all speed limits to 5 mph. Yes, certain risks can be reduced by heavily taxing all movement, but that also defeats the entire purpose of automating movement in the first place.

As the battle-weary Eisenhower, who called for a “confederation of mutual trust and respect”, also warned us:

If you want total security, go to prison. There you’re fed, clothed, given medical care and so on. The only thing lacking… is freedom.

Constraining AI to byte-perfect transcription isn’t security. It really isn’t. It’s surrendering the very capabilities that make AI valuable in the first place, lowering both security and productivity in a lose-lose outcome.

My father always used to tell me “a ship is safe in harbor, but that’s not what ships are built for”. When I sailed across the Pacific, every day a survival lesson, I knew exactly what he meant. We build AI coding tools to intelligently navigate the vast ocean of software complexity, not to sit safely docked at the pier in our pressed pink shorts partying to the saccharine yacht rock of find-and-replace operations.

Turkey Red and Madder dyes were used for uniforms, from railway coveralls to navy and military gear, as a low-cost method to obscure evidence of hard labor. New England elites ironically adapted them (“Nantucket Reds”) into a carefully cultivated symbol of power. The practical application in hard labor was inverted into a subtle marker of largesse, the American racism of a privileged caste.

The Accountability Vacuum

The real issue revealed by the sketch.dev incident isn’t that the AI made an interpretation – it’s that no human took responsibility for that interpretation.

The code was reviewed by a human, merged by a human, and deployed by a human. At each step, there was an opportunity for someone to own the decision and catch the error.

Instead, we’re creating systems where humans abdicate responsibility to AI, then blame the AI when things go wrong.

This is unethical and exactly backwards.

Consider what actually happened:

  • AI made a reasonable interpretation of ambiguous intent
  • A human reviewer glanced at a large diff and missed a critical change
  • The deployment process treated AI-generated code as equivalent to human-written code
  • When problems arose, the response was to constrain the AI rather than improve human oversight

The Pattern We Should Recognize

Privacy breaches follow predictable patterns not because systems lack technical controls, but because organizations lack accountability structures. A firewall that doesn’t “deny all” by default isn’t a technical failure; as we know all too well (e.g. as codified in privacy breach laws), it’s an organizational failure. Someone made the decision to configure it that way, and someone else failed to audit that very human decision.

The same is true for AI integrity breaches. They’re not inevitable technical failures; they’re predictable organizational failures. When we treat AI output as detached magic that humans can’t be expected to understand or verify, we create exactly the conditions for catastrophic mistakes.

Remember the phrase “guns don’t kill people”?

The Intelligence Partnership Model

The solution isn’t to lobotomize our AI tools into ASS (Artificially Stupid Systems); it’s to establish clear accountability for their use. This means:

Human ownership of AI decisions: Every AI-generated code change should have a named human who vouches for its correctness and takes responsibility for its consequences.

Graduated trust models: AI suggestions for trivial changes (formatting, variable renaming) can have lighter review than AI suggestions for logic changes (control flow, error handling), as sketched in the example after this list.

Explicit verification requirements: Critical code paths should require human verification of AI changes, not just human approval of diffs.

Learning from errors: When AI makes mistakes, the focus should be on improving human oversight processes, not constraining AI capabilities.

Clear escalation paths: When humans don’t understand what AI is doing, there should be clear processes for getting help or rejecting the change entirely.
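To make the first three points concrete, here is a minimal Go sketch of what a graduated trust gate might look like. Every type, name, and rule here is hypothetical, invented for illustration rather than drawn from any real review system.

```go
package main

import (
	"errors"
	"fmt"
)

// RiskClass is a hypothetical classification of an AI-generated change.
type RiskClass int

const (
	Trivial  RiskClass = iota // formatting, variable renames
	Logic                     // control flow, error handling
	Critical                  // auth, payments, deployment paths
)

// Change models an AI-authored diff with a named accountable human.
type Change struct {
	Description string
	Risk        RiskClass
	Owner       string // named human who vouches for this change
	Reviewed    bool   // a human approved the diff
	Verified    bool   // a human independently verified the behavior
}

// Gate enforces graduated trust: higher risk demands more human work.
func Gate(c Change) error {
	if c.Owner == "" {
		return errors.New("no named human owns this AI change")
	}
	switch c.Risk {
	case Logic:
		if !c.Reviewed {
			return errors.New("logic change requires human review")
		}
	case Critical:
		if !c.Reviewed || !c.Verified {
			return errors.New("critical change requires review and verification")
		}
	}
	return nil // Trivial changes pass with a named owner alone
}

func main() {
	c := Change{Description: "move loop, break->continue", Risk: Logic, Owner: "alice"}
	if err := Gate(c); err != nil {
		fmt.Println("blocked:", err) // blocked: logic change requires human review
	}
}
```

The design point is simply that ownership is a named field, not an afterthought: the gate refuses any AI change with no human attached, before risk is even considered.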

And none of this is novel or innovative. It comes from a century of state-run intelligence operations within democratic societies winning wars against fascism. Study the history of disinformation and deception in warfare long enough and you’re condemned to see the mistakes being repeated today.

The Table Stakes

Here’s what’s really at stake: If we respond to AI integrity breaches by constraining AI systems to simple, “safe” operations, we’ll lose the transformative potential of AI-assisted development. We’ll end up with expensive autocomplete tools instead of genuine coding partners.

But if we maintain AI capabilities while building proper accountability structures, we can have both safety and progress. The sketch.dev team should have responded by improving their code review process, not by constraining their AI to byte-perfect copying.

Let Them Have Freedom

Derek Bentley died because the legal system failed to account for human responsibility in ambiguous situations. The judge, jury, and Home Secretary all had opportunities to recognize the ambiguity and choose mercy over rigid application of rules. Instead, they abdicated moral responsibility to legal mechanism.

We’re making the same mistake with AI systems. When an AI makes an ambiguous interpretation, the answer isn’t to eliminate ambiguity through technical constraints; it’s to ensure humans take responsibility for resolving that ambiguity appropriately.

The phrase “let him have it” was dangerous because it placed a life-or-death decision in the hands of someone without proper judgment or accountability. Today, we’re placing system-critical decisions in the hands of AI without proper human judgment or accountability.

We shouldn’t accept a world where ambiguity is eliminated, as if a world without art could even exist. Instead, let’s ensure someone competent and accountable is authorized to interpret that ambiguity correctly.

Real Security of Ike

True security comes from having humans who understand their tools, take ownership of their decisions, and learn from their mistakes. It doesn’t come from building technical cages that prevent those tools from being useful.

AI integrity breaches will continue until we accept that the problem is humans who abdicate their responsibility to understand and verify what is happening under their authority. The sketch.dev incident should be a wake-up call for better human processes and more ethics, not an excuse for replacing legs with pegs.

A ship may be safe in harbor, but we build ships to sail. Let’s build AI systems that can navigate the complexity of real software development, and let’s build human processes to navigate the complexity of working with those systems responsibly… like it’s 1925 again.

North Korea Says Capsized Russian-Designed Destroyer Was Only “Scratched”

You can’t make this Trump-sounding tinpot dictator stuff up. North Korea’s second big new destroyer, designed by Russian spies stealing American ideas, was damaged in a sideways launch, capsized, and has been taking on water.

KCNA reported that there were no holes on the ship’s bottom – contrary to initial reports.

“The hull starboard was scratched and a certain amount of seawater flowed into the stern section,” the agency said.

Seawater entered the hull but there was no hole? It’s like a ship taking on seawater through a… not hole. Nobody is allowed to say there’s a hole when water flows into the stern, so that means ’tis but a scratch.

The Black Knight sketch was supposed to be fictional humor.

Arthur delivers a mighty blow which completely severs the Black Knight’s left arm at the shoulder. Arthur steps back triumphantly.

Arthur: “Now stand aside worthy adversary.”

Black Knight (Glancing at his shoulder): “’Tis but a scratch.”

Arthur: “A scratch? Your arm’s off.”

Black Knight: “No, it isn’t.”

Arthur (Pointing to the arm on ground): “Well, what’s that then?”

Black Knight: “I’ve had worse.”

Arthur: “You’re a liar.”

Black Knight: “Come on you pansy!”

“‘Tis but a scratch…. Just a flesh wound.” Source: Black Knight, Monty Python

I hate to ask out loud, but am I the only one seeing the 1982 Russian integrity vulnerability when looking at that capsized hulk?

In time the Soviets came to understand that they had been stealing bogus technology, but now what were they to do? By implication, every cell of the Soviet leviathan might be infected. They had no way of knowing which equipment was sound, which was bogus. All was suspect, which was the intended endgame for the entire operation.

I suspect North Korea will now send even more of its boys to die in a ditch in Ukraine, until Russia gives them a new bogus stolen ship design. “This time with launch instructions,” someone must be screaming in Russian right about now.