The Stupid. It Burns.
Six months of nation-state access to highly targeted networks, simply because a widely deployed tool treated TLS as its one and only integrity check rather than what it is: transport security.
The “sophisticated” attack reads like a tourist getting their wallet lifted from a beach chair while they’re off swimming. Easy pickings for anyone willing to exploit unsophisticated engineering.
I love reading Dan Goodin, perhaps my favorite tech reporter of all time, but his article buries the lede:
…insufficient update verification controls that existed in older versions.
That’s the whole game, right there.
All the threat intelligence theater, with chill names like “Chrysalis” and “Lotus Blossom” and the “hands-on-keyboard” drama of attributing it to China-state actors, obscures that this has been a solved problem since at least 2005. Twenty years ago Microsoft OEM’d an Israeli patching company, said oh shit, we need to sign code, and that should have been the end of it, right?
Linux package managers have done cryptographic signature verification for decades. Using apt, yum, or pacman means GPG signatures get checked against pinned keys before anything executes. The fix is older than many of the people involved in this disaster.
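For concreteness, here’s a minimal sketch of that secure-apt-style flow: a detached GPG signature over the repository metadata is checked against a keyring already pinned on disk before any package it describes gets trusted. The paths and filenames are illustrative, not apt’s real layout.

```python
import subprocess

def verify_release_metadata(keyring: str, signature: str, metadata: str) -> bool:
    """Return True only if `signature` is a valid detached signature over
    `metadata`, made by a key already present in the pinned `keyring`."""
    result = subprocess.run(
        ["gpgv", "--keyring", keyring, signature, metadata],
        capture_output=True,
    )
    return result.returncode == 0

# Illustrative paths: a pinned keyring, the signed metadata, its detached signature.
if verify_release_metadata("/etc/apt/trusted.gpg", "Release.gpg", "Release"):
    print("metadata authentic; the package hashes it lists can be trusted")
else:
    raise SystemExit("signature check failed; refuse to install anything")
```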
Why am I even writing about this.
The attack chain was simple: intercept the update request, redirect it to a malicious binary, and let it execute. A checksum won’t save you here. If the attacker owns the distribution infrastructure, they serve the bad binary and a matching hash.
Self-consistent fraud.
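A toy sketch of why, with made-up payloads: the “published” checksum comes from the same box the attacker already owns, so it matches the malware by construction.

```python
import hashlib

malicious_binary = b"the attacker's implant bytes"   # hypothetical payload

# The attacker controls the download server, so the "published" checksum
# is also theirs: the hash of their own malware.
published_sha256 = hashlib.sha256(malicious_binary).hexdigest()

# The victim downloads the binary and dutifully checks it.
downloaded = malicious_binary
assert hashlib.sha256(downloaded).hexdigest() == published_sha256
print("checksum OK")   # integrity "verified" against attacker-supplied data
```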
The actual fix for the integrity problem is asymmetric signing. The developer signs the binary with a private key that never lives on the update infrastructure. The client verifies it against a public key pinned in the already-installed binary. Own the servers all you want; you can’t forge the signature without that private key.
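Here’s a minimal sketch of that flow using Ed25519 from the third-party `cryptography` package. The payload bytes and key handling are simplified stand-ins; the point is only that the private key stays with the developer and the public key ships pinned inside the already-installed client.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# --- developer side (build machine; this key never touches download servers) ---
private_key = Ed25519PrivateKey.generate()
update_binary = b"bytes of the new installer"          # hypothetical payload
signature = private_key.sign(update_binary)

# --- client side (public key pinned at install time, never fetched) ---
pinned_public_bytes = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
pinned_public_key = Ed25519PublicKey.from_public_bytes(pinned_public_bytes)

def install_if_authentic(blob: bytes, sig: bytes) -> None:
    """Refuse the update unless the signature verifies under the pinned key."""
    try:
        pinned_public_key.verify(sig, blob)   # raises if forged or tampered with
    except InvalidSignature:
        raise SystemExit("refusing update: signature does not verify")
    print("signature valid; proceeding with install")

install_if_authentic(update_binary, signature)
```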
Here’s the part that should make you spit tea all over your screen. Or maybe that’s just me. They had signing. From Beaumont’s razor-sharp analysis:
The downloads themselves are signed—however some earlier versions of Notepad++ used a self signed root cert, which is on Github.
Nice.
The lock was on the door, and so was the key. The integrity mechanism existed in form but not in function. A self-signed cert with the key material published on GitHub means anyone who could redirect traffic could also forge valid signatures. That’s theater: the appearance of an integrity control that doesn’t actually constrain anything.
Content-addressable integrity needs better marketing, maybe? I don’t get it. The transport layer is one layer of defense in depth, and someone confused it with the core package integrity mechanism itself. And the signing layer, which should have been the real gate, was all hat, no cattle.
Resources probably all went into features and user growth. Some even went into transport-layer security. Yet the missing content-integrity control is what let something catastrophic through.
Apparently no regulator required basic cryptographic verification that actually works, so content distribution never had to innovate on authenticity. Now there’s an integrity breach, and a scramble to apologize and bolt on, twenty years late, what should have been there from the start.
This is solved cryptographic engineering. Same pattern, always. A consent banner that doesn’t constrain data collection. An operations audit that doesn’t examine infrastructure. A signature that doesn’t verify authenticity. The presence of a control, without regulations to ensure standards of care, can become dangerous cover for its absence.