Cryspen’s co-founder Karthikeyan Bhargavan told The Register last week:
we did not do great with these advisories.
You can say that again.
Nadim Kobeissi, an applied cryptographer, found thirteen vulnerabilities in Cryspen’s libcrux and hpke-rs libraries. He published the findings in an IACR ePrint paper titled “Verification Theatre.” Catchy title. You can tell right away he’s a legit researcher. Nine of the bugs were in unverified code. Four were inside the formally verified boundary, that is, inside the code Cryspen markets as providing “the highest level of assurance.”
This is actually a big deal.
I noticed, as one example, that only 58.4% of the ML-KEM deployment code actually has its proofs checked. The entire NEON backend for every ARM64 device (iOS and Android) is fully admitted, meaning its proofs are assumed rather than checked. That’s the “verification theatre”.
Cryspen built Signal’s post-quantum ratchet (SPQR) using libcrux’s ML-KEM implementation. Their own website states the implementation is:
formally verified for panic-freedom, functional correctness, and secret independence, and hence provides a high degree of assurance.
One of Kobeissi’s bugs caused real decryption failures in that implementation (a cross-backend endianness error). Signal’s signal-crypto crate depends directly on hpke-rs, where the nonce-reuse vulnerability lives.
The nonce-reuse bug enables full AES-GCM plaintext recovery and forgery after 2^32 encryptions with a single HPKE setup. In plain terms, the encryption uses a counter that runs out of numbers after about four billion messages, at which point it starts reusing the same one-time number (the nonce) to scramble data. Once that happens, an attacker can read everything and forge new messages that look authentic.
It sounds bad because it is. AES-GCM nonce reuse is a textbook attack, with published working exploits since 2016. Whether four billion sounds like a lot depends on who’s using the library. Google has noted its servers handle hundreds of millions of encrypted tokens per second under load; at that rate the counter is exhausted in roughly ten seconds. The Netty project had an identical counter-overflow bug, and it got a CVE and a published advisory, as one would expect.
There’s also a missing X25519 validation check mandated by RFC 9180, ECDSA signature malleability, an Ed25519 key-generation defect, a denial of service via unhandled AES-GCM decryption failures, and two FIPS 204 specification violations in the ML-DSA verifier.
At this point you might expect Kobeissi to be given appropriate respect for his work. He clearly went above and beyond in helping consumers of these libraries (Signal, OpenMLS, Google, and components touching the Linux kernel and SSH).
Cryspen told The Register the bugs “were addressed within a week”, leaning on speed optics rather than assurance. They then claimed no bugs had been found in their verified code.
Kobeissi’s paper in fact documents four.
So here’s where it gets really interesting.
RustSec is the advisory database for the Rust ecosystem. When you run cargo audit, it checks your dependencies against RustSec. If the maintainers refuse to add an advisory, you get a falsely clean audit: your CI passes when it should stop, your security review finds nothing, and every automated tool in the ecosystem treats the vulnerability as if it doesn’t exist.
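For context, a RustSec advisory is just a short file in the advisory-db repository: a TOML header naming the crate and the patched versions, followed by a prose description. A hypothetical sketch of what the suppressed nonce-reuse advisory might have looked like (the ID, date, category, and version range here are invented for illustration):

```toml
# Hypothetical sketch of a RustSec advisory header; all values invented.
[advisory]
id = "RUSTSEC-0000-0000"
package = "hpke-rs"
date = "2026-02-05"
categories = ["crypto-failure"]
keywords = ["nonce-reuse", "aes-gcm"]

[versions]
patched = [">= 0.0.0"]
```

Once a file like this is merged, cargo audit flags every dependent build. Unmerged, the same builds pass clean. That asymmetry is the whole fight.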
Kobeissi filed advisory pull requests with RustSec for the nonce-reuse and denial-of-service vulnerabilities. The RustSec maintainer turned passive-aggressive and closed the pull requests without any technical justification.
You might say this is just sloppy or rushed work, but then Kobeissi was silently blocked from the RustSec GitHub organization, without notice. His pending pull request was closed after he was blocked. You probably catch the drift here. Someone must have felt shame, embarrassment even, and started shooting the messenger. One Register commenter gave more context:
The bugs are real, and easily fixed — but I’ve absolutely no idea whether the version I’m using is fixed or not, because they refused to publish an advisory.
Dead messengers, no messages. That’s not a good sign. Let’s review.
- February 5: Kobeissi submitted PRs with tested fixes
- Within 24 hours: Cryspen blocked his GitHub account and closed all four PRs without technical review
- February 9: Cryspen merged his fixes without attribution
- February 12: Cryspen published a response omitting most of the bugs and claiming “no bugs have been found in the verified code.”
I’ve been researching and writing about systems-integrity failures for decades. The pattern is simple: the system meant to surface a problem is captured by the people who produced it, people incentivized to hide problems instead. The advisory database appears to be set up so that it’s cheaper to suppress a report than to deal with what it would say. That unfairly transfers risk to outsiders who won’t even know it happened.
Kobeissi logically escalated, filing a complaint with the Rust Moderation Team and Leadership Council about the RustSec maintainer’s conduct. The result confirmed the integrity breach of the RustSec system: five hours later he was banned from Rust Project Zulip spaces.
He then escalated to the Rust Foundation. In his complaint he identified the structural problem: the Rust Project’s moderation team representative on the Leadership Council is the same individual who issued a public moderation warning against him in the underlying advisory dispute.
He is both a participant in the conduct I am complaining about and a member of the body responsible for reviewing that conduct.
This is true, and it is the integrity breach in plain sight.
The Rust Project’s own governance documents state that council representatives:
must promptly disclose conflicts of interest and recuse themselves from affected decisions.
The Rust Foundation’s own conflict-of-interest policy states that a covered official:
whose potential conflict is under review may not debate, vote, or otherwise participate in such determination.
Neither happened.
This apparently isn’t new for Rust. I researched their integrity issues and found that in November 2021 the entire Rust moderation team resigned over “the Core Team placing themselves unaccountable to anyone but themselves.”
Their letter said they had been “unable to enforce the Rust Code of Conduct to the standards the community expects.” They recommended the community “exercise extreme skepticism of any statements by the Core Team.”
The Leadership Council was created in response. It was supposed to fix the accountability problem. What it actually created was another layer where the people being complained about adjudicate the complaints.
Filippo Valsorda, a cryptographer with over a decade of drama with Kobeissi, tried to get a dig in via The Register, telling them the situation:
looks more and more to him like the harassment of open source maintainers.
More and more? That phrase isn’t working. There’s less and less evidence of harassment as the vulnerabilities are proven accurate. He also said the nonce-reuse bug “seems to be a valid security issue.” Yet from that he incorrectly concluded that if RustSec banned Kobeissi, “he’s inclined to believe they had reason to do so.” He means he would like to believe that, as a personal matter. He gives exactly zero reasons and doesn’t even try to name one.
It’s circular. The question is whether the reasons were technical or political. The pull requests were closed without technical justification. The ban message cited “harassment”. Really? The same word used to dismiss the advisory contributions, imposed by the same people whose conduct was being complained about. How convenient for them.
The “harassment” label doesn’t pass even a basic test. It converts a dispute about whether acknowledged cryptographic vulnerabilities deserve public advisories into tone-shaming. Once you’re arguing about tone instead of security, the institution wins automatically: it controls the standards, the process, and the enforcement. The same people who refused to publish the advisory get to decide whether asking them to publish it constitutes harassment.
It was obviously the opposite of harassment. Extremely high-quality professional work was delivered.
Kobeissi is a cryptographer who found real bugs in real libraries used by billions of people, published a paper clearly documenting them, and followed the correct disclosure process. For that he’s been vilified by a system designed to block, ban, and label research to avoid facing the truth.
Every developer running cargo audit against a dependency with a known nonce-reuse vulnerability could be getting a falsely clean result right now.
That’s an integrity breach.