Cryptographers engage in war of words over RustSec bug reports and subsequent ban

Photo: The Register
Cryptographers are clashing over security vulnerability reporting in the Rust ecosystem. Nadim Kobeissi, a cryptography specialist, says he discovered critical vulnerabilities in the hpke-rs library, including a nonce-reuse flaw and complete AES-GCM plaintext recovery. Since February, he has unsuccessfully attempted to publish security advisories in the RustSec database. After he filed complaints with the Rust Moderation Team, he was banned from the Rust Project's Zulip channels. The conflict has escalated to the Rust Foundation with allegations of code-of-conduct violations. Cryptographer Filippo Valsorda opposed Kobeissi, questioning the integrity of his actions and criticizing the aggressive tone of his arguments against Cryspen, a Paris-based cryptographic software company. The dispute concerns both the merits (the actual number and severity of vulnerabilities in the libraries) and the procedure for reporting them. Cryspen acknowledged that it "did not handle well" the advisory process, but maintains that the errors in its pre-release code were fixed within a week. The situation highlights the tension between security transparency and formal verification processes in open source cryptographic projects.
In the world of software security, conflicts between researchers and project maintainers are the norm, but they rarely take as dramatic a form as the dispute that has erupted between cryptographer Nadim Kobeissi and the RustSec team. Since February 2026, Kobeissi has been attempting to publish reports on critical security vulnerabilities in Rust cryptographic libraries, but instead of support he has met closed doors, silence, and ultimately a ban from the project's security channels. This is not an ordinary dispute about vulnerability semantics: it is a case that has exposed fundamental problems in how the open source community manages security and conflict.
The story began innocently, with a bug report against the libcrux-ml-dsa library. However, what could have been a routine vulnerability remediation procedure turned into a war of words between an experienced cryptographer and the team maintaining the security advisory database. Kobeissi claims to have discovered thirteen vulnerabilities in cryptographic libraries, two of which are so critical that they require immediate public disclosure. RustSec maintainers, meanwhile, say his reports are exaggerated and his approach to the problem aggressive and disproportionate.
What is happening in Rust matters to the entire software security ecosystem. The Rust language has earned a reputation for its memory-safety guarantees and its growing adoption in critical projects, from Linux to Signal. If the vulnerability handling system in its ecosystem fails, the consequences could be serious for millions of users.
Nonce-reuse and full decryption – what exactly is at stake
To understand the scale of the dispute, one must first understand which vulnerabilities are at issue. Kobeissi claims to have discovered a vulnerability related to nonce reuse in the hpke-rs (Hybrid Public Key Encryption) library. This is not an abstract theoretical flaw: it is a vulnerability that in practice enables complete recovery of plaintext and message forgery.
How does this work? In AES-GCM encryption, the nonce ("number used once") is critical. If the same nonce is ever used twice with the same key, the security of the encryption collapses. An attacker can not only read encrypted messages but also forge messages that will appear authentic. This is not a theoretical weakness but a practical attack vector.
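The mechanism can be shown with a toy sketch. The helper name `xor_encrypt` and the hard-coded keystream below are illustrative inventions, not code from hpke-rs or any real library, but the underlying arithmetic is exactly why nonce reuse in a CTR-based mode like GCM is fatal:

```rust
// Toy illustration, not real AES-GCM: GCM's underlying CTR mode derives a
// keystream from (key, nonce) and XORs it with the plaintext. If the same
// (key, nonce) pair is used for two messages, the keystreams are identical,
// so XORing the two ciphertexts cancels the keystream completely.

fn xor_encrypt(plaintext: &[u8], keystream: &[u8]) -> Vec<u8> {
    plaintext.iter().zip(keystream).map(|(p, k)| p ^ k).collect()
}

fn main() {
    // Pretend this keystream was derived from a single (key, nonce) pair.
    let keystream = b"0123456789abcdef";

    let msg_a = b"attack at dawn!!";
    let msg_b = b"retreat at dusk!";

    // Nonce reuse: both messages are encrypted under the same keystream.
    let ct_a = xor_encrypt(msg_a, keystream);
    let ct_b = xor_encrypt(msg_b, keystream);

    // An eavesdropper XORs the two ciphertexts; the key material vanishes,
    // leaving only the XOR of the two plaintexts.
    let ct_xor: Vec<u8> = ct_a.iter().zip(&ct_b).map(|(a, b)| a ^ b).collect();
    let pt_xor: Vec<u8> = msg_a.iter().zip(msg_b.iter()).map(|(a, b)| a ^ b).collect();
    assert_eq!(ct_xor, pt_xor);

    // Knowing (or guessing) one plaintext now reveals the other.
    let recovered: Vec<u8> = ct_xor.iter().zip(msg_a.iter()).map(|(x, a)| x ^ a).collect();
    assert_eq!(recovered, msg_b.to_vec());
    println!("nonce reuse leaked: {}", String::from_utf8_lossy(&recovered));
}
```

Real traffic is rarely this guessable, but protocol headers and file formats give attackers plenty of known plaintext to start from.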
The second vulnerability that Kobeissi points to enables complete key recovery after 2^32 encryptions under the same key. For many applications, that limit might sound high enough to make the problem irrelevant. But Kobeissi rightly points out that Signal, OpenMLS, Google, and the Linux kernel are precisely the kinds of users that could realistically reach that number of operations.
Filippo Valsorda, a well-known cryptographer who himself reported the original bug in libcrux-ml-dsa, has a different perspective. He argues that the nonce-reuse vulnerability is not as critical as Kobeissi presents it: the flaw affects only applications that perform more than four billion encryptions with the same HPKE setup, whereas a typical application performs just one. This is an important distinction, but Kobeissi rightly argues that Signal and other high-volume applications could indeed reach these thresholds.
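How far away is 2^32 in practice? A back-of-the-envelope sketch makes the disagreement concrete; the throughput figures below are hypothetical round numbers of my choosing, not measurements from Signal or any other project:

```rust
// Back-of-the-envelope check on the 2^32 (~4.29 billion) encryption
// threshold. The rates are hypothetical, chosen only to show the scale.

fn seconds_to_exhaust(encryptions_per_second: u64) -> u64 {
    (1u64 << 32) / encryptions_per_second
}

fn main() {
    println!("limit: {} encryptions", 1u64 << 32);
    // At a thousand encryptions per second: roughly 49 days.
    println!("1k/s -> {} days", seconds_to_exhaust(1_000) / 86_400);
    // At a million encryptions per second: roughly 71 minutes.
    println!("1M/s -> {} minutes", seconds_to_exhaust(1_000_000) / 60);
}
```

A single chat client will never get there; a server-side deployment encrypting at sustained high rates under one long-lived setup plausibly could, which is precisely where the two sides disagree.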
Cryspen, formal verification, and promises that didn't hold up
At the center of the conflict is Cryspen, a Paris-based cryptographic software company that maintains the libcrux library. The company specializes in formal verification – a mathematical proof that code works exactly as it should. This sounds impressive, but practice shows that even formally verified code can contain errors if the specification of the requirements itself is incorrect.
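To make that failure mode concrete, here is a hypothetical sketch; the function `nonce_for_message` is my own invention, not libcrux or hpke-rs code. The point is that the code below could be proven to match its specification perfectly, yet still be broken, because the specification itself permits the counter to wrap:

```rust
// Hypothetical sketch, NOT actual libcrux or hpke-rs code: a 96-bit AES-GCM
// nonce built from a 32-bit message counter. One can formally prove this
// function matches the spec "nonce(n) = zeros || n mod 2^32", and the proof
// is worthless: the spec itself lets the counter wrap, so the very first
// nonce is reused after 2^32 messages.

fn nonce_for_message(counter: u32) -> [u8; 12] {
    // 8 zero bytes followed by the big-endian counter, a common
    // sequence-number nonce layout.
    let mut nonce = [0u8; 12];
    nonce[8..].copy_from_slice(&counter.to_be_bytes());
    nonce
}

fn main() {
    // After 2^32 - 1 messages the counter sits at u32::MAX; the next
    // increment wraps to 0 and message number 2^32 reuses nonce number 0.
    let wrapped = u32::MAX.wrapping_add(1);
    assert_eq!(wrapped, 0);
    assert_eq!(nonce_for_message(wrapped), nonce_for_message(0));
    println!("nonce collision after 2^32 messages");
}
```

Formal verification proves the implementation matches the specification; it says nothing about whether the specification is safe.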
Kobeissi in his February 5 post accused Cryspen of silently fixing the bug without public disclosure or a security statement. For a company that builds its reputation on formal verification guarantees, this was particularly painful. How can one claim that code is formally verified if it contains critical cryptographic errors?
Karthikeyan Bhargavan, Cryspen's co-founder and chief scientist, under whom Kobeissi studied as a doctoral student, admitted: "we did not handle these advisories well". This is an important admission: it shows that even companies specializing in security can fail at basic processes of communication and transparency.
However, Cryspen in its official statement claims that the errors found by Kobeissi were in pre-release software and were fixed within a week. This distinction matters – errors in draft versions are not the same as errors in production code. But if the pre-release code was advanced enough to be tested and integrated with other projects, can it really be called "pre-release"?
RustSec, lack of transparency, and closed doors
Here the matter becomes even more complicated. Kobeissi claims that the RustSec team, which maintains the security advisory database for the Rust ecosystem, closed his pull requests without technical justification, quietly blocked him from the GitHub organization, and then closed his still-pending pull request after he discovered he had been blocked.
In the world of open source, such actions are unacceptable. If project maintainers close vulnerability reports without explanation, it not only harms security – it also undermines trust in the entire system. How are other researchers supposed to report vulnerabilities if they cannot count on a fair review of their report?
Kobeissi was ultimately banned from the Rust Project Zulip channels just five hours after filing a formal complaint with the Rust moderation team and leadership council. The official reason was "harassment". But Kobeissi rightly points out that the same people who banned him are the people who rejected his vulnerability reports. Is there not a conflict of interest here? Should a person who rejects a security report also be the one who bans its author for "harassment" when that author tries to escalate the matter?
Aggressive communication or justified frustration?
Valsorda and other critics of Kobeissi focus on the tone of his communication. They argue that his February 5 blog post was aggressive, accusatory, hyperbolic, and "on the verge of dishonesty". These words are important – they suggest that the problem is not the content but the way Kobeissi presented it.
But here a question arises: is the frustration that comes from a month of being ignored and having legitimate security reports rejected not itself... justified? Kobeissi says he tried "in good faith" to publish security advisories for over a month. If the proper channels fail, is turning to the media and the community "harassment", or is it a justified escalation?
Valsorda himself has a conflict with Kobeissi going back more than a decade. This historical tension must be taken into account when evaluating his comments. Is Valsorda an impartial observer, or does he have a personal interest in portraying Kobeissi in a negative light?
Conflicts of interest in Rust's structure
Kobeissi in his complaint to the Rust Foundation points to a direct conflict of interest. The representative of the Rust moderation team on the Leadership Council is the same person who issued a public moderation warning against Kobeissi in the discussion about security advisories. In other words, a participant in the dispute is also a member of the body responsible for reviewing that dispute.
This is not a minor procedural inconvenience – it is a fundamental problem in governance. Every system of justice, whether judicial or corporate, requires separation between the parties to a dispute and the arbiters. When these roles are mixed, the outcome is predetermined.
The Rust Foundation on Friday acknowledged receiving Kobeissi's complaint, stating: "We take all reports very seriously". But this is only the beginning. The real test will be in how the foundation actually handles the case and whether it will be able to act independently of the people involved in the original conflict.
The promise of formal verification – myth or reality?
One aspect of this dispute that is often overlooked is the fundamental disagreement about the value of formal verification in cryptography. Cryspen builds its reputation on the idea that mathematically proven code is more secure than code tested by traditional methods.
But the history of the libcrux library shows otherwise. Even formally verified code can contain errors if the specification is incorrect or if the implementation does not fully correspond to the specification. Valsorda, in his comment on Kobeissi's post, acknowledged that testing and engineering practices can deliver better results than formal verification for highly reliable software.
This is an important lesson for the entire ecosystem. Formal verification is a powerful tool, but it is not a magic bullet. Projects that rely solely on formal verification without traditional security testing may turn out to be less secure than projects that combine both approaches. Cryspen should have known this – and perhaps it did, but failed to implement it in practice.
The fundamental problem of open source – people, not code
Finally, The Register's article contains a quoted observation that strikes at the heart of the matter: when one person's passion is another's aggressive advocacy, the critical gap in open source might simply be the people.
This is a fair observation. Software security is important, but the way communities manage conflicts is equally important. Rust has great tools, great technology, and great people. But the security and conflict management system seems to be weak.
The questions that must be resolved are: How should the Rust Foundation separate the review of the complaint from the people involved in the original conflict? How should open source projects handle vulnerability reports from researchers who have an aggressive communication style? Is there a way to enforce transparency in security processes without having to escalate to the media?
These questions are important not only for Rust but for the entire open source ecosystem. Security depends on researchers having channels to report vulnerabilities and on those channels being fair, transparent, and free from conflicts of interest. Without this, even the most technologically advanced projects can become a weak link in the security chain.