AI slop got better, so now maintainers have more work

Photo: The Register
Daniel Stenberg, the creator of the curl project, warns that the era of useless "AI slop" in bug reports has ended, but this is not making life any easier for developers. Instead of obvious spam, open-source repositories are now being hit by an avalanche of security vulnerability reports that look exceptionally professional and credible thanks to advanced language models. The problem is that while the reports are of higher quality, they still require time-consuming human verification, and their sheer volume is paralyzing work on key projects such as the Linux kernel and Node.js.
For the global tech community, this is a paradoxical situation: AI has accelerated the process of finding potential bugs, but it has not increased the capacity of the people who must fix them. Many reports, despite their correct form, concern low-priority issues or ones that cannot realistically be exploited in an attack. In response, foundations and programs such as the Internet Bug Bounty have begun suspending financial reward payments to discourage "bounty hunters" from flooding systems with automatically generated analyses.
End users must face the possibility that software development may paradoxically slow down, as maintainers spend more time filtering AI-generated technical noise instead of writing new code. The advance of AI tools is forcing a shift in the incentive model of the open-source world, where the priority is no longer merely discovering bugs, but the ability to screen and patch them efficiently.
For years, the open-source sector has struggled with a flood of so-called AI slop — low-quality, mass-generated bug reports that were easy to identify and dismiss. However, the situation has changed drastically. Language models have become so advanced that the security analyses they generate sound professional and credible, which paradoxically is becoming the biggest nightmare for developers maintaining key infrastructure projects.
When artificial intelligence does the lion's share of the work in scanning code, the burden of verifying the results still rests on human shoulders. The problem is that with the increasing quality of submissions, their number is growing drastically, paralyzing the work of teams that must manually check every "probable" attack scenario.
Evolution from spam to credible reports
Daniel Stenberg, creator and leader of the curl project, noticed a significant shift in the nature of the reports received. In his recent post, he emphasized that the era of primitive AI spam has come to an end. Instead of obvious errors, the project is now receiving an increasing number of extremely polished security reports that are prepared almost entirely using AI tools. While this sounds like a technological success, for maintainers it means a drastic increase in workload.
This phenomenon is not unique to curl. Similar observations are coming from the Linux kernel camp. Greg Kroah-Hartman, one of the key Linux kernel developers, admitted that AI-supported reports now contain significantly less "garbage" and more real concerns regarding code architecture. While large teams, like those working on Linux, are trying to adapt to the new reality, smaller open-source projects are beginning to buckle under the weight of reports that are too good to ignore.
- Increase in credibility: Reports are technically consistent and use professional terminology.
- Faster distribution: Automation allows reports to be sent almost immediately after a potential vulnerability is detected.
- Scaling problem: The number of reports is growing faster than the capacity for human verification.
The trap of "probable" bugs
The main problem is that even if a report is technically correct, that does not automatically mean it describes a critical security vulnerability (CVE). Daniel Stenberg points to the public list of closed curl reports as evidence: most cases are closed because the identified problems do not pose a real, exploitable threat. Often these are purely "informational" issues, such as a data race in a library, which warrants a fix but does not constitute a critical error.
As a result, maintainers waste hours analyzing scenarios that AI deemed risky but which have negligible significance in a real-world environment. This is a classic example of cost shifting — users of AI tools gain "productivity" by generating mass reports, while the cost of processing and verifying them is shifted onto developers working pro bono on open-source software.
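A hypothetical sketch of the pattern Stenberg describes (invented for illustration, not taken from curl's codebase): a finding that looks real when a single function is examined in isolation, but is unreachable in practice because every caller validates the input first.

```python
# Hypothetical illustration of a "technically correct but not exploitable"
# finding. Nothing here is from curl; the names are invented.

HANDLERS = ["connect", "transfer", "cleanup"]

def dispatch(index: int) -> str:
    # An AI scanner analyzing this function alone might flag a potential
    # out-of-bounds read: nothing *inside* dispatch() bounds `index`.
    return HANDLERS[index]

def handle_request(raw: str) -> str:
    # ...but every real call site validates the index first, so the
    # flagged path cannot be reached with a hostile value.
    index = int(raw)
    if not 0 <= index < len(HANDLERS):
        raise ValueError("unknown handler")
    return dispatch(index)

print(handle_request("1"))  # transfer
```

Verifying that the flagged line is actually guarded at every call site is exactly the manual work that each "probable" report pushes onto the maintainer.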
The end of the bug bounty era
In the face of the new wave of reports, organizations are starting to withdraw the financial incentives that previously stimulated the community to search for vulnerabilities. Internet Bug Bounty announced a halt to cash prize payouts at the end of March 2026, affecting programs like Node.js. This decision stems from an imbalance between the pace of vulnerability discovery and the ability of open-source projects to fix them.
"The bug discovery landscape is changing. AI-assisted research is increasing the scope and speed of vulnerability detection across the ecosystem. Responsibility to the community requires us to reconsider the incentive structure," reads the Internet Bug Bounty statement.
Stenberg himself had already stopped paying rewards for reports in the curl project. The goal was to eliminate the financial motivation for people who use automated systems to mass-send unverified reports, hoping for easy profit with minimal personal effort.
New rules of the game in the LLM world
Developers like Willy Tarreau from the Linux kernel team suggest that it is time for a radical change in bug reporting rules. Since report authors have powerful LLM models at their disposal, they should be obligated to perform a larger portion of the analytical work before sending a report to the maintainer. The proposed changes aim to reduce the overhead associated with triage — the initial selection and classification of bugs.
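One way such an obligation could be enforced is a machine-checkable report template that rejects submissions lacking the analysis a maintainer would otherwise have to do. A minimal sketch, with field names that are my own assumptions rather than any project's actual policy:

```python
# Hypothetical pre-triage gate: a report is accepted only if the reporter
# supplied the analytical groundwork. The schema is invented for
# illustration; no project mandates these exact fields.

REQUIRED_FIELDS = {
    "reproduction_steps",   # a concrete way to trigger the bug
    "affected_versions",    # where the flagged code path actually exists
    "impact_analysis",      # why this is exploitable, not just "probable"
}

def passes_pretriage(report: dict) -> tuple[bool, list[str]]:
    """Return (accepted, missing_fields) for a submitted report."""
    missing = sorted(f for f in REQUIRED_FIELDS
                     if not str(report.get(f, "")).strip())
    return (not missing, missing)

report = {
    "reproduction_steps": "malformed POST body triggers the crash",
    "affected_versions": "",  # left blank by the reporter
    "impact_analysis": "heap overflow reachable from network input",
}
ok, missing = passes_pretriage(report)
print(ok, missing)  # False ['affected_versions']
```

The point of such a gate is to move the cost back to the party with the LLM at hand: an automated submitter can fill these fields cheaply if the analysis was actually done, and cannot if it was not.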
The vision of a programmer as a "10x developer" thanks to AI has its dark side — it also means "10x more cleanup." If AI tools do not also begin to take responsibility for verification and proving the real security impact of a bug, the open-source model may face a capacity crisis. The productivity gained at the code-writing stage will be completely consumed by the endless process of reviewing mass-generated content.
Modern artificial intelligence does not expand the competence of the humans in the decision loop; it only accelerates processes that still require human approval. The industry must develop new filtering standards in which AI helps maintainers separate the wheat from the technical chaff, instead of burying them under increasingly well-written but still redundant analyses.