AI · 4 min read · Ars Technica AI

Anthropic says its leak-focused DMCA effort unintentionally hit legit GitHub forks

Pixelift Editorial Team

Photo: Getty Images

More than 8,000 GitHub repositories were blocked after an ill-fated legal intervention by Anthropic, which was attempting to stop a leak of the source code of its new tool, Claude Code. The aggressive DMCA action backfired, hitting thousands of legitimate projects that were simply forks of the company's own public repository, used by developers to report bugs and contribute fixes.

Although Anthropic admitted to a communication error and restored access to the wrongfully removed content, the incident exposed how bluntly effective automated takedown mechanisms can be when wielded on behalf of corporations. For the global developer community, it is a clear signal that the line between protecting intellectual property and restricting open collaboration is extremely thin.

While the company tries to contain a leak initiated by a user known as nirholas, fighting thousands of copies across the internet resembles a battle with a hydra. Developers must face the reality that even when working on officially shared code, they can end up in a "digital purgatory" through the errors of lawyers or algorithms. The episode undermines trust in major AI players when it comes to transparent stewardship of the open-source ecosystem. The Claude Code leak remains a fact, and attempts to remove it by force have only increased interest in the forbidden files.

In the world of large language models, the arms race is taking place not only in the field of parameters and computing power, but above all in the sphere of intellectual property protection. Anthropic, one of OpenAI's most serious rivals, has just learned how brutal and unpredictable an attempt to regain control over leaked source code in an open-source environment can be. What was intended to be a precise operation to remove illegal copies of the Claude Code tool turned into a digital "carpet bombing" that ricocheted into thousands of innocent developers.

This incident sheds light on the tensions between the corporate need to protect trade secrets and the culture of collaboration on platforms like GitHub. When a leak of the Claude Code client code appeared online, Anthropic reacted according to the crisis management handbook: a DMCA (Digital Millennium Copyright Act) procedure was initiated. The problem is that the algorithmic and legal sieve turned out to be too dense, dragging over 8,000 repositories into a whirlwind of blocks, most of which had nothing to do with the theft of intellectual property.

Algorithmic error on a massive scale

The beginning of the crisis is linked to a user nicknamed nirholas, who published the leaked source code of the Claude Code client on GitHub. Anthropic quickly identified the threat and sent a formal DMCA notice, pointing to the original source and nearly 100 specific forks (copies of the project). However, GitHub's mechanism, supported by the claimant's suggestion that "most forks infringe rights to the same extent as the original," led to the automatic blocking of the entire network of connections. As a result, 8,100 repositories disappeared from the web in a single evening.
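The gap between what the notice named and what actually went dark can be sketched as a simple set operation. The snippet below is purely illustrative: all repository names are invented, and the real figures (96 flagged URLs, 8,100 blocked repositories) are stand-ins for the two small lists used here.

```python
# Illustrative sketch: a DMCA notice scoped to specific URLs versus a block
# applied to an entire fork network. All repository names are hypothetical.

flagged_urls = {
    "github.com/nirholas/claude-code-leak",  # hypothetical leaked repo
    "github.com/fork-a/claude-code-leak",    # hypothetical flagged fork
}

# A tiny sample of a fork network: flagged repos plus legitimate forks
# of the official public repository (all names invented).
fork_network = [
    "github.com/nirholas/claude-code-leak",
    "github.com/fork-a/claude-code-leak",
    "github.com/dev1/claude-code",  # legitimate fork, never flagged
    "github.com/dev2/claude-code",  # legitimate fork, never flagged
]

def targeted_takedown(network, flagged):
    """Block only the repositories explicitly named in the notice."""
    return [repo for repo in network if repo in flagged]

def network_takedown(network):
    """Block every repository in the fork network, flagged or not."""
    return list(network)

collateral = set(network_takedown(fork_network)) - set(
    targeted_takedown(fork_network, flagged_urls)
)
print(sorted(collateral))  # the legitimate forks caught in the block
```

In this toy model the targeted takedown blocks 2 repositories and the network-wide one blocks all 4; scaled up, that difference is the gap between the 96 URLs in Anthropic's notice and the 8,100 repositories that actually disappeared.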

Headquarters of an AI technology company
Aggressive protection of intellectual property by AI giants is becoming an industry standard.

The greatest absurdity of this situation is the fact that the "victims" were legitimate forks of Anthropic's official, public repository. The company makes part of its code available publicly to encourage the community to report bugs and create fixes. Developers who were working on improving the official tool were suddenly treated like pirates. Voices of outrage quickly flooded social media, and programmers like Robert McLaws did not spare harsh words, pointing out Anthropic's lawyers' inability to read repository structures and announcing DMCA counter-notices.

Extinguishing the image fire

The reaction from Anthropic's management was swift, though for many developers it still came hours too late. Boris Cherny, head of the Claude Code project, publicly admitted that the overzealous removal of content "was not intended," while Thariq Shihipar described the whole incident as a "communication error." By Wednesday, the company had backed away from the broad claims, asking GitHub to limit the blocks exclusively to the 96 URLs listed in the original request and to restore the remaining thousands of projects.

  • 8,100 – the number of repositories originally blocked by GitHub.
  • 96 – the number of actually infringing forks identified by Anthropic.
  • Claude Code – the tool whose leak became the spark of the conflict.
  • DMCA – the legal basis used for mass content removal.

This situation shows how dangerous automatic content removal mechanisms are in the hands of tech giants. In the age of AI, where code is currency, companies are becoming increasingly paranoid about their assets. However, in this case, the "shoot first, ask questions later" mechanism hit the very community on which Anthropic wants to build its ecosystem. This is a painful lesson in Developer Relations (DevRel), showing that a single lawyer's mistake can undo years of building trust among coders.

Symbolic graphic representing data security
The Claude Code leak puts Anthropic in a difficult negotiating position with the open-source community.

Sisyphean task in the age of the decentralized internet

Despite withdrawing the erroneous notices, Anthropic faces a nearly impossible task. Once code is made public on the internet, especially one as desirable as that concerning Claude Code, it is practically impossible to remove. Even if GitHub successfully cleans its servers, copies of the code are already circulating on alternative platforms, in P2P networks, or on developers' private servers. Attempting to stop this process using the DMCA is like trying to hold back the tide with a broom.

"The fact that lawyers cannot distinguish an official repository from an illegal leak demonstrates a deep misunderstanding of the platform they operate on" – commented users affected by the block.

Analyzing this case, one can conclude that the AI industry is entering a "fortress under siege" phase. Companies like Anthropic, despite declarations of openness and safety, will react increasingly aggressively to any violation of their IP. In the open-source world, however, that aggression often proves a double-edged sword. The Claude Code incident shows that in the coming years, not only model quality but also effectiveness in crisis management and community relations will determine who emerges victorious from this race. Anthropic will now likely have to make a double effort to appease the developers whose projects were groundlessly flagged as pirated, while the actual leak remains available to determined individuals in the darker corners of the web anyway.
