
Claude Code leak exposes a Tamagotchi-style ‘pet’ and an always-on agent

By the Pixelift editorial team

Photo: The Verge AI

More than 512,000 lines of source code for the Claude Code tool were leaked online after an error in the version 2.1.88 update, revealing the inner workings of Anthropic's engineering efforts. The leak, which rapidly spread across GitHub with over 50,000 forks, exposed not only system instructions and the model's memory architecture but also intriguing, previously unannounced features. The most excitement surrounds a Tamagotchi-style digital pet designed to react to a programmer's progress, and a mysterious project titled "KAIROS," which points to an "always-on agent" running in the system background.

For the global community of developers and creators, the incident cuts both ways. On one hand, it exposes human error in the operational processes of AI giants and raises questions about security and the potential for malicious actors to bypass guardrails. On the other, analysis of the code offers a unique insight into how Anthropic optimizes the agentic capabilities of Claude, which can already perform tasks on a user's computer independently. Although the company maintains that customer data remained secure, such a thorough dissection of the tool will accelerate reverse engineering and force competitors to revisit their own security measures. This transparency, forced by an error, shows that the future of AI programming will rest on ever deeper integration of agents with the operating system.

In the world of technology, human error is sometimes more fascinating than the most polished product launch. Anthropic learned this the hard way when, after releasing update 2.1.88 for the Claude Code tool, over 512,000 lines of source code leaked online. What was meant to be a routine patch became a goldmine for developers and analysts, who found a source map file in the package containing the complete codebase, written in TypeScript. The leak, which spread instantly on X and was quickly picked up by Ars Technica and VentureBeat, exposed not only the inner workings of one of the most advanced AI assistants but also the company's plans for the near future.
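The mechanics of such a leak are mundane: a JavaScript source map is plain JSON, and its optional `sourcesContent` field can embed every original source file verbatim, so anyone who finds a shipped `.map` file can read the pre-compiled TypeScript. A minimal sketch of the recovery step; the file names and contents here are illustrative, not from the actual leak:

```typescript
// A source map (Source Map v3) is just JSON. If the bundler was configured
// to inline sources, "sourcesContent" holds the original files verbatim.
interface SourceMapV3 {
  version: number;
  sources: string[];
  sourcesContent?: string[];
}

// Recover original files from a .map payload: pair each path in "sources"
// with the corresponding entry in "sourcesContent".
function extractSources(mapJson: string): Record<string, string> {
  const map: SourceMapV3 = JSON.parse(mapJson);
  const out: Record<string, string> = {};
  (map.sourcesContent ?? []).forEach((content, i) => {
    out[map.sources[i]] = content;
  });
  return out;
}

// Hypothetical example of a map file shipped inside a published package.
const sampleMap = JSON.stringify({
  version: 3,
  sources: ["src/agent.ts"],
  sourcesContent: ["export const SYSTEM_PROMPT = '...';"],
});
const recovered = extractSources(sampleMap);
```

This is why a stray `.map` file is equivalent to shipping the repository itself: no decompilation is needed, only a JSON parse.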

The scale of the event is unprecedented for a company that builds its image on a foundation of safety and ethics. Although Anthropic reacted quickly to the incident, the damage was done: a GitHub repository containing the copied code saw over 50,000 forks before it could be effectively blocked. For competitors and the open-source community, this is a rare chance to look under the hood of a tool that has been gaining importance since its February 2025 launch thanks to its agentic capabilities.

A digital companion and an agent working in the shadows

Analysis of the disclosed data yielded surprising information about how Anthropic intends to humanize the coding process. The most intriguing find is a feature reminiscent of a Tamagotchi: a virtual "pet" that, according to Reddit users, sits next to the command input field and reacts to the programmer's style and progress. This unusual approach to the user interface suggests the company is looking for ways to lower the psychological barriers of AI interaction, turning the tool into an interactive companion rather than a dry terminal.

The Claude Code leak revealed Anthropic's plans for new, interactive assistant features.

However, it is not the digital pet that is generating the most excitement among professionals, but a project codenamed KAIROS. The code suggests an "always-on background agent": one that runs constantly, capable of monitoring a project and performing tasks without a direct prompt from the user. Such an architecture brings Claude Code closer to the vision of a true autonomous collaborator that not only answers questions but proactively manages file structure or optimizes code in real time.

  • KAIROS: A constant background agent feature enabling autonomous work on a project.
  • Tamagotchi-style pet: An interactive UI element reacting to the programmer's actions.
  • Memory Architecture: Insight into the tool's "memory" system, allowing for a better understanding of how Claude maintains context between sessions.
  • System Prompts: Discovery of the internal instructions Anthropic gives the model to guide its behavior.
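Nothing in the leak documents how KAIROS is actually built, but the "always-on background agent" idea can be sketched in miniature: a long-lived process that registers handlers for project events and reacts without any user prompt. Every name below is hypothetical, assumed purely for illustration:

```typescript
// Speculative sketch of an always-on background agent. In a real tool the
// event source would be a file watcher (e.g. fs.watch); here notify() is
// called manually so the sketch stays deterministic.
type ChangeHandler = (changedFile: string) => void;

class BackgroundAgent {
  private handlers: ChangeHandler[] = [];
  public seen: string[] = [];

  // Subscribe a task to run whenever the project changes.
  onChange(handler: ChangeHandler): void {
    this.handlers.push(handler);
  }

  // Deliver a change event to every subscribed task, no prompt required.
  notify(file: string): void {
    this.seen.push(file);
    for (const handler of this.handlers) handler(file);
  }
}

const agent = new BackgroundAgent();
const touched: string[] = [];
agent.onChange((file) => touched.push(file)); // e.g. re-lint, re-index, refactor
agent.notify("src/index.ts");
```

The point of the pattern is the inversion of control: the user stops issuing prompts and instead the agent decides when to act, which is exactly what makes such a feature both powerful and security-sensitive.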

Developer honesty and technical challenges

The leak also offered a rare glimpse into the work culture and dilemmas of Anthropic's engineers. Developer comments found in the code shed light on the tool's optimization process. One developer openly admitted in a comment that the applied "memoization drastically increases complexity and there is no certainty if it actually improves performance." Such candid notes show that even leading AI companies struggle with technical debt and the difficulty of managing a complex TypeScript architecture.
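The quoted comment describes a real trade-off: a generic memoizer wraps what may be a one-line computation in caching machinery, and whether the cache ever pays off depends entirely on call patterns. A minimal illustration of the pattern, not code from the leak:

```typescript
// Generic memoizer: trades extra code and memory (the Map) for the chance
// of skipping repeated work. Whether this wins depends on how often the
// same argument recurs, which is exactly the uncertainty the leaked
// comment complains about.
function memoize<A, R>(fn: (arg: A) => R): (arg: A) => R {
  const cache = new Map<A, R>();
  return (arg: A): R => {
    if (cache.has(arg)) return cache.get(arg)!;
    const result = fn(arg);
    cache.set(arg, result);
    return result;
  };
}

// Count underlying invocations to observe the cache at work.
let calls = 0;
const slowLength = (s: string): number => {
  calls += 1;
  return s.length;
};
const fastLength = memoize(slowLength);
fastLength("claude");
fastLength("claude"); // second call is served from the cache
```

For cheap functions like this one, the `Map` lookup can cost as much as the computation it avoids, which is why "no certainty if it actually improves performance" is an honest engineering assessment rather than an embarrassment.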

Analysis of over half a million lines of code gives insight into the memory architecture of Claude Code.

Anthropic spokesperson Christopher Nulty emphasized in an official statement that the incident was the result of a release packaging issue, not a hacking attack. The company maintains that no sensitive customer data or authentication keys were exposed. Nevertheless, for an organization competing with OpenAI and Google, any leak of a flagship product's source code is a painful blow to its image.

Perspective on security and market maturity

From an industry perspective, this leak is a warning sign for the entire AI sector. Arun Chandrasekaran, an analyst at Gartner, notes that the incident opens the door for "bad actors" who can now look for vulnerabilities in the assistant's guardrails. Understanding how Claude Code filters queries and what system instructions guide it makes it easier to create bypass methods, which is particularly risky in the case of tools that have access to users' operating systems (the Cowork feature).

On the other hand, the incident may force Anthropic to accelerate work on operational maturity. Although Claude Code has gained immense popularity for its ability to control a computer and write applications independently, an error as trivial as shipping source map files in a public package points to gaps in release controls. The AI industry is moving at breakneck speed, but this case shows that the rush to deliver agentic features can lead to costly stumbles.
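One standard safeguard against exactly this mistake is npm's `files` allowlist in `package.json`, which restricts a published package to explicitly listed paths, so stray `.map` files never reach the registry unless deliberately included. A sketch of such a configuration; the paths are illustrative, not Anthropic's actual layout:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist/**/*.js",
    "dist/**/*.d.ts"
  ]
}
```

With an allowlist like this, `dist/**/*.js.map` is simply never packed; teams can also verify a release with `npm pack --dry-run`, which prints the exact file list that would be published.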

In the long run, the disclosure of the KAIROS and interactive-pet plans may paradoxically increase interest in the tool. The programming community now has confirmation that Anthropic is not afraid to experiment with new forms of interaction and deep automation. If the company can turn this blunder into a lesson in security and quickly ship the revealed features, the leak may be remembered more as free marketing than as a technological catastrophe. The key challenge now is securing Claude Code against reverse-engineering attempts, which, armed with 512,000 lines of code, hackers and AI researchers worldwide will certainly continue.

Source: The Verge AI