AI · 4 min read · Ars Technica AI

Here's what that Claude Code source leak reveals about Anthropic's plans

By the Pixelift editorial team

Photo: Anthropic

More than 512,000 lines of code contained in 2,000 files have leaked online, revealing Anthropic's ambitious plans for the Claude Code tool. The most significant discovery is the Kairos system, a background daemon process that can operate even after the terminal is closed. Thanks to flags such as "PROACTIVE," the assistant gains the ability to independently initiate actions and notify the user about important issues they did not even ask about.

The key to this new generation of AI is expected to be an advanced long-term memory system supported by the AutoDream mechanism. This solution allows the model to "reflect" on the course of a session during periods of inactivity or after work has concluded. During the "dreaming" process, the AI scans transcripts, removes duplicates, updates outdated information, and synthesizes knowledge so that it can immediately orient itself within the project context in the next session.

For developers and creators, this signifies a transition from reactive chatbots to autonomous agents that truly know their user's work style and preferences. If Anthropic successfully implements mechanisms to prevent memory "drift," Claude Code will become the first tool that, instead of starting every conversation from scratch, evolves alongside the code being developed. The leak confirms that the AI industry is shifting its focus from immediate responses to lasting, multi-session human-machine collaboration.

The leak of over 512,000 lines of source code spread across more than 2,000 files is a nightmare for any security department, but for AI industry analysts, it is a rare opportunity to look under the hood of Claude Code. The tool from Anthropic, which was supposed to be "just" a terminal interface for developers, actually hides the foundations for something much more powerful. Technical documentation that leaked online reveals plans to build an autonomous agent capable of operating without human intervention, possessing its own long-term memory, and even a "dreaming" process designed to optimize knowledge.

Kairos and the birth of the resident agent

The most intriguing discovery in the code is a system codenamed Kairos. This is not a simple chat function; it is a persistent daemon-type process that can run in the background of the operating system even after the Claude Code terminal window is closed. This architecture relies on cyclic <tick> prompts that regularly wake the model to check if new actions are required. This is a transition from a reactive model – where the AI only responds to a question – to a proactive model.
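The leaked files do not include a runnable reference for this loop, but the described behavior can be sketched roughly as follows. Everything here is an assumption for illustration: the names `TICK_INTERVAL`, `handle_tick`, and `run_daemon` are invented, and a real daemon would detach from the terminal rather than run in a foreground loop.

```python
import time

# Hypothetical sketch of a Kairos-style tick loop. The cyclic <tick>
# wake-ups are described in the leak; all names and logic below are
# illustrative assumptions, not leaked code.
TICK_INTERVAL = 30  # seconds between wake-ups (invented value)

def pending_actions(state):
    """Return queued actions that still need attention."""
    return [a for a in state.get("queued", []) if not a.get("done")]

def handle_tick(state):
    """One wake-up cycle: check for work, act, then go back to sleep."""
    actions = pending_actions(state)
    for action in actions:
        action["done"] = True  # stand-in for real model-driven work
    return len(actions)

def run_daemon(state, max_ticks=3):
    """Foreground stand-in for a background daemon process."""
    handled = 0
    for _ in range(max_ticks):
        handled += handle_tick(state)
        time.sleep(0)  # a real daemon would sleep TICK_INTERVAL here
    return handled
```

The point of the sketch is the shape of the loop: the process outlives any single prompt, and each tick gives the model a chance to decide whether anything needs doing.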

Inside the code, a PROACTIVE flag was found, intended to "display something the user didn't ask for but needs to see now." In practice, this means that Claude could independently monitor code compilation progress, detect errors in server logs in real-time, or suggest fixes to commits before the developer even thinks to check them. This is a fundamental paradigm shift: AI stops being a tool and becomes an autonomous collaborator that looks after the project in our absence.
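A minimal sketch of what that proactive surfacing could look like, assuming log-line scanning as the trigger: only the `PROACTIVE` flag name comes from the leak; the functions, the error heuristics, and the message format are invented for illustration.

```python
# Illustrative sketch of proactive surfacing. The PROACTIVE flag name
# appears in the leak; everything else below is an assumption.
PROACTIVE = True

def scan_logs(lines):
    """Collect log lines that look like errors worth surfacing."""
    return [line for line in lines if "ERROR" in line or "FATAL" in line]

def proactive_notifications(lines):
    """Return messages the user didn't ask for but should see now."""
    if not PROACTIVE:
        return []
    return [f"Heads up: {line}" for line in scan_logs(lines)]
```

The design question this raises is gating: when the flag is off, the agent stays silent and the same information only surfaces if the user asks.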

Claude Code interface
The code leak reveals advanced features hidden under the hood of the Claude Code terminal interface.

Long-term memory and the AutoDream system

One of the biggest problems with current language models is "sclerosis" – the loss of context after a session ends. Anthropic intends to solve this using a file-based memory system hidden under the KAIROS flag. Instructions contained in the leak indicate that the system is meant to build a "complete picture of who the user is, how they want to collaborate, what behaviors to avoid, and which ones to repeat." This is the building of a digital collaboration profile that goes beyond simple system instructions.
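A file-based profile of this kind could be as simple as a JSON document that survives between sessions. The sketch below is a guess at the mechanism, not the leaked implementation: the on-disk layout, field names, and helper functions are all invented.

```python
import json
from pathlib import Path

# Hedged sketch of a file-based collaboration profile. The leak describes
# memory stored in files; this particular JSON layout is an assumption.
def save_profile(path, profile):
    """Persist the profile dict to disk as pretty-printed JSON."""
    Path(path).write_text(json.dumps(profile, indent=2))

def load_profile(path):
    """Load the profile, or start empty if no file exists yet."""
    p = Path(path)
    return json.loads(p.read_text()) if p.exists() else {}

def update_profile(path, key, value):
    """Merge one learned fact into the persistent profile."""
    profile = load_profile(path)
    profile[key] = value
    save_profile(path, profile)
    return profile
```

Because the profile lives on disk rather than in the model's context window, it survives terminal restarts, which is exactly the property a multi-session agent needs.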

To prevent this mass of data from becoming chaotic, engineers designed the AutoDream mechanism. This is a process activated when the user enters idle mode or manually issues a "sleep" command. At that point, Claude Code performs a "reflective pass over the memory files." During this "sleep" cycle, the model scans transcripts from the entire day, identifies new information worth keeping, and consolidates it to avoid duplicates and contradictions. This is a fascinating analogy to biological sleep, where the brain organizes memories and removes unnecessary data.

  • Knowledge consolidation: Combining scattered facts from multiple sessions into a coherent knowledge base.
  • Redundancy removal: Trimming overly wordy or outdated memory entries.
  • Drift correction: Detecting moments where the AI's "memory" begins to deviate from facts (so-called memory drift).
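The consolidation step above can be sketched as a simple reduction over memory entries. This assumes entries are `(timestamp, key, text)` tuples, which is an invented format; the leak does not specify how memory is structured, only that duplicates and outdated entries get pruned.

```python
# Sketch of an AutoDream-style consolidation pass. Entry format
# (timestamp, key, text) is an assumption made for illustration.
def consolidate(entries):
    """Keep only the newest entry per key; drop exact duplicates."""
    latest = {}
    for ts, key, text in sorted(entries):  # oldest first
        latest[key] = (ts, text)           # newer entries overwrite older
    # Emit one consolidated entry per key, sorted for stable output.
    return sorted((key, ts, text) for key, (ts, text) in latest.items())
```

A real pass would also involve the model rewriting and synthesizing text, but the skeleton is the same: collapse many raw observations into one current fact per topic.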
Claude operation diagram
The AutoDream mechanism is intended to allow the Claude model to synthesize acquired knowledge during user work breaks.

Buddy and Undercover mode – a new work dynamic

In the maze of files, mentions were also found of a virtual assistant named Buddy and an "Undercover" mode. While details regarding Buddy are less clear, the naming suggests an attempt to give the tool a more personal, assistant-like character, which may be a response to similar initiatives from competitors. The stealth mode, meanwhile, could point to functions for discreetly monitoring the work environment without directly interfering with the code, fitting the vision of a Kairos-type agent.

From a developer perspective, a key challenge for Anthropic will be mastering the "drifting memories" problem mentioned in the code. Users who have tried to layer their own memory systems onto Claude models have often reported that over time, the AI begins to hallucinate about its own previous arrangements. AutoDream is meant to be a systemic solution to this problem, ensuring that each new session can "quickly orient itself" to the current state of the project and the programmer's preferences.

Analysis of these 512,000 lines of code shows that Anthropic is not just building another AI-integrated editor. They are building infrastructure for a permanent artificial intelligence presence in the operating system. If these features are officially implemented, Claude Code will stop being a tool called on demand and will become a developer's digital shadow – analyzing, dreaming of optimization, and proactively acting in the background. This is the direction in which AI stops being just a "smarter search engine" and becomes a real entity in the creative process.
