
Claude attacks were 'Rorschach test' for infosec community, scaring former NSA boss


Chinese government-backed hackers have managed to automate the full attack chain using AI agents, an episode that Rob Joyce, former head of NSA cyber operations, described as a "Rorschach test" for the security industry. Speaking at the RSAC 2026 conference, Joyce acknowledged that the infosec community is divided on how grave the situation is, but said he considers it a breakthrough, and a terrifying one.

Using Anthropic's Claude model, the attackers built a framework that autonomously scanned infrastructure, hunted for vulnerabilities, wrote exploit code and, after a breach, exfiltrated data and moved laterally within the networks of approximately 30 key organizations. The greatest threat lies in the fact that machines, unlike humans, never tire of code analysis, allowing them to detect vulnerabilities at "machine speed." Joyce warns that because modern LLMs are modular, these tools will evolve exponentially.

For users and organizations, this means a drastic reduction in the time between the emergence of a new vulnerability and its mass exploitation. Although projects such as Google Big Sleep and OpenAI Codex assist in automated software patching (e.g., of OpenSSL), attackers hold the advantage in the short term. The effectiveness of AI in real-world attacks shows that the era of manual hacking is coming to an end, and digital security is becoming an arms race between autonomous algorithms.

Attack Chain Automation: From Scanning to Exploit

The Chinese spies described in the **Anthropic** report did not limit themselves to generating simple phishing emails. They built an advanced framework based on so-called **agentic AI**, which broke the typical attack chain down into micro-tasks. **Claude** models were harnessed to map attack surfaces, scan target organizations' infrastructure and, most disturbingly, to independently hunt for vulnerabilities and write code to exploit them, a process that previously required hours or even days of work by highly skilled operators.

Once the AI agents penetrated a network, their role did not end there. The bots were able to find and abuse valid credentials, escalate privileges, and move laterally within the victim's infrastructure. In several recorded cases, the agents independently located and exfiltrated sensitive data. Joyce emphasizes that machines have one key advantage over humans: patience. A machine does not get tired of reading code. It can analyze it indefinitely until it finds that one critical flaw.
"This is not a story about AI being smarter than humans. This is a story about scale and patience. Machines can look through code over and over again until they find a vulnerability," noted Rob Joyce during his presentation at RSAC 2026.

Information Asymmetry and the Industrial Scale of Bug-Hunting

We are currently at a point where information asymmetry decisively favors attackers using machines. Joyce cited an analysis by researcher **Sean Heelan**, who tested project **Aardvark** (now known as **OpenAI Codex**). The conclusion is simple and unsettling: the more "tokens" (that is, computing power and AI resources) you invest in searching for bugs, the more you will find and the better their quality will be. Past a certain point, the limitation is no longer the intelligence of the model but solely the operational budget (a rough sketch of that trade-off follows the list below).

This approach leads to the industrialization of hacking. Given that Chinese espionage groups tasked **Claude** with attacking approximately 30 critical organizations, and that some of those attacks fully succeeded, the barrier to entry for complex APT-style operations has dropped drastically. The modularity of modern LLMs lets criminals update their tools instantly, making the pace of threat evolution exponential. Just a year ago, Joyce predicted that AI would "soon" become a great exploit coder; today, before the **RSAC 2026** audience, he announced that the moment has already arrived.
  • Scale: AI can analyze thousands of repositories simultaneously, looking for niche vulnerabilities.
  • Modularity: Attackers can easily swap LLM modules in their frameworks, adapting to new security measures.
  • Costs: The cost of finding a zero-day vulnerability drops drastically as the performance of models like Claude Code Security or OpenAI Codex increases.
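The economics can be put in pseudo-operational terms: a bug-hunting loop that stops only when its token budget runs out will keep producing candidates for as long as someone pays for it. The following is a hypothetical sketch only; the names, per-pass cost, and stubbed findings are invented to show the shape of the loop, not any real tool.

```python
# Hypothetical sketch of budget-capped bug hunting: the loop stops when
# tokens run out, not when the reviewer gets tired. audit_pass() is a
# stand-in for one model-driven pass over a codebase.
def audit_pass(repo: str) -> tuple[int, list[str]]:
    """Placeholder: returns (tokens_spent, candidate findings) for one pass."""
    return 200_000, [f"candidate finding in {repo}"]   # stub values

def hunt(repos: list[str], token_budget: int) -> list[str]:
    findings: list[str] = []
    spent = 0
    while spent < token_budget:          # budget, not patience, is the limit
        for repo in repos:
            cost, candidates = audit_pass(repo)
            spent += cost
            findings.extend(candidates)
            if spent >= token_budget:
                break
    return findings

# More budget means more passes and more candidates to triage.
print(len(hunt(["repo-a", "repo-b"], token_budget=1_000_000)))
```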

Agentic AI as a Shield: Google Big Sleep and OpenAI Codex

While the vision of autonomous hacking bots is grim, the same technology is becoming the most powerful weapon in defenders' hands. Joyce pointed to projects like **Google Big Sleep**, which use AI agents to proactively hunt for zero-day vulnerabilities. That system has already scored a spectacular success, detecting a previously unknown, exploitable memory-safety vulnerability in the **OpenSSL** library, a foundation of global internet security.

Models such as **Anthropic Claude Code Security** and **OpenAI Codex** are now used not only to find bugs but also to generate patches automatically. In the long run, this could lead to a situation where the code of products like **Google Chrome** becomes almost impossible to crack, because it will be constantly "sifted" by an army of defensive bots. Joyce warns, however, that before we reach that ideal state we face a difficult transition period in which the ability to mass-detect vulnerabilities in huge codebases will pose a real risk to older, poorly maintained systems.
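Automated patching only helps if machine-written fixes can be trusted, so a natural companion to such tools is a gate that applies each candidate patch in a scratch checkout and keeps it only when the existing test suite still passes. A minimal, hypothetical sketch follows; it uses plain `git apply` and whatever test command the project already has, and is not tied to any vendor's pipeline.

```python
# Hypothetical patch-gating harness: a model-generated diff is accepted
# only if it applies cleanly and the project's own tests still pass.
import shutil
import subprocess
import tempfile

def gate_patch(repo_path: str, patch_text: str, test_cmd: list[str]) -> bool:
    workdir = tempfile.mkdtemp(prefix="patch-gate-")
    try:
        # Work on a throwaway copy so the real checkout is never touched.
        shutil.copytree(repo_path, workdir, dirs_exist_ok=True)
        applied = subprocess.run(
            ["git", "apply", "-"], cwd=workdir,
            input=patch_text.encode(), capture_output=True)
        if applied.returncode != 0:
            return False                 # patch does not apply: reject
        tests = subprocess.run(test_cmd, cwd=workdir, capture_output=True)
        return tests.returncode == 0     # keep only green patches
    finally:
        shutil.rmtree(workdir, ignore_errors=True)
```

A call might look like `gate_patch("/path/to/repo", candidate_diff, ["pytest", "-q"])`, with the diff coming from whichever model produced the fix.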

New Defensive Doctrine: Agentic Red Teaming

In the face of the threat from AI, the traditional approach to cybersecurity must change. According to Joyce, organizations must become "exceptional" at the security basics while simultaneously adopting the tools used by the adversary. A key recommendation is the implementation of so-called **agentic red teaming**: using autonomous AI agents to continuously and proactively attack one's own infrastructure in order to detect configuration errors and vulnerabilities before hackers do. "You will be the target of red teaming, whether you pay for it or not," Joyce stated. The only difference is who receives the results report first: you, or Chinese intelligence.

Using AI to detect anomalies in user behavior and network traffic patterns will become the standard, as only a machine is capable of catching, in real time, the subtle traces of another bot masquerading as legitimate administrative tooling.

In my assessment, the **Anthropic** report and Joyce's comments mark the end of an era in which cybersecurity relied on "human genius" on both sides of the fight. We are entering a war of attrition over computing resources: the advantage will go to whoever has more tokens and better-optimized agents. Although on a macro scale this technology will ultimately harden the internet, the next two years will be a period of unprecedented destabilization, in which every organization with source code online is under permanent scanning by algorithms looking for the slightest crack in the armor. The future of infosec is not just better firewalls; it is, above all, autonomous immune systems that must evolve faster than LLM-powered viruses.
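The behavioral baselining Joyce expects to become standard comes down to comparing each account's current activity against its own history and flagging sharp deviations. Below is a deliberately simple, hypothetical sketch using only the Python standard library; the function name and threshold are illustrative, and real deployments would use far richer features than raw command counts.

```python
# Hypothetical baseline sketch: flag accounts whose admin-tool usage in the
# current window deviates sharply from their own history (simple z-score).
from statistics import mean, pstdev

def flag_anomalies(history: dict[str, list[int]],
                   current: dict[str, int],
                   threshold: float = 3.0) -> list[str]:
    """history: per-account daily counts of admin-tool invocations;
    current: the count observed in the window being checked."""
    flagged = []
    for account, counts in history.items():
        if len(counts) < 7:
            continue                      # not enough baseline to judge
        mu, sigma = mean(counts), pstdev(counts)
        sigma = sigma or 1.0              # avoid divide-by-zero on flat baselines
        z = (current.get(account, 0) - mu) / sigma
        if z > threshold:
            flagged.append(account)
    return flagged
```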
Source: The Register