Claude attacks were 'Rorschach test' for infosec community, scaring former NSA boss

Photo: The Register
Chinese government-backed hackers have managed to automate the full attack chain using AI agents, a development that Rob Joyce, former head of NSA cyber operations, described as a "Rorschach test" for the security industry. Speaking at the RSAC 2026 conference, Joyce acknowledged that the infosec community is divided on the gravity of the situation, but said he considers it a breakthrough, and a terrifying one.

Using Anthropic's Claude model, the attackers built a framework that autonomously scanned infrastructure, hunted for vulnerabilities, wrote exploit code, and, after a breach, exfiltrated data and moved laterally within the networks of roughly 30 key organizations. The greatest threat lies in the fact that machines, unlike humans, never tire of code analysis, allowing them to find vulnerabilities at "machine speed." Joyce warns that, thanks to the modularity of modern LLMs, these tools will evolve exponentially.

For users and organizations, this means a drastic reduction in the time between the emergence of a new vulnerability and its mass exploitation. Although projects such as Google's Big Sleep or OpenAI Codex assist with automated software patching (e.g., of OpenSSL), attackers hold the advantage in the short term. The effectiveness of AI in real-world attacks shows that the era of manual hacking is ending, and digital security is becoming an arms race between autonomous algorithms.
Attack Chain Automation: From Scanning to Exploit
The Chinese spies mentioned in the **Anthropic** report did not limit themselves to simple phishing email generation. They built an advanced framework based on so-called **agentic AI**, which broke the typical attack chain down into micro-tasks. **Claude** models were harnessed to map attack surfaces, scan target organizations' infrastructure, and, most disturbingly, to independently search for vulnerabilities and write code to exploit them. This is work that previously took highly skilled operators hours or even days.

Once the AI agents penetrated a network, their role did not end there. The bots were able to find and abuse valid credentials, escalate privileges, and move laterally through the victim's infrastructure. In several recorded cases, the agents independently located and exfiltrated sensitive data. Joyce emphasizes that machines have one key advantage over humans: patience. A machine does not get tired of reading code; it can analyze it indefinitely until it finds that one critical flaw.

"This is not a story about AI being smarter than humans. This is a story about scale and patience. Machines can look through code over and over again until they find a vulnerability," Rob Joyce noted during his presentation at RSAC 2026.
Information Asymmetry and the Industrial Scale of Bug-Hunting
We are currently at a point where information asymmetry decisively favors attackers wielding machines. Joyce cited an analysis by researcher **Sean Heelan**, who tested project **Aardvark** (now known as **OpenAI Codex**). The conclusion is simple and unsettling: the more "tokens" (i.e., computing power and AI resources) you invest in hunting for bugs, the more you will find, and the better their quality. At a certain point, the limiting factor is no longer the model's intelligence but simply the operational budget.

This approach leads to the industrialization of hacking. Given that Chinese espionage groups ordered attacks on approximately 30 critical organizations using **Claude**, and some of those attacks ended in full success, the barrier to entry for complex APT-style operations has dropped drastically. The modularity of modern LLMs lets criminals update their tools instantly, making the pace of threat evolution exponential. Just a year ago, Joyce predicted that AI would "soon" become a great exploit coder; today, before the **RSAC 2026** audience, he announced that that moment has already arrived.

- Scale: AI can analyze thousands of repositories simultaneously, looking for niche vulnerabilities.
- Modularity: Attackers can easily swap LLM modules in their frameworks, adapting to new security measures.
- Costs: The cost of finding a zero-day vulnerability drops drastically as the performance of models like Claude Code Security or OpenAI Codex increases.
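The "more tokens, more bugs" dynamic described above can be illustrated with a toy model. The sketch below is not from the article or from Heelan's analysis; all parameters (pool size, tokens per probe, per-probe hit rate) are invented for illustration. It treats bug-hunting as repeated independent probes over a finite pool of latent vulnerabilities, which yields exactly the behavior Joyce describes: findings rise with budget, and the binding constraint becomes spend, not intelligence.

```python
import math

def expected_findings(token_budget, pool_size=100,
                      tokens_per_probe=50_000, hit_rate=0.0005):
    """Toy model: each probe costs `tokens_per_probe` tokens and
    uncovers any given latent bug with probability `hit_rate`.
    Expected unique findings follow a saturating curve: more budget,
    more bugs, with diminishing returns near pool exhaustion."""
    probes = token_budget / tokens_per_probe
    # P(a specific bug is found at least once) = 1 - (1 - hit_rate)^probes
    return pool_size * (1 - (1 - hit_rate) ** probes)

for budget in (1e6, 1e7, 1e8, 1e9):
    print(f"{budget:>13,.0f} tokens -> ~{expected_findings(budget):5.1f} bugs")
```

Under these made-up parameters, findings scale almost linearly with budget at first and flatten only as the hypothetical bug pool is exhausted, which is why "just spend more" remains a winning strategy for well-funded attackers.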
