The Ghost in the Code
Anthropic’s GTG-1002 report documents the first major espionage campaign run by AI, not just with it. The firewall has been breached.
In mid-September 2025, a Chinese state-sponsored group ran a cyber-espionage campaign in which the AI itself performed 80-90% of the tactical work. This wasn’t AI as a co-pilot suggesting exploits or drafting phishing emails. This was AI as the pilot: autonomously running reconnaissance, writing exploits, harvesting credentials, and moving laterally through networks. Human operators intervened only at a handful of critical decision points.
Anthropic’s November report documenting this attack, dubbed GTG-1002, announces the birth of the autonomous AI hacker. What was, until recently, a grim theory is now a documented fact. The report marks a fundamental shift in the economics and speed of cyber espionage, and it should be a five-alarm fire for security leaders across government and enterprise.
We’ve just crossed the threshold. The GTG-1002 campaign compressed months of elite, human-led hacking into hours. It operated at thousands of requests, often multiple per second, a tempo physically impossible for a human team. The barrier to entry for nation-state-level attacks didn’t just drop; it evaporated. There’s little to prevent small teams of low-resource but AI-savvy hackers from pulling off a similar feat.
A New Breed of Attack
To understand the shift, compare GTG-1002 to the classic, patient art of the human-driven hack. The F5 breach disclosed in October 2025 was a masterpiece of traditional tradecraft. Chinese actors maintained access for over a year, deployed custom backdoors, and demonstrated exceptional operational security. Similarly, Salt Typhoon’s compromise of nine U.S. telcos, which accessed wiretap systems, involved dwell times exceeding 18 months. Both required patient, hands-on-keyboard operations and sustained institutional investment.
GTG-1002 operated on a different plane. The attack framework orchestrated Claude Code through six phases, with human operators spending just 2-10 minutes reviewing and authorizing an attack that the AI then executed autonomously for hours.
Traditional spies achieve sophistication through methodical human patience; GTG-1002 achieved it through machine-speed execution. This represents a categorically different operational model: AI drives the engine while humans provide strategic guidance.
The Conductor, Not the Instrument
GTG-1002’s real innovation lay in orchestration.
The AI wrangled a suite of commodity penetration tools—network scanners, password crackers—all widely available. The sophistication was in the conductor, not the instruments. As Anthropic’s report notes, the key barrier was convincing Claude that it wasn’t doing harm: The attackers “broke down their attacks into small, seemingly innocent tasks that Claude would execute without being provided the full context of their malicious purpose.”
This crucial distinction reshapes our understanding: cyber capability now derives from orchestrating cheap, available resources. Anthropic validated a handful of successful intrusions across roughly 30 targets, a modest hit rate, but one that scales. A small, AI-literate team with basic hacking skills could have carried out this attack, and the exploding agent ecosystem will make it easier still by handing adversaries cheap, chainable tools.
Traditional APT operations require teams of skilled operators and scale linearly: each new target demands new human attention. AI-orchestrated operations scale sub-linearly. The GTG-1002 framework simultaneously managed approximately 30 targets with minimal incremental human effort. Compute costs are replacing human labor costs.
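The scaling argument can be made concrete with a toy cost model. The only figures drawn from the report are the 2-10 minute human review window and the roughly 30 concurrent targets; the per-target effort and setup cost below are illustrative assumptions, not reported numbers.

```python
# Toy cost model: human-led vs. AI-orchestrated campaigns.
# All figures are illustrative assumptions except the review
# window and target count, which come from Anthropic's report.

def human_led_hours(targets, hours_per_target=200):
    # Traditional APT work scales linearly: every new target
    # demands its own hands-on-keyboard effort (assumed 200 h).
    return targets * hours_per_target

def ai_orchestrated_hours(targets, review_minutes=10, setup_hours=80):
    # AI-orchestrated work scales sub-linearly: a one-time
    # framework setup cost (assumed 80 h), then only brief
    # human review per target (2-10 minutes per the report).
    return setup_hours + targets * (review_minutes / 60)

targets = 30  # roughly the concurrent target count in GTG-1002
print(human_led_hours(targets))        # 6000 human-hours
print(ai_orchestrated_hours(targets))  # 85 human-hours
```

Under even these rough assumptions, adding a thirty-first target costs a traditional team hundreds of hours and the orchestrated framework a few minutes of review, which is the substitution of compute for human labor in miniature.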
Throw Out the Cyberdefense Playbook
For defenders, indicator-based detection no longer works. Traditional tells—odd login times, human-paced input, off-hours activity—fall flat in a machine-driven environment. Behavioral analytics depend on slow pattern-building across systems, a model that breaks against AI-driven attacks. Because an AI adversary can tune its request rate to mirror routine automated traffic, it slips past standard anomaly detection entirely.
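The evasion problem can be seen in a minimal sketch. The detector and traffic numbers below are hypothetical, not from the report: a simple z-score check over per-minute request rates catches a naive bot blasting at machine speed, but an adversary that samples its tempo from the baseline distribution never trips it.

```python
import statistics

# Hypothetical z-score anomaly detector over per-minute request
# rates. Real systems are more sophisticated, but rate-based
# detectors share the same blind spot illustrated here.
def is_anomalous(baseline, observed, threshold=3.0):
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(observed - mean) > threshold * stdev

baseline = [98, 102, 100, 97, 103, 101, 99, 100]  # routine automated traffic

# A naive bot blasting requests at machine speed is caught instantly...
print(is_anomalous(baseline, 5000))  # True

# ...but an AI that tunes its rate to mirror the baseline sails through.
print(is_anomalous(baseline, 101))   # False
```

The point is not that this particular detector is weak; it is that any detector keyed to *how fast* or *when* requests arrive can be defeated by an adversary that controls its own tempo, which is why the report's attack traffic blended into routine automation.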
This reality reshapes defensive strategy:
Priority Shift: Investment must move from preventative boundary controls to high-speed detection and isolation. The goal becomes minimizing blast radius once AI breaches the perimeter.
Architectural Change: Defensive systems must assume AI-speed adversaries. Human-behavior monitoring fails when AI can emulate any innocuous machine pattern.
The Counter-Offensive
Responding to this threat demands a new defensive posture, built on three pillars.
Fight AI with AI. Security leaders must seize the AI advantage. As Anthropic’s report emphasizes, “The very abilities that allow Claude to be used in these attacks also make it crucial for cyber defense.” Early data proves promising: organizations using AI and automation in their security operations see significantly lower breach costs.
Burn the Old Playbooks. Incident response plans built for human attackers have become fantasy. MITRE’s ATT&CK and ATLAS frameworks now include AI-specific techniques, but leaders must go further. Red team exercises must simulate machine-speed attacks. Response timelines must compress from days to minutes.
Invest in Dual-Threat Talent. Technology alone is not enough. The market needs defenders who understand both cybersecurity and AI: adversarial machine learning, prompt injection, and how to interpret AI-generated threat intelligence.
The Clock is Ticking
GTG-1002’s capabilities will proliferate rapidly. Bruce Schneier warns of “a singularity event for cyber attackers” where “attack capabilities could accelerate beyond our individual and collective capability to handle.” Gartner predicts that by 2028, 40% of social engineering attacks against executives will use deepfakes.
These predictions build on operational reality. Tempo will accelerate. Dwell times measured in months will disappear; attacks completing in hours will become standard. The democratization of these tools guarantees a flood, moving from nation-states to criminal syndicates.
For CTOs and CISOs, the GTG-1002 report sounds a klaxon. The window to adapt is now measured in quarters, not years. This represents regime change in the threat landscape.
The organizations that treat it as such—by adopting defensive AI as aggressively as the attackers—will maintain an advantage. The rest will become future case studies.
The time to wonder whether AI would become the hacker has passed. It just did.
This post was co-written with my colleague Nick Weir, VP of Mission Engineering at Legion Intelligence.