Menu planning, therapy, essay writing, highly sophisticated global cyberattacks: People just keep coming up with innovative new uses for the latest AI chatbots.
An alarming new milestone was reached this week when the artificial intelligence company Anthropic announced that its flagship AI assistant Claude was used by Chinese hackers in what the company is calling the "first reported AI-orchestrated cyber espionage campaign."
According to a report released by Anthropic, in mid-September, the company detected a large-scale cyberespionage operation by a group they're calling GTG-1002, directed at "major technology corporations, financial institutions, chemical manufacturing companies, and government agencies across multiple countries."
Attacks like that are not unusual. What makes this one stand out is that 80 to 90 percent of it was carried out by AI. After human operators identified the target organizations, they used Claude to identify valuable databases within them, test for vulnerabilities, and write code to access the databases and extract valuable data. Humans were involved only at a few critical chokepoints to give the AI prompts and check its work.
Claude, like other major large language models, comes equipped with safeguards to prevent it from being used for this type of activity, but the attackers were able to "jailbreak" the program by breaking its task down into smaller, plausibly innocent parts and telling Claude they were a cybersecurity firm doing defensive testing. This raises some troubling questions about the degree to which safeguards on models like Claude and ChatGPT can be maneuvered around, particularly given concerns over how they could be put to use for developing bioweapons or other dangerous real-world materials.
Anthropic does admit that Claude at times during the operation "hallucinated credentials or claimed to have extracted secret information that was in fact publicly-available." Even state-sponsored hackers have to look out for AI making stuff up.
The report raises the concern that AI tools will make cyberattacks far easier and faster to carry out, increasing the vulnerability of everything from sensitive national security systems to ordinary citizens' bank accounts.
Still, we're not quite in complete cyberanarchy yet. The level of technical knowledge needed to get Claude to do this is still beyond the average internet troll. But experts have been warning for years now that AI models can be used to generate malicious code for scams or espionage, a phenomenon known as "vibe hacking." In February, Anthropic's competitors at OpenAI reported that they had detected malicious actors from China, Iran, North Korea, and Russia using their AI tools to assist with cyber operations.
In September, the Center for a New American Security (CNAS) published a report on the threat of AI-enabled hacking. It explained that the most time- and resource-intensive parts of most cyber operations are in their planning, reconnaissance, and tool development phases. (The attacks themselves are usually rapid.) By automating these tasks, AI can be an offensive game changer, and that appears to be exactly what took place in this attack.
Caleb Withers, the author of the CNAS report, told Vox that the announcement from Anthropic was "on trend," considering the recent advancements in AI capabilities, and that "the level of sophistication with which this can be done largely autonomously, by AI, is just going to continue to rise."
China's shadow cyber war
Anthropic says the hackers left enough clues to determine that they were Chinese, though the Chinese embassy in the United States described the charge as "smear and slander."
In some ways, this is an ironic feather in the cap for Anthropic and the US AI industry as a whole. Earlier this year, the Chinese large language model DeepSeek sent shockwaves through Washington and Silicon Valley, suggesting that despite US efforts to throttle Chinese access to the advanced semiconductor chips required to develop AI language models, China's AI progress was only slightly behind America's. So it seems at least somewhat telling that even Chinese hackers still prefer a made-in-the-USA chatbot for their cyberexploits.
There's been increasing alarm over the past year about the scale and sophistication of Chinese cyberoperations targeting the US. These include Volt Typhoon, a campaign to preemptively position state-sponsored cyber actors inside US IT systems so they are ready to carry out attacks in the event of a major crisis or conflict between the US and China, and Salt Typhoon, an espionage campaign that has targeted telecommunications companies in dozens of countries, including the communications of officials such as President Donald Trump and Vice President JD Vance during last year's presidential campaign.
Officials say the scale and sophistication of these attacks is far beyond what we've seen before. It may also be only a preview of things to come in the age of AI.