State-sponsored Chinese hackers used American AI technology to carry out cybercrimes: Anthropic

State-sponsored Chinese hackers are believed to be responsible for a widespread cyber espionage campaign that primarily relied on American artificial intelligence technology, according to a new report released by Anthropic on Thursday.

The 13-page report detailed the first documented case of a cyberattack that largely leveraged AI capabilities, with limited human input, to carry out cybercrimes. AI accounted for an estimated 80-90% of the tactical operations, leaving human operators to handle the remaining 10-20%.

The hacking operation targeted about 30 unnamed entities, some of which were successfully breached. Anthropic says the initial targets included tech companies, financial institutions, chemical manufacturing companies, and government agencies in multiple countries.

Following its own assessment, the AI startup said it has “high confidence” that the threat actor was a Chinese state-sponsored group.

“We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention,” Anthropic said in a statement outlining the report.

The firm found that the hackers exploited its “agentic” AI tool, Claude Code. Unlike purely generative AI, agentic AI refers to systems that can carry out multistep tasks autonomously, with far less human input.

“Agents are valuable for everyday work and productivity—but in the wrong hands, they can substantially increase the viability of large-scale cyberattacks,” the company explained.

“These attacks are likely to only grow in their effectiveness,” it continued. “To keep pace with this rapidly-advancing threat, we’ve expanded our detection capabilities and developed better classifiers to flag malicious activity. We’re continually working on new methods of investigating and detecting large-scale, distributed attacks like this one.”

Anthropic intends to release similar reports on a regular basis in the near future.

In response to Thursday’s report, China’s Foreign Affairs Ministry accused the AI startup of making accusations without evidence and maintained that the Chinese government opposes hacking.

Anthropic joins Microsoft and OpenAI in raising the alarm about state-sponsored actors using their technologies.

Last month, Microsoft revealed that cybercriminals in China, Iran, Russia, and North Korea have used AI to “automate phishing and create synthetic content.” And in February, OpenAI disclosed the extent of a China-linked operation that used AI models to build an AI-powered surveillance tool for monitoring social media posts in Western countries and feeding real-time reports to Chinese security services.

One of the latest high-profile cyberattacks was a data breach at the Congressional Budget Office, an incident that raised serious national security concerns for federal agencies. The CBO did not confirm who was responsible for the cyberattack last week, but reports suggest China may have been the perpetrator.

Even as Anthropic monitored the cyber espionage campaign involving its own technology, a conservative nonprofit took aim at the company, accusing it of prioritizing “woke” AI models over protecting its consumers from foreign adversaries.

“Anthropic should spend less time trying to force woke AI models onto consumers and more time protecting them from China,” Consumers’ Research Executive Director Will Hild said in a statement provided to the Washington Examiner.

“While Anthropic has been spending all of its efforts pushing woke, cult-like movements like Effective Altruism and kowtowing to left-wing radicals, they have left their platforms and consumers vulnerable to foreign adversaries like China,” Hild added. “Consumers should demand real accountability and relentless focus on security over woke social crusades.”

Consumers’ Research describes Anthropic as the “wokest” AI company, but the tech startup says it’s actively making Claude “politically even-handed” in its AI-generated responses.

In a Thursday statement, Anthropic detailed its open-source method for measuring bias in chatbots and acknowledged that xAI’s Grok and Google’s Gemini are more even-handed than Claude. xAI owner Elon Musk has accused Anthropic of inserting political ideology into Claude. The company has also faced pressure from the Trump administration, starting with the president’s July executive order banning “woke AI” within the federal government.