EXCLUSIVE — Anthropic is launching a new artificial intelligence cybersecurity initiative in a preview rollout with a select group of industry partners, aiming to test and refine the system before any broader release.
The rollout comes as concerns grow over cybersecurity tools being used in major conflicts, including the Iran war.
The effort, called Project Glasswing, uses Anthropic’s Claude Mythos frontier model to identify and exploit software vulnerabilities, a skill typically associated with human cybersecurity researchers.
Rather than making the model widely available, Anthropic is working with a select group of companies across different industries to improve defenses and better understand how to safely deploy the technology. The companies include Amazon, Apple, Cisco, CrowdStrike, the Linux Foundation, and Microsoft.
Logan Graham, who leads Anthropic’s frontier development team, told the Washington Examiner that the Claude Mythos model has existed for some time, but researchers only recently found it to be particularly adept at cybersecurity work.
“At a high level, we have a new model, but it’s not even that this moment is really about this model, per se,” Graham said. “It’s about what we’re doing with it, and how we hope to kick off a meaningful industrywide improvement in cybersecurity, preparing for a future of models that are this good at cybersecurity tasks.”
Graham said the model is capable of performing a wide range of cybersecurity tasks, including identifying and exploiting vulnerabilities in code and testing systems for weaknesses. He added that it has already found “tens of thousands” of high-risk vulnerabilities in every major operating system and web browser.
Anthropic decided against a broad release after determining the model’s capabilities could pose risks if misused. Instead, the company is partnering with organizations that maintain key digital infrastructure, including platforms and software that support global systems.
“Our assessment was that this model is the first time a model is this good that we decided to approach release in a very different way,” Graham said. “We’re working with a number of industry partners who are going to use this model to meaningfully improve their own security.”
The initiative comes as cyber operations play an increasing role in global conflict and national security, raising concerns about how AI could accelerate both offensive and defensive capabilities.
“If we are crossing the Rubicon where you can functionally automate those capabilities and make them very cheap as well, then we’re in an entirely new world,” Graham said, adding that Anthropic will share what it learns so the whole industry can benefit.
In recent weeks, an Iranian-linked hacking group claimed responsibility for a disruptive cyberattack on U.S. medical technology company Stryker, causing widespread system outages and raising concerns about the vulnerability of healthcare infrastructure.
Officials and cybersecurity experts have warned that such attacks may continue as part of the broader conflict with Iran, with proxy groups targeting civilian sectors such as healthcare to create disruption.
Anthropic says its new model is designed to address exactly those types of vulnerabilities before they can be exploited.
“The fact that cyber is a part of even active warfare, and a very common part of active warfare, I think, underscores its importance,” Graham said. “The world probably doesn’t appreciate just how much it relies on security. And our estimate is that, if not properly guardrailed and deployed, everything could change very, very fast.”
A separate Anthropic official told the Washington Examiner that the company is in discussions with federal government agencies about how the technology should be deployed to protect critical infrastructure from adversaries.
The rollout comes as Anthropic is engaged in a legal dispute with the Department of War after the department scrapped a $200 million deal with the company.
The contract was terminated over a dispute about control: Anthropic wanted a role in determining the appropriate uses of its AI software, while the Pentagon wanted to use Claude, the company’s AI model, as it saw fit. Defense Secretary Pete Hegseth labeled Anthropic a “supply chain risk” after President Donald Trump directed all federal agencies to phase out their use of the company’s services. Anthropic subsequently filed two lawsuits challenging the risk label.
The Anthropic official said the company’s relationship with the Pentagon does not diminish the value of the new cybersecurity technology. The official said discussions with the government will continue because of the critical role agencies play in both cyber defense and offense.
The official also explained that supporting a case-study-style rollout of the Claude Mythos system will help accelerate adoption further down the line.
