Security

Anthropic boasts major AI-enabled cybersecurity breakthrough

By Ray Le Maistre

Apr 8, 2026

  • AI developer says its Claude Mythos model has capabilities that could ‘reshape cybersecurity’
  • It has formed Project Glasswing with big-name partners, such as AWS and Cisco, to put Mythos through its paces in a limited release
  • At least one analyst is sceptical of the extent of the claims, believing that Mythos is being held back while major bugs are fixed

Fresh from its announcement of expanded capacity and rapidly growing revenues, major AI developer Anthropic says its new frontier large language model (LLM), Claude Mythos, has next-level cybersecurity chops. To put Mythos to the test in a controlled environment before unleashing it on the world, the company has announced the formation of Project Glasswing with a number of big-name partners.

When we say big names, we mean huge – those participating in Project Glasswing, which aims to “secure the world’s most critical software”, are Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks. You can see what Cisco has to say about the initiative in this blog.

Anthropic says it has also “extended access to a group of over 40 additional organisations that build or maintain critical software infrastructure so they can use the model to scan and secure both first-party and open-source systems” and is offering up to $100m in “usage credits for Mythos Preview” to support such efforts.

Anthropic says it formed the initiative “because of capabilities we’ve observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity. Claude Mythos Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.” Anthropic provides a great deal of insight into its claims for Mythos and its relative capabilities across multiple tests in this lengthy post on its website.

The company believes we have now reached a tipping point in terms of AI’s role in cybersecurity. “Many flaws in software go unnoticed for years because finding and exploiting them has required expertise held by only a few skilled security experts. With the latest frontier AI models, the cost, effort and level of expertise required to find and exploit software vulnerabilities have all dropped dramatically. Over the past year, AI models have become increasingly effective at reading and reasoning about code – in particular, they show a striking ability to spot vulnerabilities and work out ways to exploit them. Claude Mythos Preview demonstrates a leap in these cyber skills – the vulnerabilities it has spotted have, in some cases, survived decades of human review and millions of automated security tests, and the exploits it develops are increasingly sophisticated.”

As a result of those deep-dive abilities, Anthropic claims that Mythos Preview has “already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser.” It adds: “Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely… Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes.”

It notes: “Although the risks from AI-augmented cyberattacks are serious, there is reason for optimism: The same capabilities that make AI models dangerous in the wrong hands make them invaluable for finding and fixing flaws in important software – and for producing new software with far fewer security bugs. Project Glasswing is an important step toward giving defenders a durable advantage in the coming AI-driven era of cybersecurity.”

But because such models can be used for bad as well as good, Anthropic says it does not plan to make Claude Mythos Preview generally available. Instead, its “eventual goal is to enable our users to safely deploy Mythos-class models at scale – for cybersecurity purposes but also for the myriad other benefits that such highly capable models will bring. To do so, we need to make progress in developing cybersecurity (and other) safeguards that detect and block the model’s most dangerous outputs. We plan to launch new safeguards with an upcoming Claude Opus model, allowing us to improve and refine them with a model that does not pose the same level of risk as Mythos Preview.”

The companies involved in Project Glasswing will use Mythos Preview as part of their defensive security work, with Anthropic pledging to “share what we learn so the whole industry can benefit.”

Anthropic notes: “Project Glasswing partners will receive access to Claude Mythos Preview to find and fix vulnerabilities or weaknesses in their foundational systems – systems that represent a very large portion of the world’s shared cyberattack surface. We anticipate this work will focus on tasks like local vulnerability detection, black box testing of binaries, securing endpoints, and penetration testing of systems.” 

The company concluded: “We are hopeful that Project Glasswing can seed a larger effort across industry and the public sector, with all parties helping to address the biggest questions around the impact of powerful models on security. We invite other AI industry members to join us in helping to set the standards for the industry. In the medium term, an independent, third-party body – one that can bring together private- and public-sector organisations – might be the ideal home for continued work on these large-scale cybersecurity projects.”

But at least one industry analyst is sceptical that Mythos is all that Anthropic is making it out to be. Seasoned tech sector watcher Richard Windsor notes in his latest Radio Free Mobile blog that AI developers have repeatedly claimed in the past that their foundation models were so powerful and capable they would pose a risk to the world if made generally available, only for those models to be released just as funding rounds come into view.

“Anthropic is pumping the hype yet again by stating that its Mythos model is so good that it is too dangerous to make it generally available, and it is only allowing a few pre-vetted companies to have access to it,” writes Windsor. “My guess is that the reality is that Mythos is in beta and is being soft-launched to a few trusted partners so that the kinks and bugs can be worked out of it before it goes on general release… I think this commentary is just more of the usual hype and that Mythos will be released when it is market-ready, as there is no chance of it commanding a robot army to wipe out humanity. In fact, it is more likely to do something irretrievably stupid that harms its user through data loss or a hack, and it is this danger that Anthropic is working on fixing.”

That may be, but if the combined forces behind Project Glasswing can help iron out those kinks and bugs and move the needle on cybersecurity defence capabilities, then that might buy some time while the collective tech sector figures out the best way to prevent bad actors from using powerful AI models to bypass the world’s cyber defences.

- Ray Le Maistre, Editorial Director, TelecomTV
