Secure Testing Through Project Glasswing
Anthropic Withholds Mythos Model Over Security Risks
The AI firm limits Claude Mythos access to private testing through Project Glasswing to prevent cyber attacks.

A digital illustration of a glowing blue padlock protecting a complex web of interconnected data points, symbolizing AI cybersecurity and private data protection.
Illustration: Avantgarde News
Anthropic announced on April 12, 2026, that it will withhold the public release of its new Claude Mythos model [1]. The company stated the frontier AI is too powerful for general availability [1][2]. Mythos can reportedly identify and exploit thousands of high-severity vulnerabilities in operating systems and web browsers [1][3].

To manage these risks, the company launched Project Glasswing [1]. This initiative allows select security partners to test the model in a private, controlled setting [1][2]. The goal is to identify and patch critical software flaws before the technology can be misused [2].

Some industry experts view the move as a strategic bid to lead in the AI safety sector [3]. While the model remains private, Anthropic continues to emphasize the importance of rigorous security evaluations [2][3]. The company did not specify when or if a public version would arrive [1].
Editorial notes
Transparency note: Drafted with LLM; human-edited
- AI assisted: Yes
- Human review: Yes
- Last updated:
Risk assessment: Reviewed for sourcing quality and editorial consistency.
Sources
1. cbsnews.com: https://www.cbsnews.com/news/mythos-anthropic-ai-project-glasswing-hacker-threat/
2. businessinsider.com: https://www.businessinsider.com/anthropic-mythos-cybersecurity-concerns-what-smart-people-are-saying-ai-2026-4
3. theguardian.com: https://www.theguardian.com/technology/2026/apr/12/too-powerful-for-the-public-inside-anthropics-bid-to-win-the-ai-publicity-war
About the author
The Avantgarde News Desk covers secure AI testing, including Project Glasswing, and editorial analysis for Avantgarde News.


