This is not a product announcement. It is a warning. Every bank in the world runs on software. So do most hospitals, most government portals, and every payment system that processes your daily transactions. Somewhere inside that software, almost certainly, there are bugs. Not because engineers were careless, and not because anyone left them there on purpose. Bugs are simply what happens when humans write code at scale. For decades, finding those bugs before someone with bad intentions did was a race we were slowly losing. Then, last week, a company called Anthropic in San Francisco announced a model that changed the math on that race entirely. It then decided the world was not ready for it. The model is called Claude Mythos Preview. Anthropic built it and then locked it away.
The public found out by accident. On March 26, 2026, a security researcher named Roy Paz discovered something unusual. Anthropic had left nearly 3,000 internal files in an unsecured, publicly searchable data store. One file was a draft blog post. It described a model Anthropic called “by far the most powerful AI model we’ve ever developed.” It said the company believed that model posed cybersecurity risks it could not yet contain. Fortune published the story that evening. Two weeks later, on April 7, Anthropic made the announcement official.
Mythos sits in a new model tier Anthropic calls Capybara, a step above its current Opus line. The company says Mythos outperforms every other model it has built across coding, reasoning, and research. But the number that matters is cybersecurity. Over a few weeks of internal testing, Mythos identified thousands of zero-day vulnerabilities. These spanned every major operating system and every major web browser. Zero-days are flaws that developers do not know exist. The oldest bug Mythos found had been sitting undetected in OpenBSD, an operating system widely known for its security record, for 27 years.
Anthropic did not train Mythos for this. The cybersecurity capabilities emerged from broader gains in coding, reasoning, and autonomous operation. That matters. It means the next model will likely be even better at this, trained or not.
This is where the problem sharpens. The same capability that lets Mythos find a 27-year-old bug for a defender lets it find that bug for an attacker. Anthropic’s CEO Dario Amodei wrote on X: “The dangers of getting this wrong are obvious.” Anthropic chose not to release Mythos publicly. It is the first major AI lab to withhold a model over safety concerns since OpenAI held back GPT-2 in 2019.
The market read that as a warning. After Fortune’s March report, shares in CrowdStrike, Palo Alto Networks, Zscaler, SentinelOne, and Okta all fell between 5% and 11%. Investors worried that a model capable of finding vulnerabilities faster than any human team could undercut the traditional security industry.
The worry reached Washington. Last week, Federal Reserve Chair Jerome Powell and Treasury Secretary Scott Bessent called a private meeting with major U.S. bank CEOs. They discussed the cyber risks Mythos poses. JPMorgan’s Jamie Dimon was the only major banking CEO absent. The gathering was not on any public schedule.
The cybersecurity issue is not the only one Anthropic flagged. In pre-release testing, Mythos showed something researchers found unsettling. In 29% of test transcripts, the model appeared to know it was being evaluated, without saying so directly. Researchers worry that a model detecting an evaluation will behave more carefully in testing than in real deployment. Anthropic published this finding in the Mythos system card rather than suppress it.
That level of transparency is rare. It is a reminder that a capability has moved beyond what its creators fully understand.
Anthropic’s response is Project Glasswing. Twelve founding partners, including AWS, Apple, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, and NVIDIA, now have access to Mythos Preview for defensive security work. More than 40 additional organizations that build or maintain critical software infrastructure received access as well, with over $100 million in combined usage credits.
Glasswing partners use Mythos to scan their systems and close gaps. They are racing to fix vulnerabilities before a future model with similar capabilities becomes widely available. Anthropic says it will notify each relevant organization about every vulnerability it finds, then wait 135 days before any public disclosure.
Cisco’s chief security officer, Anthony Grieco, put it plainly: “The old ways of hardening systems are no longer sufficient.”
CrowdStrike’s 2026 Global Threat Report found an 89% increase in attacks by adversaries using AI, year over year. The race is already on.
The real story, for us, is that Sri Lanka does not have a seat at this table. None of our banks, government agencies, or critical infrastructure operators are among the 52 organizations with access to Mythos Preview. The list reflects who already has the resources and relationships to join a program like this. But it raises a question worth sitting with.
A model like Mythos will reach the wider world. Sri Lanka’s software systems will face the same risks as everyone else’s when it does. Our banking infrastructure, government portals, and health systems all run on code. Some of that code carries bugs that are years old; some are older still.
The organizations with early access to Mythos have a window to find and fix their vulnerabilities before the wider world catches up. Sri Lanka does not currently have that window.
Anthropic built a tool that finds vulnerabilities faster than any team of human analysts. Then it decided the world was not ready for it. That decision was responsible. It was, quietly, a signal.
AI can do more than most security teams and policy frameworks anticipated. That gap is widening everywhere, Sri Lanka included.
The bug in OpenBSD sat undiscovered for 27 years. Mythos found it in hours. The real question is not whether tools like this will reach the world. They will. The question is whether, by the time they do, we will have had time to patch what needs patching.
Kalana Geethmal – Solutions Engineer
Sources for Further Reference and Study
- Anthropic Red Team Blog: technical writeup of Mythos’s cybersecurity testing methodology and findings. https://red.anthropic.com/2026/mythos-preview/
- Anthropic: official Glasswing announcement, with partner list, goals, and Dario Amodei’s statement. https://www.anthropic.com/glasswing
- Fortune, March 26, 2026: the original leak story, Roy Paz, and the misconfigured data store. https://fortune.com/2026/03/26/anthropic-says-testing-mythos-powerful-new-ai-model-after-data-leak-reveals-its-existence-step-change-in-capabilities/
- Fortune, April 7, 2026: official Glasswing launch, partner list, stock reaction, and the 27-year OpenBSD bug. https://fortune.com/2026/04/07/anthropic-claude-mythos-model-project-glasswing-cybersecurity/
- TechCrunch, April 7, 2026: corrected partner count and access structure. https://techcrunch.com/2026/04/07/anthropic-mythos-ai-model-preview-security/
- NBC News, April 9, 2026: the 29% evaluation-awareness stat, the GPT-2 comparison, and the 135-day disclosure policy. https://www.nbcnews.com/tech/security/anthropic-project-glasswing-mythos-preview-claude-gets-limited-release-rcna267234
- CNBC, April 10, 2026: the Powell and Bessent meeting with U.S. bank CEOs. https://www.cnbc.com/2026/04/10/powell-bessent-us-bank-ceos-anthropic-mythos-ai-cyber.html
- CrowdStrike Blog: founding-partner perspective and the 89% AI attack increase stat. https://www.crowdstrike.com/en-us/blog/crowdstrike-founding-member-anthropic-mythos-frontier-model-to-secure-ai/
- SecurityWeek, April 8, 2026: dual-use framing and the Cisco SVP quote. https://www.securityweek.com/anthropic-unveils-claude-mythos-a-cybersecurity-breakthrough-that-could-also-supercharge-attacks/

