Anthropic Built an AI That Finds Zero-Days on Its Own — and Won’t Release It

Claude Mythos Preview autonomously discovered thousands of zero-day vulnerabilities across every major OS and browser — including a 17-year-old FreeBSD flaw. Anthropic locked it behind Project Glasswing. Here is what that means.

1000s · zero-days discovered
40+ · companies in the consortium
17 years · oldest flaw found
0 · public access

On April 7, 2026, Anthropic announced that its most powerful model to date — Claude Mythos — will not get a general public release. Instead, it launched under a restricted program called Project Glasswing, available only to a curated consortium of roughly 40 technology and cybersecurity companies. The reason: Mythos can autonomously discover zero-day vulnerabilities at a scale and speed that no human security researcher has ever matched.

This is not a marketing claim about theoretical capability. Anthropic disclosed that during internal testing, Mythos Preview independently identified thousands of previously unknown vulnerabilities across every major operating system and every major web browser. The most dramatic find was CVE-2026-4747 — a 17-year-old remote code execution flaw in FreeBSD’s NFS implementation that could grant root access to any machine running the affected service.

The 5-Second Version

Anthropic's most capable model yet, Claude Mythos, autonomously finds zero-day vulnerabilities at a scale no human team can match. Instead of releasing it, Anthropic is restricting it to roughly 40 vetted companies under Project Glasswing, for defensive work only, until unspecified new safeguards exist.

01 · What Mythos Actually Did

Most AI security research to date has involved pointing a model at known vulnerability patterns and asking it to find similar ones. Mythos is different. According to Anthropic’s disclosure, the model was given access to source code and binary analysis tools, then left to operate autonomously. It did not follow a checklist. It explored code paths, formulated hypotheses about potential weaknesses, wrote exploit code to test those hypotheses, and documented its findings — all without human direction.
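
Anthropic has not published the harness behind that loop, but the behavior it describes maps onto a familiar agent pattern: read code, hypothesize a weakness, generate an exploit, verify it in isolation, then decide where to look next. The sketch below is purely illustrative; the tool interfaces, prompts, and stopping rule are assumptions, not Anthropic's implementation.

```python
# Hypothetical sketch of the autonomous loop described above. The tool
# interfaces, prompts, and the explore -> hypothesize -> exploit -> report
# cycle are illustrative assumptions, not Anthropic's published harness.
from dataclasses import dataclass, field


@dataclass
class Finding:
    location: str      # code path the hypothesis targets
    hypothesis: str    # suspected weakness, e.g. an unchecked length field
    exploit: str       # proof-of-concept input or code the agent wrote
    confirmed: bool    # did the sandbox actually reproduce the failure?


@dataclass
class VulnHuntAgent:
    model: object        # assumed: .ask(prompt) -> str, .ask_list(prompt) -> list[str]
    code_reader: object  # assumed: .read(path) -> str (source or decompiled binary)
    sandbox: object      # assumed: .detonate(exploit) -> bool, isolated execution
    findings: list = field(default_factory=list)

    def run(self, entry_points: list[str], budget: int = 100) -> list[Finding]:
        frontier = list(entry_points)              # code paths still to explore
        while frontier and budget > 0:
            budget -= 1
            path = frontier.pop()
            source = self.code_reader.read(path)

            # 1. Formulate hypotheses about potential weaknesses in this path.
            for hypothesis in self.model.ask_list(
                    f"List plausible memory-safety or logic flaws in:\n{source}"):

                # 2. Write exploit code intended to confirm the hypothesis.
                exploit = self.model.ask(
                    f"Write a proof-of-concept that triggers: {hypothesis}")

                # 3. Test it in an isolated sandbox; only reproduced failures count.
                confirmed = self.sandbox.detonate(exploit)
                self.findings.append(Finding(path, hypothesis, exploit, confirmed))

            # 4. Let the model decide which callers and callees to examine next.
            frontier.extend(self.model.ask_list(
                f"Which functions reachable from {path} deserve the same scrutiny?"))

        # 5. Document: report only findings the sandbox actually confirmed.
        return [f for f in self.findings if f.confirmed]
```

The design point that matters is step 3: nothing becomes a finding unless the sandbox actually reproduces the failure, which is what separates an autonomous discovery pipeline from a model that merely speculates about bugs.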

The FreeBSD vulnerability made headlines because it is so striking: a remote code execution flaw that had been sitting in the NFS implementation for 17 years, missed by every human auditor, every static analysis tool, and every previous AI system. Mythos found it, wrote a working exploit, and reported the full attack chain — all in what Anthropic describes as a single autonomous session.

The implications are hard to overstate. If an AI can find zero-days this effectively, so can adversaries with access to comparable models. That asymmetry is why Anthropic chose to restrict access rather than release the model publicly.

02 · What Is Project Glasswing?

Project Glasswing is Anthropic’s answer to the question “what do you do when your AI is too dangerous to release?” Instead of a public API, Glasswing is a managed consortium where vetted companies can use Mythos Preview for defensive security work only.

The confirmed consortium members read like a who's who of tech and cybersecurity:

01 · Cloud and Infrastructure (protecting the foundations)
AWS, Google Cloud, Microsoft Azure, Broadcom — the companies that run the infrastructure everyone else depends on.

02 · Security Specialists (offense as defense)
CrowdStrike, Cisco, and other dedicated cybersecurity firms using Mythos to find vulnerabilities before attackers do.

03 · Device Manufacturers (securing the endpoints)
Apple and NVIDIA — companies whose hardware and software run on billions of devices worldwide.

04 · Financial Infrastructure (protecting the money)
JPMorgan Chase and other financial institutions whose security directly affects the global economy.

Mythos Preview is available through Google Cloud’s Vertex AI, but only to Glasswing members under strict access controls. Anthropic has been explicit that public release will not happen “until new safeguards exist” — without specifying what those safeguards would look like or when they might be ready.
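
Anthropic has not documented how Glasswing members authenticate beyond "Vertex AI with strict access controls." Anthropic models on Vertex are normally reached through the anthropic[vertex] SDK, so a member's call would plausibly look like the sketch below; the project ID, region, and model identifier are all assumptions.

```python
# Hypothetical sketch of a Glasswing member's call to Mythos Preview on
# Vertex AI. The project ID, region, and "claude-mythos-preview" model string
# are assumptions; real access is additionally gated by consortium controls.
from anthropic import AnthropicVertex  # pip install "anthropic[vertex]"

client = AnthropicVertex(
    project_id="glasswing-member-project",  # assumed GCP project enrolled in Glasswing
    region="us-east5",                      # assumed region; use whatever is enabled for you
)

message = client.messages.create(
    model="claude-mythos-preview",          # hypothetical model ID
    max_tokens=4096,
    messages=[{
        "role": "user",
        "content": "Audit this NFS request parser for memory-safety issues: <code omitted>",
    }],
)
print(message.content[0].text)
```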

03 · Why Not Just Release It?

The honest answer is that Mythos creates an asymmetric risk. In defensive hands, it finds vulnerabilities so they can be patched. In offensive hands, it finds vulnerabilities so they can be exploited. The model does not know the difference. The same capability that makes it valuable for defense makes it dangerous for offense.

Anthropic’s decision to restrict access is unprecedented in the AI industry. OpenAI, Google, and Meta have all eventually released their most capable models to the public (or at least to paying API customers). Anthropic is the first major lab to say “this model is too capable to release” and actually follow through by keeping it behind a consortium wall.

There is a legitimate debate about whether this is the right approach. Restricting access means only large, well-funded organizations get the benefit. Smaller companies and open-source security researchers — who have historically been responsible for some of the most important vulnerability discoveries — are locked out. The counterargument is that unrestricted access to a model this capable would be like publishing a master key to every system on the internet.

04 · What This Means for Everyone Else

For practitioners who are not part of the Glasswing consortium, the immediate practical impact is indirect but significant. The vulnerabilities Mythos discovers will get reported through standard CVE channels and patched by the affected vendors. Your systems will benefit from Mythos even if you never touch the model — as long as you keep your software updated.
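
In practice, the workflow for everyone outside the consortium is the ordinary one: watch the advisories, confirm exposure, patch. A minimal sketch of pulling an advisory from NIST's public NVD API, using the article's CVE-2026-4747 as the example identifier:

```python
# Minimal sketch: look up an AI-reported CVE in NIST's public NVD feed and
# print its description and severity scores. Uses CVE-2026-4747 from the
# article as the example identifier; any CVE ID works the same way.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"


def lookup_cve(cve_id: str) -> None:
    resp = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        description = next(
            (d["value"] for d in cve["descriptions"] if d["lang"] == "en"), "")
        print(cve["id"], "-", description)
        # CVSS metrics are grouped per scoring version (v2, v3.x, v4.0);
        # print the base score from whichever versions are present.
        for version, metric_list in cve.get("metrics", {}).items():
            for metric in metric_list:
                print(f"  {version}: base score {metric['cvssData']['baseScore']}")


lookup_cve("CVE-2026-4747")
```

Whatever vulnerability-management tooling you already run does the same thing at scale; the point is that Mythos-found flaws surface through the same public channels as any other CVE.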

The larger implication is that AI-driven security is no longer theoretical. We have crossed the threshold where an AI system can autonomously find vulnerabilities that human experts missed for nearly two decades. That changes the economics of both offense and defense in cybersecurity permanently.

For anyone working in federal technology or defense contracting, this is a landmark moment. The agencies that adopt AI-driven vulnerability discovery first will have a structural advantage. The ones that do not will be playing catch-up against adversaries who will.

The Verdict
Claude Mythos is the first AI model capable enough that its maker chose not to release it. Whether Anthropic’s restricted approach is the right model for the industry remains to be seen — but the capability itself is here, and the security landscape has permanently changed.

The lesson for working professionals is not “AI is scary.” It is that AI security literacy — understanding how these tools work, what they can find, and how to respond to AI-discovered vulnerabilities — is becoming a core professional skill. The organizations that build that literacy now are the ones that will be ready when models like Mythos eventually become more widely available.

AI Literacy Is a Security Skill Now

The 2-day in-person Precision AI Academy bootcamp. 5 cities. $1,490. 40 seats max. Thursday-Friday cohorts, June-October 2026.

Reserve Your Seat

Published By

Precision AI Academy

Practitioner-focused AI education · 2-day in-person bootcamp in 5 U.S. cities

Precision AI Academy publishes deep-dives on applied AI engineering for working professionals. Founded by Bo Peng (Kaggle Top 200) who leads the in-person bootcamp in Denver, NYC, Dallas, LA, and Chicago.

Kaggle Top 200 · Federal AI Practitioner · 5 U.S. Cities · Thu-Fri Cohorts