On April 7, 2026, Anthropic announced that its most powerful model to date — Claude Mythos — will not get a general public release. Instead, it launched under a restricted program called Project Glasswing, available only to a curated consortium of roughly 40 technology and cybersecurity companies. The reason: Mythos can autonomously discover zero-day vulnerabilities at a scale and speed that no human security researcher has ever matched.
This is not a marketing claim about theoretical capability. Anthropic disclosed that during internal testing, Mythos Preview independently identified thousands of previously unknown vulnerabilities across every major operating system and every major web browser. The most dramatic find was CVE-2026-4747 — a 17-year-old remote code execution flaw in FreeBSD’s NFS implementation that could grant root access to any machine running the affected service.
The 5-Second Version
- Claude Mythos Preview is Anthropic’s most advanced model. It is not available to the public.
- It autonomously found thousands of zero-day vulnerabilities across all major operating systems and browsers.
- The most notable find: a 17-year-old RCE flaw in FreeBSD (CVE-2026-4747) that grants root access via NFS.
- Project Glasswing restricts access to ~40 companies including AWS, Apple, Google, Microsoft, CrowdStrike, and NVIDIA.
- Anthropic says it will not release Mythos publicly until new safeguards exist.
What Mythos Actually Did
Most AI security research to date has involved pointing a model at known vulnerability patterns and asking it to find similar ones. Mythos is different. According to Anthropic’s disclosure, the model was given access to source code and binary analysis tools, then left to operate autonomously. It did not follow a checklist. It explored code paths, formulated hypotheses about potential weaknesses, wrote exploit code to test those hypotheses, and documented its findings — all without human direction.
The FreeBSD vulnerability made headlines for a reason: a remote code execution flaw that had sat in the NFS implementation for 17 years, missed by every human auditor, every static analysis tool, and every previous AI system. Mythos found it, wrote a working exploit, and reported the full attack chain — all in what Anthropic describes as a single autonomous session.
The implications are hard to overstate. If an AI can find zero-days this effectively, so can adversaries with access to comparable models. That asymmetry is why Anthropic chose to restrict access rather than release the model publicly.
What Is Project Glasswing
Project Glasswing is Anthropic’s answer to the question “what do you do when your AI is too dangerous to release?” Instead of a public API, Glasswing is a managed consortium where vetted companies can use Mythos Preview for defensive security work only.
The confirmed consortium members read like a who’s-who of tech and cybersecurity:
Cloud and Infrastructure
AWS, Google Cloud, Microsoft Azure, Broadcom — the companies that run the infrastructure everyone else depends on.
Security Specialists
CrowdStrike, Cisco, and other dedicated cybersecurity firms using Mythos to find vulnerabilities before attackers do.
Device Manufacturers
Apple and NVIDIA — companies whose hardware and software run on billions of devices worldwide.
Financial Infrastructure
JPMorgan Chase and other financial institutions whose security directly affects the global economy.
Mythos Preview is available through Google Cloud’s Vertex AI, but only to Glasswing members under strict access controls. Anthropic has been explicit that public release will not happen “until new safeguards exist” — without specifying what those safeguards would look like or when they might be ready.
Why Not Just Release It
The honest answer is that Mythos creates an asymmetric risk. In defensive hands, it finds vulnerabilities so they can be patched. In offensive hands, it finds vulnerabilities so they can be exploited. The model does not know the difference. The same capability that makes it valuable for defense makes it dangerous for offense.
Anthropic’s decision to restrict access is unprecedented in the AI industry. OpenAI, Google, and Meta have all eventually released their most capable models to the public (or at least to paying API customers). Anthropic is the first major lab to say “this model is too capable to release” and actually follow through by keeping it behind a consortium wall.
There is a legitimate debate about whether this is the right approach. Restricting access means only large, well-funded organizations get the benefit. Smaller companies and open-source security researchers — who have historically been responsible for some of the most important vulnerability discoveries — are locked out. The counterargument is that unrestricted access to a model this capable would be like publishing a master key to every system on the internet.
What This Means for Everyone Else
For practitioners who are not part of the Glasswing consortium, the immediate practical impact is indirect but significant. The vulnerabilities Mythos discovers will get reported through standard CVE channels and patched by the affected vendors. Your systems will benefit from Mythos even if you never touch the model — as long as you keep your software updated.
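In practice, "benefit from the patches" reduces to a version check: is the running version at or above the first release that contains the fix? Here is a minimal sketch of that comparison. The CVE ID comes from the article, but the version numbers in the advisory record are invented placeholders; real checks should consult vendor advisories or the NVD feed.

```python
# Minimal version-comparison sketch for deciding whether a host
# already carries the fix for an advisory. The advisory data is
# invented for illustration, not a real FreeBSD release mapping.

def parse_version(v: str) -> tuple:
    """Turn '13.2.1' into (13, 2, 1) so tuples compare numerically."""
    return tuple(int(part) for part in v.split("."))

def is_patched(installed: str, first_fixed: str) -> bool:
    """True if the installed version includes the fix."""
    return parse_version(installed) >= parse_version(first_fixed)

# Hypothetical advisory record (versions are placeholders).
advisory = {"cve": "CVE-2026-4747", "first_fixed": "14.1.2"}

print(is_patched("14.0.0", advisory["first_fixed"]))  # still vulnerable
print(is_patched("14.1.2", advisory["first_fixed"]))  # fix included
```

Comparing integer tuples rather than raw strings matters: as strings, "14.10.0" sorts before "14.2.0", which would misclassify a patched host as vulnerable.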
The larger implication is that AI-driven security is no longer theoretical. We have crossed the threshold where an AI system can autonomously find vulnerabilities that human experts missed for nearly two decades. That changes the economics of both offense and defense in cybersecurity permanently.
For anyone working in federal technology or defense contracting, this is a landmark moment. The agencies that adopt AI-driven vulnerability discovery first will have a structural advantage. The ones that do not will be playing catch-up against adversaries who adopted it.
The lesson for working professionals is not “AI is scary.” It is that AI security literacy — understanding how these tools work, what they can find, and how to respond to AI-discovered vulnerabilities — is becoming a core professional skill. The organizations that build that literacy now are the ones that will be ready when models like Mythos eventually become more widely available.
AI Literacy Is a Security Skill Now
The 2-day in-person Precision AI Academy bootcamp. 5 cities. $1,490. 40 seats max. Thursday-Friday cohorts, June-October 2026.
Reserve Your Seat