As Anthropic launches its most powerful AI model ever, CEO Dario Amodei confirms the company is in talks with the US government and has offered...
Anthropic CEO Dario Amodei says the company has been in conversation with US government officials, offering to help assess and defend against the risks posed by increasingly powerful AI models. The admission comes as Anthropic unveils Claude Mythos preview—a model capable enough at finding software vulnerabilities that the company won't release it widely. "We've spoken to officials across the US government and offered to work with them and collaborate to assess the risks of these models and to help defend against the risks of these models," Amodei said. He framed cybersecurity as a shared problem that no single company or agency can solve alone, adding that everything in daily life now runs on software—and securing that software is, in effect, securing society itself.

Amodei called Mythos a particularly big jump in capability. Anthropic didn't train it for cybersecurity—it was trained to be good at code. But that turned out to be enough. According to Amodei, the model is roughly on par with a professional human security researcher when it comes to identifying bugs. Where it really pulls ahead is in chaining vulnerabilities together—stringing three, four, sometimes five individually minor flaws into a single sophisticated exploit, the kind of work that would occupy a human researcher for an entire day.
Mythos found a 27-year-old OpenBSD bug; Anthropic is limiting access through a new programme called Project Glasswing
To prove the point, Anthropic aimed Mythos at open-source operating systems. In OpenBSD, it caught a bug that had gone unnoticed for 27 years—sending a couple of data packets to any server running it was enough to crash the machine. In Linux, the model uncovered privilege escalation flaws that let an unprivileged user gain full administrator access by running a binary. All the vulnerabilities were reported to the relevant maintainers and patched before Anthropic went public.
Because these capabilities cut both ways, Anthropic is keeping Mythos on a tight leash. The company is launching Project Glasswing, a partnership programme that gives early access to organisations responsible for maintaining critical software infrastructure. The idea is to put defenders ahead of attackers—let the people who write the code find the holes first. Amodei said the effort would take months, possibly years, and that no single organisation could tackle the problem alone.
Amodei's government pitch comes in the middle of a legal battle with the Pentagon over a supply chain blacklisting
The timing here is impossible to ignore. Even as Amodei extends an olive branch to Washington, Anthropic is fighting the Department of Defense in court. The Pentagon designated the company a supply chain risk in early March—a label historically reserved for foreign adversaries and never before publicly applied to an American company. The dispute traces back to a contract negotiation that fell apart last year: the DoD wanted unrestricted access to Claude for all lawful purposes, while Anthropic held firm on two narrow exceptions—fully autonomous weapons and mass domestic surveillance of Americans.
Just yesterday, a federal appeals court in Washington, D.C. sided with the government and denied Anthropic's request to temporarily block the blacklisting. A separate ruling from a San Francisco court, however, bars the Trump administration from enforcing a broader ban on Claude across other federal agencies. Acting Attorney General Todd Blanche called the appeals court result a "resounding victory for military readiness." Anthropic said it remains confident the designation will ultimately be found unlawful.
So the company finds itself in an unusual position—suing the Pentagon with one hand and offering to help the government with the other. Whether those two gestures can coexist is a question neither side has answered yet.