Supply Chain What? The NSA Is Using Anthropic’s Mythos According To Report
Two months after the Department of War declared Anthropic a “supply chain risk” and moved to sever all ties with the AI wunderkind, the National Security Agency (NSA), which falls under DoW, is using it, according to Axios.
According to the report, the nation’s top surveillance agency is using Mythos Preview, Anthropic’s most powerful model to date. It is unclear how the NSA is currently using Mythos; however, other organizations are using it primarily to scan their own environments for exploitable security vulnerabilities. Anthropic has restricted access to Mythos to around 40 organizations, saying the model’s offensive cyber capabilities are too dangerous for wider release. Axios notes further:
Anthropic only announced 12 of those organizations. One source said the NSA was among the unnamed agencies with access.
The NSA’s counterparts in the U.K. have said they have access to the model through the country’s AI Security Institute.
On Friday, Anthropic CEO Dario Amodei met White House chief of staff Susie Wiles and Treasury Secretary Scott Bessent to discuss deploying Mythos within the government, as well as Anthropic’s wider plans and security practices.
As we noted late last week, the White House has directed federal agencies to begin using Mythos. So the Pentagon, er, Department of War, has egg (or an egg-like substance) on its face, having cut ties after Anthropic demanded oversight over the model’s use in military operations and domestic surveillance.
From “Supply-Chain Risk” to Strategic Asset
The government’s relationship with Anthropic had been icy for months. As we noted in February, the Pentagon threatened to blacklist the company as a “supply-chain risk” after Anthropic refused to strip certain ethical guardrails from its models for military use. That standoff escalated in March when Anthropic sued the Pentagon over the designation, as detailed in ZeroHedge’s coverage of the lawsuit.
That said, the Pentagon’s “supply-chain risk” label was always narrow in scope: a DoD-specific action triggered by that refusal to remove guardrails for unrestricted military and offensive-use applications. The designation threatened to block Anthropic technology from defense contracts and classified work, and it led directly to Anthropic’s lawsuit against the Pentagon.
The White House’s OMB memo changes almost nothing on paper for that designation. The Pentagon has not withdrawn it, the lawsuit is still active, and DoD contractors remain restricted from using Claude models (including Mythos) in offensive or surveillance contexts.
Just days ago, the U.S. Treasury was rushing to gain access to Mythos after internal warnings that the model could “hack every major system.” Senior Treasury and Federal Reserve officials had summoned CEOs of the nation’s largest banks to Washington, warning them that the financial system’s exposure to AI-powered attacks had become existential. Behind closed doors, federal agencies – including the Commerce Department’s Center for AI Standards and Innovation – had already begun quiet red-teaming of Mythos. Anthropic co-founder and president Daniela Amodei confirmed the company had briefed the administration early, telling reporters simply: “The government has to know about this stuff.”
Tyler Durden
Mon, 04/20/2026 – 14:00