
Pentagon Threatens To Blacklist Anthropic As ‘Supply Chain Risk’ Over Guardrails On Military Use


The Pentagon is reportedly about to cut ties with Anthropic, maker of Claude, which is already embedded in classified systems.
The company insists on implementing guardrails over how the US military can use Claude – specifically regarding mass surveillance and autonomous weapons – after the model was used in the Maduro raid without the company's knowledge.
The Pentagon is now calling Claude a threat to national security.
Some are accusing the overwhelmingly left-leaning company of trying to undermine the Trump administration, while Elon Musk says Claude ‘hates whites, Asians, heterosexuals, and men.’

Defense Secretary Pete Hegseth is reportedly “close” to cutting business ties with Anthropic and designating the firm a supply chain risk – a penalty typically reserved for foreign adversaries, a senior Pentagon official told Axios.

Anthropic’s flagship model, Claude, is already embedded in the military’s classified systems – however, the company’s CEO has been pushing for guardrails grounded in ethical concerns in areas the government sees as urgent national security needs.

If Anthropic is designated a supply chain risk, any company that wants to do business with the U.S. military would have to certify it does not use Anthropic’s AI – effectively blacklisting the firm from large swaths of the defense ecosystem.

“It will be an enormous pain in the ass to disentangle,” the senior official told Axios. “And we are going to make sure they pay a price for forcing our hand.”

Chief Pentagon spokesman Sean Parnell confirmed the review, framing it as a matter of national security.

“Our nation requires that our partners be willing to help our warfighters win in any fight,” Parnell said. “Ultimately, this is about our troops and the safety of the American people.”

Claude was notably used during the January operation targeting Venezuelan leader Nicolás Maduro, highlighting how deeply embedded the software already is within U.S. defense operations. As Axios noted on Saturday: 

 The tensions came to a head recently over the military’s use of Claude in the operation to capture Venezuela’s Nicolás Maduro, through Anthropic’s partnership with AI software firm Palantir.

According to the senior official, an executive at Anthropic reached out to an executive at Palantir to ask whether Claude had been used in the raid.
“It was raised in such a way to imply that they might disapprove of their software being used, because obviously there was kinetic fire during that raid, people were shot,” the official said.

Since then, Pentagon officials and Anthropic executives have been locked in contentious negotiations over how the military may use the AI, particularly in surveillance, intelligence collection, and weapons development.

Anthropic CEO Dario Amodei at the World Economic Forum in Davos in January 2026. Photo: Krisztian Bocsi/Bloomberg via Getty Images

Anthropic CEO Dario Amodei has pushed for guardrails to prevent mass surveillance of Americans or the use of AI in fully autonomous weapons systems without human involvement; the Pentagon, however, says those restrictions are unworkable. Anthropic’s own Acceptable Use Policy (AUP) explicitly prohibits the use of Claude for: 

The design or use of weapons
Domestic surveillance
Facilitating violence or malicious cyber operations

These restrictions are not waived for military or government users unless the contract includes specific safeguards that Anthropic judges adequate. Defense officials, however, insist that military AI tools must be available for “all lawful purposes,” arguing that real-world operations are riddled with gray areas that rigid rules cannot anticipate. The same standard is being demanded of other major AI labs, including OpenAI, Google, and xAI.

One source familiar with the talks said senior defense officials had grown increasingly frustrated with Anthropic – and seized the opportunity to escalate the dispute publicly.

Musk piles on – ‘evil’ and ‘misanthropic’

As the Pentagon showdown escalated, Anthropic also found itself under fire from another powerful adversary – Elon Musk.

Earlier this month, Musk launched a blistering public attack after the company announced a massive $30 billion funding round valuing it at roughly $380 billion. Musk labeled the company’s AI “evil” and “misanthropic,” accusing Claude of ideological bias and hostility toward certain demographic groups – of “hating Whites, Asians, heterosexuals, and men” in its outputs.

Musk – whose own company xAI competes directly with Anthropic – mocked the firm’s name, suggesting that a company branded as Anthropic had paradoxically become anti-human.

Your AI hates Whites & Asians, especially Chinese, heterosexuals and men.

This is misanthropic and evil. Fix it.

Frankly, I don’t think there is anything you can do to escape the inevitable irony of Anthropic ending up being Misanthropic. You were doomed to this fate when you…

— Elon Musk (@elonmusk) February 12, 2026

That said, in January Anthropic cut off xAI’s access to Claude models, which xAI engineers had been using via the Cursor coding tool to speed up internal work. Anthropic enforces a strict policy against using its models to build or train competing products (it had done the same to OpenAI earlier). xAI co-founder Tony Wu, who recently left the company, sent an internal note acknowledging the productivity hit but saying it would motivate xAI to build better tools, while Musk later called the cutoff “not good for their karma.”

Musk’s beef isn’t baseless; tests and user reports show Claude often declines queries that could be seen as offensive or non-inclusive (e.g., jokes about certain demographics, historical hypotheticals).

To check for the inner woke programming of Claude and Gemini.

In earlier tests where a hospital triage was used as an example. Claude valued the life of a white man as 1/23 important as a black woman. (I.e. one black woman is more valuable than 23 white men)

— Edward Northman (@EdwardNorthman) December 10, 2025

Musk has positioned xAI’s Grok as a less restricted, “truth-seeking” alternative to what he and allies describe as overly constrained or ideologically filtered models. Anthropic, by contrast, has built its reputation around “constitutional AI” – a framework designed to impose ethical limits on how its systems behave.

High stakes, limited alternatives

Designating Anthropic a supply chain risk would force defense contractors to rip Claude out of their internal workflows – a massive compliance headache given the company’s reach. Anthropic recently said eight of the ten largest U.S. companies already use its technology.

The Pentagon contract at risk is valued at up to $200 million – small compared to Anthropic’s reported $14 billion in annual revenue, but symbolically enormous.

Complicating matters, a senior administration official acknowledged that competing AI models are still “just behind” Claude when it comes to specialized government and classified applications, making an abrupt transition risky.

Still, Pentagon officials appear confident that other AI providers will ultimately agree to the “all lawful use” standard, even as sources close to the negotiations say much remains unsettled.

Tyler Durden
Mon, 02/16/2026 – 11:15
