President Donald Trump on Friday ordered every federal agency to immediately stop using Anthropic’s AI technology, escalating a simmering dispute over whether private AI safety rules can limit military use once Washington becomes the customer.
The directive, described by the White House as effective immediately, targets Anthropic and its Claude family of AI models.
Agencies already using the tools would be given a limited wind-down period to transition away, according to officials familiar with the guidance.
The administration framed the move as a national security and procurement decision, arguing that the government cannot depend on a supplier that refuses to support the full range of lawful defense activity.
In a Truth Social post, Trump accused Anthropic of imposing political ideology on defense planning and warned of legal consequences if the company fails to cooperate during the phase-out.
“The Left-wing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY,” Trump said in the post.
“Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology,” the US President added.
What did Anthropic refuse?
At the center of the fight is a deceptively simple phrase: “all lawful purposes.”
Defense officials, according to people briefed on the negotiations, wanted Anthropic to sign terms that would permit broad military applications, with minimal carve-outs.
Anthropic resisted, arguing that “lawful” can still include uses a company may consider dangerous at scale, such as autonomous weapons decisioning, mass surveillance, or targeting workflows without meaningful human oversight.
That stance is not just philosophical.
It goes to governance: who sets the rules for an AI model once it is deployed inside government systems?
For the Pentagon, the concern is operational: mission-critical tools cannot be subject to restrictions it does not control. For Anthropic, the concern is precedent: if it relaxes restrictions under pressure in one jurisdiction, it becomes harder to enforce its "red lines" anywhere.
The Pentagon deadline and the pressure tactics
The confrontation intensified this week after Defense Secretary Pete Hegseth privately set a deadline for Anthropic to accept revised terms or face a designation as a supply-chain risk, according to people familiar with the matter.
That label, if applied, would effectively wall the company off from future federal contracting, even beyond the current dispute.
Officials also discussed extraordinary legal options to compel cooperation, including the Defense Production Act, a Cold War-era law that can force prioritized production for national needs.
Using it to override an AI vendor’s safety conditions would be a major expansion of how the statute is understood in the tech sector.
What the ban means for markets and AI policy
For Anthropic, the immediate hit is commercial and strategic: federal work is sticky, high-value, and reputationally important.
For its backers and cloud partners, it complicates the pathway to government-scale deployment, which has become a key growth narrative for top AI labs.
For the broader industry, the signal is sharper: Washington is asserting that national security buyers will not accept vendor-imposed ethics constraints that limit mission flexibility.
The next question is whether other AI leaders hold similar lines, or quietly adjust policies to avoid becoming the next test case.
The post Trump says Anthropic put 'American lives at risk,' orders immediate halt appeared first on Invezz