Donald Trump has told every federal agency to stop using Anthropic’s AI tools right away, turning a contract fight into a big national security story. Departments now have six months to get Anthropic’s tech out of their systems, especially in sensitive military and intelligence work.
This move throws Anthropic—one of the top AI players in the US—right into the middle of the debate on how (and if) advanced AI should be used in defence.
Pentagon and Anthropic clash over access
The real issue is the standoff between the Pentagon and Anthropic over who controls access to Claude, Anthropic's flagship AI. Defence officials wanted less restricted access for special missions. Anthropic pushed back, saying it’s happy to work with the government but won’t allow its AI to power domestic mass surveillance or fully autonomous weapons without real human oversight.
After Anthropic stood its ground, defence officials called the company a "supply chain risk to national security". That's a big deal. This kind of label can block a company from government contracts and scare off other contractors from using its tech.
Six months to make the switch
Now, agencies have half a year to pull Anthropic’s AI out of classified networks, military operations, and intelligence workflows. If Claude is built deep into defence systems, switching to a new provider could get messy. Still, Anthropic says its regular API services and consumer products won’t be affected.
Anthropic prepares for a legal fight
Anthropic is not backing down quietly. The company has signalled that it is ready to take the fight to court, calling the move unprecedented for a US-based AI firm. It argues there is no clear legal basis for the decision, which could set a troubling precedent for how the government treats other AI companies.
What happens in court could reshape how the government buys artificial intelligence and draw the line between private AI rules and national security demands in the US.
AI ethics versus US government power
The episode puts a spotlight on the growing tension between AI companies and governments worldwide, not just in the US. Agencies want the newest AI for defence and intelligence, but top labs are drawing ethical lines on how their tech can be used.
We are witnessing a shift in the conversation around AI: from building and competing to hard questions about procurement power, national security, and who sets the rules for artificial intelligence in the most sensitive places.
