The Pentagon has removed Anthropic from its list of approved systems and plans to switch to Claude alternatives in six months.
Short news summary
* To date, Anthropic is the only company whose AI model *Claude* has been approved by the U.S. Department of Defense (DoD) for use in classified government systems.
* However, Anthropic refused to grant the Pentagon unrestricted rights to its technology. In response, the DoD placed the company on a "blacklist" and stopped using its services.
1. What caused the conflict
Anthropic does not allow Claude to be used for surveillance of U.S. citizens or in fully autonomous weapons systems lacking human oversight. The Pentagon, having invested $200 million in the contract, believes it should be free to control the software and apply it to any lawful purpose. The DoD's attempts to change Anthropic's position were unsuccessful, and by Friday evening the company had been added to the list of suppliers deemed a national-security risk, a designation typically reserved for firms with close ties to U.S. geopolitical adversaries.
2. Reaction and next steps
* Anthropic is prepared to challenge the decision in court, but no lawsuit has been filed yet.
* The Pentagon requires contractors to discontinue using Claude within six months and switch to solutions from other developers.
* President Donald Trump (via Truth Social) demanded an immediate halt to all Anthropic products across government agencies.
3. Possible alternatives
* OpenAI – the Pentagon is considering a transition to this company's models, although they carry similar restrictions on intelligence and military use.
* Palantir, whose government platforms integrate Claude, will also need to seek an alternative.
* Recently, Elon Musk's startup xAI signed a contract with the DoD to use Grok in classified systems. Grok may only partially replace Claude, however, since no fully equivalent model yet exists.
4. Public reaction
* Google and OpenAI backed Anthropic, launching open petitions.
* Other public groups also support the company.
Thus, the conflict between Anthropic and the U.S. Department of Defense has escalated into a complete refusal to use the company's AI technologies, forcing the government to seek alternatives and sparking an active public debate.