Pentagon threatens harsh response to Anthropic over its ban on using Claude in surveillance and autonomous weapons systems
Brief about the situation
U.S. Secretary of Defense Pete Hegseth is considering ending all business ties with Anthropic, a leading AI developer, and labeling it a "supply-chain threat." That designation would force every Department of Defense contractor to sever relations with Anthropic. Axios reported this, citing a Pentagon source.
What exactly is being discussed
Severing ties
Hegseth says that "breaking these ties will be extremely painful," but the Department intends to make Anthropic bear the cost of the forced measures.
Security principle
Sean Parnell (Pentagon spokesperson) emphasized: “We require our partners to help troops win in any battle.”
Current relationship with Anthropic
Claude is the only AI model deployed on classified U.S. military systems. It also dominates the commercial market and was used in a recent operation in Venezuela.
System requirements
The Department wants to ensure AI isn’t used for mass surveillance of Americans or for autonomous weapons. Anthropic calls such conditions “excessively restrictive” and points out “gray areas” that make them unworkable.
Negotiations with other giants
The Pentagon is also negotiating with OpenAI, Google, and xAI, demanding the right to use their models "for all lawful purposes."
Why it matters
1. Contract scale
* The Department contract is valued at $200 billion, while Anthropic’s annual revenue is about $14 billion.
2. Technological potential
* Claude has a proven reliability record and runs on classified U.S. networks; 8 of the top 10 U.S. companies are its clients.
3. Security policy
* The Pentagon believes AI can significantly expand data-collection capabilities (social media posts, concealed-carry permits) and could potentially put civilians at risk.
4. Industry impact
* A hard stance toward Anthropic sets the tone for future talks with OpenAI, Google, and xAI. While these companies have agreed to lift restrictions for unclassified military systems, their models are still not used in fully classified operations.
Reactions
- Anthropic
- Confirmed its commitment to using AI for national security and noted that Claude was the first chatbot deployed on classified networks.
- Willing to engage in “productive, good‑faith negotiations” with the Department.
- Pentagon
- Sources say it has long been dissatisfied with Anthropic and is using the opportunity to escalate the conflict.
- Will require contractors to confirm they have no ties with Anthropic as a condition of continued cooperation.
What’s next
* A decision on Anthropic’s status in the supply chain has not yet been made.
* The Pentagon intends to solidify a rule of “use for all lawful purposes” for OpenAI, Google, and xAI models, but a final decision is still pending.
Thus, the Department is considering serious action against one of the key AI companies while simultaneously trying to set new standards for safety and for control over its technology partners.