Google and OpenAI are ready to support Anthropic in the lawsuit over a conflict with the Pentagon

Brief summary of the "unreliable suppliers" case

Nature of dispute: Anthropic filed a lawsuit in federal court against the Pentagon, demanding its removal from the list of "unreliable" AI suppliers.

Participants: Support came from nearly 40 OpenAI and Google employees (including Jeff Dean, head of the Gemini project). They are acting as private individuals, not official business partners.

Basis: The court filing calls Anthropic's exclusion a "murky act of revenge" that harms the public interest.

Main arguments: The possibility of total surveillance of citizens threatens democracy, and fully autonomous weapons systems require special oversight.

Who is in court: The letter's authors describe themselves as scientists, engineers, and developers of American AI systems, and consider themselves competent to warn the authorities about the risks of military AI use.

Importance of the case: The letter is aimed not at defending a specific company but at protecting industry-wide interests: "We want the authorities to understand the potential dangers," the experts say.

Current data situation: According to them, U.S. citizens' data are currently scattered and not unified by real-time AI analysis; if that changes, the government could in theory compile dossiers on hundreds of millions of people.

Dangers of military AI use: Differences between training conditions and actual combat situations can lead to errors; AI cannot assess potential collateral damage as well as a human; and model "hallucinations" make use in weapons systems especially risky without human oversight.

Expert conclusion: The AI application areas currently proposed by the Pentagon pose a serious threat; either technical restrictions or administrative controls are needed.

Thus, Anthropic and its allies from OpenAI and Google aim not only to protect their reputation but also to set boundaries for the safe development of AI technology in the United States.
