Google and OpenAI are calling for tighter restrictions on the use of artificial intelligence in the military sector

Technological giants unite in defense against military pressure

Employees of Google and OpenAI released an open letter titled “We Will Not Be Divided.” It gathered nearly 900 signatures, including roughly one hundred from OpenAI staff and about eight hundred from Google employees. The letter calls for stricter restrictions on military use of AI.

Why tension arose
* The U.S. Department of Defense (DoD) blacklisted Anthropic’s technologies after Anthropic refused to allow their use for mass surveillance and fully autonomous weapons.

* The letter criticizes the DoD’s tactics, charging that “they’re trying to split every company….” The authors aim to foster solidarity among tech companies in the face of government pressure.

Industry-wide reaction
* Over recent months, the industry has demanded greater transparency in dealings with the government, especially regarding cloud services and AI contracts.

* Google is under criticism for alleged negotiations with the Pentagon to deploy Gemini into a secret military system.

Rights advocates
* On Friday, the group No Tech For Apartheid issued a statement that “Amazon, Google, Microsoft should reject Pentagon demands.”

* They argue these companies could be involved in mass surveillance and other unlawful practices.

* The group points to a potential deal with the Pentagon, comparing it to xAI’s Grok agreement, which allows the Department to deploy AI “in secret environments without restrictions.”

Open letter supporting Anthropic
* Hundreds of employees from various companies (OpenAI, Salesforce, Databricks, IBM, Cursor, etc.) signed an open appeal to the DoD.

* The letter requests removal of Anthropic’s “supply chain risk” status.

* It also proposes that Congress examine whether emergency powers over U.S. tech firms are warranted.

Inside Google
* Last week, more than 100 employees in Google’s AI divisions raised concerns with leadership about the company’s collaboration with the DoD.

* They demand setting “red lines” similar to those adopted by Anthropic.

* Jeff Dean, Google’s chief scientist, asserts that mass surveillance violates the Fourth Amendment and threatens freedom of expression.

* He notes the tendency for surveillance systems to abuse power in political and discriminatory contexts.

History of conflicts
* In 2018, Google faced a revolt from thousands of employees over the Maven project—a Pentagon program using AI to analyze drone video footage.

* Afterward, the company established “AI Principles” governing the use of such technologies.

* In 2024, Google fired more than 50 employees following protests over Project Nimbus, a $1.2 billion cloud contract held jointly with Amazon.

Conclusion
Employees at the tech giants continue to push for transparency and ethical AI use. Open letters, collective protests, and demands for “red lines” signal growing anxiety among professionals: they want military applications of artificial intelligence to stay within the bounds of law and human rights.
