Users abandon ChatGPT because OpenAI is collaborating with the Pentagon


Questions over cooperation with the Department of Defense

Concerned about the possible consequences of OpenAI’s partnership with the U.S. Department of Defense (DoD), a segment of American society has launched the “Cancel ChatGPT” movement. The initiative aims to support Anthropic, a company blacklisted by the Pentagon after it refused to allow its AI models to be used for certain tasks.

Why Anthropic ended up on the list

The conflict with the DoD centered on two “red lines” that the AI developer refused to cross:

1. Models must not have autonomous authority to decide on the use of weapons.
2. Models must not be used to surveil American citizens.

When Anthropic refused, the DoD terminated the contract and added the company to a blacklist, prohibiting all defense contractors from working with it.

Transition to OpenAI

After Anthropic was excluded from defense programs, the DoD turned to OpenAI. Company head Sam Altman stated that their models would not be used for mass surveillance. Government officials, however, contradicted this: they emphasized that AI would be used only in “lawful scenarios,” and the Patriot Act of 2001 still permits the collection of metadata from communication networks, even though some of its provisions have been curtailed.

Public reaction and other AI companies

The news sparked a sharply negative reaction in online communities: users who announced they were giving up ChatGPT received widespread support. It is worth noting that not all major AI vendors take the same stance on such “red lines”:

- Google previously introduced a similar ban in its internal rules but has since rescinded it.
- Microsoft allows AI use in weapons provided a human fires the shot.
- Amazon limits itself to the general principle of “responsible use” without specific details.

OpenAI’s stance and legal framework

Sam Altman reiterated the promise to observe the “red lines” on autonomous weaponry and mass surveillance but did not specify implementation mechanisms. He cited existing U.S. legislation that permits data collection on non‑American citizens for security purposes.

In effect, OpenAI offered the Pentagon a flexible interpretation of what counts as lawful, while Anthropic maintained strict control over how its technologies are applied. As a result, Anthropic’s Claude became a leader among official AI applications for Android and iOS, and also received a release on Windows 11.
