Claude is gaining a million new users daily after Anthropic’s split with the Pentagon
The United States has declared Anthropic a "Supply Chain Threat"
U.S. authorities have officially designated the startup Anthropic a “Supply Chain Threat,” citing the company’s refusal to allow its AI models to be used in ways officials deemed unethical.
In response, Anthropic called the ruling “legally baseless” and plans to challenge it in court. For now, the company’s civilian users face no consequences: they can use its products freely, and their numbers are growing by roughly a million new registrations a day.
What the “Supply Chain Threat” status means
* This is the first time such a classification has been applied to an American company.
* The U.S. Department of Defense and all contractors are prohibited from working with Anthropic on military projects.
* Citizens remain outside the scope of the restriction: they can continue to use Claude AI solutions without limits.
Rising popularity of the Claude platform
* Claude, the AI platform created by Anthropic, is experiencing a surge of new users.
* According to Anthropic’s chief product officer, Mike Krieger, more than a million new accounts are registered daily.
* While official figures are not published, experts estimate that at the beginning of 2024 about 20 million people used Claude each month.
* Many of them switched from ChatGPT, OpenAI’s popular chatbot, which has been losing users since OpenAI secured a contract with the Pentagon.
Why people support Anthropic
* The startup positions itself as an “ethical” AI provider and refuses to participate in military projects.
* This stance earns it sympathy from the general public, who see the company as an alternative to providers of “darker” technologies.
Thus, despite the government’s official supply-chain-threat designation, Anthropic continues to attract millions of users while preparing for the legal battle ahead.