Social media will soon endure mass attacks by AI agents, scientists warn
In the near future, large‑scale campaigns run by artificial intelligence (AI) bots may appear on social networks. These bots will mimic real human behavior and exploit the psychological pull of “the wisdom of the crowd” to manipulate public opinion. As a result, they could:
* Spread false information
* Pressure individual users
* Influence political processes
Thus, AI‑bots could become a new type of weapon in information wars.
How future bots work
Norwegian professor Jonas Kunst warns that AI bots will be able to:
1. Mimic people – their activity will look natural, making them hard to detect.
2. Create the illusion of a crowd – people will follow the “wisdom,” but it will actually be controlled by an unknown operator (an individual, group, political party, company, or state actor).
3. Target those who refuse to join – bots can suppress counterarguments and amplify their own narrative.
There is no precise timeline for when such bots will appear at scale, but experts believe some are already deployed. The threat is amplified by digital ecosystems already weakened by the erosion of rational discourse and by citizens’ general uncertainty about what is real.
From simple to complex bots
* Primitive bots already account for more than half of web traffic. They perform only basic tasks, such as posting generic messages, which makes them relatively easy for platform operators to detect.
* AI‑bots based on large language models will be far more sophisticated. They will be able to:
  * Adapt to specific communities.
  * Create multiple “personalities” with persistent memory and identity.
  * Self‑organize, learn, and specialize in exploiting human weaknesses.
Kunst compares them to a “self‑sufficient organism” capable of coordinating its actions without constant human intervention.
Early signs are already visible
Last year, Reddit’s administration announced legal action against researchers who had used chatbots to covertly influence the opinions of a community of some four million users. The experiment showed that AI‑generated responses were three to six times more persuasive than posts written by real people.
* The scale of an attack depends on the attacker’s computational power and the platform’s ability to resist it.
* Even a small number of agents can exert significant influence in local communities, even where new participants are initially viewed with suspicion.
What social media administrations can do
1. Strengthen authentication – require proof that a user is human (not a panacea, but it raises the cost of an attack).
2. Scan traffic in real time – detect statistical anomalies and deviations from normal behavior.
3. Create expert communities – bring together specialists and institutions to monitor attacks, respond, and raise public awareness.
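The real‑time scanning idea in step 2 can be sketched in a few lines. Below is a minimal, illustrative example, assuming the platform can compute a per‑account posting rate; the feature (posts per hour), the account names, and the z‑score threshold are all assumptions for the sketch, not a description of any real platform’s system.

```python
from statistics import mean, stdev

def flag_anomalies(rates, threshold=3.0):
    """Flag accounts whose posting rate deviates strongly from the mean.

    rates: dict mapping account id -> posts per hour (illustrative feature).
    Returns the ids whose z-score exceeds the threshold.
    """
    values = list(rates.values())
    if len(values) < 2:
        return []  # not enough data to estimate spread
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []  # all accounts behave identically
    return [acct for acct, r in rates.items()
            if abs(r - mu) / sigma > threshold]

# Hypothetical data: one account posting far more often than the rest.
sample = {f"user{i}": 2.0 for i in range(50)}
sample["bot42"] = 120.0
print(flag_anomalies(sample))  # prints ['bot42']
```

A production system would of course combine many behavioral signals (timing patterns, content similarity, network structure) rather than a single rate, but the principle is the same: model normal behavior statistically and surface the outliers for review.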
Ignoring these measures can lead to serious disruptions in elections and other important events.
Conclusion
Massive AI‑bot attacks are no longer purely theoretical. They pose a real threat to honest information exchange, democracy, and social stability. To protect themselves, platforms and users must actively implement new detection technologies and also raise the level of critical thinking in society.