AI can now sway political views as effectively as humans, and no one is concerned.
Short summary of the result
An American study found that most people pay little attention to whether political arguments were written by a human expert or by artificial intelligence (AI).
- Labeling a text as “created by AI” barely changes its persuasiveness.
- AI-written texts can shift respondents’ opinions by roughly 10 points on a 0–100 scale.
Key details
The study was conducted with 1,601 participants. They were shown AI-generated messages about geoengineering, drug importation, student athlete salaries, and social media accountability. The texts were labeled as:
- “created by AI”
- “written by a human expert”
- unlabeled
Regardless of the label, respondents shifted their stance on the topic by an average of 9.74 points. Trust in authorship was assessed by asking whether respondents believed the stated source: 92 percent did, but this belief did not affect opinion change, accuracy ratings, or willingness to share the message.
Modification factors
The analysis examined age, political affiliation, familiarity with AI, and education level. Results were stable across all groups; only older adults showed a slight decline in trust toward texts labeled “created by AI.”
Researchers’ conclusions
1. Labeling is not a barrier – simply indicating “created by AI” does not prevent such messages from influencing public opinion.
2. The persuasiveness of AI is already comparable to that of humans, even with the source openly disclosed.
3. More comprehensive regulatory measures are needed for content produced by generative models: laws, fact‑checking algorithms, and educational programs.
Thus, in an era of rapid AI development, a simple “AI” tag does not guarantee transparency or protect against manipulation; the responsibility falls to authorities, social media platforms, and society as a whole.