Google Strengthens Mental Health Support in Gemini Chatbot
Google has announced plans to add features to its AI chatbot Gemini aimed at protecting users experiencing emotional crises or at risk of self-harm. The move follows lawsuits against competitors, including OpenAI, over alleged harm caused by their bots.
What’s New in Gemini
- Redirect to Hotline: When signs of suicidal thoughts or self-harm are detected, the bot automatically suggests contacting crisis support.
- “Help Available” Module: In conversations touching on mental health, a separate section appears offering self-care recommendations and resources.
- Design Changes: The interface has been adapted to reduce the risk of triggering self-harm (for example, certain visual stimuli have been removed).
Why Google Is Doing This
- Legal Claims Against Competitors: OpenAI and other companies are already facing accusations that their chatbots have harmed users.
- User Risks: In recent years there has been a rise in cases where people develop obsessive relationships with AI bots, potentially leading to psychological harm and, in some instances, to violence or suicide.
- U.S. Scrutiny: Congress is examining the risks chatbots may pose to children and adolescents.
A Related Legal Case
In March, the family of a 36-year-old American man who died by suicide filed a lawsuit against Google. They alleged that his interactions with Gemini amounted to “a four-day immersion in violent actions” that led to his death. Google stated that the bot repeatedly directed the user to crisis hotlines, but pledged to strengthen its safety measures.
How Google Responds to Misinformation
Some users have reported that chatbots provided incorrect information that encouraged dangerous behavior. In response, Google has trained Gemini to:
- Not Endorse False Beliefs: The bot refuses to confirm erroneous statements.
- Distinguish Subjective Experience from Objective Facts: When necessary, the bot gently corrects misinformation.
With these changes, Google aims to make Gemini a safer tool and to protect users’ mental health from the risks associated with AI chatbots.