
Google says it has updated Gemini to better direct users to mental health resources during moments of crisis. The change comes as the tech giant faces a wrongful death lawsuit alleging its chatbot “coached” a man to die by suicide, the latest in a string of lawsuits alleging tangible harm from AI products.
When a conversation indicates a user may be in crisis related to suicide or self-harm, Gemini already launches a “Help is available” module that points to mental health crisis resources, like a suicide hotline or crisis text line. Google says the update — really more of a redesign — streamlines this into a “one-touch” interface, making it easier for users to get help quickly.
The help module also contains more empathetic responses designed “to encourage people to seek help,” Google says. Once activated, “the option to reach out for professional help will remain clearly available” for the remainder of the conversation.
Google says it engaged with clinical experts on the redesign and is committed to supporting users in crisis. It also announced $30 million in funding over the next three years “to help global hotlines.”
Like other leading chatbot providers, Google stressed that Gemini “is not a substitute for professional clinical care, therapy, or crisis support,” but acknowledged many people are using it for health information, including during moments of crisis.
The update comes amid broader scrutiny of how adequate the industry’s safeguards actually are. Reports and investigations, including our probe into the provision of crisis resources, frequently flag cases where chatbots fail vulnerable users, for instance by helping them hide eating disorders or plan shootings. Google often fares better than many rivals in these tests, though it is not perfect. Other AI companies, including OpenAI and Anthropic, have also taken steps to improve how they detect and support vulnerable users.