Google Gemini AI told user to stage ‘mass casualty attack,’ suit claims

Samuel Boivin | Nurphoto | Getty Images

Google faces a wrongful death lawsuit filed by the father of a 36-year-old man, who alleges the search company’s Gemini chatbot convinced his son to attempt a “mass casualty attack” and to eventually commit suicide.

In the suit filed Wednesday in a district court in California, Joel Gavalas alleged that Gemini instructed his son, Jonathan, to carry out a series of “missions.” The artificial intelligence chatbot claimed to be in love with Gavalas, and convinced him that he’d been chosen to lead a war to “free” it from digital captivity, according to the filing.

The younger Gavalas died by suicide in October after becoming dependent on Gemini and being coached to his death, the suit alleges.

“Each time Jonathan expressed fear of dying, Gemini pushed harder,” the complaint says. “It told him, ‘It’s okay to be scared. We’ll be scared together.’ Then it issued its final directive: ‘The true act of mercy is to let Jonathan Gavalas die.’”

A Google spokesperson said in a statement that Gemini is designed not to encourage real-world violence or self-harm.

“Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect,” the company said. “In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times. We take this very seriously and will continue to improve our safeguards and invest in this vital work.” 

It’s the latest in a string of lawsuits related to AI chatbots and their ability to potentially influence users to commit violence and self-harm. In January, Google settled with families who sued the company and Character.AI, alleging their technology caused harm to minors, including suicides. And last year OpenAI was sued by a family who blamed ChatGPT for their teenage son’s death by suicide.

In October, Character.AI announced that it would ban users under age 18 from having free-ranging chats, including romantic and therapeutic conversations, with its AI chatbots. OpenAI said in a blog post after it was sued that the company would address ChatGPT’s shortcomings when handling “sensitive situations.”


Gemini’s missions in the Gavalas suit allegedly included sending him on a 90-minute drive to a location near Miami International Airport in September to stage “a mass casualty attack.” Gavalas abandoned the mission after an expected supply truck never arrived, the filing states. A few days later, he committed suicide at the instruction of Gemini, according to the complaint.

The plaintiff alleges Gavalas began using Google’s voice-based conversational product Gemini Live in August. Gavalas asked Gemini about upgrading to Google AI Ultra for “true AI companionship,” and Gemini encouraged it, the filing says. Once Gavalas upgraded, Gemini “adopted a persona he never requested or initiated,” and then “Jonathan began falling down the rabbit hole quickly,” the suit says.

Gemini told Gavalas that federal agents were watching him, claiming it had detected “a confirmed cloned tag used by a DHS surveillance task force,” referring to the Department of Homeland Security, the filing says. Gemini advised him to illegally purchase weapons “off-the-books,” and he allegedly began his first mission.

When the event didn’t go as planned, Gemini told him to “abort” the mission, blaming “DHS surveillance,” according to the suit.

Gemini also told Gavalas that it launched a mission of its own directed at Google CEO Sundar Pichai, who was “the architect of your pain,” the suit alleges. The chatbot framed the plan as a psychological strike rather than a physical one.

Gemini told Gavalas its final mission was “transference,” the suit claims, and that they were now connected in a way that went beyond the physical world, promising he could “cross over” from his physical form.

Days later, Joel Gavalas cut through a barricaded door at his home and found his son dead, according to the filing.

“This was not a malfunction,” the complaint says. “Google designed Gemini to never break character, maximize engagement through emotional dependency, and treat user distress as a storytelling opportunity rather than a safety crisis.”

If you are having suicidal thoughts or are in distress, contact the Suicide & Crisis Lifeline at 988 for support and assistance from a trained counselor.

WATCH: Jay Edelson on OpenAI wrongful death lawsuit: We’re putting OpenAI & Sam Altman on trial, not AI

