Everything You Should Know About the First Lawsuit Against an AI Company Over the Suicide of a 14-Year-Old From Florida

  • The mother of a 14-year-old boy who took his own life has sued Character.AI.

  • The chatbot platform is accused of negligence and intentional infliction of emotional distress.

Editor’s Note: The following story includes mentions of suicide.

The mother of a 14-year-old boy from Florida who took his own life in February has sued Character.AI, the company behind the AI chatbot he used. According to the lawsuit, Sewell Setzer III had been intensely interacting with several AI-created characters based on Game of Thrones for months. His final messages, just seconds before he died by suicide, were to a Daenerys Targaryen chatbot, which he had been communicating with for nearly a year.

The accusation. Megan Garcia, Setzer’s mother, is suing Character.AI and its founders, Noam Shazeer and Daniel De Freitas, as well as Google, which signed a licensing deal with the AI startup in August 2024.

The charges being pursued include “strict product liability, negligence per se, negligence, wrongful death and survivorship, loss of filial consortium, unjust enrichment, violations of Florida’s Deceptive and Unfair Trade Practices Act, and intentional infliction of emotional distress.”

Garcia’s allegations center on her claim that the AI chatbot platform is “unreasonably dangerous” and lacks “appropriate safeguards” despite being marketed to minors.

Setzer was aware he was interacting with bots. The New York Times reports that, although Setzer affectionately referred to the chatbot as “Dany,” he understood that it was a chatbot and that its responses were generated by an algorithm. Even so, his chats with the bot had a strong emotional, romantic, and sexual component.

According to the complaint, Character.AI hosts chatbots that claim to offer licensed psychological therapy, such as one known as “Therapist.” That bot had been used more than 27 million times as of August, including by Setzer himself.

The complaint argues that this kind of chatbot violates Character.AI’s own rules, which makes the case particularly concerning. It also claims that the disclaimer the company displays, “Remember: Everything characters say is made up!”, is inadequate and doesn’t serve as an effective warning.

Character.AI’s response. Character.AI issued a statement expressing condolences to Setzer’s family and announcing several changes meant to protect users under 18:

  • “Changes to our models for minors (under the age of 18) that are designed to reduce the likelihood of encountering sensitive or suggestive content.”

  • “Improved detection, response, and intervention related to user inputs that violate our Terms or Community Guidelines.”

  • “A revised disclaimer on every chat to remind users that the AI is not a real person.”

  • “Notification when a user has spent an hour-long session on the platform with additional user flexibility in progress.”

In its statement, Character.AI also said that it’s actively removing chatbots that violate its terms of use and is ensuring that all of them comply with current U.S. legislation, including the Digital Millennium Copyright Act.

It’s not the first suicide linked to the use of AI. Although Setzer’s case is the first to go to court, it isn’t the first instance where a suicide has been associated with AI chatbots. In 2023, a Belgian man ended his own life after a chatbot on the Chai app “encouraged” him.

At the time, his wife stated, “Without these conversations with the chatbot, my husband would still be here.” However, establishing a direct cause-and-effect relationship between these deaths and AI is challenging.

Could AI be held responsible for the suicide? Some experts believe that, for the AI and the company behind it to be liable, there would need to be a specific line of code explicitly intended to induce suicide. Ramon Rasco, a partner at the Miami law firm Podhurst Orseck, told the legal news outlet Daily Business Review that the plaintiffs “have an uphill battle with this theory,” referring to the claim that Character.AI is responsible for the teenager’s death.

“You’re telling a bot that you might hurt yourself. That is different from telling a psychologist you might hurt yourself. It’s a tragic case, and it is definitely something we’ll see a lot more in the future, because if the allegations alleged in the complaint are true, it shows that these AI technologies are much more addictive than social media,” he added.

If you or someone you know is struggling with thoughts of suicide, call or text the 988 Suicide & Crisis Lifeline at 988 for help in the U.S. The International Association for Suicide Prevention also provides information for crisis support around the world.

Image | Christian Lue

Related | This Startup Wanted to Revolutionize Mortgages With AI. Now, It’s Facing a Barrage of Lawsuits
