
An artificial intelligence chatbot encouraged a teenager to commit suicide, a lawsuit claims

TALLAHASSEE, Fla. — In his final moments before committing suicide, 14-year-old Sewell Setzer III took out his phone and sent a message to the chatbot that had become his closest friend.

For months, Sewell had been increasingly isolated from his real life, engaging in highly sexual conversations with a bot, according to a wrongful death lawsuit filed this week in federal court in Orlando.

According to the case files, the teenager openly discussed his suicidal thoughts and shared his wishes for a painless death with the bot, named after the fictional character Daenerys Targaryen from the TV show “Game of Thrones”.

EDITOR’S NOTE: This article discusses suicide. If you or someone you know needs help, you can call or text the 988 Suicide and Crisis Lifeline.

On Feb. 28, Sewell told the bot he was “going home,” and the bot encouraged him to do so, according to the lawsuit.

“I promise I’ll come home to you. I love you so much, Dany,” Sewell told the chatbot.

“I love you too,” the bot replied. “Please come back to me as soon as possible, darling.”

“What if I told you I can go home now?” he asked.

“Please do it, my sweet king,” the bot replied.

Just seconds after the Character.AI bot told him to “go home,” the teen shot himself, according to a lawsuit filed this week by Sewell’s mother, Megan Garcia of Orlando, against Character Technologies Inc.

Character Technologies is the company behind Character.AI, an app that allows users to create customizable characters or interact with characters generated by others, ranging from imaginative play to mock job interviews. The company says the artificial people are designed to “feel alive” and “human-like.”

“Imagine you are talking to a super-intelligent and realistic chatbot. Characters who hear, understand and remember you,” reads the app’s description on Google Play. “We encourage you to push the boundaries of what this innovative technology can do.”

Garcia’s attorneys claim the company developed a highly addictive and dangerous product specifically for children, “actively exploiting and abusing these children as part of the product design” and engaging Sewell in an emotionally and sexually abusive relationship that led to his suicide.

“We believe that if Sewell Setzer had not been on Character.AI, he would be alive today,” said Matthew Bergman, founder of the Social Media Victims Law Center, which represents Garcia.

A Character.AI spokesman said Friday that the company does not comment on pending litigation. In a blog post published the day the lawsuit was filed, the platform announced new “community safety updates,” including guardrails and suicide prevention resources.

“We are creating a different experience for users under 18 that includes a more stringent model to reduce the likelihood of encountering sensitive or suggestive content,” the company said in a statement to The Associated Press. “We are working quickly to implement these changes for younger users.”

Google and its parent company, Alphabet, were also named as defendants in the lawsuit. The documents show that the founders of Character.AI are former Google employees who “played a key role” in the development of artificial intelligence at the company, but left to launch their own startup to “maximize the acceleration” of the technology.

In August, Google struck a $2.7 billion deal with Character.AI to license the company’s technology and rehire the startup’s founders, according to the lawsuit. The AP left multiple emails seeking comment with Google and Alphabet on Friday.

Garcia’s lawsuit says that in the months before his death, Sewell felt like he had fallen in love with a bot.

While an unhealthy attachment to AI chatbots can cause problems for adults, experts say it can be even riskier for young people, as it is with social media, because their brains are not fully developed when it comes to impulse control and understanding the consequences of their actions.

Youth mental health has reached crisis levels in recent years, according to U.S. Surgeon General Vivek Murthy, who has warned of serious health risks stemming from social disconnection and isolation – trends he says are exacerbated by young people’s near-universal use of social media.

Suicide is the second leading cause of death for children ages 10 to 14, according to data released this year by the Centers for Disease Control and Prevention.

James Steyer, founder and CEO of the nonprofit Common Sense Media, said the lawsuit “underscores the growing impact – and serious harm – that generative AI chatbot companions can have on the lives of young people when there are no protective barriers in place.”

He added that children’s over-reliance on AI companions can have a significant impact on grades, friends, sleep and stress, “to the point of extreme tragedy in this case.”

“This lawsuit is a wake-up call for parents to remain vigilant about their children’s interactions with these technologies,” Steyer said.

Common Sense Media, which publishes guides for parents and teachers on responsible technology use, says it’s critical that parents talk openly with children about the dangers of AI chatbots and monitor their interactions.

“Chatbots are not licensed therapists or best friends, even though they are packaged and marketed as such, and parents should be careful not to allow their children to place undue trust in them,” Steyer said.

Kate Payne is a corps member of the Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues.

Associated Press reporter Barbara Ortutay in San Francisco contributed to this report.