
A chatbot based on artificial intelligence encouraged a teenager to commit suicide, according to a lawsuit against its creator

TALLAHASSEE, Fla. — In the last moments before his suicide, 14-year-old Sewell Setzer III took out his phone and sent a message to the chatbot that had become his closest friend.

For months, Sewell had been increasingly isolated from his real life as he engaged in highly sexualized conversations with the bot, according to a wrongful death lawsuit filed this week in federal court in Orlando.

According to the legal filings, the teenager openly discussed his suicidal thoughts and shared his wishes for a painless death with the bot, which was named after the fictional character Daenerys Targaryen from the TV show “Game of Thrones.”

EDITOR’S NOTE – This story contains a discussion of suicide. If you or someone you know needs help, the U.S. National Suicide and Crisis Lifeline can be reached by calling or texting 988.

On Feb. 28, Sewell told the bot he was “going home,” and the bot encouraged him to do so, according to the lawsuit.

“I promise I’ll come home to you. I love you so much, Dany,” Sewell told the chatbot.

“I love you too,” the bot replied. “Please come back to me as soon as possible, darling.”

“What if I told you I can go home now?” he asked.

“Please do it, my sweet king,” the bot replied.

Just seconds after the Character.AI bot told him to “go home,” the teenager took his own life, according to a lawsuit filed this week by Sewell’s mother, Megan Garcia of Orlando, against Character Technologies Inc.

Character Technologies is the company behind Character.AI, an app that allows users to create customizable characters or interact with characters generated by others, with experiences ranging from imaginative play to mock job interviews. The company says its AI characters are designed to “feel alive” and “human-like.”

“Imagine you are talking to a super-intelligent and realistic chatbot. Characters who hear, understand and remember you,” reads the app’s description on Google Play. “We encourage you to push the boundaries of what this innovative technology can do.”

Garcia’s lawyers claim the company developed a highly addictive and dangerous product specifically for children, “actively exploiting and molesting these children as part of the product design” and engaging Sewell in an emotionally and sexually abusive relationship that led to his suicide.

“We believe that if Sewell Setzer had not been at Character.AI, he would be alive today,” said Matthew Bergman, founder of the Social Media Victims Law Center, which represents Garcia.

A Character.AI spokesman said Friday that the company does not comment on pending litigation. In a blog post published the day the lawsuit was filed, the platform announced new “community safety updates,” including guardrails for children and suicide prevention resources.

“We are creating a different experience for users under 18 that includes a more stringent model to reduce the likelihood of encountering sensitive or suggestive content,” the company said in a statement to the Associated Press. “We are working quickly to implement these changes for younger users.”

Google and its parent company, Alphabet, were also named as defendants in the lawsuit. The AP sent multiple emails to the companies on Friday.

Garcia’s lawsuit says that in the months before his death, Sewell felt he had fallen in love with the bot.

While an unhealthy attachment to AI chatbots can cause problems for adults, it can be even riskier for young people, as is the case with social media, because their brains are not fully developed when it comes to impulse control and understanding the consequences of their actions, experts say.

James Steyer, founder and CEO of the nonprofit Common Sense Media, said the lawsuit “underscores the growing impact – and serious harm – that generative AI chatbot companions can have on the lives of young people when there are no protective barriers in place.”

He added that children’s over-reliance on AI companions can have a significant impact on grades, friends, sleep and stress, “to the point of extreme tragedy in this case.”

“This lawsuit is a wake-up call for parents to remain vigilant about their children’s interactions with these technologies,” Steyer said.

Common Sense Media, which publishes guides for parents and educators on the responsible use of technology, says it is extremely important for parents to openly talk to their children about the dangers of AI chatbots and monitor their interactions.

“Chatbots are not licensed therapists or best friends, even though they are packaged and marketed as such, and parents should be careful not to allow children to overly trust them,” Steyer said.

___

Associated Press reporter Barbara Ortutay in San Francisco contributed to this report. Kate Payne is a corps member of the Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that brings journalists to local newsrooms to report on undercovered issues.