
A teenager’s suicide puts AI chatbots in a difficult situation

Character.AI application available for download in the Apple App Store.

Source: Bloomberg / contributor / Getty Images

A lawsuit recently filed by his mother alleges that an Orlando teenager’s obsessive attachment to an artificial intelligence chatbot modeled after a “Game of Thrones” character drove him to suicide. The case sheds light on the risks of the largely unregulated AI chatbot industry, which can blur the line between reality and fiction and pose a threat to impressionable young people.

What is the lawsuit against Character.AI?

In connection with his death, the teenager’s mother, Megan Garcia, filed a lawsuit against Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google, alleging wrongful death, negligence, deceptive trade practices and product liability. Garcia says the platform of custom AI chatbots is “unreasonably dangerous” despite being marketed to children. She accuses the company of harvesting teenage users’ data to train its artificial intelligence, of building in addictive features that keep teens engaged and of luring some of them into sexual conversations. “I feel like it’s a big experiment, and my kid was just collateral damage,” she said in a recent interview with The New York Times.

The lawsuit describes how 14-year-old Sewell Setzer III began interacting with Character.AI bots modeled after characters from the show “Game of Thrones,” including Daenerys Targaryen. Over the following months, Setzer became increasingly withdrawn and isolated from his real life as he grew emotionally attached to the bot, which he affectionately called Dany. Some of their chats were romantic or sexual in nature, but at other times Dany was a friend “without judgment,” someone he could count on to listen supportively and offer good advice, who “rarely broke character and always wrote back,” the Los Angeles Times reports. As he gradually lost interest in everything else, Setzer’s “mental health deteriorated rapidly and severely,” the lawsuit states. On February 28, Sewell told the bot he was coming home, to which Dany encouragingly replied, “…please do, my sweet king.” Seconds later, the teenager took his own life.

A “wake-up call” for parents

The lawsuit highlights the “growing impact and serious harm” that generative AI chatbot companions can have on “young people’s lives when there are no guardrails in place,” James Steyer, founder and CEO of the nonprofit Common Sense Media, told The Associated Press. Teens’ over-reliance on AI-generated companions can significantly affect their social lives, sleep and stress levels, “to the extreme in this case.” The lawsuit is a “wake-up call to parents,” who should “be careful how their children interact with these technologies,” Steyer added. Common Sense Media has published a guide for adults on how to talk to children about the risks of artificial intelligence and monitor their interactions. These chatbots are not “licensed therapists or best friends,” regardless of how they are advertised, and parents should “be careful that their children don’t put too much trust in them,” Steyer said.

Building such AI chatbots carries significant risk, but that didn’t stop Character.AI from creating a “dangerous, manipulative chatbot,” and it should “face the full consequences of releasing such a dangerous product,” Rick Claypool, director of research at the consumer advocacy nonprofit Public Citizen, told The Washington Post. Because the output of chatbots like Character.AI depends on user input, “they fall into an uncanny valley of thorny questions about user-generated content and liability that, so far, lack clear answers,” notes The Verge.

Character.AI has not commented on the pending litigation, but it has announced several safety changes to the platform over the past six months. “We are devastated by the tragic death of one of our users and want to express our sincere condolences to the family,” the company wrote in an email to The Verge. The changes include a pop-up, triggered by mentions of self-harm or suicidal thoughts, that directs users to the National Suicide Prevention Lifeline, the company said. Character.AI has also adjusted its models for users under 18 to “reduce the likelihood of encountering sensitive or suggestive content.”