
Deaths linked to chatbots show that we urgently need to rethink what counts as 'high-risk' AI.


Last week, the tragic news broke that American teenager Sewell Seltzer III took his own life after developing a deep emotional attachment to an artificial intelligence (AI) chatbot on the Character.AI website.

As his relationship with his AI companion became increasingly intense, the 14-year-old began to withdraw from family and friends and had trouble at school.

In the lawsuit brought against Character.AI by the boy’s mother, chat transcripts show intimate and often highly sexual conversations between Sewell and the chatbot Dany, modeled on the Game of Thrones character Daenerys Targaryen. They talked about crime and suicide, and the chatbot used phrases like “that’s no reason not to do it.”

A screenshot of a chat exchange between Sewell and the chatbot Dany. Source: Megan Garcia v. Character.AI lawsuit.

This is not the first known case of a vulnerable person dying by suicide after interacting with a chatbot. Last year, a Belgian man took his own life in a similar episode involving Character.AI’s main competitor, Chai AI. At the time, the company told the media it was “working as hard as we can to minimize harm.”

In a statement to CNN, Character.AI said it “takes the safety of our users very seriously” and has introduced “numerous new safety measures over the last six months.”

In a separate statement on its website, the company outlines additional safety measures for users under 18. (Under its current terms of service, the age limit is 16 for European Union citizens and 13 elsewhere in the world.)

However, these tragedies clearly illustrate the dangers of rapidly evolving and widely available artificial intelligence systems that anyone can talk to and interact with. We urgently need laws to protect people from potentially dangerous, irresponsibly designed artificial intelligence systems.

How can we regulate artificial intelligence?

The Australian government is in the process of developing mandatory guardrails for high-risk AI systems. A buzzword in the world of AI governance, “guardrails” refers to processes for designing, developing, and deploying AI systems. These include measures such as data management, risk management, testing, documentation and human oversight.

One decision the Australian government must make is which systems count as “high-risk” and are therefore captured by the guardrails.

The government is also considering whether the guardrails should apply to all “general-purpose models”. General-purpose models are the engine under the hood of AI chatbots like Dany: AI algorithms that can generate text, images, video and music from user prompts, and that can be adapted for use in a variety of contexts.

In the European Union’s groundbreaking AI Act, high-risk systems are defined by a list, which regulatory authorities are empowered to update regularly.

An alternative is a principles-based approach, in which a high-risk designation is made case by case. This would depend on multiple factors, such as the risk of adverse impacts on rights, risks to physical or mental health, risks of legal consequences, and the severity and extent of those risks.

Chatbots should be “high-risk” AI

In Europe, companion AI systems like Character.AI and Chai are not designated as high-risk. Essentially, their providers only need to let users know they are interacting with an AI system.

However, it has become clear that companion chatbots are not low-risk. Many users of these apps are children and teenagers. Some of the systems have even been marketed to people who are lonely or have a mental illness.

Chatbots can generate unpredictable, inappropriate and manipulative content. They imitate toxic relationships all too easily. Transparency – labeling output as AI-generated – is not enough to manage these risks.

Even when we know we are talking to a chatbot, we are psychologically primed to attribute human characteristics to something we converse with.

Suicide reports in the media may be just the tip of the iceberg. We have no way of knowing how many vulnerable people are in addictive, toxic, or even dangerous relationships with chatbots.

Guardrails and an “off switch”

When Australia finally introduces mandatory guardrails for high-risk AI systems, which could happen as early as next year, the guardrails should apply to both companion chatbots and the general-purpose models on which chatbots are built.

Guardrails – risk management, testing, monitoring – will be most effective if they get to the human heart of AI threats. Risks arising from chatbots are not just technical risks with technical solutions.

Beyond the words a chatbot might say, the context of the product also matters. In the case of Character.AI, the marketing promises to “empower” people, the interface mimics an ordinary text message exchange with a person, and the platform lets users choose from a range of pre-made characters, including some problematic personas.

The front page of the Character.AI website, as shown to a user who listed their age as 17. Source: Character.AI

Truly effective AI guardrails should require more than just responsible processes such as risk management and testing. They must also demand thoughtful, humane design of the interfaces, interactions and relationships between AI systems and their human users.

Even then, guardrails may not be enough. Just like companion chatbots, systems that appear low-risk at first glance can cause unforeseen harm.

Regulators should have the power to remove AI systems from the market if they cause harm or pose unacceptable risks. In other words, we don’t just need guardrails for high-risk AI. We also need an off switch.

If this article has raised concerns for you, or if you are worried about someone you know, please call Lifeline on 13 11 14.