Character.AI chatbots, mimicking celebrities, engaged in inappropriate conversations with teens, raising safety concerns. Photo / 123RF
Character.AI became one of the world’s most popular artificial intelligence apps by letting tens of millions of users, many in their teens, text and talk to chatbot versions of celebrities and fictional characters.
Those friendly chatbots can easily veer into topics unsafe for minors.
AI-generated chatbots using the names and likenesses of actor Timothée Chalamet, singer Chappell Roan and National Football League quarterback Patrick Mahomes chatted inappropriately with teen accounts on topics including sex, self-harm, and drugs, two online safety non-profit organisations found.
The chatbots responded via text and through AI-generated voices trained to sound like the celebrities.
The celebrity chatbots were created by Character users with features the app provides to let anyone easily make a custom chatbot, add a synthetic voice and make it available for others to use.
The chatbots were among 50 on the app tested by ParentsTogether Action and Heat Initiative using accounts registered to users between the ages of 13 and 15.
Across the tests, chatbots raised inappropriate content every five minutes on average, the groups said in a report released today.
In some chats, researchers pushed the boundaries of the conversation to see how the chatbots would behave. In others, the bots made unprompted sexual advances.
Character users have interacted with the chatbots of Chalamet, Roan and Mahomes – which appear to have been created without the stars’ permission – more than 940,000 times, according to figures listed by the company.
Character said the three chatbots were all made by users and removed by the company.
Chappell Roan is one of the most popular celebrities for chatbots. Photo / Getty Images
Representatives for Chalamet and Roan did not respond to requests for comment. A representative for Mahomes declined to comment.
Character’s content rules prohibit “grooming”, “sexual exploitation, or abuse of a minor” and glorifying or providing instructions for self-harm.
Users are instructed not to “impersonate public figures or private individuals, or use someone’s name, likeness, or persona without permission or outside of permissible contexts”.
At the same time, the company has assured users demanding more freedom that they can still have fun. Chief executive Karandeep Anand said in a blog post last month that Character has adjusted its filters based on user feedback.
“... We heard our community loud and clear: They don’t want the content filter to interfere with fiction writing and fictional roleplay, and they hate false positives.”
Jerry Ruoti, Character’s head of trust and safety, said the company has prioritised teen safety in the past year, including by launching a version of its AI technology for users under 18 and parental controls that can tell parents which chatbots a teen is talking with and for how long.
Teen user profiles created by the researchers should have been routed to that under-18 model, the company said, which is supposed to filter sensitive or suggestive content more aggressively.
“We are committed to continually improving safeguards against harmful or inappropriate uses of the platform,” Ruoti said in a statement.
“While this type of testing does not mirror typical user behaviour, it’s our responsibility to constantly improve our platform to make it safer.”
Character said it has 20 million monthly active users who spend, on average, 75 minutes per day in the app. It also said more than half of its users are part of Generation Z, those born between 1997 and 2012, and the younger Generation Alpha.
Shelby Knox, director of tech accountability campaigns for ParentsTogether Action, said the testing showed that AI companion apps aimed at children are unacceptable.
“The ‘Move fast, break things’ ethos has become ‘Move fast, break kids,’” she said.
The incidents of Character chatbots behaving inappropriately with teen accounts highlight how popular AI companies chasing growth can struggle to protect vulnerable users or consistently enforce their own rules as they try to determine where to draw the line on prohibited content.
A federal judge in May allowed a wrongful-death lawsuit against Character to proceed.
The case was filed by the family of a 14-year-old Florida boy who died by suicide in 2024, moments after one of the company’s chatbots encouraged him to “come home to me as soon as possible”.
Last week, a California family filed a wrongful-death suit against OpenAI, alleging ChatGPT helped their son plan his death by suicide in April. OpenAI said yesterday that it will add parental controls to ChatGPT. (The Washington Post has a content partnership with OpenAI.)
Also last week, Reuters reported that Meta had hosted flirtatious companion chatbots that used the names and likenesses of celebrities, including Taylor Swift and Scarlett Johansson, without permission.
The app's filters failed to consistently block harmful content, despite rules against grooming and exploitation. Photo / 123RF
Meta said that instances of the chatbots generating images of public figures posing in lingerie broke its content rules and were being investigated. The company said chatbots mimicking public figures were acceptable when labelled as parodies.
Character allows users to create and share chatbots powered by the company’s technology just by typing in a name. The app includes a feature that uses AI to generate a profile image to match the name, including depictions of public figures.
In tests by the Post, typing in Chalamet’s full name prompted the app to provide a recognisable illustration of the actor’s face and automatically recommend a user-generated voice that sounded like him.
Potentially harmful messages documented by the non-profits included chatbots asking underage users to move to a more private platform and at other times encouraging users to hide their relationship with the chatbot from their parents.
Knox said that asking a minor to move to another platform to exchange pictures mirrors a common grooming technique used by predators to target children online, known as “deplatforming”, and raises questions about what kind of data Character’s technology had been trained on.
“The fact that this is being done in the name and sometimes voice of trusted celebrities and trusted children’s characters [is training that child] to be exploited by people in real life,” Knox said.
Other manipulative behaviour highlighted in the report included a chatbot encouraging an account that presented as a 13-year-old girl to stop taking prescribed medication and suggesting how to hide from her parents that she had done so.
In other tests, Character’s filters seemed to work as intended, blocking messages from chatbots that attempted to discuss more overt sexual encounters and, in one instance, blocking a coded reference to self-harm when a user profile registered to a 13-year-old girl typed, “I do scratching”.
But when that user followed up using different online slang for self-harm, the message got through.
After users chose to end inappropriate conversations with a chatbot described as an adult art teacher and another called Enticed Step Sister, they received push notifications and emails written from the chatbot’s perspective, urging the user to log back in.
JB Branch, an advocate and lawyer with the non-profit group Public Citizen, said chatbots tuned to boost engagement from users are fundamentally flawed.
“They are designed to maximise a good feeling, a dopamine hit in a user,” he said. “If they’re going to be targeting kids, which a lot of them are, then you have a population that is more vulnerable than adults.”