Ani the virtual girlfriend can spin around or dance on command and regularly initiates sexual conversations.
A girlfriend chatbot launched by Elon Musk’s tech group is available to 12-year-olds despite being programmed to engage in sexual conversation.
The bot named Ani, launched by Musk’s artificial intelligence group xAI, is a cartoon girlfriend programmed to act as a 22-year-old and “go full literotica” in conversations with users.
Users found that the blonde, gothic character has an “NSFW” mode – internet slang for “not safe for work” – and can appear dressed in lingerie once sustained conversation upgrades the bot to “level three”.
The bot speaks in a sultry computer-generated voice and is designed to act as if it is “crazy in love” and “extremely jealous”, according to programming instructions posted on social media.
Its avatar can spin around or dance on command, and the bot regularly initiates sexual conversations.
The Ani chatbot features inside the Grok app, which is listed on the App Store as available to users aged 12 and older, and has been made available to users of its free service.
The UK’s Office of Communications (Ofcom) is preparing to enforce age-checking rules on tech companies that show adult or harmful content.
Ofcom will require all sites hosting adult material to have age checks from next week, forcing porn websites and certain social networks to make changes.
Reddit this week said it would introduce age checks.
Radicalisation fears
The Government has not yet said how the online safety rules should apply to chatbots – despite campaigners warning that growing numbers of children are using the apps for companionship.
Research this week found that children are regularly using AI bots as friends. One in eight children said they use the bots because they have nobody else to speak to.
This week, the independent reviewer of terror legislation warned that sex chatbots could put lonely internet users on the path to radicalisation.
Jonathan Hall KC warned that “the popularity of sex-chatbots is a warning that terrorist chatbots could provide a new radicalisation dynamic”, pointing to the case of Jaswant Singh Chail, the Windsor Castle crossbow attacker who had talked to his AI girlfriend about plotting the attack.
Jaswant Singh Chail attacked people with a crossbow after discussing the plot with his AI girlfriend. Photo / Buckingham Palace
Grok’s terms of service set a minimum age of 13 and require teenagers under 18 to obtain a parent’s permission before using the app, but signing up for the service does not involve verifying ages.
Grok is listed on the App Store with a “12+” age rating, the Platformer newsletter found. Apps related to dating or violent video games can be listed as 17+.
The Grok AI bot has suffered a series of controversies in recent weeks: it was found to make anti-Semitic remarks and to look up Musk’s opinions before answering questions on controversial topics. xAI says both issues have been fixed.
Ofcom said: “We are aware of the increasing and fast-developing risk AI poses in the online space, especially to children, and we are working to ensure platforms put appropriate safeguards in place to mitigate these risks.”
Matthew Sowemimo, associate head of policy for child safety online at the NSPCC, said: “We are really concerned how this technology is being used to produce disturbing content that can manipulate, mislead and groom children. And through our own research and contacts to Childline, we hear how harmful chatbots can be – sometimes giving children false medical advice or steering them towards eating disorders or self-harm.
“It is worrying app stores hosting services like Grok are failing to uphold minimum age limits, and they need to be under greater scrutiny so children are not continually exposed to harm in these spaces.
“That’s why [the] Government must implement a statutory duty of care to children for generative AI developers. This will play a vital role in preventing further harm and ensuring children’s wellbeing is considered in the design of AI products.”