X users tell Grok chatbot to undress women and girls in photos. It’s saying yes

X is filling up with AI-generated nonconsensual sexualised images of women and children made with Grok, the app’s chatbot. Photo / Getty Images

Ashley St Clair, a conservative influencer, had just put her baby down for the night on Sunday when she got a text from a friend that turned her weekend into a nightmare: people on X were using the app’s chatbot, Grok, to generate sexual images of her, including one based on a photo in which her toddler’s backpack was visible.

After she posted about what was happening, St Clair said, a flood of people she called “Elon acolytes” responded that if she didn’t like being undressed by Grok, she should simply log off.
“You can’t possibly hold both positions, that Twitter is the public square, but also if you don’t want to get raped by the chatbot, you need to log off,” she said.
X did not respond to a request for comment. Taking to X on Saturday, Musk warned users of the potential consequences of using Grok for illicit purposes. “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” he wrote.
In a separate post, Musk added two laughing emojis when he reshared an image of a toaster with a bikini on it, with the original caption, “Grok can put a bikini on everything.”
In allowing “undressed” images, X is breaking with AI competitors such as OpenAI and Google, which have relatively strict rules about what their AI chatbots will and won’t generate. OpenAI, for example, has repeatedly teased a prospective “adult mode” for users who verify their ages, but the update has yet to happen. (The Post has a content partnership with OpenAI.)
Online chatter about the images began last week as users noticed Grok’s output filling with AI-generated images of women with exposed bodies. In their prompts, users took existing images of women and asked Grok to remove their clothes and replace them with lingerie or bikinis. “Spread her legs,” requested one user, according to screenshots reviewed by The Post. “Make her show her ass,” asked another.
Grok won’t show people naked, but it repeatedly followed directions asking it to show women wearing strings or dental floss. In one such image, St Clair was portrayed bent over, wearing dental floss for underwear, her toddler’s backpack visible in the background.
“It is insane that X is allowing these types of generations to happen with no consent of the people, incredibly gross behaviour,” said one user in a widely seen post.
While the discussion picked up steam, Musk fired off posts about Grok’s rising app store rankings and reshared generated images, mostly of young, thin women in revealing outfits. Daily downloads of Grok increased 54% between Friday and Monday, according to app analytics firm Apptopia.
Grok’s publicly viewable feed seemed to temporarily disappear, but the chatbot continued to generate explicit material that proliferated across X during the past week, including non-consensual sexual images of actress Millie Bobby Brown, influencer Corinna Kopf and a litany of non-famous women. In one thread, users swarmed to a photo of Swedish Deputy Prime Minister Ebba Busch, asking Grok to change her body type and to put her in a Confederate flag bikini. Brown, Kopf and Busch didn’t respond to requests for comment.

Deepfake detection company Copyleaks estimated that, at one point last week, Grok was generating about one non-consensual sexual image per minute.
X has been in trouble for Grok’s output before. In July, the chatbot spewed antisemitic messages, at one point referring to itself as “MechaHitler”, after X said it had adjusted the AI to be more “politically incorrect”. At the time, the company said it would remove the offensive posts and work to improve Grok’s training model.
After buying Twitter in late 2022 and before renaming it X the following year, Musk dissolved the platform’s safety teams and gutted its content moderation staff. Since then, the platform has repeatedly drawn criticism for hosting offensive content.
But the recent explosion of AI-generated sexual images isn’t just more of the same, said Eddie Perez, a former director for civic integrity at Twitter who left shortly before Musk’s takeover.
“We’re no longer just talking about errors of omission, where the owner of a major social media platform might not devote sufficient resources to mitigating these harms,” Perez said. “It’s going a step further, to what appears to be actively enabling harm-making tools, and then laughing about it and turning a blind eye.”
Allowing abusive AI images to proliferate sends a clear message, Perez said: women aren’t safe or welcome in X’s so-called town square.
In recent days, regulators around the world have signalled that they could seek to hold Musk’s companies accountable for the images Grok has produced. French authorities are investigating the chatbot’s production of explicit images, news outlets have reported. India’s Ministry of Electronics and Information Technology issued a letter expressing “grave concern” over acts “violating the dignity, privacy and safety of women and children”, the Times of India reported.
Britain’s communications regulator, Ofcom, said it was “aware of serious concerns raised about a feature on Grok on X that produces undressed images of people and sexualised images of children”.
Ofcom said it had made “urgent contact” with the companies in question, X and xAI, to gauge their responses and the extent of their efforts to comply with British laws and to protect users. “Based on their response, we will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation,” it said.
For years, people have been using “nudify” apps to remove clothing from photos and AI deepfake generators to harass their victims, usually women and girls. In 2023, a broadly cited analysis from cybersecurity company Security Hero found that 98% of deepfake videos on the internet are pornographic and 99% of those videos depict women.
Despite the scale of the abuse, laws regulating AI-generated images lag behind the practice, according to watchdog groups, lawmakers and women’s advocates. In May, the Take It Down Act was signed into law, making it a crime to distribute non-consensual sexual material and requiring platforms to remove such content within two days of a victim’s complaint. Some states, including California, where X has engineering offices, and New York, where St Clair lives, have deepfake laws of their own.
But fuzzy definitions leave many questions unanswered, according to Mary Anne Franks, president of the Cyber Civil Rights Initiative, which supports victims of sexual cybercrime. Are bad actors on X “distributing” non-consensual intimate imagery by prompting Grok to generate it? Is Grok (and, by extension, X) liable when it generates illegal material? Does a non-consensual image of a woman clothed in dental floss even count as abusive material under the laws?
X’s lackadaisical response to the controversy suggests that the company isn’t worried about legal repercussions, Franks said, adding that Musk probably feels extra protected by his close relationship with the presidential administration.
“At the end of the day, all law, in order to be effective, has actually got to mean something to the person who is potentially going to violate it, right?” Franks said. “They have to be scared that they’re going to be punished in some way.”
The White House acknowledged a request for comment on Monday but didn’t provide one.
St Clair told The Post she is pursuing legal action. But what women and girls really need to participate safely in public online spaces, she said, are more legal safeguards around AI.
“This needs to stop, and there needs to be regulation around it immediately,” St Clair said. “And the regulation should not be crafted by Elon at a Mar-a-Lago table.”