AFP’s fact-checkers traced hundreds of such videos on Instagram, many in Hindi, that purportedly show male interviewees casually delivering misogynistic punchlines and sexualised remarks – sometimes even grabbing the women – while crowds of men gawk or laugh in the background.
Many videos racked up tens of millions of views – and some further monetised that traction by promoting an adult chat app to “make new female friends”.
The fabricated clips were so lifelike that some users in the comments questioned whether the featured women were real.
A sample of these videos analysed by the United States cybersecurity firm GetReal Security showed they were created using Google’s Veo 3 AI generator, known for hyper-realistic visuals.
‘Gendered harm’
“Misogyny that usually stayed hidden in locker room chats and groups is now being dressed up as AI visuals,” Nirali Bhatia, an India-based cyber psychologist, told AFP.
“This is part of AI-mediated gendered harm,” she said, adding that the trend was “fuelling sexism”.
The trend offers a window into an internet landscape now swamped with AI-generated memes, videos, and images that compete for attention with – and increasingly eclipse – authentic content.
“AI slop and any type of unlabelled AI-generated content slowly chips away at the little trust that remains in visual content,” GetReal Security’s Emmanuelle Saliba told AFP.
The most viral misogynistic content often relies on shock value – including Instagram and TikTok clips that Wired magazine said were generated using Veo 3 and portray black women as big-footed primates.
Videos on one popular TikTok account mockingly list what so-called gold-digging “girls gone wild” would do for money.
Women are also fodder for distressing AI-driven clickbait, with AFP’s fact-checkers tracking viral videos of a fake marine trainer named “Jessica Radcliffe” being fatally attacked by an orca during a live show at a water park.
The fabricated footage rapidly spread across platforms including TikTok, Facebook and X, sparking global outrage from users who believed the woman was real.
‘Unreal’
Last year, Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech, found 900 Instagram accounts of likely AI-generated “models” – predominantly female and typically scantily clothed.
These thirst traps cumulatively amassed 13 million followers and posted more than 200,000 images, typically monetising their reach by redirecting their audiences to commercial content-sharing platforms.
With AI fakery proliferating online, “the numbers now are undoubtedly much larger,” Mantzarlis told AFP.
“Expect more nonsense content leveraging body standards that are not just unrealistic but literally unreal,” he added.
Financially incentivised slop is becoming increasingly challenging to police as content creators – including students and stay-at-home parents around the world – turn to AI video production as gig work.
Many creators on YouTube and TikTok offer paid courses on how to monetise viral AI-generated material, often on platforms that have reduced their reliance on human fact-checkers and scaled back content moderation.
Some platforms have sought to crack down on accounts promoting slop, with YouTube recently saying that creators of “inauthentic” and “mass produced” content would be ineligible for monetisation.
“AI doesn’t invent misogyny – it just reflects and amplifies what’s already there,” AI consultant Divyendra Jadoun told AFP.
“If audiences reward this kind of content with millions of likes, the algorithms and AI creators will keep producing it. The bigger fight isn’t just technological – it’s social and cultural.”
- Agence France-Presse