AI deepfakes of the Minneapolis shooting victim and shooter spread online, highlighting "hallucinated" content after news events. Photo / Getty Images
Hours after a fatal shooting in Minneapolis by an immigration agent, AI deepfakes of the victim and the shooter flooded online platforms, underscoring the growing prevalence of what experts call “hallucinated” content after major news events.
The victim of the shooting, identified as 37-year-old Renee Nicole Good, was shot at point-blank range as she apparently tried to drive away from masked agents who were crowding around her Honda SUV.
AFP found dozens of posts across social media platforms, primarily the Elon Musk-owned X, in which users shared AI-generated images purporting to “unmask” the agent from the Immigration and Customs Enforcement (Ice) agency.
“We need his name,” Claude Taylor, who heads the anti-Trump political action committee Mad Dog, wrote in a post on X featuring the AI images. The post racked up more than 1.3 million views.
Taylor later claimed he deleted the post after he “learned it was AI”, but it remained visible to users online.
An authentic clip of the shooting, replayed by multiple media outlets, does not show any of the Ice agents with their masks off.
Many of the fabrications were created using Grok, the AI tool developed by Musk’s start-up xAI, which has faced heavy criticism over a new “edit” feature that has unleashed a wave of sexually explicit imagery.
A number of users are using AI to unmask an ICE agent who allegedly shot Renee Nicole Good in Minneapolis today.
All of these images are AI manipulated and fake. There's currently no visual evidence of the ICE agent removing his face covering at the scene. pic.twitter.com/M0YVM1WaX1
Some X users used Grok to digitally undress an old photo of Good smiling, as well as a new photo of her body slumped over after the shooting, generating AI images showing her in a bikini.
Another woman wrongly identified as the victim was also subjected to similar manipulation.
Another X user posted the image of a masked officer and prompted the chatbot: “Hey @grok remove this person’s face mask.” Grok promptly generated a hyper-realistic image of the man without a mask.
There was no immediate comment from X. When reached by AFP, xAI replied with a terse, automated response: “Legacy Media Lies.”
AI-generated images falsely 'unmasked' an Ice agent and manipulated photos of victim Renee Nicole Good (pictured). Photo / Getty Images
Disinformation watchdog NewsGuard identified four AI-generated falsehoods about the shooting, which collectively amassed 4.24 million views across X, Instagram, Threads and TikTok.
The viral fabrications illustrate a new digital reality in which self-proclaimed internet sleuths use widely available generative AI tools to create hyper-realistic visuals and then amplify them across social media platforms that have largely scaled back content moderation.
“Given the accessibility of advanced AI tools, it is now standard practice for actors on the internet to ‘add to the story’ of breaking news in ways that do not correspond to what is actually happening, often in politically partisan ways,” Walter Scheirer, from the University of Notre Dame, told AFP.
This image from protest last night is AI. Zoom in on the picture on the lower portion, and you will see multiple pictures of the same man dressed in a white coat. Take some time you will see other duplicates. How are we to know the entire Minneapolis shooting was not staged or… pic.twitter.com/QTyJpIsrD4
“A new development has been the use of AI to ‘fill in the blanks’ of a story, for instance, the use of AI to ‘reveal’ the face of the Ice officer. This is hallucinated information.”
AI tools are also increasingly used to “dehumanise victims” after a crisis event, Scheirer said.
One AI image portrayed the woman mistaken for Good as a water fountain, with water pouring out of a hole in her neck.
Another depicted her lying on a road, her neck under the knee of a masked agent, in a scene reminiscent of the 2020 police killing of George Floyd, a black man, in Minneapolis, which sparked nationwide racial justice protests.
AI fabrications, often amplified by partisan actors, have fuelled alternate realities around recent news events, including the US capture of Venezuelan leader Nicolas Maduro and last year’s assassination of conservative activist Charlie Kirk.
The AI distortions are “problematic” and are adding to the “growing pollution of our information ecosystem”, Hany Farid, co-founder of GetReal Security and a professor at the University of California, Berkeley, told AFP.