In June, DeSantis’ campaign shared an attack ad against his Republican primary opponent Donald Trump that used AI-generated images of the former president hugging infectious disease expert Anthony Fauci.
Last month the Federal Election Commission began a process to potentially regulate AI-generated deepfakes in political ads ahead of the 2024 election. Such deepfakes can include synthetic voices of political figures saying something they never said.
Democratic US Senator Amy Klobuchar, co-sponsor of pending legislation that would require disclaimers on deceptive AI-generated political ads, said Google’s announcement was a step in the right direction but that “we can’t solely rely on voluntary commitments”.
Several states have discussed or passed legislation related to deepfake technology.
Google is not banning AI outright in political advertising. Exceptions include synthetic content altered or generated in a way that is inconsequential to the claims made in the ad. AI can also be used for editing techniques such as image resizing, cropping, colour or defect correction, and background edits.
The ban will apply to election ads on Google’s own platforms, particularly YouTube, as well as on third-party websites that are part of Google’s ad display network.
Google’s action could put pressure on other platforms to follow its lead. Facebook and Instagram parent Meta doesn’t have a rule specific to AI-generated political ads but already restricts “faked, manipulated or transformed” audio and imagery used for misinformation. TikTok doesn’t allow any political ads. X, formerly Twitter, didn’t immediately reply to an emailed request for comment.