The rise of deepfake technology has sparked debate in New Zealand over whether existing legislation, particularly the Harmful Digital Communications Act 2015 (HDCA), is sufficient to address the harms caused by AI-generated content.
Act MP Laura McClure has introduced the Deepfake Digital Harm and Exploitation Bill, a private member’s bill proposing amendments to the Crimes Act and the HDCA. But is new legislation truly necessary?
The HDCA was enacted to deter and mitigate harm from digital communications, and to provide victims with swift avenues for redress. The act defines “digital communication” broadly, encompassing any form of electronic communication – including text, photos, recordings or other electronically conveyed content.
Crucially, the HDCA targets the means of communication as well as the content. This includes voice calls via digital networks and content shared on platforms such as Facebook. Section 22 makes it an offence to post a digital communication with intent to cause harm – defined as “serious emotional distress”.
“Post” is interpreted broadly to cover any electronic transfer, sending, publishing or dissemination of information, whether or not that information is true. Truth, interestingly, is not a defence under the HDCA, unlike in defamation law, underscoring a distinction between online and offline speech.
A deepfake is synthetic media – images, video, or audio – generated using artificial intelligence, particularly generative adversarial networks. These tools convincingly imitate a person’s appearance or voice, often making it difficult to distinguish fake from real.
McClure demonstrated this in Parliament by presenting a blurred deepfake nude image of herself, generated using easily accessible online tools. Her concern: current laws do not explicitly cover AI-generated intimate imagery made without a person’s consent.
While the HDCA does address harm from digital communications, its existing definition of “intimate visual recording” presumes a real-world subject captured without consent. AI-generated images – where no camera was involved – fall outside this scope. Thus, while a deepfake may cause harm, it might not meet the legal threshold for an “intimate visual recording”.
The HDCA can address deepfakes, but only up to a point. If a deepfake image is posted electronically and causes serious emotional distress, it can be actionable under section 22. However, section 22 requires proof of an intent to cause harm, which can be difficult to establish.
This is where section 22A is more useful. It covers the non-consensual posting of intimate visual recordings but does not require proof of intent to cause harm. Unfortunately, as currently drafted, section 22A applies only to actual recordings, not AI-generated ones. Again, the posting of a synthetic nude of a real person might escape prosecution under this section.
McClure’s bill aims to close this loophole by expanding the definition of “intimate visual recording” to include AI-generated or altered content that appears to depict someone naked, engaged in sexual activity, or in a state of undress – regardless of whether the image was captured with a camera or created synthetically. This change would make it easier to prosecute creators of intimate deepfakes under section 22A, removing the burden of proving intent and ensuring synthetic abuse is treated similarly to real-image abuse.
The proposed changes are narrow and targeted. They do not seek to ban deepfakes outright or regulate their use in satire, parody or entertainment. Instead, they focus specifically on harmful, non-consensual synthetic intimate images – an area where existing laws are ambiguous.
The HDCA already provides some remedies for harmful deepfakes, but without these clarifying amendments, a troubling legal gap remains. Deepfakes that replicate intimate content should not escape accountability merely because they are synthetic. In that sense, the deepfake bill is not an overreach – it is a necessary evolution.