The latest in "deep fake" technology is DeepNude - an app that uses AI to transform an image of a clothed woman into a realistic rendering of what she might look like naked.
DeepNude surged in popularity after being covered by Vice last week, swamping its anonymous creator's servers. It was soon offline, but not before a claimed half-million downloads.
The software's author initially promised it would be back online after a few days, but has now indicated it will be knocked on the head, posting "If 500,000 people use it, the probability that people will misuse it is too high. The world is not yet ready for DeepNude."
But Auckland legal researcher and AI expert Curtis Barnes, who co-authored a recent Law Foundation-funded report on deep fakes, warns about knock-offs.
"Despite DeepNude being pulled, copies of the application are and will remain in circulation," he says.
"Already it appears that there are new versions arising for download, some for free, and some supposedly 'improved'."
The knock-off versions of DeepNude appeared so quickly because the app was based on pix2pix, an open-source (that is, free and public) algorithm developed by University of California, Berkeley researchers in 2017. Pix2pix uses generative adversarial networks (GANs), in which two neural networks are trained against each other on a huge dataset of images - in the case of DeepNude, more than 10,000 nude photos of women, the programmer said. One network generates images while the other tries to spot the fakes, so each improves against the other.
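The adversarial idea behind GANs can be illustrated with a deliberately tiny toy - this is not pix2pix itself (a deep convolutional image-to-image model), just a sketch of the same two-player training loop. A one-parameter "generator" tries to imitate a 1-D data distribution, while a logistic-regression "discriminator" tries to tell its samples from real ones; all names and values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Real data the generator must imitate: samples from N(3.0, 0.5)
REAL_MU, REAL_SIGMA = 3.0, 0.5

mu = 0.0        # generator's only parameter: g(z) = mu + 0.5*z
w, b = 0.0, 0.0 # discriminator: D(x) = sigmoid(w*x + b)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

lr, batch, steps = 0.05, 64, 4000
for _ in range(steps):
    # --- discriminator step: push D(real) toward 1, D(fake) toward 0 ---
    real = rng.normal(REAL_MU, REAL_SIGMA, batch)
    fake = mu + 0.5 * rng.normal(0.0, 1.0, batch)
    s_real = sigmoid(w * real + b)
    s_fake = sigmoid(w * fake + b)
    # analytic gradients of the binary cross-entropy loss w.r.t. w and b
    gw = np.mean(-(1 - s_real) * real + s_fake * fake)
    gb = np.mean(-(1 - s_real) + s_fake)
    w -= lr * gw
    b -= lr * gb
    # --- generator step: shift mu so the discriminator scores fakes as real ---
    fake = mu + 0.5 * rng.normal(0.0, 1.0, batch)
    s_fake = sigmoid(w * fake + b)
    gmu = np.mean(-(1 - s_fake) * w)  # gradient of -log D(fake) w.r.t. mu
    mu -= lr * gmu

print(round(mu, 2))  # mu should drift toward the real mean of 3.0
```

The generator never sees the real data directly; it only gets a learning signal from how the discriminator scores its output. That feedback loop is what lets GANs like pix2pix produce increasingly convincing images, and it is the same mechanism DeepNude's creator trained on his dataset of nude photos.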
"Something like DeepNude has been threatened for a long time. The original deep fakes were used almost exclusively to produce videos of non-consenting women engaged in pornography. Where DeepNude differs is only by allowing users to create content quickly and easily with no technical skills," says Barnes, who wrote his Law Foundation report with Tom Barraclough, another legal researcher and fellow member of the AI Forum NZ.
"We warned of a proliferation of these technologies via our research, and it continues to prove correct. This was always going to happen, and there is every reason to expect more – and worse – in the future," says Barnes.
"What DeepNude reveals is that harmful uses are not a distant problem. They are a problem right now.
"While most of the focus is on political misuse, the people currently most vulnerable are women - the original targets of deep fakes. The fact DeepNude only works for photos of women reinforces that this is a gendered issue. Women's dignity, sexual autonomy and privacy are at risk."
(DeepNude's author maintained he had a male version of the app in the works. They had developed a female version first because it was easier to find nude photos of women.)
"There should be an urgent look at the law surrounding intimate visual recordings, and whether changes or additions are necessary to account for new synthetic media technologies," Barnes says.
The government needs to take seriously the issue of harmful synthetic media.
Barnes and Barraclough's broad stance is that existing laws, including the Crimes Act, the Harmful Digital Communications Act and the Privacy Act, give authorities all the tools they need to crack down on people who perpetrate deep fakes.
Dedicated legislation, such as the Malicious Deep Fake Prohibition Act introduced to the US Senate last year, risks violating human rights, Barraclough says, and undermining legitimate uses such as political satire (NZ comedian Tom Sainsbury, who uses Snapchat video filters to imitate Paula Bennett and Simon Bridges, among others, is used as an example in the Law Foundation report).
The pair want to see more interagency cooperation and coordination - in particular, a clearer picture of who will take charge, and more public education so people know where to turn for help.
"We also need to be educating the public, and young people in particular, that this sort of content is out there and is extremely harmful, that it hurts real people, and is totally unacceptable to engage with. As a priority, we need to be investing time and money in developing the resources necessary to detect inauthentic images and audio. Ideally, New Zealand's premier audiovisual effects industry would get on board with this," Barnes says.
Facebook currently does not take down all copies of a deep fake, but last week said it was reviewing that policy. (A cheeky ad agency fake video of the social network's boss, Mark Zuckerberg, might have helped to crystallise its thinking.)
Communications and Digital Media Minister Kris Faafoi told the Herald he generally agreed with the Law Foundation report's stance.
Faafoi hosted a cybersecurity event on Tuesday, in part to spruik a Budget 2019 funding boost for the three-year-old Computer Emergency Response Team (aka Cert NZ) - set up by the government as a "triage centre" for cybersecurity incidents. The idea is that if you or your small business are hit by an online threat, you can contact Cert NZ, which will direct you to the right contact at the police or another appropriate agency.
Cert NZ complements Netsafe, the lead agency for enforcing the Harmful Digital Communications Act. Facebook and Twitter might ignore your calls, but they'll pick up the phone to Netsafe if it advocates on your behalf about cyberbullying, hate speech or other online abuse. Netsafe also plays an education role.
Faafoi rebutted criticism from Deloitte partner Anu Nayar that the $2m a year (or $8m over four years) Budget 2019 boost for cybersecurity was meagre. He told the Herald it came on top of an earlier $9.1m boost for Cert NZ, for a total of $32m over four years, and that, more broadly, online threats could only be dealt with through cooperation between the public and private sectors.
The Crimes (Intimate Covert Filming) Amendment Bill is in its final stages. The legislation updates the Crimes Act 1961 for the age of internet sharing, but doesn't explicitly address the issue of fakes.
Faafoi said the law would always be more principles-based than addressing any trend or product of the moment, but he did not rule out new legislation. "Who knows what new technology we might have to deal with in six months' time," he said.
But he said that, as things stand, today's legal framework could cope.
Social networks shaping up?
Faafoi said Facebook, Google and Twitter "are showing good signs of realising they have to be more responsible. Their social license isn't as great as it has been in the past and in Paris [at the Christchurch Call summit] it was obvious they realised that."
Like the Prime Minister, his focus is not on unilateral efforts to regulate the social networks. He says only internationally co-ordinated action, which began in Paris and continued at the G20, will be effective.
Faafoi's push on Tuesday morning was to raise the profile of Netsafe and Cert NZ.
"That lies at the heart of it," Faafoi said. "People need to be ready and reduce risk, and know where to go to if they do have a risk."