Facebook is taking new measures to stop the sharing of 'revenge porn' on all three of its major platforms, the company announced this morning.
The social media giant will now remove any image reported as being shared without consent, create a "digital fingerprint" of it, and disable the account of any user who tries to re-upload it to Facebook, Instagram or Messenger - including in secret groups.
Until now, Facebook users were able to report offensive or harmful content, but other users were still able to re-upload the offensive material.
Preventing this kind of content, often referred to as revenge porn, from being shared on Facebook's social media platforms was part of the company's effort to build safer communities online.
Facebook's director of policy in New Zealand and Australia, Mia Garlick, said the move followed extensive talks with women's rights groups and other community groups about the distress caused to victims of revenge porn.
"There was one study, which found that 93 per cent of people affected by the sharing of intimate images reported significant emotional distress, and 82 per cent reported feeling significant impairment in social, occupational and other areas of their life," she said.
"Even if there's a small percentage of this occurring, the harm is sufficient that we want to take these extra measures."
Facebook's community policies already banned content used to harass, shame or bully a person, but this was an additional protection, Garlick said.
"Using image hashing technology, when content has been reported to us and we have removed it because it is a non-consensual nude image, we will now create a hash of that image."
Hashing, or image-matching, was like creating a "digital fingerprint" which marked the image and would flag if it reappeared on a platform.
"If someone tries to re upload that image, they will be blocked from doing so."
In most instances the person's account would also be disabled and they would have to contact Facebook to re-open it.
Garlick said she hoped that the threat of deactivation would make people think twice before uploading harmful images.
An image only had to be reported once to come to the attention of Facebook's team, who would review the complaint and remove the image within 24 hours.
The team would "triage" reports to try and remove seriously offensive or harmful content faster, Garlick said.
For users, the process for reporting would remain the same, but at Facebook's end the extra step of image-matching would be added.
"What we're hoping to do is really drive home the message that this kind of content is not acceptable online," Garlick said.
"We really want to end any distress the sharing of this content causes to people who appear in the images."
Netsafe spokesman Sean Lyons said image hashing had already proved hugely effective in reducing the amount of child sex abuse material on the internet, and the online watchdog welcomed Facebook's adoption of the tool.
Image hashing would provide a two-fold benefit, Lyons said.
"One is the fact that it doesn't keep going up and we have to keep chasing it down. The second is the [reduction of] psychological harm for the person in the image."
Knowing that Facebook wouldn't be playing "whack-a-mole" with an image constantly being re-uploaded would be a huge relief to victims of revenge porn, he said.
"That's a benefit for how quickly an individual can recover from harm and how quickly the distress can fade."
In New Zealand, the Harmful Digital Communications Act has been used to try to prevent cyber-bullying.
On Monday the High Court upheld the first appeal under the Act, saying posting intimate images on Facebook met the harm threshold detailed in the legislation.
In the original decision, a man was charged with breaching a protection order in relation to his estranged wife and causing her harm through posting photos on Facebook.
According to the appeal document, the man said he would post photos of the woman online if she did not stay away from other men, and told her to cancel the protection order.