This week's Herald story debunking false accusations levelled at Clarke Gayford revealed a worrying divide in the media landscape.
Immediately after the story appeared, mainstream publications and some well-established blogs received a letter from law firm Kensington Swan, warning that publication of any details relating to the allegations would be met with legal action.
While the Herald, Stuff, The Spinoff and Public Address all received the legal letter, it is unclear whether Facebook and Twitter did too.
Kensington Swan special counsel Linda Clark, who is acting for Gayford, yesterday would not comment on which media companies were included in the correspondence.
Twitter is yet to reply to a request for comment, while the Facebook communications team said it was unaware of having received a legal letter in respect of the matter.
It could be argued that Facebook and Twitter should have been the first to get the letter, given that the false information about Gayford was disseminated and discussed largely through those two platforms.
Even yesterday, despite the legal warning, anonymous users were still taking to social media to defame Gayford.
Interestingly, the major social media platforms were mentioned in the Kensington Swan correspondence, which warned local media companies that they would be responsible for any comments made on their Facebook or other social media pages.
But what of the numerous comments made elsewhere across Facebook and Twitter? Are they held to the same standard?
A law firm is within its rights to protect its client in whatever way it deems necessary, but media commentator and co-founder of The Spinoff Duncan Greive believes the bigger issue at play here is that new media companies are playing by new rules.
"They're getting all the media money, but they're not sticking to the media rules," Greive told the Herald.
"Just because they invented social media doesn't mean they now get to opt out of everything."
While the Broadcasting Standards Authority, Advertising Standards Authority and Media Council have established and enforced local publishing rules, Greive said they had struggled with the new players.
The social media dream of turning everyone into a publisher has also given an instant platform to trolls, bigots, extremists and, in this case, gossip mongers who would previously have been restricted to muttering their views out of general earshot.
Until now, the likes of Facebook and YouTube have relied heavily on users to flag offensive content, but recent faux pas have illustrated the ineffectiveness of this approach.
Facebook has been slammed for live streaming murders and suicides, while YouTube battled an advertiser exodus when it was revealed that major brands had their ads playing alongside extremist content.
More recently, Facebook has also been implicated as the main platform used in the dissemination of hate speech during the Rohingya crisis in Myanmar, a problem Facebook CEO Mark Zuckerberg has since committed more resources to addressing.
Greive said the Trump election and Brexit referendum both illustrated the power of social platforms in spreading misinformation and that more had to be done to keep them in line.
While Europe has proposed a number of legal changes, Greive doesn't believe they go far enough, and he sees the issue as an opportunity for New Zealand to be a leader.
"If this Government is as bold and brave as it says it is, then it needs to recognise that the biggest media company in the country is Facebook and that it lacks all the responsibility.
"Somewhere needs to challenge that. And why can't that be New Zealand?"
Thumbs down for Facebook's new move
An advertising boss has given Facebook's new downvote button the thumbs down.
The feature, now being trialled in New Zealand, allows users to thumb up and thumb down comments.
It's part of Facebook's effort to push the most "thoughtful and engaging" comments to the top of the discussion thread, while pushing down those which are just filled with vitriol.
However, Simon Lendrum, the managing director at ad agency JWT, isn't convinced the feature will deliver the desired result.
"It's basically an upvote/downvote option, similar to Reddit," Lendrum told the Herald.
"I don't see how it assists in the fight against fake news and manipulated content, and it seems just as likely to suffer from click farms sending legitimate comments to the bottom and nonsense to the top."
Troll armies, so proficient at mobilising on Reddit, could also vote up questionable comments, lending added legitimacy to toxic remarks made solely to upset others.
Most worrying, given recent events, is that the tool gives Facebook another way to capture data on the type of content users don't like. Used in the wrong way, this could further strengthen the social media bubbles that some users already live in.
"If it results in changes to algorithms then in theory it adds to 'The Narrows', the self-fulfilling bubbles we build around ourselves, so we're less likely to discover new things or hear alternate points of view," says Lendrum.