Facebook's New Year resolution is to take more aggressive action against realistic-looking-but-fake videos, or 'deepfakes' - such as last year's clip that made Democratic leader Nancy Pelosi appear to slur her words, which spread like wildfire across the social network for 24 hours before limited action was taken against it.
But while deepfakes have hogged the headlines, it's far simpler tweaks that have caused the most damage.
January 1 saw a 20-second clip of Joe Biden appearing to make a racist comment go viral.
The Democratic presidential hopeful told an Iowa audience, "Folks, this is about changing the culture, our culture, our culture, it's not imported from some African nation or some Asian nation. It is our English jurisprudential culture, our European culture that says it is all right."
A few words were snipped out of the middle to create an out-of-context video where Biden simply says:
"Our culture is not imported from some African nation or some Asian nation."
Twitter did take action, but not until 24 hours later - by which time the clip had been retweeted by various verified accounts, and by supporters of rival Bernie Sanders.
The Washington Post saw the Biden episode as a harbinger of things to come: "If you thought the 2016 election was awash in disinformation and lies, get ready: The 2020 election is going to make that affair look like a knitting session."
And it's hundreds of small things that muddy debate, from National's creative proportions with graphs here to fake photos and false claims about environmental action hampering firefighting efforts across the Tasman.
So what to do?
Late last year, an NZ Law Commission-funded study (below) found that deepfakes and other deceptive online content are a problem, and are likely to become more of one.
But authors Tom Barraclough and Curtis Barnes also argued that any new legislation to clamp down on deepfakes could be abused by politicians - for example, to suppress satire (which is exempt from Facebook's new crackdown, incidentally). They also feared that any legislation targeting deepfakes would hinder the ability of oppressed groups to find a multimedia voice on social media.
Barraclough said we already have multiple laws and guidelines that cater to the risk - primarily the Crimes Act, which covers deception used for gain; the Harmful Digital Communications Act, which covers deception used for malice; and the Privacy Act, because "the wrong personal information is still personal information".
My take is that government agencies need to stand up and hold the likes of Facebook, Google and Twitter to the same legal standards as traditional publishers.
I know some tech industry types like to say this is an old-world view and that social media services are neutral platforms. But Twitter and Facebook don't provide you with a neutral feed. They have human staff and algorithms that decide which posts and news stories you see, and when, which makes them publishers.
I think politicians should lean on enforcement authorities to go on the front-foot. But I also accept that it's very unlikely to happen - because MPs from both parties are addicted to their own use of social media, and government departments spend more on Facebook advertising than regulating the social network.
Barraclough went easier on Facebook and Co when I talked to him about his report.
He said the ever-evolving nature of the technology makes it difficult for the likes of Facebook to trace and police deepfakes.
"We have in-built biological trust in the data derived by our eyes and ears," he says.
But he also said the public needed to be credited with more nous.
"People understand the limits to which what they see and hear through video and audio recording is only a partial representation of reality," he said.
The biggest danger, the researcher said, "is probably over-skepticism".
If something like the Jami-Lee Ross audio recordings of Simon Bridges was released in future, people might not believe it was true, he said.
Meanwhile, keep an eye out for more instalments in the Herald's fact-checker series as our election approaches.