As accused Christchurch massacre gunman Brenton Tarrant appears in court this morning, Facebook is still struggling to stamp out copies of his video.

Earlier today, New York-based hate content researcher Eric Feinberg sent the Herald details about three copies of the video still on Facebook, two on Facebook-owned Instagram and three on Google-owned YouTube.

All are "highlight" clips.

While Facebook has flagged technical difficulties in automatically detecting the offending content (one staffer told a US Congressional hearing that the clip was not "gory enough" to trigger AI systems), all three copies on Facebook also have straightforward text descriptions - albeit in Arabic script (translated by Feinberg) - that include a number of key words related to the attack.


One of the Facebook clips has been tagged by the social network with a message saying "This video may show violent or graphic content" but still plays.

The gunman's clip has been banned by NZ's Chief Censor, meaning it is illegal to view or share. However, Feinberg has established his bona fides with previous finds that have been verified by The New York Times and other media, and by Facebook itself.

Brenton Tarrant, the man charged in relation to the Christchurch massacre. Photo / File.

Facebook had no further comment when approached today.

On May 23, after Feinberg identified further copies of the clip, plus an online "game" on Facebook that included Christchurch mosque massacre footage, a spokesman for the social network told the Herald:

"We continue to automatically detect and prevent new uploads of this content on our platforms, using a database of more than 900 visually unique versions of this video. When we identify isolated instances of newly edited versions of the video being uploaded, we take it down and add it to our database to prevent future uploads of the same version being shared.

"One of the challenges we faced in the days after the Christchurch attack was a proliferation of many different variants of the video of the attack. People - not always intentionally - shared edited versions of the video, which made it hard for our systems to detect.

"Although we deployed a number of techniques to eventually find these variants, including video and audio matching technology, we realised that this is an area where we need to invest in further research."

"That's why we announced last week that we're partnering with The University of Maryland, Cornell University and The University of California, Berkeley on a US$7.5 million research piece to identify new techniques to detect manipulated media and distinguish between unwitting posters and adversaries who intentionally manipulate videos and photographs.


"This work will be critical for our broader efforts against manipulated media, including deep fakes - videos intentionally manipulated to depict events that never occurred. We hope it will also help us to more effectively fight organized bad actors who try to outwit our systems as we saw happen after the Christchurch attack."

Resisting change

The alleged gunman streamed his 17-minute attack on Facebook Live on March 15. It took the social network an hour to take the clip down after its automated safeguards failed; it was ultimately alerted to the video's presence by NZ law enforcement.

Facebook says it has beefed up its filters since the attacks, and blocked more than 1.5 million attempts to upload the clip.

However, every few days since March 15, Feinberg has been able to locate copies of the clip on Facebook, Facebook-owned Instagram and, at times, Google-owned YouTube.

Facebook has so far resisted putting a slight delay on Facebook Live, or placing any universal restrictions on the service (such as YouTube's new requirement that a mobile user have at least 1000 followers before they are allowed to livestream).

But it has introduced a new policy that will see users who break "certain rules", including its dangerous individuals and organisations policy, potentially barred from using the service.


The recent Christchurch Call summit in Paris, which sought ways to eliminate violent extremist content on social media, was called a good start by most commentators.

However, the refusal of the US to support the initiative - with the White House citing free-speech concerns - undermined its modest proposals and kept the pressure off Facebook.

PM: Chch Call research will address problem

A spokesman for Prime Minister Jacinda Ardern said "the clip is being cut and edited in ways that see it slip through the cracks of the social network's systems. That is what the Christchurch Call commitments are trying to solve. The shared research will address these issues."

The Paris summit saw tech giants Amazon, Facebook, Google, Microsoft and Twitter agree to collaborate with the 17 participating governments on research to prevent and remove violent extremist content.

Facebook is chipping in US$7.5m for its aforementioned research partnership with the University of Maryland, Cornell University and UC Berkeley.

The spokesman agreed that was not enough money, but he noted that all the tech companies who attended would be contributing funds to the effort.


Facebook is so far the only company to quantify its contribution, he said.