Facebook's message to New Zealand, made through this newspaper today, is one that people worldwide will welcome. It is to the company's shame that it has taken two weeks to say a word in public about the part its networks played in mass murder in Christchurch mosques.

If it seems harsh to suggest the company was complicit in the crime, consider that its disinclination to monitor its platforms properly was evidently known to the accused killer, who livestreamed his terrorism.

He wanted it to be seen around the world, and Facebook facilitated that, not deliberately or even consciously, but through negligence. That is an easy charge to make with hindsight, of course, but hindsight is not wrong.
Knowing what it knows now, Facebook is taking remedial steps, which it outlines in some detail in its statement today. But it should not have taken the events of March 15 to show the company how its platforms could facilitate an atrocity like this.

It was always running the risks it now faces to its reputation and its freedom from regulation, and placing others at far more serious risk: not just exposure to hateful and incendiary speech, which is damaging enough, but threats to their lives.

The statement from Facebook's chief operating officer, Sheryl Sandberg, today describes the scale of the company's problem.

"We know [the shooter's] video spread mainly through people re-sharing it and re-editing it to make it harder for our system to block it," she writes. "We have identified more than 900 different videos showing portions of those horrifying 17 minutes."

It almost defies belief that some people could not only watch that video but post extracts of it for their Facebook friends.

Sandberg says, "We have long had policies against hate groups and hate speech" and reveals, "We are using our artificial intelligence tools to identify and remove a range of hate groups in Australia and New Zealand", going on to name them.

Not only can the company block those groups from its Facebook and Instagram platforms, but it will also remove praise and support for them "when we become aware of it".

That can be too late, as the accused Christchurch killer has shown. All open "social" media platforms need much more robust filtering mechanisms against murderous and hateful material.
Sandberg says Facebook is strengthening its policies against hate speech by banning "praise, support and representation of white nationalism and separatism" on its networks.

That "policy" might not mean very much in practice. Terms such as "white nationalism" and "white supremacy" may be more likely to be used by critics of these groups than by the groups themselves. Few people holding these views conveniently identify themselves with such terms. Hate speech is unlikely to be identified simply by words a robot can recognise.

But Facebook has made a start. Sandberg concedes it has more work to do, "strengthening our policies, improving our technology and working with experts to keep Facebook safe".

It hardly needs to say it will co-operate with the police and with a Royal Commission in New Zealand in response to the atrocity two weeks ago. It can be assured its efforts to clean up its act will be watched closely in this country and others. Nothing like that deadly transmission can be allowed to happen again.