Writing about technology after the atrocities in Christchurch feels empty and meaningless.

But it is undeniable that cheap gear, fast networks and online forums where people veer from rickrolling to racism in just a few clicks have become terrorist accessories.

Terrorists spread their poison to others with shocking ease as well.

Algorithms are no match for the myriad people — whether acting deliberately or simply not thinking straight — uploading awfulness that should never be shared.

Putting faith in hashing functions to automatically recognise disgusting content that should be filtered out, and falling back on human moderators when the hashes fail because the material has been altered, is futile.
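A minimal sketch of why exact hashing is so brittle: flip a single byte in a file and its cryptographic digest changes completely, so a trivially re-encoded copy of a banned clip sails straight past an exact-match blocklist. (The file contents below are made up for illustration; real platforms also use fuzzier perceptual hashes, which this sketch doesn't cover.)

```python
import hashlib

# Stand-in for the bytes of a banned video (hypothetical content).
original = b"...banned video bytes..."

# A trivially altered copy: just one byte flipped.
altered = bytearray(original)
altered[0] ^= 0x01

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(bytes(altered)).hexdigest()

# An exact-match blocklist only catches byte-identical copies.
blocklist = {h1}
print(h2 in blocklist)  # False: the altered copy slips through
```

One flipped byte is enough; re-compressing, cropping or watermarking the video has the same effect on the digest.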

This is nothing new, and global content sharing sites know full well that they have become so big that they cannot keep up with their unco-operative users.

The internet is a multi-headed hydra and it is very difficult to stop information of any kind from spreading on it.

An emergency protocol that forces social media and content sharing sites to quickly turn off uploads for a few days after horrors like Christchurch might have reduced the dissemination of the murderer's video.

That doesn't take care of livestreaming though, and it only takes one person to share the video on peer-to-peer networks to get around an upload block. This is already happening with P2P links shared on social media.

Australian Prime Minister Scott Morrison's call to crack down on social media for failing to remove terrorist content will probably result in Facebook, YouTube and others improving their automated censorship.

Maybe that will partly clean up social networks and content sharing sites, especially if — and I feel a bit sick typing this — they're penalised for earning advertising revenue from offensive material.

Nobody would be surprised if a system similar to the European Union's proposed upload filters is floated to stop users from sharing and posting extremist content.

The idea, currently hotly debated, is that content sharing sites must screen out copyrighted material or face big fines. Except nobody knows how to do this yet.

Ten years ago, Pakistan decided it had had enough of YouTube and managed to knock it offline by having internet providers in the country advertise bogus routes — routes that leaked beyond Pakistan's borders and redirected traffic to the wrong places.
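The mechanism at work there, roughly, is longest-prefix matching: routers prefer the most specific route they know, so a bogus, narrower announcement beats the legitimate, broader one. A toy sketch of that preference rule (the prefixes are made up, not the actual 2008 routes):

```python
import ipaddress

# Illustrative routing table: a legitimate broad route alongside a
# bogus, more-specific announcement covering part of the same space.
routes = {
    ipaddress.ip_network("10.0.0.0/22"): "legitimate origin",
    ipaddress.ip_network("10.0.1.0/24"): "hijacker",
}

def best_route(addr):
    """Longest-prefix match: the most specific covering route wins."""
    ip = ipaddress.ip_address(addr)
    matches = [net for net in routes if ip in net]
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(best_route("10.0.1.5"))  # inside the bogus /24 -> "hijacker"
print(best_route("10.0.2.5"))  # only the /22 covers it -> "legitimate origin"
```

Any address falling inside the narrower prefix follows the hijacker's route, which is why one misbehaving provider could pull the world's YouTube traffic into a black hole.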

It was messy and broke bits of the internet. Nowadays, it has become much harder to remove access to sites that way.

For instance, hate sites hide behind resilient content delivery networks such as Cloudflare that forward their traffic without revealing where the actual servers are. Privacy regulation means it's more difficult to trace hate site operators as well.

Maybe in the years ahead we will come up with a solution that addresses the fact that some of us are just awful people who shouldn't be online.

Until then, who can fault those who want to round up and ban smartphones and personal computers, and switch off mobile and fixed-line data networks?

It's not going to happen but this is one time when it really would feel right and respectful to press the off-switch for tech, at least for a while.