Terrorism has gone viral. The Christchurch massacre at two New Zealand mosques that left more than 50 people dead was livestreamed on Facebook. In 2015, two reporters in Virginia were shot while broadcasting live, and the shooter uploaded videos to social media channels that are still available today. And in 2014, the beheading of James Foley by Isis (Islamic State) forces in Syria spread across the internet within minutes of being uploaded. The world has yet to figure out how to staunch the flow of this content online.
Now the leaders of France and New Zealand have a proposal. This week, they meet in Paris to seek voluntary pledges from other world leaders and major tech platforms aimed at eliminating terrorist and violent extremist content online.
A leaked draft of the Christchurch Call, as it has been dubbed, includes assurances about maintaining a free and open internet and respecting human rights. It also calls for increased censorship, expanded government regulation, and more intensive coordination across platforms. The pledge fails to explain how to reconcile the goals of preventing and removing violent extremist content without empowering government censors or privatising censorship.
The focus on content moderation seems a losing tactic in the never-ending game of whack-a-mole defining efforts to prevent the circulation of violent content.
The question over what to do about terrorist content and graphic violence online is not new, but it has taken on new urgency in the aftermath of the Christchurch massacre because the attack was designed to leverage the algorithmic power of the social network, making its removal near impossible. Facebook, YouTube and Twitter all tried unsuccessfully to scrub their services of the video and its progeny. It's also noteworthy that the video of the Christchurch massacre was livestreamed on Facebook for 17 minutes, during which time not a single person flagged it as problematic content.
The Christchurch Call includes commitments by online service providers to prevent the upload and dissemination of violent extremist content and to ensure its permanent removal. But the sweeping focus on online service providers risks pushing censorship into the infrastructure layer, commonly thought of as the layer that makes the internet work. This could include domain name service providers, internet service providers and cybersecurity providers. When Cloudflare - a company working at the infrastructure level to provide DDoS protection to about 10 per cent of the world's websites - decided to remove the far-right Daily Stormer website from its platform, it provoked a controversy over such a blunt approach to removing extremist content.
The draft of the pledge also calls for broadcasting standards to stave off amplification of extremist content and encourages media outlets to apply ethical rules when reporting on terrorism. But one of the basic roles of the media is to provide information and coverage of events or proclamations depicted in content disseminated by terrorist or extremist groups, precisely because it is newsworthy.
Yet the unintended consequences of anti-terrorism efforts leading to censorship of news media have been seen around the world, from Australia to Syria.
Last year, an Australian regulator deemed an article about Isis recruiting published on one of the country's top news sites to be promoting terrorism, forcing the outlet, news.com.au, to remove it, even though the self-regulatory press council had determined it was in the public interest.
When the self-proclaimed Islamic State established its capital in Raqqa, Syria, an online search for Raqqa would be overwhelmed by Isis propaganda, especially in English, the founder of the citizen journalism organisation Raqqa is Being Slaughtered Silently told me. Despite the great peril its reporters faced, RBSS had its accounts shuttered and content removed from various platforms because it was incorrectly identified as violent extremist content.
Furthermore, the rush to eradicate terrorist and violent extremist content from the internet and impose broadcasting standards and media ethics carries high risks of being misused by governments around the world. Cameroonian journalist Ahmed Abba spent more than two years in jail for reporting on Boko Haram, and nearly all of the journalists jailed in Egypt worked for Muslim Brotherhood outlets.
It's not easy to stand up for press freedom when extremists use our media to disseminate their propaganda, and use the power of online platforms to take their evil viral. The recognition by countries seeking to regulate away this problem that they and the platforms must seek to respect human rights and civil liberties is important and welcome.
Perhaps it is time to deal with a difficult truth. Preventing the dissemination of terrorist content online while respecting human rights and free expression is hard but essential work. The Christchurch Call is an understandable response to tragic events. But it is more likely to strengthen the hand of the censors than mute the voices of the terrorists.
• Dr Courtney C Radsch is advocacy director for the Committee to Protect Journalists, based in New York.