The review of internet laws that US President Donald Trump ordered after Twitter attached fact checks to two of his posts has returned its recommendations, which could affect the internet all around the world.
Trump ordered that section 230 of the Communications Decency Act, added to the law in 1996, be reviewed to clarify the immunity it provides to internet platforms by treating them as distributors of information rather than publishers of it.
Although it's an American law, most of the big tech companies and social media it governs are also American, so compliance with the local law can have a global impact as they change the way they run their platforms to abide by it.
The review Trump ordered in May has now been completed, and the US Department of Justice "has concluded that the time is ripe to realign the scope of section 230 with the realities of the modern internet". It believes there is "productive middle ground" between those arguing for the law's repeal and those arguing it should be left alone.
The Department has identified what it calls "measured, yet concrete proposals that address many of the concerns raised about section 230".
The current law states that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider".
This law has been credited with helping to create the modern, open internet, but it was written years before social media existed, and it currently absolves sites like Facebook and Twitter of responsibility for the content that appears on them.
However, much of Trump's executive order focused on what sites take down rather than what they leave up. The order stemmed from what he perceived as censorship of himself after Twitter encouraged viewers of his tweets to "get the facts" about what they were reading.
"The immunity should not extend beyond its text and purpose to provide protection for those who purport to provide users a forum for free and open speech, but in reality use their power over a vital means of communication to engage in deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints," the executive order issued in May read.
THE MODERATOR'S DILEMMA
The last of the proposed reforms involves "explicitly" overruling a 1995 court decision that created what's known as the "Moderator's Dilemma".
In that case, an online message board was held liable for defamatory comments made about the president of Jordan Belfort's fraudulent brokerage firm, Stratton Oakmont.
The message board promoted itself as family-friendly, and had removed user posts from its site in the past, but because it didn't delete the defamatory comments it was held liable for them.
This created the dilemma: websites could either attempt to moderate everything posted on their site, making them liable for anything they missed, or moderate nothing at all and act as simple information vessels – deleting material only when it violated the law or the site's own policies, but exercising none of the editorial oversight over the content and ideas they spread that traditional publishers of books or newspapers do.
That dilemma is what section 230 was supposed to address, but the internet and the companies the law was written for have changed significantly since it was added 24 years ago; although the law has huge influence over how social media functions, it predates social media as we know it today by several years.
THE PROPOSED REFORMS
The proposed reforms advise that the act should "protect responsible online platforms" and not "immunise from civil liability an online platform that purposefully facilitates or solicits third-party content" that breaks the law. This would cover the likes of piracy sites, or sites where people share illegal material such as child exploitation material.
The Department also wants to add a clear definition of when a platform's moderation decision has been made "in good faith", which would limit a platform to removing only content that violates its rules. Those rules would be set out in the platform's own "plain and particular" terms of service, meaning anyone who read the terms would know ahead of time what they can and can't do, and so couldn't argue they'd been censored for no reason.
The Department has also advised "carve-outs" to specifically address child abuse, terrorism and cyberstalking and allow victims to "seek civil redress".
It also recommended replacing "vague terminology" that polices "otherwise objectionable" language with clauses that specifically identify content that is "unlawful" or which "promotes terrorism".
The US government would also get clearer power to bring civil enforcement actions against platforms in order to "protect citizens from harmful and illicit conduct".
It also wants to change the law so that big tech companies can't use it as a defence when they're subject to antitrust investigations, a separate area of law.
"The avenues for engaging in both online commerce and speech have concentrated in the hands of a few key players," the Department's recommended reforms read.
"It makes little sense to enable large online platforms (particularly dominant ones) to invoke Section 230 immunity in antitrust cases, where liability is based on harm to competition, not on third-party speech."
The reforms would still need to be voted on in Congress before they become law, and the Congress that eventually votes on them may be different from the current one.
US citizens head to the polls to elect (or re-elect) a President and Congressional representatives in November.
Proposed reforms to Section 230
The US Department of Justice identified four areas for reform:
1. Incentivising Online Platforms to Address Illicit Content
The first category of potential reforms is aimed at incentivising platforms to address the growing amount of illicit content online, while preserving the core of Section 230's immunity for defamation.
a. Bad Samaritan Carve-Out. First, the Department proposes denying Section 230 immunity to truly bad actors. The title of Section 230's immunity provision — "Protection for 'good Samaritan' Blocking and Screening of Offensive Material" — makes clear that Section 230 immunity is meant to incentivise and protect responsible online platforms. It therefore makes little sense to immunise from civil liability an online platform that purposefully facilitates or solicits third-party content or activity that would violate federal criminal law.
b. Carve-Outs for Child Abuse, Terrorism, and Cyber-Stalking. Second, the Department proposes exempting from immunity specific categories of claims that address particularly egregious content, including (1) child exploitation and sexual abuse, (2) terrorism, and (3) cyberstalking. These targeted carve-outs would halt the overexpansion of Section 230 immunity and enable victims to seek civil redress in causes of action far afield from the original purpose of the statute.
c. Case-Specific Carve-Outs for Actual Knowledge or Court Judgments. Third, the Department supports reforms to make clear that Section 230 immunity does not apply in a specific case where a platform had actual knowledge or notice that the third-party content at issue violated federal criminal law, or where the platform was provided with a court judgment that content is unlawful in any respect.
2. Clarifying Federal Government Enforcement Capabilities to Address Unlawful Content
A second category of reform would increase the ability of the government to protect citizens from harmful and illicit conduct. These reforms would make clear that the immunity provided by Section 230 does not apply to civil enforcement actions brought by the federal government. Civil enforcement by the federal government is an important complement to criminal prosecution.
3. Promoting Competition
A third reform proposal is to clarify that federal antitrust claims are not covered by Section 230 immunity. Over time, the avenues for engaging in both online commerce and speech have concentrated in the hands of a few key players. It makes little sense to enable large online platforms (particularly dominant ones) to invoke Section 230 immunity in antitrust cases, where liability is based on harm to competition, not on third-party speech.
4. Promoting Open Discourse and Greater Transparency
A fourth category of potential reforms is intended to clarify the text and original purpose of the statute in order to promote free and open discourse online and encourage greater transparency between platforms and users.
a. Replace Vague Terminology in (c)(2). First, the Department supports replacing the vague catch-all "otherwise objectionable" language in Section 230(c)(2) with "unlawful" and "promotes terrorism". This reform would focus the broad blanket immunity for content moderation decisions on the core objective of Section 230 — to reduce online content harmful to children — while limiting a platform's ability to remove content arbitrarily or in ways inconsistent with its terms of service simply by deeming it "objectionable."
b. Provide Definition of Good Faith. Second, the Department proposes adding a statutory definition of "good faith," which would limit immunity for content moderation decisions to those done in accordance with plain and particular terms of service and accompanied by a reasonable explanation, unless such notice would impede law enforcement or risk imminent harm to others. Clarifying the meaning of "good faith" should encourage platforms to be more transparent and accountable to their users, rather than hide behind blanket Section 230 protections.
c. Explicitly Overrule Stratton Oakmont to Avoid Moderator's Dilemma. Third, the Department proposes clarifying that a platform's removal of content pursuant to Section 230(c)(2) or consistent with its terms of service does not, on its own, render the platform a publisher or speaker for all other content on its service.
Source: US Department of Justice