The Department of Justice (DOJ) has submitted to Congress draft legislation that could obliterate legal protections for internet companies and their users. The proposal takes broad aim at Section 230, a (widely misrepresented) law that helps protect First Amendment rights across the internet while also protecting private companies and individuals that want to filter out certain types of content.
Passed in 1996, Section 230 has been under obsessive attack from both Democrats and Republicans for at least the past decade, and especially in recent years, as online ideas, speech, and content became more and more decentralized and less gatekept. Politicians on both sides have proposed actions aimed at incrementally chipping away at the law's protections. But this new Justice Department draft legislation strikes at Section 230's very heart in a number of ways.
If the DOJ gets its way, private web service providers—think: social media, video platforms, consumer review sites, online marketplaces, petition and crowdfunding services, dating apps, newspaper comment sections, blogging platforms, private message-boards, and so much more—and the people who use those services could be punished for attempts to filter out objectionable content.
Under the DOJ proposal, employees and users of online services could only "restrict access to" content if "the provider or user has an objectively reasonable belief" that a specific piece of content "violates its terms of service or use" or falls into one of a few categories. While the DOJ doesn't provide an example of content that is currently restricted but that would be unrestricted under the version of Section 230 it wants Congress to pass, it's possible the new language is meant to appease prominent Republicans who believe popular social media platforms have arbitrarily banned conservatives and that politics-based content moderation shouldn't be allowed.
As it stands now, a web platform's "terms of service or use" have no bearing on Section 230 protection—although President Donald Trump acted as if they did in a recent executive order concerning Twitter. Despite what Trump suggested, Twitter doesn't (yet) risk losing Section 230 protection if it can't prove that every single suppressed tweet was treated in strict accordance with a specific plank of its terms of service.
It seems the Justice Department is now pushing to revise Section 230 itself so that federal law conforms to the president's (currently erroneous) interpretation of it.
In a section titled "GOOD FAITH," the DOJ draft legislation says that a service provider would benefit from Section 230 protections only if its terms of service "state plainly and with particularity the criteria the service provider employs in its content-moderation practices," and only so long as the company did "not restrict access to or availability of material on deceptive or pretextual grounds," among other things.
These proposals fly in the face of the main problem Section 230 was created to address, which was the "moderator's dilemma." In trying to filter out any content created or uploaded by users, a digital service risked becoming legally liable for whatever defamation, obscenity, or otherwise illegal content it allowed through. Without Section 230 protections, a digital service like Facebook or Twitter would be better off filtering no user content, for any reason, or dedicating a vast amount of resources to vetting essentially all user content and only allowing the most anodyne through.
Neither option is desirable, which is why Section 230's first part declares that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider"—i.e., Facebook is not automatically responsible for your speech, and you're not automatically responsible for the speech of every other Facebook user.
Section 230's second part—the "Good Samaritan" clause—says that neither internet services nor their users will lose this protection over attempts "to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable."
These two protections mean that a service provider can host a forum for another person without necessarily being responsible for what that person says in the forum, and that the service provider can restrict some of what gets said in the forum it owns without taking on liability for forum content it doesn't remove or restrict.
The DOJ revision to Section 230, however, would take away many of a digital service's options for "restrict[ing] access to or availability of material." In the DOJ's preferred legal framework, content filtering and moderation can only be done if a service provider or user "has an objectively reasonable belief" that a specific piece of content "violates its terms of service or use," or "has an objectively reasonable belief" that the content is "obscene, lewd, lascivious, filthy, excessively violent, **promoting terrorism or violent extremism**, harassing, **promoting self-harm, or unlawful** ~~or otherwise objectionable~~." [Bolding mine, strikethrough the DOJ's.]
At first glance, this appears to expand the scope of content that's allowed to be filtered out. But it actually narrows it, replacing the much broader "objectionable" with "unlawful."
The DOJ revision would also insert a vague new standard that content moderation be based on an "objectively reasonable belief," a phrase so unspecific and debatable that it would likely spur endless litigation.
If Congress adopts the DOJ's recommendations, expect to see features that help individual users control their internet experience dwindle (your blocking someone for a non-federally-approved reason could cost Twitter big time!), coupled with a serious ramping up of what is prohibited by companies' terms of service. The end result will almost certainly be less user content on the wider web and an ever-growing list of rules governing what we can say to and share with each other online.
The DOJ proposal doesn't just strike at the ability to filter out bad content, however. It also takes away certain Section 230 protections if illegal content does make it through moderation filters, or if a company is deemed to "promote, solicit, or facilitate" content or activity that is determined to be illegal. You can find the DOJ's full proposed changes—which are numerous and beyond the scope of this post—here.