Censorship

U.K. 'Celebrates' Its New Freedom From the E.U. by Pushing Massive Online Censorship Orders

Government wants to force social media platforms to accept a "duty of care" to protect users from whatever officials deem harmful.


A new policy will require online platforms to eliminate content the U.K. government decides is harmful, or face massive fines and possibly even criminal sanctions.

Lest anybody think Brexit was truly about freeing the United Kingdom from burdensome European regulations, this week the government announced that it was putting its Office of Communications (known as Ofcom) in charge of developing a massive regulatory framework to require more aggressive moderation of content and communications online.

The United Kingdom wants to force online platforms like Facebook and YouTube to accept a "duty of care" for their users' safety, a legal obligation that requires the companies to protect their users from certain harms.

Ofcom and the U.K. government are selling this new regulatory regime as a way to both protect children from sex trafficking and abuse and stop terrorist organizations from recruiting online. But what they're actually proposing is a much broader plan to shape social media communications to their liking. The new policy would also force platforms to tackle online "bullying," prevent their users from encouraging suicide, and prevent the spread of what the U.K. government deems "disinformation."

The Online Harms White Paper released last summer is being used as a framework. Here's what it suggests:

Companies must fulfill their new legal duties. The regulator will set out how to do this in codes of practice. The codes will outline the systems, procedures, technologies and investment, including in staffing, training and support of human moderators, that companies need to adopt to help demonstrate that they have fulfilled their duty of care to their users.

Companies will still need to be compliant with the overarching duty of care even where a specific code does not exist, for example assessing and responding to the risk associated with emerging harms or technology.

There are so many potential problems with this plan that it's hard to pick a point of entry. The first and most obvious issue is that there is no global consensus as to what constitutes a "harm," especially when it comes to speech. The U.K. has hate speech laws that wouldn't fly here in the United States—and a sizable chunk of the white paper discusses what sort of "duty of care" will be involved in monitoring and removing hate speech online.

As part of its so-called "war on obesity," the U.K. has implemented policies that censor advertisements of what it deems "junk food" (often inaccurately) in the media. This effort is referenced in the white paper in a section discussing online advertising and ethical practices. Fortunately, nothing listed in the "duty of care" demands indicates Facebook will have to start censoring pictures of your home-baked cookies, but the inclusion of junk food advertisement bans in a paper about "preventing harms" certainly raises the specter of similar restrictions on user content down the line.

The paper states that regulatory policies will be based on empirical data, and yet it also cites public polling results and contains fact-free, irresponsible, fear-mongering statements like this one: "Sexual exploitation can happen to any young person—whatever their background, age, gender, race or sexuality or wherever they live." While this is true on the most abstract of levels, the paper's reluctance to narrow the focus of a sexual exploitation policy casts doubt on the entire process: Are white, wealthy, legal-age males facing the same risk of sexual exploitation as every other demographic? Of course not. Saying that it can happen to everyone is not necessarily untrue, but it is a uselessly broad statement. We don't need more panicked Facebook posts from moms who think various strangers at the grocery store are plotting to snatch their children.

The paper also considers justifications for social media platforms' duty of care when it comes to stopping online advertisements for opioids, not just because people may be deceived by those ads, but also because there could be second- and third-order effects for first responders who "will continue to be exposed to potentially harmful environments" when they respond to emergency calls. Leaving aside the fact that first responders are not actually at risk of accidentally consuming illicit opioids when responding to emergency calls, is censoring social media really the best route to reduce what little exposure they do face?

Then there are the self-serving goals and rent-seeking of media outlets. The U.K. government wants social media platforms to play more of a role in moderating and fighting the spread of "disinformation," particularly as it involves the government and elections. The white paper declares:

Companies will need to take proportionate and proactive measures to help users understand the nature and reliability of the information they are receiving, to minimise the spread of misleading and harmful disinformation and to increase the accessibility of trustworthy and varied news content.

This includes potentially requiring that platforms partner with independent fact-checking organizations and "promoting authoritative news sources." Traditional media outlets in the U.K., like The Telegraph, have endorsed new laws controlling social media platform content, and it's easy to see why. With the government's help, these outlets can require social media platforms to promote their stories and suppress or remove those of alternative media outlets and independent journalists, all under the guise of fighting "disinformation" and protecting social media users. This is pure protectionism.

We've seen what happens in other countries when the government decides to play a role in declaring what is and isn't "fake news." In Singapore, the government is attempting to force Facebook to censor its critics. Part of the "duty of care" framework involves protecting public figures (like politicians) from online harassment, so watch what you say about woodchippers.

The U.K.-based Index on Censorship warns against the potential harms of implementing regulations based on this white paper. The vagueness of the proposals, combined with the threat of fines (and even jail time), all but guarantees that platforms will feel pressure to err on the side of censoring content. The Index on Censorship concludes that even though the white paper invokes "freedom of expression," it shows no intent to actually protect it:

The white paper gives far too little attention to freedom of expression. While the proposed regulator would have a specific legal obligation to pay due regard to innovation, when it comes to freedom of expression the paper only refers to an obligation to protect users' rights "particularly rights to privacy and freedom of expression."

It is surprising and disappointing that the white paper, which sets out measures with far-reaching potential to interfere with freedom of expression, does not contain a strong and unambiguous commitment to safeguarding this right.

Whatever good this "duty of care" policy might possibly do for vulnerable social media users, it poses an even bigger threat to their free speech.