The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Today's Facebook Oversight Board decision on the Trump ban raises many interesting questions: When should politicians' speech be blocked on the grounds that it "supports" or "legitimizes" riots (whether right-wing or left-wing)? When can Facebook ban claims of electoral fraud on the grounds that they are "unfounded," and when should it conclude that the debate about the often uncertain events in a recent election is ongoing and legitimate? (Say what you will about bans on Holocaust denial, but those came after a consensus built on decades of comprehensive scholarly study of the subject.) What sort of further transparency should Facebook provide for its decisions? How long should Trump be banned from Facebook?
But for this post, I want to set those questions aside, because the question at the heart of the matter is much more far-reaching: It is,
What rules should govern how Americans communicate with other Americans, including on Facebook, which "has become a virtually indispensable medium for political discourse, and especially so in election periods"?
There are at least three possible answers:
1. American free speech principles, coupled with the judgment of American speakers and American listeners.
2. An immensely rich and powerful corporation, and its immensely rich and powerful owners and managers.
3. International law principles, as made and enforced by an international group of decisionmakers (on which Americans are represented but are understandably not in control).
There is a lot to be said for each of the possible answers. Obviously, many libertarians and other supporters of private property rights would support option 2: By default, property owners are entitled to control what is said and done on and with their property. That of course has historically been so for many other powerful entities, such as newspaper, magazine, and book publishers. It isn't always so: For instance, phone companies (land-line or cell) and delivery services (such as UPS and FedEx) are "common carriers," which can't just cancel someone's phone number or delivery service because they think that person's or group's viewpoints are dangerous or evil.
But many libertarians would oppose such common-carrier rules as well. And even people who think private-property principles can be overcome in unusual situations might view Big Tech power as not being perilous enough to justify restrictions on private property here. (Plus there are few problems so bad that they can't be made even worse by badly crafted or implemented government regulations.)
Likewise, many people might support option 3—the option that the Facebook Oversight Board seemed to take (you can start by counting the number of references to international norms in their decision). They might think that international law principles related to free speech are better than American free speech principles. Or they might think that it's just better to have uniform principles, especially for decisions of a multinational corporation. Or they might think that American free speech principles are better when it comes to threat of jail or fines, but that international free speech principles are better when it comes to decisions about whom to eject from social media platforms.
I have to say, though, that I'm tentatively inclined towards option 1 (and of course towards similar options for other democracies' laws as to speech among their residents). That's so for speech by a sitting President to his citizens. It's so for speech by other officials, who may have fewer other ways than Trump did of reaching the public. It's so for speech by candidates for office, and activists, and ordinary citizens. This ability to effectively communicate with each other is vital to American democracy, not just to our individual rights.
I was just on a radio program on this with an expert from the Brennan Center, who noted that Facebook couldn't "totally suppress" people's speech. Of course that's right—it can only partially suppress their speech. But such partial suppression can still do much to shape the course of American political debate. Should it be able to?
Of course, historically newspapers and other publishers have had substantial power over public debate, and I think they have the First Amendment right to choose what to print in their pages. But I think (more on that in later posts) that, when it comes to their decisions about what to host, Facebook, Twitter, and Google's YouTube should be treated more like phone companies, which don't have such a right.
As Facebook's own Oversight Board mentioned, and as I quoted above, Facebook "has become a virtually indispensable medium for political discourse, and especially so in election periods." It is in effect close to a monopoly in its immensely important market niche—which is especially important in an environment of political competition, where even small restraints on speech can swing elections and other public decisions. Facebook shouldn't have the power to control political debate, any more than phone companies should.
That's particularly clear when you see the minority views mentioned in the Oversight Board opinion. (We don't know how large the minority was, but my guess is that it was repeatedly mentioned in the opinion because it wasn't just one or two people—and a minority today can become a majority soon enough.) Here is the passage; the citations are to international legal rules (emphases and paragraph breaks added):
For the minority of the Board, while a suspension of an extended duration or permanent disablement could be justified on the basis of the January 6 events alone, the proportionality analysis should also be informed by Mr. Trump's use of Facebook's platforms prior to the November 2020 presidential election.
In particular, the minority noted the May 28, 2020, post "when the looting starts, the shooting starts," made in the context of protests for racial justice, as well as multiple posts referencing the "China Virus." Facebook has made commitments to respect the right to non-discrimination (Article 2, para. 1 ICCPR, Article 2 ICERD) and, in line with the requirements for restrictions on the right to freedom of expression (Article 19, para. 3 ICCPR), to prevent the use of its platforms for advocacy of racial or national hatred constituting incitement to hostility, discrimination or violence (Article 20 ICCPR, Article 4 ICERD). The frequency, quantity and extent of harmful communications should inform the Rabat incitement analysis (Rabat Plan of Action, para. 29), in particular the factors on context and intent.
For the minority, this broader analysis would be crucial to inform Facebook's assessment of a proportionate penalty on January 7, which should serve as both a deterrent to other political leaders and, where appropriate, an opportunity of rehabilitation. [The majority also noted that "[p]eriods of suspension should be long enough to deter misconduct."]
I'm all for deterrents to political leaders. I just think that the deterrent should be threat of voters throwing the bums out—not the viewpoint-based judgments of a massive corporation applying international law norms.
I don't want American candidates (or activists or citizens) to be pressured to adjust their messages to foreign legal standards, or to the judgment of Facebook employees or executives or oversight board members. And I don't want people who want to operate on an equal footing with their political opponents to feel the need to "rehabilitate" themselves, in international lawyers' eyes or a massive corporation's eyes.