The Volokh Conspiracy
The Facebook "Oversight" Board
You may have seen stories about the operation of Facebook's new and innovative "Supreme Court." Don't believe 'em.
In April 2018, Facebook CEO Mark Zuckerberg, facing intense public pressure to do something about the proliferation of false or misleading information appearing on the platform, said that he
"could imagine some sort of structure, almost like a Supreme Court, that is made up of independent folks who don't work for Facebook, who ultimately make the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world."
Later that year, he announced that Facebook would create an "Independent Governance and Oversight" committee by the close of 2019 "to advise on content policy and listen to user appeals on content decisions."
That body - now called the "Facebook Oversight Board" - recently began operation. In December of last year, the Board revealed the details of the first six cases it would be deciding, and it has recently issued its decisions in those cases. The Board has also agreed, at Facebook's request, to decide whether ex-President Trump's Facebook and Instagram accounts were improperly terminated in the aftermath of the January 6 riot at the Capitol.
This experiment raises some complicated issues about governance and decision-making, and I will try to be as concise as I can. I have two points to make. First, that it's a sham. The Board is not what it purports to be; it cannot and will not exercise anything that can remotely be described as "oversight" over Facebook's content management system. And second, that even were it not a sham – even if the Board were actually empowered to provide real oversight and direction to that system – that doesn't strike me as any sort of improvement over the current state of affairs; there are many fundamental policy choices embedded in that system that affect the fortunes of a large proportion of the individuals, corporations, and governments on the planet, and I fail to see why a hand-picked Council of the Wise is the proper repository of the power to make those choices.
Facebook's Content Management System
To begin with, here is a simplified outline of Facebook's existing content management system.[1] Facebook's speech rules* identify a number of categories of speech that are not permitted on the platform - e.g., Incitement of Violence, Adult Nudity and Sexual Activity, Bullying and Harassment, Hate Speech, Terrorist Propaganda, Violent and Graphic Content, Cruel and Insensitive Speech – along with definitions of, and some explanatory commentary about, each category.
* Facebook calls these rules its "Community Standards." I dislike the term; I prefer to call them Facebook's "Terms of Service" (ToS). "Community Standards" connotes, in the law and in ordinary speech, that the rules derive in some fashion from the community (however that community might be defined). Facebook's speech rules do not, however, derive from the Facebook community; they are, like the ordinary website Terms of Service with which all Internet users are familiar, imposed on the Facebook community unilaterally by the site operator.
Facebook, as a private entity, has every right to impose on users whatever speech rules, derived from whatever source, it chooses (subject, of course, to the usual and generally-applicable rules and regulations regarding corporate conduct). It is also free to call those speech-rules anything it chooses to call them. But we don't have to follow suit when it is misleading to do so.
Facebook relies heavily, and increasingly, on ex ante enforcement of these rules – removing (or placing warnings on) material deemed to be in violation of the ToS before that material is transmitted and displayed across the platform. As you would expect, given the astonishing volume of content involved – tens of billions of messages are posted every day on Facebook and Instagram - ex ante enforcement is, and must be, entirely automated and "algorithmic."
Facebook also enforces these rules ex post, removing (or placing warnings on) posts after they have been disseminated across the platform. Thousands of Facebook "content moderators" patrol the system, proactively seeking out posted content that violates the ToS. In addition, Facebook users can (and do, millions of times every month) "flag" content that they believe violates the ToS; this user-flagged content goes into a queue that is reviewed by a Facebook content moderator, who decides whether or not the content violates the ToS and should be removed from the system.
Since 2018 Facebook has provided an internal appeals process for users whose content has been removed from the platform (as well as for users who flagged content that Facebook did not remove). A content moderator then makes a final decision as to whether the content violated the ToS (and should remain off the system) or not (and should be restored).
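To make the shape of that pipeline a little more concrete, here is a minimal sketch, in Python, of the flow just described: automated ex ante screening, ex post moderator review triggered by patrols or user flags, and a single level of internal appeal. It is purely illustrative: the function names, the "banned phrase" rule, and every other detail are stand-ins of my own invention, not Facebook's actual systems.

    # A purely illustrative sketch of the enforcement flow described above.
    # All names, rules, and data structures are hypothetical; nothing here
    # reflects Facebook's actual code, classifiers, or policies.

    BANNED_PHRASES = {"hypothetical banned phrase"}  # stand-in for the ToS categories

    def violates_tos(text: str) -> bool:
        # Stand-in for both the automated classifiers and human moderator judgment.
        return any(phrase in text.lower() for phrase in BANNED_PHRASES)

    def ex_ante_screen(text: str) -> str:
        # Automated screening before the post is ever displayed.
        return "blocked" if violates_tos(text) else "published"

    def ex_post_review(text: str, user_flagged: bool) -> str:
        # Moderator review of content already on the platform, triggered
        # either by proactive patrols or by a user flag.
        if user_flagged or violates_tos(text):
            return "removed" if violates_tos(text) else "left up"
        return "left up"

    def internal_appeal(text: str) -> str:
        # The internal appeal available since 2018: a moderator makes the
        # final call on whether removed content stays down or is restored.
        return "stays removed" if violates_tos(text) else "restored"

    if __name__ == "__main__":
        post = "this post contains a hypothetical banned phrase"
        print(ex_ante_screen(post))   # "blocked"
        print(internal_appeal(post))  # "stays removed"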
The scale at which this system operates is astonishing. In Q2+Q3 of 2019, Facebook removed 4.3 billion individual pieces of content - about 300 every second of every day. Users appealed 40 million of those removals (about 1% of the total); 10 million of those appeals were successful in cancelling the removal and restoring the content involved.
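For anyone who wants to check those back-of-the-envelope figures, the arithmetic is simple; the inputs below are just the numbers reported in the paragraph above, and the percentages fall out directly.

    # Rough check of the Q2+Q3 2019 figures cited above.
    removals = 4.3e9                    # pieces of content removed
    seconds = 183 * 24 * 60 * 60        # roughly two quarters, in seconds
    appeals = 40e6                      # removals appealed by users
    restored = 10e6                     # appeals that succeeded

    print(round(removals / seconds))            # ~272 per second (the "about 300" above, loosely rounded)
    print(round(appeals / removals * 100, 1))   # ~0.9% of removals were appealed
    print(round(restored / appeals * 100))      # 25% of appeals succeeded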
The Board's "Oversight" Role
The Board currently consists of 19 members; it is "likely," in the words of the Board's Charter, to expand to 40 members. Facebook initially chose four Board Co-Chairs, and the remaining members were chosen by the co-chairs in consultation with Facebook. Going forward, additional members will be nominated by a Board Committee and approved by the Board. Board operations are controlled by a Trust, set up and funded by (but otherwise independent of) Facebook.
The Board is authorized to review individual cases brought by users challenging a Facebook "enforcement action" - either Facebook's removal of content, or its failure to remove user-flagged content. Cases can be heard only after they have completed Facebook's internal appeals process. Facebook also may propose cases to the Board for review. The Board has complete discretion to decide which cases it will review (except that in "exceptional circumstances," Facebook may require the Board to give a case "expedited review" for decision within 30 days).
The Charter sets forth the Board's job, with respect to all cases properly brought before it: to "interpret Facebook's Community Standards (sic) and other relevant policies in light of Facebook's articulated values" and to "instruct Facebook to allow or remove content [or] to uphold or reverse a designation that led to an enforcement outcome." The Board may also "provide policy guidance, specific to a case decision or upon Facebook's request, on Facebook's content policies."
The Charter is clear: "The board will have no authority or powers beyond those expressly defined by this charter."
The Board's decisions will, we are told, be "binding on Facebook."[2] That is literally true, but only in the most limited way imaginable. Facebook has indeed promised to implement each individual Board decision.[3] That is, if the Board finds, in any individual case, that Facebook improperly removed content posted by User X, Facebook will restore that content to User X's account. Similarly, if the Board finds that Facebook improperly allowed certain content posted by User X to remain on the platform, Facebook will remove that content from User X's account.
The Board's decisions are not binding on Facebook beyond the confines of that one case. Facebook has no obligation to treat similar – or even identical – content, now or in the future, in a manner consistent with the Board's decisions.[4] The Board's interpretation of Facebook's content policies controls only the individual case before it; Facebook certainly may, if it chooses to do so, implement any Board interpretation more generally across the platform, but it is not "bound" in any way to do so.[5]
And while the Board's Charter declares that "Board decisions will have precedential value," it turns out that only means that the Board will be bound by its prior precedent - not Facebook. Board decisions will have no precedential value as far as Facebook is concerned (unless Facebook itself decides to give them such); Facebook is not "bound" to behave in accordance with any Board decision in any case other than the single case the Board decided.[6]
Consider one of the cases the Board recently decided (2020-007-FB-FBR).
"In late October 2020, a Facebook user posted in a public group described as a forum for Indian Muslims. The post contained a meme featuring an image from the Turkish television show "Dirilis: Ertugrul" depicting one of the show's characters in leather armor holding a sheathed sword. The meme had a text overlay in Hindi. Facebook's translation of the text into English reads: "if the tongue of the kafir starts against the Prophet, then the sword should be taken out of the sheath." The post also included hashtags referring to President Emmanuel Macron of France as the devil and calling for the boycott of French products. . . . Facebook removed the post under its Violence and Incitement Community Standard."
Whatever the Board decided, its decision would "bind" Facebook only as applied to this user's posting. Had it agreed that the post fell under the "Violence and Incitement" rule, Facebook would have had no obligation to remove similar or identical content from other users' accounts. As it happens, the Board determined that the posting does not constitute "Violence [or] Incitement" within the meaning of Facebook's ToS. Facebook is obligated, therefore, to restore the posting to that user's account. It is not "bound" by the Board's decision to restore any of the 50, or 50 million, copies of the identical content that may have been deleted from other users' accounts, nor need it refrain from continuing to delete the identical content from other users' accounts in the future. Nor is Facebook "bound" to incorporate the Board's interpretation of "Violence and Incitement" into its own reading of that term, or into its policies and procedures for implementing and enforcing its ToS.
Facebook will take 20 or 30 billion enforcement actions in 2021. The Board has the power – but only the power – to decide what happens in an infinitesimal handful of them (50? 100? 200?). With respect to everything else happening in this immense ocean of content, Facebook can continue to do whatever it wants to do regardless of what the Board decides.[7]
This is not "oversight" by any stretch of the imagination - "undersight" would be more like it. It is no closer to being an "Oversight Board" than Facebook's Community Standards are to being the "community's standards." To the extent that calling it an "Oversight Board" makes people think it is performing actual "oversight," it is a sham.
* * * * * * * * * *
The stakes here are very high, and the problems involved are very challenging. Facebook's speech rules have profound effects around the world – on which voices get heard and which do not, which businesses succeed and which fail, and which governments (or anti-government insurgents) rise or fall.
For the reasons outlined above (and others[8]), I very much doubt that the Oversight Board will have any appreciable impact, for good or for ill, on either the content of Facebook's speech rules or the manner in which those rules are interpreted, implemented, and enforced. A nothingburger, in the end – harmless, but ineffective.
But I could be wrong about that; as the saying goes, nothing is harder to predict than the future. Notwithstanding the absence of any obligation to do so, perhaps Facebook will choose, given its own priorities and goals, to implement specific Board decisions or policy advisories more generally across the platform; indeed, it appears that something like that has recently happened in connection with the dissemination of Covid-19 misinformation.[9]
The question, however, remains: Would it be any better if it weren't a sham? If the Board actually did oversee Facebook's speech-management system? If its decisions actually were binding on Facebook?
I am skeptical about that as well. The members of the Oversight Board are all, I am quite sure, reasonable, intelligent, and thoughtful people. But what sense does it make to say: they get to make the rules that will affect all 2.5 billion Facebook users? More to the point: In what way would giving them that power be an improvement over the current situation, in which Facebook employees, under the direction of the Facebook CEO and its Board of Directors, are in charge? Why does transferring decision-making power from one hand-picked group (of Facebook employees) to a different hand-picked group (the Oversight Board) solve the problem Facebook is trying to solve? What, exactly, is the problem Facebook is trying to solve?
The problem that Facebook is – or should be - trying to solve is the one Mark Zuckerberg himself identified: Who should make "the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world?" Who, indeed?
This is not, in my view, the sort of problem susceptible to the "expertise" that a select group of wise men and women can bring to bear, because I cannot quite see how they, any more than Facebook employees, have an inside track on either the "social norms and values" of the Facebook community or the appropriate placement of the line between speech that is "acceptable" in that community and speech that is not.
Here's an idea: instead of (a) pretending to have "community standards," (b) choosing the Forty Smartest People in the World, and (c) pretending to give them "oversight" over Facebook's interpretation and enforcement of those standards, why not give – or at least try to give – the "community" that Facebook talks so much about a real voice in that process? After all, that's our preferred method of dealing with problems involving articulating and enforcing speech-rules in the real world: giving "the community" the final say, using the many institutions and instrumentalities of democratic self-governance – elected representatives, juries, plebiscites, vetoes, legislative confirmation of executive appointments, and all the rest. In the real world, we make the rules, and we control, ultimately, what they say and how they are enforced.
Those real-world institutions through which we exercise democratic self-governance, of course, have many imperfections, and in any event they surely need to be re-thought and re-modeled if they are to operate effectively in a virtual space at truly global scale. That is, to be sure, an enormously challenging undertaking, with difficult questions at every turn; having tried my hand at some of them[10], I do not minimize the complexity of the problems or the difficulties of the undertaking: What would a Facebook representative assembly look like? Or a Facebook "speech jury"? How many members should those bodies have? How should members be chosen? How can we get users to participate? How should differences of language be handled?
Progress on this front can only be made if Facebook, or one of the other global platforms, takes these questions seriously and begins the hard work of confronting and resolving them. It does not appear - yet - to have done so. That is unfortunate, for two reasons. First, if Facebook is hoping to use this initiative to persuade the governments of the world to defer to its internal rule-making processes, it should try somewhat harder to design a system more deserving of that deference by virtue of its greater legitimacy. Second, and more importantly, this work needs to begin somewhere, at some point, by someone, or we will find ourselves living an increasingly substantial portion of our lives in online spaces that are governed in a manner we wouldn't tolerate for ten seconds in the real world.
* * * * * * * * * *
NOTES
[1] Much of this discussion is condensed from the excellent treatment in Klonick, "The Facebook Oversight Board: Creating an Independent Institution to Adjudicate Online Free Expression," 129 Yale L.J. 2418 (2020).
[2] Charter, Art. 4 ("The board's resolution of each case will be binding and Facebook will implement it promptly, unless implementation of a resolution could violate the law"). See Bylaws, Sec. 2.3 (same).
[3] Bylaws, Sec. 2.3.1 ("Facebook will implement board decisions to allow or remove the content properly brought to it for review within seven (7) days of the release of the board's decision on how to action the content").
[4] The Charter states that "in instances where Facebook identifies that identical content with parallel context — which the board has already decided upon — remains on Facebook, it will take action by analyzing whether it is technically and operationally feasible to apply the board's decision to that content as well." Art. 4. That is, Facebook has no obligation to treat other content, including "identical content with parallel context," as the Board may have directed in any particular case; its only obligation is to "analyze" the "feasibility" of doing so.
[5] And with respect to any "policy guidance" the Board may provide, the Bylaws expressly declare that such guidance is merely a "recommendation" that is not binding on Facebook. Bylaws, Sec. 2.3. With respect to any such recommendation, Facebook's only obligation is to "take further action by analyzing the operational procedures required to implement the guidance, considering it in the formal policy development process of Facebook, and transparently communicating about actions taken as a result." That is, Facebook will think about it, and implement it if it thinks it is a good idea.
[6] Imagine the kind of "oversight" that the US Supreme Court could exercise over the interpretation and enforcement of our laws if it operated like the Facebook Oversight Board, i.e., if its decisions in individual cases were binding on state and federal executive branches only in connection with the specific dispute being heard. So when the Court held, say, that the Social Security Administration was violating the Equal Protection Clause when it withheld Stephen Wiesenfeld's death benefit after the death of his wife, that would be "binding" on the SSA only in connection with Mr. Wiesenfeld's entitlement; he would get the death benefit, but the SSA could continue to deny it to all other identically situated persons.
[7] Imagine that every one of the billions of postings on Facebook each day was a single grain of sand, and that the sand was used by Facebook for building an immense pyramid. The extent of the Board's "oversight" would consist of deciding whether a few dozen grains get to be included in the pyramid or not. If the Board has anything of a more general nature to say about the construction of the pyramid – if it were to decide, say, that grains of a particular hardness or size or color should not be used - Facebook will listen to what it says, and implement (or not) any changes that it thinks are (or are not) desirable. I very much doubt that we would describe the Board as exercising "oversight" over the construction process.
[8] Membership on the Board is a part-time job; precisely because members are all quite distinguished in their fields, they have many other calls on their time and attention. They all surely recognize that it is a part-time job in an institution that has, structurally, no real power to effect change in the system. I am skeptical that they have the requisite incentives to do the hard work required to forge coherent and persuasive Board positions on any of the challenging questions they might face – how to recognize inappropriate hate speech, say, or bullying – sufficient to persuade the Facebook officers and employees who do have that power to change the way they manage the Facebook speech ecosystem. Moreover, because the Board's decisions will be rendered by five-person panels of shifting composition, it will be especially difficult for the Board to develop any kind of "institutional voice" that might help to give its pronouncements persuasive force and power.
[9] See "Facebook says it plans to remove posts with false vaccine claims"
[10] See our proposal for a revocable proxy mechanism for a representative assembly here.