The Facebook "Oversight" Board
You may have seen stories about the operation of Facebook's new and innovative "Supreme Court." Don't believe 'em.
In April 2018, Facebook CEO Mark Zuckerberg, facing intense public pressure to do something about the proliferation of false or misleading information appearing on the platform, said that he
"could imagine some sort of structure, almost like a Supreme Court, that is made up of independent folks who don't work for Facebook, who ultimately make the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world."
Later that year, he announced that Facebook would create an "Independent Governance and Oversight" committee by the close of 2019 "to advise on content policy and listen to user appeals on content decisions."
That body - now called the "Facebook Oversight Board" - recently began operation. In December of last year, the Board revealed the details of the first six cases it would be deciding, and it has recently issued its decisions in those cases. The Board has also agreed, at Facebook's request, to decide whether ex-President Trump's Facebook and Instagram accounts were improperly suspended in the aftermath of the January 6 riot at the Capitol.
This experiment raises some complicated issues about governance and decision-making, and I will try to be as concise as I can. I have two points to make. First, that it's a sham. The Board is not what it purports to be; it cannot and will not exercise anything that can remotely be described as "oversight" over Facebook's content management system. And second, that even were it not a sham – even if the Board were actually empowered to provide real oversight and direction to that system – that would not strike me as any sort of improvement on the current state of affairs; there are many fundamental policy choices embedded in that system which affect the fortunes of a large proportion of the individuals, corporations, and governments on the planet, and I fail to see why a hand-picked Council of the Wise is the proper repository of the power to make those choices.
Facebook's Content Management System
To begin with, here is a simplified outline of Facebook's existing content management system.[1] Facebook's speech rules* identify a number of categories of speech that are not permitted on the platform - e.g., Incitement of Violence, Adult Nudity and Sexual Activity, Bullying and Harassment, Hate Speech, Terrorist Propaganda, Violent and Graphic Content, Cruel and Insensitive Speech – along with definitions of, and some explanatory commentary about, each category.
* Facebook calls these rules its "Community Standards." I dislike the term. I prefer to call it Facebook's "Terms of Service" (ToS). "Community Standards" connotes, in the law and in ordinary speech, that the rules derive in some fashion from the community (however that community might be defined). Facebook's speech rules do not, however, derive from the Facebook community; they are, like ordinary website Terms of Service with which all Internet users are familiar, imposed on the Facebook community unilaterally by the site operator.
Facebook, as a private entity, has every right to impose on users whatever speech rules, derived from whatever source, it chooses (subject, of course, to the usual and generally-applicable rules and regulations regarding corporate conduct). It is also free to call those speech-rules anything it chooses to call them. But we don't have to follow suit when it is misleading to do so.
Facebook relies ever more heavily on ex ante enforcement of these rules – removing (or placing warnings on) material deemed to be in violation of the ToS before that material is transmitted and displayed across the platform. As you would expect, given the astonishing volume of content involved – tens of billions of messages are posted every day on Facebook and Instagram - ex ante enforcement is, and must be, entirely automated and "algorithmic."
Facebook also enforces these rules ex post, removing (or placing warnings on) posts after they have been disseminated across the platform. Thousands of Facebook "content moderators" patrol the system, proactively seeking out posted content that violates the ToS. In addition, Facebook users can (and do, millions of times every month) "flag" content that they believe violates the ToS; this user-flagged content goes into a queue that is reviewed by a Facebook content moderator, who decides whether or not the content violates the ToS and should be removed from the system.
Since 2018 Facebook has provided an internal appeals process for users whose content has been removed from the platform (as well as for users who flagged content that Facebook did not remove). A content moderator then makes a final decision as to whether the content violated the ToS (and should remain off the system) or not (and should be restored).
The scale at which this system operates is astonishing. In Q2+Q3 of 2019, Facebook removed 4.3 billion individual pieces of content - about 300 every second of every day. Users appealed 40 million of those removals (about 1% of the total); 10 million of those appeals were successful in cancelling the removal and restoring the content involved.
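For readers who think better in code, here is a minimal sketch of the three enforcement stages just described – automated ex ante screening, ex post review of flagged content, and the internal appeal. It is purely illustrative: the rules, names, and data structures below are invented for this example and say nothing about how Facebook's actual systems are built.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Verdict(Enum):
    ALLOW = auto()
    REMOVE = auto()


@dataclass
class Post:
    post_id: str
    text: str
    removed: bool = False


def automated_filter(post: Post) -> Verdict:
    """Ex ante screening: a stand-in for the automated classifiers that
    check content against the ToS categories before it is displayed."""
    banned_phrases = ["terrorist propaganda"]  # hypothetical rule set
    if any(phrase in post.text.lower() for phrase in banned_phrases):
        return Verdict.REMOVE
    return Verdict.ALLOW


def moderator_review(post: Post) -> Verdict:
    """Ex post review: a stand-in for a human moderator's judgment call.
    Here we simply reuse the automated rule; in reality a person decides."""
    return automated_filter(post)


def handle_user_flag(post: Post) -> None:
    """User-flagged content goes into a queue; a moderator decides whether
    it violates the ToS and should come down."""
    if moderator_review(post) is Verdict.REMOVE:
        post.removed = True


def handle_appeal(post: Post) -> None:
    """Internal appeal: a moderator makes the final call on whether the
    content stays off the system or is restored."""
    post.removed = moderator_review(post) is Verdict.REMOVE


if __name__ == "__main__":
    post = Post("p1", "Example post containing terrorist propaganda")
    if automated_filter(post) is Verdict.REMOVE:   # ex ante screening
        post.removed = True
    handle_user_flag(post)                         # ex post, user-flagged review
    if post.removed:
        handle_appeal(post)                        # internal appeal
    print(post)
```

On this (deliberately oversimplified) picture, the Oversight Board enters only after the internal appeal has run its course, and only for the handful of cases it selects.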
The Board's "Oversight" Role
The Board currently consists of 19 members; it is "likely," in the words of the Board's Charter, to expand to 40 members. Facebook initially chose two Board Co-Chairs, and the remaining members were chosen by the co-chairs in consultation with Facebook. Going forward, additional members will be nominated by a Board Committee and approved by the Board. Board operations are controlled by a Trust, set up and funded by (but otherwise independent of) Facebook.
The Board is authorized to review individual cases brought by users challenging a Facebook "enforcement action" - either Facebook's removal of content, or its failure to remove user-flagged content. Cases can be heard only after they have completed Facebook's internal appeals process. Facebook also may propose cases to the Board for review. The Board has complete discretion to decide which cases it will review (except that in "exceptional circumstances," Facebook may require the Board to give a case "expedited review" for decision within 30 days).
The Charter sets forth the Board's job, with respect to all cases properly brought before it: to "interpret Facebook's Community Standards (sic) and other relevant policies in light of Facebook's articulated values" and to "instruct Facebook to allow or remove content [or] to uphold or reverse a designation that led to an enforcement outcome." The Board may also "provide policy guidance, specific to a case decision or upon Facebook's request, on Facebook's content policies."
The Charter is clear: "The board will have no authority or powers beyond those expressly defined by this charter."
The Board's decisions will, we are told, be "binding on Facebook."[2] That is literally true, but only in the most limited way imaginable. Facebook has indeed promised to implement each individual Board decision.[3] That is, if the Board finds, in any individual case, that Facebook improperly removed content posted by User X, Facebook will restore that content to User X's account. Similarly, if the Board finds that Facebook improperly allowed certain content posted by User X to remain on the platform, Facebook will remove that content from User X's account.
The Board's decisions are not binding on Facebook beyond the confines of that one case. Facebook has no obligation to treat similar – or even identical – content, now or in the future, in a manner consistent with the Board's decisions.[4] The Board's interpretation of Facebook's content policies controls only the individual case before it; Facebook certainly may, if it chooses to do so, implement any Board interpretation more generally across the platform, but it is not "bound" in any way to do so.[5]
And while the Board's Charter declares that "Board decisions will have precedential value," it turns out that this means only that the Board will be bound by its own prior decisions - not that Facebook will be. Board decisions will have no precedential value as far as Facebook is concerned (unless Facebook itself decides to give them such); Facebook is not "bound" to behave in accordance with any Board decision in any case other than the single case the Board decided.[6]
Consider one of the cases the Board recently decided (2020-007-FB-FBR).
"In late October 2020, a Facebook user posted in a public group described as a forum for Indian Muslims. The post contained a meme featuring an image from the Turkish television show "Dirilis: Ertugrul" depicting one of the show's characters in leather armor holding a sheathed sword. The meme had a text overlay in Hindi. Facebook's translation of the text into English reads: "if the tongue of the kafir starts against the Prophet, then the sword should be taken out of the sheath." The post also included hashtags referring to President Emmanuel Macron of France as the devil and calling for the boycott of French products. . . . Facebook removed the post under its Violence and Incitement Community Standard."
Whatever the Board decided, its decision would "bind" Facebook only as applied to this user's posting. Had the Board agreed that the post fell under the "Violence and Incitement" rule, Facebook would have been under no obligation to remove similar or identical content from other users' accounts. As it happens, the Board determined that the posting does not constitute "Violence [or] Incitement" within the meaning of Facebook's ToS. Facebook is obligated, therefore, to restore the posting to that user's account. It is not "bound" by the Board's decision to restore any of the 50, or 50 million, copies of the identical content that may have been deleted from other users' accounts, nor need it refrain from continuing to delete the identical content from other users' accounts in the future. Nor is Facebook "bound" to incorporate the Board's interpretation of "Violence and Incitement" into its interpretation of the term, or into its policies and procedures for implementing and enforcing its ToS.
Facebook will take 20 or 30 billion enforcement actions in 2021. The Board has the power – but only the power – to decide what happens in an infinitesimal handful of them (50? 100? 200?). With respect to everything else happening in this immense ocean of content, Facebook can continue to do whatever it wants to do regardless of what the Board decides.[7]
This is not " oversight" by any stretch of the imagination - "undersight" would be more like it. It no closer to being an "Oversight Board" than Facebook's Community Standards are to being the "community's standards." To the extent that calling it an "Oversight Board" makes people think it is performing actual "oversight," it is a sham.
* * * * * * * * * *
The stakes here are very high, and the problems involved are very challenging. Facebook's speech rules have profound effects around the world – on which voices get heard and which do not, which businesses succeed and which fail, which governments (or anti-government insurgents) rise or fall, across the globe.
For the reasons outlined above (and others[8]), I very much doubt that the Advisory Board will have any appreciable impact, for good or for ill, on either the content of Facebook's speech rules or the manner in which those rules are interpreted, implemented, and enforced. A nothingburger, in the end – harmless, but ineffective.
But I could be wrong about that; as the saying goes, nothing is harder to predict than the future. Notwithstanding the absence of any obligation to do so, perhaps Facebook will choose, given its own priorities and goals, to implement specific Board decisions or policy advisories more generally across the platform; indeed, it appears that something like that has recently happened in connection with the dissemination of Covid-19 misinformation.[9]
The question, however, remains: Would it be any better if it weren't a sham? If the Board actually did oversee Facebook's speech-management system? If its decisions actually were binding on Facebook?
I am skeptical about that as well. The members of the Oversight Board are all, I am quite sure, reasonable, intelligent, and thoughtful people. But what sense does it make to say: they get to make the rules that will affect all 2.5 billion Facebook users? More to the point: In what way would giving them that power be an improvement over the current situation, in which Facebook employees, under the direction of the Facebook CEO and its Board of Directors, are in charge? Why does transferring decision-making power from one hand-picked group (of Facebook employees) to a different hand-picked group (the Oversight Board) solve the problem Facebook is trying to solve? What, exactly, is the problem Facebook is trying to solve?
The problem that Facebook is – or should be - trying to solve is the one Mark Zuckerberg himself identified: Who should make "the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world?" Who, indeed?
This is not, in my view, the sort of problem susceptible to the "expertise" that a select group of wise men and women brings to bear, because I cannot quite see how they, any more than Facebook employees, have an inside track on either the "social norms and values" of the Facebook community or the appropriate placement of the line between speech that is "acceptable" in that community and speech that is not.
Here's an idea: instead of (a) pretending to have "community standards," (b) choosing the Forty Smartest People in the World, and (c) pretending to give them "oversight" over Facebook's interpretation and enforcement of those standards, why not give – or at least try to give – the "community" that Facebook talks so much about a real voice in that process? After all, that's our preferred method of dealing with problems involving articulating and enforcing speech-rules in the real world: giving "the community" the final say, using the many institutions and instrumentalities of democratic self-governance – elected representatives, juries, plebiscites, vetoes, legislative confirmation of executive appointments, and all the rest. In the real world, we make the rules, and we control, ultimately, what they say and how they are enforced.
Those real-world institutions through which we exercise democratic self-governance, of course, have many imperfections, and in any event they surely need to be re-thought and re-modeled if they are to operate effectively in a virtual space at truly global scale. That is, to be sure, an enormously challenging undertaking, with difficult questions at every turn; having tried my hand at some of them[10], I do not minimize the complexity of the problems involved: What would a Facebook representative assembly look like? Or a Facebook "speech jury"? How many members should those bodies have? How should members be chosen? How can we get users to participate? And what about the problem of language?
Progress on this front can only be made if Facebook, or one of the other global platforms, takes these questions seriously and begins the hard work of confronting and resolving them. It does not appear - yet - to have done so. That is unfortunate. If Facebook is hoping to use this initiative as a way to persuade the governments of the world to defer to Facebook's internal rule-making processes, perhaps it should try somewhat harder to design a system more deserving of deference by virtue of its greater legitimacy. More importantly, it is unfortunate because this work needs to begin somewhere, at some point, by someone, or we will find ourselves living an increasingly substantial portion of our lives in online spaces that are governed in a manner that we wouldn't tolerate for ten seconds in the real world.
* * * * * * * * * *
NOTES
[1] Much of this discussion is condensed from the excellent treatment in Klonick, "The Facebook Oversight Board: Creating an Independent Institution to Adjudicate Online Free Expression," 129 Yale L.J. 2418 (2020).
[2] Charter, Art. 4 ("The board's resolution of each case will be binding and Facebook will implement it promptly, unless implementation of a resolution could violate the law"). See Bylaws, Sec. 2.3 (same).
[3] Bylaws, Sec. 2.3.1 ("Facebook will implement board decisions to allow or remove the content properly brought to it for review within seven (7) days of the release of the board's decision on how to action the content").
[4] The Charter states that "in instances where Facebook identifies that identical content with parallel context — which the board has already decided upon — remains on Facebook, it will take action by analyzing whether it is technically and operationally feasible to apply the board's decision to that content as well." Art. 4. That is, Facebook has no obligation to treat other content, including "identical content with parallel context," as the Board may have directed in any particular case; its only obligation is to "analyze" the "feasibility" of doing so.
[5] And with respect to any "policy guidance" the Board may provide, the Bylaws expressly declare that those are merely "recommendations" that are not binding on Facebook. Charter, Sec. 2.3. With respect to any such recommendation, Facebook's only obligation is to "take further action by analyzing the operational procedures required to implement the guidance, considering it in the formal policy development process of Facebook, and transparently communicating about actions taken as a result." That is, Facebook will think about it, and implement it if it thinks it is a good idea.
[6] Imagine the kind of "oversight" that the US Supreme Court could exercise over the interpretation and enforcement of our laws if it operated like the Facebook Oversight Board, i.e., if its decisions in individual cases were binding on state and federal executive branches only in connection with the specific dispute being heard. So when the Court held, say, that the Social Security Administration was violating the Equal Protection Clause when it withheld Stephen Wiesenfeld's death benefit after the death of his wife, that would be "binding" on the SSA only in connection with Mr. Wiesenfeld's entitlement; he would get the death benefit, but the SSA could continue to deny it to all other identically situated persons.
[7] Imagine that every one of the billions of postings on Facebook each day was a single grain of sand, and that the sand was used by Facebook for building an immense pyramid. The extent of the Board's "oversight" would consist of deciding whether a few dozen grains get to be included in the pyramid or not. If the Board has anything of a more general nature to say about the construction of the pyramid – if it were to decide, say, that grains of a particular hardness or size or color should not be used - Facebook will listen to what it says, and implement (or not) any changes that it thinks are (or are not) desirable. I very much doubt that we would describe the Board as exercising "oversight" over the construction process.
[8] Membership on the Board is a part-time job; precisely because members are all quite distinguished in their fields, they have many other calls on their time and attention. They all surely recognize that it is a part-time job in an institution that has, structurally, no real power to effect change in the system. I am skeptical that they have the requisite incentives to do the hard work required to forge coherent and persuasive Board positions on any of the challenging questions they might face – how to recognize inappropriate hate speech, say, or bullying – sufficient to persuade Facebook officers and employees who do have that power to change the way they manage the Facebook speech ecosystem. Moreover, because the Advisory Board decisions will be rendered by five-person panels of shifting composition, it will be especially difficult for the Board to develop any kind of "institutional voice" that might help to give its pronouncements persuasive force and power.
[9] See "Facebook says it plans to remove posts with false vaccine claims"
[10] See our proposal for a revocable proxy mechanism for a representative assembly here.
F.O.B. - original use:
Fractional orbit bombardment; 'dropping' nukes from a sub-orbital platform, reducing warning and reaction times.
That, too.
"The Board's decisions are not binding on Facebook beyond the confines of that one case. ...The Board's interpretation of Facebook's content policies controls only the individual case before it."
Isn't that how civil law, as opposed to common law, systems work? Aren't the decisions of Germany's high courts - or France's or Poland's - binding on "only the individual case before it?" It sounds like your problem is with non-common law court systems in general, upon which Facebook's moderation appeals panel appears to be modeled.
They should call it the "Facebook Ministry of Truth."
Imagine caring so much about Facebook to write this many words.
Worst Post Ever.
What do you think about Prof. Volokh's continuing flattery of Parler (and the Volokh Conspiracy's courtship of the right-wing racists, conservative misogynists, and disaffected Republican gay-bashers who flock to Parler)?
Facebook has 2.8 billion users, Bob. You might consider caring about it.
Farcebook wishes to replace Title III courts with its own courts.
Farcebook wishes to replace Title III courts with its own courts. Don't know what Title III courts are but they certainly think they reign over the nation of Australia. Please excuse my agitation but I think this article and the subsequent comments are about how many angels can dance on the head of a pin given what FB is up to.
Interesting in that it describes who, I gather, FB and Zuckerberg think Australia's government and its Ministers should go to, cap in hand, to have critical FB pages restored: its "Independent Governance and Oversight" committee.
Facebook shut them down because they are news pages. Aside from the amazing spectacle of a western organisation shutting down Department of Health Facebook pages during a pandemic and Bureau of Meteorology pages during bushfire/wildfire season, it speaks to how these Tech Oligarchical organisations have to be thought of differently: whatever law about speech and news and publishing has emerged over time, it was not made for what societies, and economies, are confronted with now.
And what has Facebook told us,
“Government Pages should not be impacted by today’s announcement. The actions we’re taking are focused on restricting publishers and people in Australia from sharing or viewing Australian and international news content. As the law does not provide clear guidance on the definition of news content, we have taken a broad definition in order to respect the law as drafted. However, we will reverse any Pages that are inadvertently impacted.”
They are giving us the finger, in fact declaring war on Australia. Inadvertently impacted, arrogant sods. This is much more than arguing about freedom of speech and public squares and who is a publisher and what rights and obligations these organisations have. "The actions we’re taking are focused on restricting publishers and people in Australia from sharing or viewing Australian and international news content." Blimey: we will hurt you as much as we can unless you surrender your sovereignty and do as we require; we are your masters, and we have used the knowledge of all to create a monopoly with which we will force you to your knees.
We have taken a broad definition in order to "respect the law". There is a theory that your Oligarch Tech organisations are populated by psychopaths; case proven.
And your current President prattles on about allies being important to America. Well, do something about this.
Google chose to compromise. Initially, they used their platform to direct us to read a letter – a letter that also told us who had the power but pretended it was for our own good that they would punish us. A surprising ally came to our defense: Microsoft.
I would have posted this with Parler being one of whatever Parler users are supposed to be, but am struggling to get signed up.
Um, like what? We actually respect free speech (at least sometimes) in this country, which means that the government doesn't get to tell a private company what to say.
As for the rest of your hysteria, mentally ill or really stupid politicians in the Australian government passed an insane law that said that Facebook couldn't allow links (links! not even the stories themselves!) to news sites unless it paid the news sites. The law did not specify an exception for links posted by the news sites themselves. So Facebook said, "You told us we had to pay if we wanted to allow the links. We don't want to pay, so we'll take down the links."
And, of course, Facebook is not the Internet. Anyone who wants information from the Department of Health or the Bureau of Meteorology can go to the websites of the Department of Health or Bureau of Meteorology, respectively.
Wait a second, they prohibit Adult nudity and sexual activity, but not child nudity and sexual activity?
Something is not right here.
That falls under the category of illegal content.
"Wait a second, they prohibit Adult nudity and sexual activity, but not child nudity and sexual activity?"
One hopes it means that they prohibit adult nudity and all sexual activity, but not baby pictures and such.
Well, yes, it's intentionally a joke, so that FB can be "overruled" in a handful of high profile cases, and say, "See? See? We have oversight, don't worry about us!"
Yeah, I'm guessing they'll use this "oversight board" for a few embarrassing cases which hit the headlines.
I could always be wrong.
Look, Brett's assuming other people are acting in bad faith! Again! What are the odds?
I don’t know if a Facebook Oversight Board is the best of ideas. But this particular criticism strikes me as missing the mark.
The US Supreme Court can’t possibly oversee every case in every court in the country. But it nonetheless exerts significant influence. It selects cases that it regards as important and exemplary, decides them, and generally leaves it to lower courts to apply its guidance to the myriad cases of the day.
I don’t see why a private “Supreme Court” can’t potentially have similar influence. And if it was able to, then I wouldn’t call the arrangement a lie or a sham.
But how much good could the Supreme Court do if it were the only court in the entire country, and couldn't set binding precedents?
This "oversight board" might, if they were really dedicated (remember, this is a part-time gig for them!), go through a couple hundred cases a year. It isn't a drop in a bucket; it's a drop in the ocean.
And, realistically, most of the cases are going to be ones Facebook sent to the board itself, not cases FB didn't want it hearing that were sent to it by users.
It isn’t. The lower-level employees who make the ordinary decisions are the lower-level courts. I understand they are also the police and the prosecutors. But the question regarding the oversight board’s influence is whether they will follow its guidance, not whether they meet a formal definition of a “court.” If they will, then the oversight board will have influence.
If I'm understanding right what Facebook has said in some of these moderation controversies, the lower-level moderation employees don't even follow Facebook's guidance. And why should they? They don't even work directly for Facebook; they're temps hired by a series of contractors, given seconds to judge each flagged item.
But as you say, they're the police and the prosecutors, and the lower level court, all wrapped up in one and safely anonymous. And well aware that the oversight board doesn't have the bandwidth to check them. So, why shouldn't they abuse their position to make trouble for people with views they don't like? The odds of being overruled are minute.
And, on top of that, there are organized groups that devote time to just looking through random FB content for things they don't like to flag. Some of them have even been empowered by FB to enter private groups and look around for content no member of the group objects to.
Also, in a common law system, judges are precisely the experts on community standards that Professor Post claims can’t exist.
Professor Post may disagree with the whole concept. It indeed implies the existence of an elite who are selected to become judges and who are regarded as somehow more expert than ordinary people. But such approaches to judging have been in existence for hundreds if not thousands of years. Indeed, nearly all US states have a common law system where judges are authorized, to varying degrees, to simply make up new common-law rules that they regard as reflecting the community’s standards.
It’s one thing to disagree with this type of common-law approach by expert judges. It’s quite another to call it a lie and a sham just because you disagree with it.
I, like many, have cut ties with Facebook.
But I think transparency is better than not.
"I, like many, have cut ties with Facebook."
Oh noes, how will they cope.
The number of users has gone up worldwide since 2019.
It is pretty standard corporate governance law that the entity vested with governance of a corporation cannot just delegate it away, at least not completely. Here Facebook is a private company and must answer to its shareholders. Those shareholders ultimately hold the keys, and ought to if the company is publicly traded.
And you can't tell me that if the Facebook "supreme court" finds that a particular page should not be taken down, but, say, a huge advertiser then pulls a large percentage of ad revenue, making the corporation lose huge profits, Facebook is going to just throw up its hands and say "oh well...." – or that that position would be just fine with the shareholders.
Facebook pretentiously describes its hand-picked sock puppet as a "supreme court," but usually these kinds of shams take the form of an "investigation" by a so-called "independent law firm." But no matter what the name, these obvious shams somehow always manage to bring in the result that is most desired by the sponsor's top management. But what's most disturbing about these kinds of shams is that the mainstream media repeatedly goes along with them and pretends that they are staffed by expert and totally objective decision makers. In fact, of course, there's no objectivity whatsoever because the people making the decisions are all on the payroll of one of the parties.
The terms of use are unconscionable and void.
The value of the personal information is half of revenue. That should be returned to the users in an aggregate claim.
I disagree with the author's view of the inherently binding power of legal precedent. When a court rules on a case, it doesn't directly bind non-party elements of the government. Precedent is binding on parties and inferior tribunals. Now when, say, the Ninth Circuit opines on EPA single-source emissions, the EPA will generally follow the ruling in the Second Circuit as well, but not because The Great And Powerful Court has said what the law is. Rather, the costs of regionally distinct implementation of policies, and the costs of following a wood-path in the Second before that circuit signs on to the rule, make national compliance more logical.
When a corporation creates a panel such as this, there are several potential analogues in legal history. Perhaps it's an Inquest, determined to find the facts of the case and rule according to prevailing law. Perhaps it's a Star Chamber to handle questions involving powerful people who would otherwise use their power to corrupt the routine judicial processes. Perhaps it's an old Assize convened by ganging together a carriage-full of judges of different big-city courts to hear claims in distant parts of the world and, as Blackstone says, take as much of the common law with them as they can, given the circumstances.
But say for the sake of argument that the corporation or its officers brought a gaggle of barons of the law onboard to write the law as well as implement it in wildly varying contexts. As they fashion a common and (very important) commonly accessible body of decisions, no sound principle of corporate governance says that they should be able to bind the employees or officers of the corporation in the latter's business decisions. What they can bind, however, are intermediate appellate tribunals. Once the high court has other courts to instruct, it will later become cost-effective to adjust the algorithms (the enforcement) to match the likely much larger pool of decisions from the inferior tribunals. But saying both that Courts E-Leet should be able to write the rules from their own consciences and that the corporation has a moral obligation to follow them asks far too much of the generally fearful and respectful manner in which the judicial function is usually thought about.
Mr. D.
Pretty interesting podcast on the implementation of this, if you have 45 mins...
https://www.wnycstudios.org/podcasts/radiolab/articles/facebooks-supreme-court
I don't see this as being all that difficult. You are absolutely on the right track here: the only way to fairly "police" 2.5 billion people is with those same 2.5 billion people – via self-determination of content.
Facebook should give users the ability to control ALL the information they see. They're already doing this partially through the mechanisms of "befriending" and "liking" content. Now just extend those same controls to include content they explicitly don't want to see.
There's already plenty of precedent for doing exactly this with ad blockers, anti-spam services, IP blacklists, "parental" controls and the like. Just be as transparent as possible with the controls, let the user community develop its own lists of bad content, and then rely on individuals to apply them as they see fit.
Problem solved. Truly. There's absolutely no reason to involve Facebook corporate in any content decision.
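A rough sketch of the kind of user-side filtering being proposed here – modeled loosely on the filter-list approach ad blockers already use, with invented list names and a deliberately crude matching rule – might look something like this:

```python
from dataclasses import dataclass, field


@dataclass
class UserFilter:
    """Per-user content controls: each user subscribes to whichever
    community-maintained blocklists they choose, plus their own terms."""
    subscribed_blocklists: list[set[str]] = field(default_factory=list)
    personal_blocked_terms: set[str] = field(default_factory=set)

    def wants_to_see(self, post_text: str) -> bool:
        text = post_text.lower()
        if any(term in text for blocklist in self.subscribed_blocklists
               for term in blocklist):
            return False
        return not any(term in text for term in self.personal_blocked_terms)


# Hypothetical community-maintained list, analogous to an ad-block filter list.
spam_list = {"miracle cure", "work from home scam"}

alice = UserFilter(subscribed_blocklists=[spam_list],
                   personal_blocked_terms={"crypto"})
bob = UserFilter()  # subscribes to nothing and blocks nothing; sees everything

feed = ["Try this miracle cure!", "New crypto drop", "Photos from the weekend"]
print([p for p in feed if alice.wants_to_see(p)])  # only the weekend photos
print([p for p in feed if bob.wants_to_see(p)])    # all three posts
```

The decisions all live on the user's side; the platform's only role is to ship the plumbing and let the lists compete.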
Furthermore, following the practices of representative democracy -- with its concept of courts, proxies, assemblies and the like -- would be an anti-pattern. Those practices are designed only to develop a common good. By its very nature, any wealth held in common must involve some form of compromise. But that absolutely isn't needed here. Here, the good applies directly and solely to the individual. That is why it must be the individual user who must decide for him or herself what they wish to see online.
I certainly hope that Facebook evolves toward this attitude.