A list of people and groups you can't friend or follow on Facebook fell into the hands of The Intercept this week, and it sparked immediate debate. Pundits parsed the racial and political balance of the alleged terrorists, militants, and criminals forbidden from the social media platform for evidence of bias. Facebook pushed back, emphasizing the complexity of the task. Largely overlooked is that getting such a list right is essentially impossible, as is another important point: if this massive undertaking is what's expected of social media companies, it's an effective barrier to upstart outfits hoping to compete with established giants.
"To ward off accusations that it helps terrorists spread propaganda, Facebook has for many years barred users from speaking freely about people and groups it says promote violence," The Intercept's Sam Biddle notes of the social media platform's efforts to keep online interactions within what its staff consider acceptable boundaries. "Facebook's DIO [Dangerous Individuals and Organizations] policy has become an unaccountable system that disproportionately punishes certain communities, critics say. It is built atop a blacklist of over 4,000 people and groups, including politicians, writers, charities, hospitals, hundreds of music acts, and long-dead historical figures."
Characteristic of the response to the leaked list was a Daily Beast story pointing out that "[i]ts 'Terrorism' category, which makes up more than half the list, disproportionately names Middle Eastern, South Asian, and Muslim groups and individuals." The piece added that "[p]redominantly Black and Latino names are 'violent criminal enterprises,' according to the list."
Brian Fishman, Facebook's head of policy for counterterrorism and dangerous organizations, immediately pushed back.
"Defining & identifying Dangerous Orgs globally is extremely difficult," Fishman tweeted. "There are no hard & fast definitions agreed upon by everyone. Take terrorism. Government lists reflect their political outlook and policy goals. And even government agencies define the problem differently."
The debate is all very popcorn-worthy, with no likely winners in sight. Assessing individuals and organizations as too dangerous to allow on Facebook is inherently subjective: there are no universally accepted definitions of "terrorism" or "hate speech," some bad actors have too much clout to ban, and others who believe themselves wrongfully included have too little leverage to effectively protest. Does anybody really think that the same standards are applied to government officials as to isolated individuals?
Inevitably, then, a list of 4,000 people and organizations considered beyond the pale by a giant social media platform is going to miss a lot of nasty types who might rate inclusion in a world of 7.8 billion people. At the same time, though, it's bound to incorporate some perfectly innocent people because of mistaken identity, questionable judgment calls, or bias. The U.S. government's no-fly list was found to be riddled with errors; there's no particular reason to think that Facebook did a perfect job with a blacklist that was secret until this week.
Of course, the consequences of being barred from Facebook are much lower than those of being included on a no-fly list that keeps you off commercial airline flights. Also, the social media company is a private entity that is free to ban people for reasons good, bad, or indifferent. The whole debate, and the effort behind it, is something of a high-intensity, low-impact battle.
But there is another way in which a very public debate over the merits of Facebook's list of banned individuals and organizations has important implications: It helps to establish expectations for social media companies. Keep in mind that critics are debating who is included on and excluded from the list with Facebook's head of policy for counterterrorism and dangerous organizations. That means the company is large enough to have an internal bureaucracy to address terrorism and to produce a list (however imperfectly) of thousands of people and groups from around the world who are forbidden to use its services. What size does a company have to be to even consider taking on that task?
On a similar note, like other large companies in the past, Facebook has recently found a new affection for increased regulation, such as repeal or modification of the Section 230 legal protections that shield online platforms from liability for user-generated content.
"Section 230 was vital to Facebook's creation, and its growth," Jeff Kosseff, a cybersecurity law professor at the U.S. Naval Academy, told Reason earlier this week. "But now that it's a trillion-dollar company, Section 230 is perhaps a bit less important to Facebook, but it is far more important to smaller sites. Facebook can handle defending a bunch of defamation cases on the merits much more than a site like Yelp or Glassdoor."
That's why supposed "whistleblower" Frances Haugen's demands that Facebook engage in more content moderation and be subject to increased government oversight play to the tech giant's strengths. Large firms and small businesses compete on a level playing field when it comes to free speech. But regulatory compliance and content moderation give the advantage to established companies with lawyers and resources to spare. If Facebook hasn't paid Haugen a bonus for her congressional testimony, Mark Zuckerberg should at least send her a nice card for the holidays.
Setting an expectation that a social media company should have its own counterterrorism program isn't a regulatory barrier, but it is another hurdle for potential competitors. Future garage-based entrepreneurs may launch new and popular online services to challenge today's tech giants, but they're going to have a harder time compiling naughty lists of allegedly bad actors from around the world who won't be allowed as users. They might never get their efforts off the ground if that becomes a requirement for entry into the market.
The higher the bar is set in terms of regulations and public expectations for what is considered acceptable conduct by private enterprise, the harder it is for entrepreneurs to challenge established players in any industry. Facebook doesn't really have to care what critics think of its Dangerous Individuals and Organizations list. The company benefits so long as people come to believe that a social media platform should have to make the massive effort to compile such a document.