
How Trustworthy Does Facebook Think You Are?

Should we be concerned about a new system to keep track of real vs. fake news?

Facebook 'like' (Benoit Tessier/REUTERS/Newscom)

With stories about misinformation, data usage, and censorship dominating the news, online platforms are scrambling to re-engineer their policies and algorithms in a way that pleases critics and users alike.

Facebook in particular has been in the hot seat in America following the election of President Donald Trump and the Cambridge Analytica scandal. CEO Mark Zuckerberg was wheeled into Congress earlier this year to answer for his company's purported peccadilloes. The social media giant has pledged to continue promoting "fact-checking" on its platform to please regulators. But might some of the "cures" for "fake news" end up being worse than the disease?

One new tool in Facebook's bag of anti-"misinformation" tricks definitely ticks the "creepy" box: Last week, the Washington Post reported that Facebook has spent roughly a year developing a "trustworthiness score" for users on its platform. Users believed to flag news stories accurately will be given a higher score, while users suspected of flagging stories out of revenge or distaste (and thereby throwing off the algorithm) will be given a goose egg. The scale runs from 0 to 1, and the system is now reportedly coming online.
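The Post story does not explain how the score is calculated. But the basic idea, grading each user's flagging track record against the verdicts that fact-checkers eventually hand down, is simple enough to sketch. The snippet below is a hypothetical illustration of such a 0-to-1 score; the Flagger class and the smoothing constants are my own invention, not anything Facebook has described.

```python
# Hypothetical sketch of a 0-to-1 "trustworthiness score" -- NOT Facebook's
# actual method, which has not been published. Each user's score is the
# smoothed fraction of their past flags that fact-checkers later upheld.

from dataclasses import dataclass

@dataclass
class Flagger:
    flags_upheld: int = 0    # flags that fact-checkers agreed were false news
    flags_rejected: int = 0  # flags on stories fact-checkers rated accurate

    def record_flag(self, fact_checkers_agreed: bool) -> None:
        if fact_checkers_agreed:
            self.flags_upheld += 1
        else:
            self.flags_rejected += 1

    @property
    def trust_score(self) -> float:
        # Laplace smoothing so new users start near 0.5 rather than 0 or 1.
        return (self.flags_upheld + 1) / (self.flags_upheld + self.flags_rejected + 2)

user = Flagger()
user.record_flag(fact_checkers_agreed=True)
user.record_flag(fact_checkers_agreed=False)
print(round(user.trust_score, 2))  # 0.5 -- one upheld flag, one rejected
```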

The new system is framed as a necessary augmentation of Facebook's previous forays into third-party-guided media ranking. In addition to partnering with self-deputized "fact-checkers" like PolitiFact and Snopes.com, Facebook created a button for average users to report a story as "fake news." The idea was that human feedback, provided both by "experts" and by the wisdom of crowds, would supplement the brittle fallibility of algorithms, leading to better overall curation of what is shared on Facebook.

But humans are a fickle bunch. Not everyone wants to be a forthright participant in Facebook's tailored marketplace of ideas. The system was thrown off by users who would mob together to report stories as "fake news" merely because they disliked the story or its source. Facebook hopes that its new reputation score will help separate genuine flags from spurious ones, and improve its attempted moderation of truth and communications accordingly.
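Facebook has not said how the score would feed into moderation, but the stated goal, separating real flags from revenge flags, amounts to weighting each report by the reporter's reputation before a story is deemed worth a fact-checker's time. A minimal sketch of that triage logic, with an invented review threshold, might look like this:

```python
# Hypothetical illustration of reputation-weighted flag triage. The 2.0
# review threshold and the individual scores are invented for the example.

def weighted_flag_total(flagger_scores: list[float]) -> float:
    """Sum of flags, each counted in proportion to the flagger's 0-1 score."""
    return sum(flagger_scores)

REVIEW_THRESHOLD = 2.0  # assumed cutoff for sending a story to fact-checkers

brigade = [0.1] * 10          # ten low-reputation accounts, weighted total 1.0
trusted = [0.9, 0.8, 0.7]     # three high-reputation users, weighted total 2.4

for name, scores in [("brigade", brigade), ("trusted", trusted)]:
    total = weighted_flag_total(scores)
    action = "send to fact-checkers" if total >= REVIEW_THRESHOLD else "ignore"
    print(f"{name}: weighted flags = {total:.1f} -> {action}")
```

Under a scheme like this, a mob of low-reputation accounts carries less weight than a handful of flags from users with strong track records, which is precisely the behavior Facebook says it is after.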

Somewhat surprisingly, the news of this secret scoring seems not to have been the result of an unauthorized leak, but comes courtesy of Facebook's own product manager in charge of fighting fake news. In her interview with the Post, she takes pains to assure the public that these scores are not intended to be an "absolute indicator of a person's credibility" but "one measurement among thousands of new behavioral clues that Facebook now takes into account as it seeks to understand risk."

If you're not exactly reassured by that, I can't blame you.

In the abstract, a credibility score used only to curate what kind of information people see could perform as intended, without spillover risk. Whether such top-down information management is a good idea to pursue in the first place is a separate question, of course.

But when official Facebook representatives start talking about collecting "thousands of behavioral clues" on its users, anxieties are naturally inflamed. The Wall Street Journal recently reported that Facebook is seeking to partner up with banks to access our financial data so that it can "offer new services" to users, like in-app payments. Banks, understandably, are hesitant. But let's say some play ball. Might something like one's bank account balance be considered a "behavioral clue" for content trustworthiness? To what other processes might these aggregated behavioral profiles eventually be applied? How secure will these scores be, and with which other parties might they be shared?

Americans are already used to one kind of credit rating: our FICO scores. These scores involve similar privacy and security concerns, and they too are run by private companies. Yet most people accept them as a fact of life, perhaps because the credit rating agencies have developed avenues for disputing one's score and are considered well-regulated. So what happens if Facebook decides to try its hand at predictive monitoring?

Despite Facebook's assurances that these scores will not be used for other purposes, people have an understandable aversion to the mere idea of unaccountable ratings of "social credit" that can silently make or break one's opportunities in life with little recourse—particularly when they are managed by a single uncontested firm.

This dystopic vision was creatively illustrated in an episode of Netflix's popular science fiction program, Black Mirror. In "Nosedive," a socially striving character named Lacie navigates a pastel-hued world quietly coordinated by a comprehensive system of social credit. Every observable action a person takes is meticulously judged and graded by social peers. Post something good online? Get some points. Make a social faux pas in front of a colleague? Lose points. Points are tied to access to different tiers of housing and social circles. A series of escalating mishaps causes Lacie to progressively lose her credit, and then her mind. Negative inertia begets negative responses, and poor Lacie's digitally directed life circles the drain.

This is just television. But it is unfortunately not as far-fetched as we might hope. The MIT Technology Review recently published an in-depth look at the Chinese government's system of digital behavioral monitoring and control. In 2014, the state began what is called the "Social Credit System," and the program is expected to be fully operational by 2020. Citizens are expected to contribute to the "construction of sincerity in government affairs, commercial sincerity, and judicial credibility." Those who do earn more credit, along with greater access to financial mechanisms, travel, and luxury options. Those who don't will find it much harder to find a bank willing to lend them money.

This system strikes many Americans as a totalitarian nightmare, although the same publication ran an article arguing that this system might be an improvement over the largely ad-hoc and ephemeral system of social monitoring that it replaced. (I doubt China's Uighur minority would agree.)

Thankfully, Facebook is not the Chinese government. It is a private company that can one day go out of business, not a state with an internationally recognized monopoly on violence. It is much easier for us to simply delete our Facebook accounts than it is for Chinese dissidents to reform the government's massive behavioral infrastructure or escape to a more hospitable locale. (It may be harder, however, for a non-user to delete their so-called "shadow profile.") And let's not be dramatic: the risk that Facebook's social credit system will influence our daily activities anywhere near as much as the Chinese social credit system influences the lives of Chinese citizens is pretty small.

Not everything has to be a conspiracy. Perhaps we should take Facebook at its word when it says it will use its social credit system only to sort out which content it allows to proliferate on its platform. If one is primarily concerned about stopping the scourge of "fake news," maybe trusting while scrutinizing is the best course of action.

But for those who just can't shake the heebie-jeebies about Facebook's social credit score, a reevaluation of priorities may be in order. Which is worse: allowing user-directed viral content to proliferate freely, or trusting an opaque social credit algorithm to decide whose feedback counts for more than others'? Your answer is probably a function of your personal values.