How Trustworthy Does Facebook Think You Are?
Should we be concerned about a new system to keep track of real vs. fake news?

With stories about misinformation, data usage, and censorship dominating the news, online platforms are scrambling to re-engineer their policies and algorithms in a way that pleases critics and users alike.
Facebook in particular has been in the hot seat in America following the election of President Donald Trump and the Cambridge Analytica scandal. CEO Mark Zuckerberg was wheeled into Congress earlier this year to answer for his company's purported peccadilloes. The social media giant has pledged to continue promoting "fact-checking" on its platform to please regulators. But might some of the "cures" for "fake news" end up being worse than the disease?
One new tool in Facebook's bag of anti-"misinformation" tricks definitely ticks the "creepy" box: Last week, the Washington Post reported that Facebook has spent around a year developing a "trustworthiness score" for users on its platform. Users believed to flag news stories accurately will be given a higher score, while users suspected of flagging stories out of revenge or distaste, thereby throwing off the algorithm, will be given a goose egg. The scale runs from 0 to 1, and the system is now reportedly coming online.
The new system is framed as a necessary augmentation of Facebook's previous forays into third-party-guided media ranking. In addition to partnering with self-deputized "fact-checkers" like PolitiFact and Snopes.com, Facebook created a button for average users to report when a story was "fake news." The idea was that human feedback, provided by both "experts" and the wisdom of the crowd, would supplement the brittle fallibility of algorithms, leading to a better overall curation of what is shared on Facebook.
But humans are a fickle bunch. Not everyone wants to be a forthright participant in Facebook's tailored marketplace of ideas. The system was thrown off by users who would mob together to report stories as "fake news" because they merely disliked the story or source. Facebook hopes that its new reputation score will help to separate real flags from false ones, and improve its attempted moderation of truth and communications accordingly.
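Facebook has not disclosed its actual formula, but the mechanism described above can be sketched with one plausible (and entirely hypothetical) scoring rule: rate a user by how often their "fake news" flags agree with the verdicts of third-party fact-checkers, with smoothing so that new users start near the middle of the 0-to-1 scale rather than at either extreme.

```python
def trust_score(flags, verdicts, prior=0.5, prior_weight=2.0):
    """Return a hypothetical trustworthiness score in [0, 1].

    flags:    dict mapping story_id -> True if the user flagged it as fake
    verdicts: dict mapping story_id -> True if fact-checkers ruled it fake

    Only stories that fact-checkers have actually judged count. The
    smoothed prior keeps new users near 0.5 until they build a record,
    while habitual revenge-flaggers drift toward the "goose egg."
    """
    judged = [s for s in flags if s in verdicts]
    hits = sum(1 for s in judged if flags[s] == verdicts[s])
    # Laplace-style smoothing toward the prior
    return (hits + prior * prior_weight) / (len(judged) + prior_weight)
```

Under this sketch, a user with no judged flags scores 0.5, a user whose flags consistently match the fact-checkers climbs toward 1, and a mob member flagging accurate stories out of distaste sinks toward 0, which is exactly the separation of "real flags from false ones" Facebook says it wants.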
Somewhat surprisingly, the news of this secret scoring seems not to have been the result of an unauthorized leak, but comes courtesy of Facebook's own product manager in charge of fighting fake news. In her interview with the Post, she takes pains to assure the public that these scores are not intended to be an "absolute indicator of a person's credibility" but "one measurement among thousands of new behavioral clues that Facebook now takes into account as it seeks to understand risk."
If you're not exactly reassured by that, I can't blame you.
In the abstract, a credibility score used to curate what information people see could perform as intended, without spillover risk. Whether such top-down information management is a good idea to pursue in the first place is, of course, a separate question.
But when official Facebook representatives start talking about collecting "thousands of behavioral clues" on its users, anxieties are naturally inflamed. The Wall Street Journal recently reported that Facebook is seeking to partner up with banks to access our financial data so that it can "offer new services" to users, like in-app payments. Banks, understandably, are hesitant. But let's say some play ball. Might something like one's bank account balance be considered a "behavioral clue" for content trustworthiness? To what other processes might these aggregated behavioral profiles eventually be applied? How secure will these scores be, and with which other parties might they be shared?
Americans are already used to one kind of credit rating: the FICO score. It raises similar privacy and security concerns and is likewise run by private companies. Yet most people accept credit scores as a fact of life, perhaps because the ratings agencies have developed avenues to dispute one's score and are considered well-regulated. So what happens if Facebook decides to try its hand at predictive monitoring?
Despite Facebook's assurances that these scores will not be used for other purposes, people have an understandable aversion to the mere idea of unaccountable ratings of "social credit" that can silently make or break one's opportunities in life with little recourse—particularly when they are managed by a single uncontested firm.
This dystopic vision was creatively illustrated in an episode of Netflix's popular science fiction program, Black Mirror. In "Nosedive," a socially striving character named Lacie navigates a pastel-hued world that is quietly coordinated by a comprehensive system of social credit. Every observable action that a person takes is meticulously judged and graded by social peers. Post something good online? Get some points. Make a social faux pas in front of a colleague? Lose points. Points are tied to access to different tiers of housing and social circles. A series of escalating mishaps causes Lacie to progressively lose her credit, and her mind. Negative inertia begets negative responses, and poor Lacie's digitally-directed life circles the drain.
This is just television. But it is unfortunately not as far-fetched as we might hope. The MIT Technology Review recently published an in-depth look at the Chinese government's system of digital behavioral monitoring and control. In 2014, the state began what is called the "Social Credit System," and the program is expected to be fully operational by 2020. Citizens are expected to contribute to the "construction of sincerity in government affairs, commercial sincerity, and judicial credibility." If they do, they get more credit, and more access to financial mechanisms, travel, and luxury options. If they don't, well, they will find it much harder to find a bank willing to lend them money.
This system strikes many Americans as a totalitarian nightmare, although the same publication ran an article arguing that this system might be an improvement over the largely ad-hoc and ephemeral system of social monitoring that it replaced. (I doubt China's Uighur minority would agree.)
Thankfully, Facebook is not the Chinese government. It is a private company that can one day go out of business, not a state with an internationally-recognized monopoly on violence. It is much easier for us to simply delete our Facebook accounts than it is for Chinese dissidents to reform the government's massive behavioral infrastructure or escape to a more hospitable locale. (It may be harder, however, for a non-user to delete their so-called "shadow profile.") And let's not be dramatic: the risk that Facebook's social credit system will influence our daily activities anywhere near as much as China's system influences Chinese citizens' lives is pretty tiny.
Not everything has to be a conspiracy. Perhaps we should take Facebook at its word when it says that it will only use its social credit system for shuffling through which content it would like to allow to proliferate on its platform. If one is primarily concerned about stopping the scourge of "fake news," maybe trusting while scrutinizing is the best course of action.
But for those who just can't shake the heebie-jeebies about Facebook's social credit score, a reevaluation of priorities may be in order. Which is worse: allowing user-directed viral content to proliferate freely, or trusting an opaque social credit algorithm to decide whose feedback is trustworthy? Your answer is probably a function of your personal values.
Don't use Facebook. Problem solved.
Yeah, a company acting like a dick when they've got competitors isn't a problem, unless they can use the government to coerce you into using their services.
I bailed on FB a while ago, though I haven't been able to take my profile offline despite trying - which tells you all you need to know about them, when they refuse to let you unsubscribe.
This isn't really as easy as you want to make it out to be. Most cellphones come with FB pre-loaded, and uninstalling disabled. It was recently "revealed" that even if you disable FB they still collect your data.
True dat! I've never wasted a minute on faecepuke.
Missed the link about the "shadow profiles", didn't you.
What about the Russian robots on the facebook that brainwashed Hillary Clinton voters into vote for trumps?
The peoples want answers !!!
Given that Facebook has no trustworthiness whatsoever, if you're trusting them to tell you who's trustworthy you are a moron should send me your credit card information and I'll check to see if any of your credit card information has been compromised by online scam artists.
Facewhat? Don't use it. Don't care.
How sure are you of this?
I would go as far to say that the internet itself has no trustworthiness whatsoever. It's a mass of Lolcats, scat porn, dead links, outdated information, poorly organized information, un-spellchecked opinion, malware, bloatware, viruses, scams, increasingly obnoxious advertising and libertarians (just kidding, guys!). It's become worthless as a source of communication.
I see Web 3.0 as either continuing that trend or becoming much more heavily curated.
What are you, 80?
The interweb can never be a substitute for our own critical thinking skills.
So much of what I'm reading and hearing boils down to the question of, "Who can I trust to tell me the truth?"
The answer is that trusting what people say because of whom they are is fundamentally flawed, and "the truth" in the real world is never 100% certain.
Looking to the internet to solve those fundamental problems is barking up the wrong tree.
Wouldn't it be great if there was somewhere we could look to who we knew could tell us the truth with 100% certainty?
No such place can exist, but people invent such places in their minds because they want it so badly. Some people call it "church". Other people want it from "the internet".
The acronym that comes to mind with the internet is GIGO - Garbage In, Garbage Out. If all you have is garbage information then critical thinking isn't going to help any.
But you're right - it's safe just to assume it's all shit and work from there.
The big picture, libertarian observation about this is how much trouble you get into in the absence of price signals.
Facebook's customers are advertisers, and they're getting price signals from those advertisers, but because their model is free of charge to the eyeballs that are watching those advertisements, it's difficult or impossible to know much of anything about consumer tastes--beyond really broad strokes.
I.e., even an advertising platform built on individuals "liking" things has problems gauging consumer tastes in the absence of price signals. The question isn't just why we should believe that government regulation will do a better job of protecting consumer preferences than Facebook itself. It's also why, in the absence of price signals, government regulation should be expected to get anything right--when, despite all their data, Facebook can't figure out consumer preferences in the absence of price signals either.
I wonder what the results would be if Facebook had a Like system with real value. Instead of an unlimited number of Likes, each user would receive so many Likes per day (maybe one?) and could earn more through actions (posting well-liked comments, etc.). I suspect the result would be much greater user discretion, and well-liked articles would tend to be of better quality.
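The rationed-Like economy proposed above can be sketched in a few lines. Everything here is hypothetical, including the daily allowance of one Like and the earn-back threshold; the point is just that a hard budget forces the user discretion the comment predicts.

```python
class LikeBudget:
    """Hypothetical sketch: a per-user daily ration of Likes,
    with bonus Likes earned when one's own comments are well received."""

    DAILY_ALLOWANCE = 1
    EARN_THRESHOLD = 10   # likes received per bonus Like earned (assumed)

    def __init__(self):
        self.balance = self.DAILY_ALLOWANCE

    def new_day(self):
        # the daily ration accrues rather than resetting (a design choice)
        self.balance += self.DAILY_ALLOWANCE

    def spend_like(self):
        """Spend one Like; returns False when the budget is exhausted."""
        if self.balance == 0:
            return False   # out of Likes: the scarcity that forces discretion
        self.balance -= 1
        return True

    def comment_received_likes(self, count):
        # earn one bonus Like per EARN_THRESHOLD likes received
        self.balance += count // self.EARN_THRESHOLD
```

Whether Likes should accrue day to day or expire at midnight is the interesting design question: expiry keeps scarcity constant, while accrual rewards patience.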
I was hoping Steemit would take off. Maybe it will someday.
"User accounts can upvote posts and comments similar to other blogging websites or social news websites like Reddit, and the authors who get upvoted can receive a monetary reward in a cryptocurrency token named Steem and US dollar-pegged tokens called Steem Dollars. People are also rewarded for curating (discovering) popular content. Curating involves voting comments and post submissions. Vote strength and curation rewards are influenced by the amount of Steem Power held by the voter.[6]
Steemit includes third-party applications, such as d.tube, a decentralized video platform based on the InterPlanetary File System (IPFS) protocol. D.tube is similar to YouTube, but without advertisements, instead uses the built-in Steem currency which gets awarded by users upvoting videos.[7]"
http://en.wikipedia.org/wiki/Steemit
I see it as like the difference between broadcast television (in which the content is intended to appeal to advertisers) and pay cable (in which the content is directed towards the consumers that pay for content). At first, consumers had a hard time imagining paying for HBO when they could just watch TV for free. Then HBO started making content consumers were willing to pay to see.
Same thing needs to happen with social media.
Or it would lead to the upvote behavior of systems like WaPo comments. Mob mentality.
That's conceptually similar to what Slashdot has. Nice in theory, but sucks in practice. So now almost everyone posts as an Anonymous Coward to avoid getting mod bombed.
The ideal social media would be "Penny Post" and charge one penny for every post someone puts on his wall. There is a reason friends never send junk mail to friends via the USA post office.
except at christmas...
A dislike button would help dramatically with the price signals, but would also likely be linked to a few suicides.
"like PolitiFact and Snopes.com"
Both of these are accurate but lean to the Left. Referring to them would clean up a lot of the obviously fake stories that get tagged and linked to, but at the end of the day there's going to be a built-in bias.
Exactly.
Who will fact check the fact checkers?
Accurate? Politifact is notorious for the "true, but other things, therefore false" rating. They are completely biased.
Politifact is notorious for being flat-out wrong. Same with Snopes.
Correct. I have seen some of the most jaw-droppingly disingenuous, wrong arguments on both of those sites.
Facebook is not trustworthy and they are rating news and information.
I think the populist libertarian view on corporations is a big blind spot for the party. That they can have all of the 'rights' of a person and none of the drawbacks of actually being a person is ridiculous.
Facebook is a great example. The founders wanted freedom from religion because they realized the power the churches wielded at the time. Facebook was created on backbone lines that were, and are, heavily subsidized by the federal government, in some cases built where imminent domain was used.
Facebook is/will be just another tool used to keep the Libertarian party in check. Essentially, Reason is very much in the position of cuck. A 'moral' cuck, is still a cuck.
I'm gonna give that a 2 out of 10 for trolling -- the cuck thing gave it away. That, and thinking "populist libertarian" is a possible combination. That's like a "conservative liberal".
Seeing as Madisonian Liberals, or original liberals, are closer to small-government conservatism...
Standing by and watching someone fuck you over falls under the definition of a cuck. It's laughable to think that Facebook isn't anything but Anti Libertarian.
Just how soon is that "imminent domain"?
"Standing by and watching someone fuck you over falls under the definition of a cuck."
You came to that conclusion after signing up for the one free week on Pron Hub?
Technically, that's a "sissy". A cuck watches someone fuck over his wife, because he believes in a woman's right to choose.
In all fairness, plenty of cucks become sissies out of desperation after going so long without physical romance. I mean, c'mon. You don't expect the guy to be celibate forever.
Alternatively, Facebook can act like that fake resistance group in 1984. Libertarians will feel unwelcomed on Facebook, turn it off, and experience real life. Progressives will spend the equivalent of a full time job surrounding each other with fellow travelers in Facebook groups so they can change the world by agreeing with each other in their closed circles.
The founders did nothing about freedom from religion; they just didn't want to establish a national church and get involved in sectarianism. Several states already had established churches. Connecticut didn't get rid of theirs until 1818.
"Thankfully, Facebook is not the Chinese government"
I disagree with the author, who I think is unaware of the money and influence FB wields in DC. FB is more powerful than the Chinese government, and it is a much bigger threat to the USA political process than China, Russia, Israel, etc. FB can effectively destroy a political candidate it sees as a threat to FB, and that alone is cause for alarm. FB is a monopoly which is far too powerful and has far too much influence, and it needs to be broken up.
And by the time Facebook finishes with that politician, the entire incident will be buried in a memory hole next to this event.
Remember, Facebook is a platform, not a publisher, right?
If cell phone companies could stop calls based on what is being said, that'd be comparable to social media today.
They are ALL publishers.
That's a good analogy.
+1
If it's good enough for china, it's good enough for us.
So, I guess it ultimately matters what Facebook does with the score, (which, it's just Facebook, how much can they possibly do other than simply deny you access to their platform?), but the basic idea of ignoring people who "cry wolf" too often seems perfectly reasonable to me.
I've heard that "creep box" was another one of Crusty's nicknames in college.
Not a peep here about Facebook partnering with the Atlantic Council, a NATO front, to "safeguard the democratic process". Of course not. Reason will not touch anything that hints that Russia may be the victim of US propaganda. Thank goodness not all libertarians are empire-worshiping neocons lite like Kennedy, Welch, etc. OH, yeah, I am linking to Russia Today, so sue me. https://tinyurl.com/y8wde6py
I think Facebook is okay if you want to quickly browse your friends' vacation pictures or their attendance at some event or something without having to be bogged down with lengthy explanations.
If you are relying on Facebook for your news, or (even worse) to learn about candidates for an upcoming election, then you may have a problem.
And if you are so susceptible to fake news and crappy memes, whether created by Russians or local dimwits, that you actually change your opinions, then you may want to join a Facebook group for apartment rentals so you can at least have a chance of moving out of your mom's basement ahead of your 50th birthday.
This Facebook Page is an option ( https://www.facebook.com/liberationtinyhomes/ ). If borders don't matter, Americans hard on their luck can just move their tiny homes to Mexico and start a new life there.
Seems like Facebook is learning well from its Chinese masters
Just delete your Facebook account already.
I came close to deleting mine, but simply must be able to see if I am getting fatter or balder than the guys I went to high school with, and of course how hot the hot girls then still are.
0 are bangable after 40
Newsflash: Social Media is not Media Media.
Social media is for communicating with your holiday card list on a more regular basis so they get updates more than once per year. It's not for making you famous or harnessing your social life to serve an ideology. Anyone depending too much on social media to form political opinions should move to Manhattan and help us turn into a post car society by leading by example. Once all the bridges and tunnels connecting Manhattan to the outside world are turned into walkways, bike paths, and rail roads, We can quarantine the planet with a rail road strike and focus our attention on knitting something useful. That, my dear, is a collection of Ayn Rand's novels in a nutshell.
One of the many mechanics on Facebook is the sorting of comments. There is a pattern of political bias there which anyone can observe - possibly due in part to left-leaning users being rated as more trustworthy as a result of aligning with the New York Times and other goodthink publications, even though they are not any more trustworthy than some publications labeled "far right" such as The Federalist.
You are never able to sort comments by the number of "likes" or other reactions. Usually there is a sorting option for "relevant" - whatever that means - and oldest/newest; sometimes there are no sorting options on items where you really wish there was. On third party sites with embedded facebook comments, there is sometimes a "top" option, but it's complete nonsense and typically yields results closer to the opposite of what you would expect.
In any case, the overwhelming pattern I've seen is that for any posted item that implicates politics, under it the comments will be sorted with the more "left" viewpoints on top, despite having far fewer likes. For instance, on a posted item of a more conservative nature, you can scroll down and see 5 or more "left" comments right at the top, with around 100 likes. Only if you persist in scrolling down will you encounter the more "right" comments which are buried despite having 5k or more likes.
This sort of paradigm of "you will see whatever we want you to see, not what you want to see" took a big step forward when Youtube changed its comment sorting. There used to be unbelievably hilarious comments under many Youtube videos. These were enshrined at the top of the list when sorting by "top rated" and would probably remain there for eternity. But such comments, being hilarious, were of course frequently irreverent, politically incorrect, even downright off-color. The model probably also did not suit Google's preference for always continually encouraging users to generate additional content (i.e. information about themselves for Google to sell).
It was a very sad day a few years ago when they changed it all beyond recognition. The entertainment value of Youtube was immeasurably harmed by ending the more user-preference-based meritocratic system in favor of the Big Brother paradigm that closely regulates every byte you see.
1. Facebook keeps getting it wrong and keeps getting called in front of congress.
2. Congress decides Facebook is simply too useful to be killed but too uncontrolled to be allowed freedom
3. Congress passed onerous regulations and in return gives Facebook the role of being the government publishing arm.
4. Zuckerberg is freed from competition, the news is once again under control, big government 'democracy' is once again assured for future generations.
5. NPR pouts that their status is no longer special.
So one of the original purveyors of fake news is now the watchdog.
"Facebook has been planning to assign a "trustworthiness score" for users... "
Back atcha', Fakebook. Your score is zero, and I'm thinking of instituting negative numbers so you can actually go lower, much lower.
Anybody who takes stuff like this at face value is a fool... This stuff is already used and abused, and it will only get worse. I would bet my life this stuff is already slanted against a site like Reason, let alone more edgy actual right wing places. It may not be direct government action, but its effects are probably likely to be greater in the real world than most stuff the government COULD even directly do to slant things.
Large organizations are rarely trustworthy.
In the case of Facebook, bear in mind that you're handing your content over to young, enthusiastic, tech-minded people.
I've no problem with it. Facebook is a privately owned entity; therefore they have the right to monitor people's posts, online behavior, etc.