A New Book Reveals Facebook's Problems Started Way Before the 2016 Election
In Facebook: The Inside Story, even Steven Levy’s most generous conclusions about the tech giant are still pretty damning.
"It remains uncertain whether anything that happened on Facebook made a significant difference in the 2016 election," Steven Levy writes in his introduction to Facebook: The Inside Story. This is a sentence the reader has to keep in mind throughout Levy's new book, which documents how the seeds of what has gone wrong for the company were planted years before 2016 in a series of heedless or needlessly aggressive decisions that are deeply rooted in the history and culture of the company itself.
When it comes to what we actually can prove about the 2016 votes in the United Kingdom (which resulted in Brexit) and in the United States (which gave us President Donald Trump), it's never been demonstrated that Facebook and its ad policies made any difference. Even so, it's now indisputable that various political actors (including Russia) have tried to use Facebook that way, and that the company made it easy for them to try.
Levy's introductory chapter is designed to show both the reputational heights Facebook founder Mark Zuckerberg and his company briefly reached and the depths to which they've both now fallen. The author draws a brisk line between Zuckerberg's triumphant surprise visit to Lagos, Nigeria, in August 2016—where he met a range of programmers and would-be startup entrepreneurs as well as top government officials—and the unexpected outcome of the American election in November that year. The latter inspired a gathering storm of critics to point directly at Facebook as the explanation for that unexpected outcome.
This juxtaposition works to draw readers into the thematic heart of the book, but it also suggests, a bit misleadingly, that Facebook's massive loss of public regard starting in 2016 was abrupt and unexpected. What the author documents in his introductory chapter and throughout the book is that Facebook's unforced errors have amounted to a slow-motion car crash: you can't look away, and it never seems to end.
Nothing in Levy's narrative arc for Facebook and its principal founder will come as a surprise to anyone who has followed the company's fortunes over the past 15 years. You could fill a specialty bookstore with nothing but Facebook-related books published in the last decade or so, starting with Ben Mezrich's The Accidental Billionaires, later adapted by Aaron Sorkin for the David Fincher-directed movie The Social Network. Mezrich's book, and almost all of the books published about Facebook since then, fall into two broad categories: (1) Look At The Great Things These Geniuses Have Done, or (2) Look At All The Pathological Things These Twisted Guys Have Done. With the exception of David Kirkpatrick's The Facebook Effect (a book on the company's early years that Levy credits by name in his acknowledgments), the authors of these books have typically tried to distill the story of Facebook into lessons—either what not to do or what must be done—regarding social media and big-tech success. One way or another, they've almost all had axes to grind.
That was never going to be the kind of story Levy—whose 2011 book on Google, In the Plex, is the best journalistic account of that company's history and its impact on the tech world—would write about Facebook. Levy's books and his admirably accessible body of tech journalism, for publications from Newsweek to Macworld to Wired, consistently demonstrate that he's driven by the facts rather than by any philosophical or political agenda. And that's exactly why, once Levy has layered on so many new facts about Facebook, its principals, and its various lapses and betrayals, piling on details from hundreds of interviews and assembling every part of Facebook's story in one place, his most even-handed conclusions are still damning:
"The troubled post-election version of Facebook was by no means a different company from the one it was before, but instead very much a continuation of what started in Mark Zuckerberg's dorm room 15 years earlier. It is a company that benefits from and struggles with the legacy of its origin, its hunger for growth, and its idealistic and terrifying mission. Its audaciousness—and that of its leader—led it to be so successful. And that same audaciousness came with a punishing price."
And what was that price? Per Levy, "Facebook now admits that the damage [from the company's decisions to promote rapid growth over everything else] turned out far more extensive than expected, and is not easily repaired. All the while Mark Zuckerberg and his team insist, despite the scandals, that Facebook is still overwhelmingly a force for good in the world."
Even now—even after reading Levy's increasingly unhappy account of Facebook's growth in size and profitability on the one hand and its growth in scandalously negligent or callous treatment of users and their data on the other—I'm still inclined to agree that Facebook is a force for good. I'm biased: Were it not for our ability to stay in touch on Facebook while half a world apart, the woman who became my wife in 2017 and I would not be married today. But, just as important, I've seen so many instances in which individual users and communities have found constructive uses for Facebook's many features, ranging from the broadly political to the deeply personal. I've been a defender of some of Facebook's approaches to real problems, such as its Internet.org project (later known as Free Basics) and its (apparent) commitment to end-to-end encryption.
But again and again in Levy's book I stumble across things like Facebook's deployment of a mobile app called Onavo Protect, which purported to provide VPN (virtual private network) services for one's phone but whose real purpose was to gather data about how users engaged with other apps. For a year or two I had that Onavo app on my own damned phone! As Levy writes, "it takes a certain amount of chutzpah to present people with a privacy tool whose purpose was to gain their data."
Far more common in the Facebook story than perverse examples like Onavo Protect are the occasions when the company failed to anticipate the problems that would arise in markets it pushed into with little awareness of how its services might be misused—especially in the absence of Facebook personnel who spoke those markets' dominant languages and who might, at least in theory, identify content that violated Facebook's content policies or other problematic uses. Take Myanmar, for example. Levy shows that Facebook learned in 2014 that its services were being used maliciously in Myanmar, a country where large-scale mobile-phone internet access was just taking off.
I learned about Facebook's impact in Myanmar at roughly the same time, through my contacts in Burmese civil society, and my reaction was just as brain-dead as Facebook's turned out to be. When I was told that incidents of civil violence were being reported on Facebook, my first thought was that this was positive—that the increasing ubiquity of smartphones with cameras was making rogue government officials and factions more accountable. What I didn't immediately grasp (my Burmese friends politely schooled me), and what Facebook took far longer to grasp, was that false reports of crimes and sexual assaults were being used to stir up violence against innocents, with a growing genocidal focus on the country's persecuted Rohingya minority. Levy then outlines how Facebook's property WhatsApp, with its end-to-end encryption and its built-in ability to amplify messages, including pro-violence messages, exacerbated the civil-discord problem in Myanmar (just as it eventually was shown to have done in the Philippines and Brazil):
"Facebook contracted with a firm called BSR to investigate its activity in Myanmar. It found that Facebook rushed into a country where digital illiteracy was rampant: most Internet users did not know how to open a browser or set up an email account or assess online content. Yet their phones were preinstalled with Facebook. The report said that the hate speech and misinformation on Facebook suppressed the expression of Myanmar's most vulnerable users. And worse: 'Facebook has become a useful platform for those seeking to incite violence and cause offline harm.' A UN report had reached a similar conclusion."
It's fair to note, as Facebook's defenders (who sometimes have included me) have noted, that all new communications technologies are destined to be misused by somebody. But Facebook's reckless grow-first, fix-problems-later strategy more or less guaranteed that the most harmful misuses of these new media would be exacerbated rather than mitigated. By 2018, the company had begun to realize it was at the bottom of a reputational hole and needed to stop digging; when Zuckerberg testified before the U.S. Senate that year, he responded to Sen. Patrick Leahy's (D–Vt.) question about Myanmar by saying "what's happening in Myanmar is a terrible tragedy, and we need to do more."
That last line is a refrain that recurs throughout the final third of the book, uttered by one top Facebook executive after another. "We know we have more work to do," an exec responded when reporters revealed that Facebook's AI-fueled self-service ad product had created the targeted category "Jew haters." Writes Levy: "Investigative reporters at ProPublica found 2,274 potential users identified by Facebook as fitting that category, one of more than 26,000 provided by Facebook, which apparently never vetted the [category] list itself."
When Facebook Chief Operating Officer Sheryl Sandberg, a Google veteran, met with the Congressional Black Caucus, she didn't have a particularly strong defense for Facebook's having (in Levy's words) "hosted Russian propaganda that fueled white prejudice against black people" or violating civil-rights laws "because it allowed advertisers to discriminate against African-Americans." Sandberg was reduced to "repeating, almost like a mantra, 'We will do better.'" Sandberg later told Levy that "I walked out of there saying we, and I, have a lot of work to do." This has become the paradigmatic Facebook response when any new scandal emerges from the company's shortsighted strategic choices to privilege growth over due diligence.
If there is one major exception to the company's institutional willingness to plunge ahead into new markets and new opportunities to reap revenue, it is its cautiousness in dealing with American conservatives, primarily driven by Facebook's head of global policy, Joel Kaplan. Levy writes that
"for years right-wing conservatives had been complaining that Facebook—run by those liberals in Silicon Valley—discriminated against them by down-ranking their posts. The claim was unsupported by data, and by many measures conservative content was overrepresented on Facebook. Fox News routinely headed the list of most-shared posts on the service, and even smaller right-wing sites like the Daily Wire were punching above their weight."
Facebook's intense desire to be perceived as lacking political bias seems to have led to policies and outreach efforts—including a big powwow in Menlo Park where Zuckerberg and Sandberg personally attempted to mollify prominent conspiracy theorists like Rush Limbaugh and Glenn Beck—that added up to white-glove treatment for American conservatives. Beck, at least, told Levy he was impressed with Zuckerberg's sincerity: "I sat across the table from him to try to gauge him [and he] was a little enigmatic, but I thought he was trying to do the right thing." Even so, Levy writes, "after leaving Menlo Park the conservatives returned to complaining about Facebook's treatment of them—while piling up millions of views because of their skill in exploiting Facebook's algorithms."
The attempts to accommodate conservative critics, occurring simultaneously with Facebook's promises to "do better" on other content moderation issues, illustrate the bind in which the company now finds itself. It's voicing its commitment to be more proactive in moderating malicious content and disinformation while simultaneously reassuring the squeakiest political wheels that, no, their content and policies won't be subjected to any Facebook-imposed test of factuality or truth. In other words, it's promising both to police content more and to police it less. Facebook is now laboring to create a "content oversight board," a kind of "Supreme Court" where Facebook content decisions can be appealed.
I'm skeptical of that whole oversight-board project—not least because it seems largely to sidestep the other big bucket of problems for Facebook: its handling of user data. I'm also skeptical about broad claims that Facebook's content algorithms and ads truly manipulate us—in the sense of robbing us of ordinary human independence and agency—but that skepticism has no bearing on the ethical question of whether Facebook should allow malign actors to exploit user data in efforts to manipulate us (e.g., by suppressing voter turnout). Whether those efforts succeed is irrelevant to the ethical question, just as the fact that a drunk in a bar misses when he swings at you has no bearing on whether he can be charged with attempted assault.
Finally, I'm most skeptical about whether anything Facebook tries to do on its own will either restore its public reputation or blunt the impulse of government policymakers, both in the United States and elsewhere, to impose hobbling and even punitive regulation on the entire social-media industry (and on the tech industry generally).
It's clear that one reason Zuckerberg and Sandberg have been scrambling to find an accommodation that works—first and foremost with the U.S. government but also with the European Union (E.U.) and with non-E.U. nations—is that they know they need to get out of the crosshairs, especially as more of their company's story continues to come to light. Their problem now is that Steven Levy's Facebook: The Inside Story has instantly become the indispensable single-volume resource on Facebook for policymakers everywhere—not because it sets out to take the company down, but because the facts it reports leave readers with no choice but to recognize how the company's indisputable successes have been undermined by its equally indisputable systemic deficiencies.
My takeaway is that Facebook could still manage a win out of all this—if it seizes this moment as an opportunity to embrace an ethical framework designed for something bigger than just solving Facebook's problems. I don't believe the company's bad behavior (or negligence—there's plenty of both) can tell us what to think of this whole sector of the internet economy or what industry-level regulation or law should look like. As I've argued in my own book, I believe a new ethical framework has to be built and shared industry-wide (affecting more companies than just Facebook). It needs to be informed by all stakeholders, including users, governments, and civil society as well as the companies themselves—and it needs to privilege fiduciary obligations to users and the general public even over any commitment to growth and profitability. Meeting those obligations will entail, yes, a commitment to fighting disinformation, treating it as a cybersecurity problem—even when political stakeholders complain.
That's just my view—other critics will argue for different approaches, some of which will center on more regulation or laws or other government interventions, while others will argue for fewer but better-crafted ones. But what all Facebook's critics, and the tech industry's critics, will have in common is this: going forward, we all will be citing stuff we learned from Levy's Facebook: The Inside Story.