Democrats Hate Facebook. Republicans Want To Ban TikTok. The Bipartisan Backlash Against Big Tech Is Here and It's a Disaster.
Even as Americans rely on tech more than ever, our early-pandemic truce with the industry is officially over.
It's the summer of 2020, and everyone seems to hate Big Tech.
Over the last three months, President Donald Trump has issued orders targeting not only Twitter, the highly politicized social media platform he uses incessantly, but TikTok, a Chinese-owned service known mainly as a place to share teen dance memes. Democrats used a high-profile congressional hearing in July to bash Google and Amazon, two of America's most successful companies, for allegedly anti-competitive practices. And members of both political parties repeatedly went after Facebook for restricting too much speech—and for not restricting enough. Big Tech just can't win.
In one sense, none of this is new. For the better part of the last decade, Silicon Valley has been on the outs. Politicians on both sides of the aisle have targeted it. Mainstream media has become increasingly critical. Ordinary people have begun to treat the internet, and the opportunities it has created, as a nuisance. Even as their products have transformed nearly every aspect of everyday life, large tech companies have been subject to increasingly negative public perception and attendant political attacks.
Yet for a brief moment this spring, as the U.S. shut down and stayed home in response to the coronavirus, it looked like American tech companies might be making a reputational comeback. With everyone trapped at home and indoors, Big Tech provided a lifeline, connecting Americans to food, entertainment, work, and each other. But America's temporary truce with Big Tech wasn't to last. Nearly five months into the pandemic, it appears any newfound goodwill earned by Silicon Valley has already been burned.
"There was a small window of time where everyone was grateful that technology was allowing us to continue to function as a society despite our inability to gather in physical spaces," Santa Clara University law professor Eric Goldman tells Reason. "And yet that gratitude wore off so quickly. Everyone just went right back to hating on internet companies and forgetting all the great things we're benefiting from today."
Why the sudden reversal? Perhaps because the political battle against Big Tech—and attempts to make it a populist cause—has been based on a loose constellation of personal grievances, culture war opportunism, crime panic, and corporate rivalry—all of which were on display in a recent congressional hearing in which the top executives of Google, Facebook, Apple, and Amazon were grilled, often inartfully, by members of Congress.
In theory, that hearing was focused on antitrust concerns. But members of both parties repeatedly brought up frustrations with how social media platforms and search engines were handling user content—a reminder that the future of Big Tech is inevitably bound up with the future of free speech. That same impulse was on display days later when Trump issued his executive order on the "threat posed by TikTok." Somehow, the app had been caught in the middle of the culture war, the trade war, fears of a foreign power, and panic over social media speech and user privacy, all at once.
In the COVID-19 era, federal tech policy has become a vehicle for conspiratorial and authoritarian impulses of all kinds. Politicians attacking Big Tech have invoked Silicon Valley elites, Russian bots, Chinese spies, Middle Eastern terrorists, domestic sex predators, gun violence, revenge porn, internet addiction, "hate speech," human trafficking, and the fate of democracy as reasons for action. Whatever works to distract people from their attempts to interfere in American freedom of expression, commerce, and privacy. Google, Amazon, Facebook, Twitter—even platforms as seemingly superficial and politics-free as TikTok—have all been swept up into a wide-ranging political war on the foundations of online life and culture.
It might be working. Even as Big Tech has benefited ordinary people in countless ways, political backlash against the size and power of America's largest technology companies—what some insiders call "techlash"—is coming on stronger than ever. And it could make everything from the pandemic to election integrity, cancel culture, criminal activity, police reform, racial justice, and state surveillance of our digital lives so much harder to address.
The Story of Section 230
So how did we get here? In many ways, the story starts in 1996, when the World Wide Web was relatively new and full of radical possibilities—and also potential for harm.
In response, Congress passed a law, the Communications Decency Act (CDA), that attempted to regulate online content and was mostly declared an unconstitutional, unenforceable mess. But the courts let one part stand: Section 230. This short statute stipulates that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." That's the first part, and it's vitally important for freedom of expression online, because it means tech companies have less incentive to preemptively censor user speech.
The second part, meanwhile, is more about protecting users from harmful content by assuring these companies that they won't be penalized for "good faith" actions "to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected," nor for providing "the technical means to restrict access to" material they find objectionable. This so-called "Good Samaritan" provision lets tech companies moderate and block some content without becoming targets for criminal charges or lawsuits.
Overall, it gave tech entities a way to protect their users' (and their own) First Amendment rights without requiring a prolonged court battle for every single piece of disputed content. And for a long time, very few people, politicians, or interest groups thought this was a bad thing (to the extent that they had ever heard of this wonky internet law at all).
A decade later, blogs, Facebook, Twitter, YouTube, and other Web 2.0 technologies had shifted the balance of power on the internet to favor the masses over the gatekeepers. Time magazine declared "you"—the user of these digital services—its person of the year. In the Middle East and other conflict-torn regions, populist rebellions against autocratic leaders spread internally and globally through Twitter. Meanwhile, in the U.S., social media gave a platform to underrepresented voices and political ideas often ignored by establishment powers. Its ability to document and share police abuses against black Americans in Ferguson, Missouri, and beyond made it integral to launching a new civil rights and social justice movement. The internet was a force for individual empowerment, and thus for good.
But all that freedom—to speak, to share, to joke, to communicate, to connect—was messy. It allowed people to speak truth to power, and also to spread misinformation. It created new ways to shine light on injustice, and new means to perpetuate it. It gave politicians a more direct-to-the-people megaphone—and vice versa. It spread sociability and solidarity while simultaneously enabling identity trickery, scammers, saboteurs, scandals, and pile-ons. It gave a platform to vital information, ideas, and images as well as vile expressions of violence and hate. And, as the 2016 election approached, it became mired in increasingly angry partisan political disputes.
Within a few years, Republicans and Democrats in Congress could hardly agree on anything when it came to diagnosing the problems of online life. But they coalesced around similar solutions: more federal regulation, more criminal investigations, and antitrust actions against sites and services that had gotten too popular. Suddenly, high-profile pundits and politicians—from former Vice President Joe Biden to a range of Republican senators to Fox News host Tucker Carlson to left-leaning newspaper columnists—started calling for Section 230 to go. Big Tech was a threat, and it had to be contained.
Big Tech vs. Big Virus
Which brings us to the present, and COVID-19.
In the early days of the pandemic, Amazon deliveries seemed to be single-handedly keeping at least half of America stocked with groceries, hand sanitizer, and household supplies. Meanwhile, Zoom became the standout service for staying in touch.
During the worst days of personal protective gear shortages, Italian hospital horror stories, and general mass panic, Silicon Valley's leading lights donated money and much-needed supplies. By the end of March, for example, Apple had announced it would donate some 2 million masks to medical workers, while Facebook and Tesla immediately offered hundreds of thousands of masks they'd had on hand for wildfires. Tesla also donated 1,200 ventilators when those were in short supply. Google gave away hundreds of millions in free ad space to groups working to combat COVID-19's impact.
Meanwhile, social media allowed people to talk through new research and share ideas, as widespread videoconferencing—once the province of science fiction—allowed people to gather, exercise, hold business meetings, have happy hours, celebrate special occasions, and console distant loved ones from within their own homes.
And as businesses shuttered and economic turmoil spread, tech tools from peer-to-peer payment apps like Venmo and Cash App to crowdfunding platforms like GoFundMe, document sharing services like Google Docs, and myriad others allowed people to organize mutual aid funds, spread the word about volunteer initiatives, ask for help, give directly to those in need, and share specialized community resources.
Of course, COVID-19 also meant that people were—and are—more reliant on tech-mediated reality than ever before. Those excluded from or marginalized in certain digital arenas had more reason to grow resentful. Politicians had more reasons to want to grab control. All of us had more opportunities to muck things up by speaking too hastily or picking the wrong side of what seemed like a billion new battles a week.
In a pandemic, the internet's freedoms were messier than ever. And that messiness fueled a new wave of tech-skeptical politics.
Coronavirus and the Crisis of Truth
Almost immediately, lawmakers began publicly bashing tech providers and calling for investigations. Amazon faced accusations of price gouging during the pandemic. Zoom was targeted as a threat to user privacy. In June, Sen. Josh Hawley (R–Mo.), a longtime critic of Google and Facebook, introduced legislation to curtail Section 230. Meanwhile, the country was erupting in protests and upheaval following the killing of George Floyd by Minneapolis police.
"We're back to politicians spending time talking about Section 230 and breaking up platforms," TechFreedom's Ashkhen Kazaryan said in June, "while the country is still being torn apart by both racial injustice and the pandemic."
Some of this renewed animosity was probably unavoidable, simply because so much contentious discourse is mediated through social media. Yvette Butler, an assistant professor of law at the University of Mississippi, thinks the pandemic has brought a heightened sense that social media platforms have a moral responsibility "to fact-check and promote the truth."
Some of it has been spurred by tech leaders getting too political, or at least taking actions that are perceived that way.
"It's both the fact that the political discourse has been very charged in the past few weeks and a lot of it has happened on these platforms, and the fact that by default the tech leaders are participating in the political dialogue," says Kazaryan.
She thinks "the pressure point—the point of no return for this—was the protests for reopening America," when Facebook angered a lot of people by banning pages for some anti-lockdown events. Twitter and Snapchat soon followed with actions against Trump's accounts.
Michael Solana, vice president of the San Francisco–based Founders Fund, a venture capital firm that counts Peter Thiel and an array of other Silicon Valley heavyweights among its partners, suggests that gripes about social media are creating more than just a media-driven backlash this time. "The left now generally believes Facebook is why Trump won. The right believes they're being censored by like five white Communists who went to the same school and now live in multi-million dollar homes in Atherton [California] and San Francisco," he says. "Both sides seem to want blood."
Trump Steps In
The flashpoint came in late May, when Trump issued an executive order targeting social media companies over alleged censorship. A day earlier, Twitter had affixed blue exclamation-point icons and links saying "Get the facts about mail-in ballots" beneath two Trump tweets about mail-in voting, in which the president had said "there is NO WAY (ZERO!) that Mail-In Ballots will be anything less than substantially fraudulent" and made false claims about California's vote-by-mail process.
Clicking the Twitter link led users to a page titled "Trump makes unsubstantiated claim that mail-in ballots will lead to voter fraud," where Twitter explained its move ("part of our efforts to enforce our civic integrity policy") and included a series of tweets from media outlets, nonprofit groups, and journalists countering the president's claims.
Trump's subsequent order suggests that Twitter's fact-check "clearly reflects political bias" and therefore somehow amounts to illegal censorship.
A private company cannot censor the president of the United States, at least not in the sense of violating the First Amendment, which protects private speech from government restriction. But Trump's order attempts to redefine free speech as a mandate forcing private companies to follow government-set content standards and forbidding them from countering false claims made by him and his administration.
The president's order also completely mangles descriptions of Section 230 in the course of calling for it to be amended.
"When large, powerful social media companies censor opinions with which they disagree, they exercise a dangerous power. They cease functioning as passive bulletin boards, and ought to be viewed and treated as content creators," states the order.
In this passage and throughout the decree, Trump argues that companies cross a legal threshold by exercising judgment about user content and forfeit the right to be treated as legally separate from their users. Trump portrays all of this as the mandate laid out in Section 230.
But the idea that these companies must be some sort of neutral conduits of user speech (or else cross an imaginary legal line between being a "publisher" and a "forum" or "platform") is not just pure fiction but also the exact opposite of both the intent and the function of Section 230. The whole point of the law is to protect the editorial discretion and content moderation of tech companies.
"Section 230 was never meant to require neutrality," said former Rep. Christopher Cox (R–Calif.), who co-authored Section 230 along with Sen. Ron Wyden (D–Ore.) some 25 years ago, at a July 28 hearing held by the Senate Subcommittee on Communications, Technology, Innovation, and the Internet. "To the contrary, the idea is that every website should be free to come up with its own approach" to content moderation, Cox said.
Nonetheless, Trump's order directs the Department of Justice, the Federal Trade Commission (FTC), and the Federal Communications Commission (FCC) to review ways that Twitter and other tech companies might be exhibiting bias and how this might be used to say Section 230 doesn't apply to them. And, in July, the Department of Commerce formally asked the FCC to follow through.
The good news is that the FCC can do little to reinterpret a statute whose text is so clear, and several commissioners appear to have little interest in looking for interpretive wiggle room.
"The FCC shouldn't take this bait," Commissioner Jessica Rosenworcel said in a July 27 statement. "While social media can be frustrating, turning this agency into the President's speech police is not the answer."
"We're an independent agency," Commissioner Geoffrey Starks pointed out during a July 17 panel hosted by the Information Technology and Innovation Foundation, adding that the "debate [about Section 230] belongs to Congress."
'Censorship Is a Bipartisan Objective'
That Trump's executive order on social media is toothless is a good thing, since Section 230 has allowed so much of what we love (and politicians hate) about the modern internet to flourish. However, the order seems to be working as a political rallying cry on the right.
It's "so cartoonish that you would think people on his side would distance themselves from it," says Goldman. Yet the order, "for all its cartoonishness, seems to be getting people in line instead of pushing them away."
For Republicans, animosity toward Section 230 turns largely on fears of alleged anti-conservative bias at social media companies. "That [idea] has penetrated pretty deeply among the activist and politician class" on the right, says Zach Graves, head of policy at the Lincoln Network.
Though Graves thinks Trump's executive order on social media is pure "political theater" and "won't get anywhere by itself," he notes that "there are a lot of other grievances" besides the alleged anti-conservative attitudes.
In early June, the American Principles Project—a newly launched think tank dedicated to promoting nationalist and Trump-friendly public policy—released a plan for altering Section 230 aimed at "protecting children from pornography and other harmful content" in "the digital public square." Mike Masnick, executive editor of Techdirt, described the proposal as an attempt "to bring back the rest of the Communications Decency Act"—that is, the parts aside from Section 230, all of which were struck down by the Supreme Court as unconstitutional.
By the end of June, the Department of Justice had released its own four-pronged reform proposal and South Dakota Republican Sen. John Thune—joined by Hawaii Democrat Brian Schatz—offered up the Platform Accountability and Consumer Transparency (PACT) Act, a bill that would put the federal government at the center of social media content moderation.
The Electronic Frontier Foundation (EFF), a digital privacy rights nonprofit, called the PACT Act "a serious effort to tackle a serious problem: that a handful of large online platforms dominate users' ability to speak online." But it opposes the bill because of well-founded fears of unintended consequences, including "more illegitimate censorship of user content" by dominant tech companies; its threat to "small platforms and would-be competitors to the current dominant players"; and the "First Amendment problems" it presents.
In July, Hawley introduced a less serious effort: the Behavioral Advertising Decisions Are Downgrading Services (BAD ADS) Act, which Hawley's office describes as "a bill to remove Section 230 immunity from Big Tech companies that display manipulative, behavioral ads or provide data to be used for them" and "crack down on behavioral advertising's negative effects, which include invasive data collection and user manipulation through design choices." (This follows two of Hawley's previous anti-tech measures on similar themes going nowhere.)
Meanwhile, Democrats have been sounding anti-tech alarms of their own and pursuing similar solutions, albeit under different guises. For instance, Beto O'Rourke, during his presidential run, included Section 230 reform in his proposal for "combating hate and violence."
Democratic Senator and vice-presidential candidate Kamala Harris has targeted Section 230 as part of her crusade against "revenge porn."
And in response to Facebook's refusal to remove an ad featuring false information about Biden, the Democratic presidential candidate said in January that "Section 230 should be revoked, immediately … for Zuckerberg and other platforms."
People on both the right and the left "want to erode [Section 230] in order to put an affirmative pressure on platforms to be cops for various kinds of harms or illegal activities," says Graves, "and there are people who think liability is the tool to do that." The Overton window on the issue has been moved, he adds.
"It used to be a kind of sacred cow that people couldn't touch" but now there are coalitions with well over 60 votes in the U.S. Senate to take action on Section 230 and even to revoke it, Graves says. "But it probably won't be wholesale, it will be chipping away based on a feel-good cause."
That's exactly the point of the EARN IT Act. Proposed in March by Sen. Lindsey Graham (R–S.C.), the bill—which got a hearing in March and was placed on the Senate calendar in July—targets Section 230 under the guise of stopping the "online sexual exploitation of children." It has a roster of prominent bipartisan co-sponsors, including Richard Blumenthal (D–Conn.), Ted Cruz (R–Texas), Dianne Feinstein (D–Calif.), and Chuck Grassley (R–Iowa).
The legislation that began this trend—FOSTA, a 2018 law—was also billed as targeting online sexual exploitation, and it too enjoyed broad bipartisan support. FOSTA was sold as a bill to target child sex traffickers, but it really took aim at consensual adult prostitution and sex work more broadly, as well as online speech about these issues. (Prominent Democrats including Rep. Ro Khanna and Sens. Elizabeth Warren and Bernie Sanders have since backed legislation aimed at studying FOSTA's impact and perhaps working toward its repeal, but it has yet to gain much traction.)
The fight to abolish or severely restrict Section 230 shows "how tempting censorship is to all sides," says Goldman. "Turns out that censorship is a bipartisan objective."
Perhaps that explains why—with so many life-or-death matters on American minds right now and an election on the close horizon—authorities are still paying so much attention to relatively obscure tech policy laws and the content moderation moods of private companies.
With the COVID-19 pandemic still raging on, police brutality and racial justice issues under a microscope, and urgency around concerns like voter suppression, "you would think Section 230 would drop down the list" of things politicians spend time talking about, says Goldman. "But you know, censorship is like catnip to regulators."
It's anyone's guess whether political focus on social media rules and internet regulation will persist as the 2020 election season really gets underway. "A lot depends on Trump," says Goldman. "If he wants to keep banging this drum, that's going to keep ratcheting up the pressure on Section 230."
But even if the backlash against Section 230 quiets down, there is no shortage of election-friendly issues for politicians and parties to wield against tech companies.
In the last week of July alone, the Senate held hearings on digital copyright law and on Section 230 (the latter, loosely centered on the PACT Act, featured the law's co-author Cox), while the House hauled the chief executives of Apple, Amazon, Facebook, and Google before its Judiciary Committee.
The latter was purportedly a probe into possible violations of federal antitrust law, but turned into a long, weird airing of grievances against tech companies that had little to do with anti-competitive business practices and often saw Democrats and Republicans proposing diametrically opposed solutions for these companies to adopt. At times, lawmakers from both parties went back and forth between demands for more content and users to be suppressed, and less.
The hearing hopped seemingly at random from topic to topic: Lawmakers asked about, among other things, social media content moderation, election integrity, fake news, privacy issues, data collection transparency, counterfeit products in online marketplaces, ideological diversity among tech company staff, court-ordered takedowns of speech, general support for "American values," contracts with foreign governments, working with police and turning over user data without warrants, how search engines work, how Apple chooses what apps and podcasts to allow in its stores, and even how Gmail content gets marked as spam.
More than a few times, their questions made clear they had no idea what they were talking about. (One Republican asked Mark Zuckerberg, Facebook's chief executive, why the platform had taken down a post by Donald Trump, Jr., but the takedown had actually occurred on Twitter.) Their demands were often contradictory. The only clear implication of the hearing was this: There was no way for tech companies to win.
'We Will Lose a Little Bit of Freedom'
Looked at one way, the techlash makes little sense since it comes at a time when Americans are relying more than ever on technology to mediate their lives. But that may be exactly why the industry has come under fire.
When it comes to the pandemic, "I do think there's going to be a [tech policy] reaction," says Arthur Rizer, "and I think that reaction is going to be that we will lose a little bit of freedom through technology for the sake of safety, as it historically has always been." As one example, he points to the use of facial recognition technology by police departments who said it would only be used for solving certain serious crimes but recently admitted to "using it to find who was out and about during quarantine."
"Every time this country faces an emergency," Rizer says, "we kind of act poorly when it comes to our civil liberties."
Kazaryan notes that a new California "privacy" law places serious burdens of time and money on any business that interacts online—something that could prove especially tough for small businesses struggling to come back from the pandemic and lockdowns. Yet amid all the chaos, its requirements and effects have received little attention.
Pressure from groups outside government to ramp up regulation of digital media and tech companies could also grow if economic conditions worsen.
Graves sees techlash partly as a normal response to "the growth in power and pervasiveness of the companies," with them simply facing the same sort of interference and resistance that any mega-corporation gets from regulators and activists.
"Part of it is fears and anxieties" about the power they have. But another part of the push for new tech regulation is "opportunistic," he adds, pointing to hotel industry lobbying against Section 230 because it benefits home-sharing companies like Airbnb, or newspapers lobbying against it because it helps their digital competitors.
"A lot of what's driving the techlash is these sort of inter-industry fights," not because "it's the issue that people are thinking about when they're going to the polling booth," Graves says. And right now, with "a collapse in the advertising market [and] a lot more consolidation in some industries," this sort of competition—and attempts to use government regulation to one side's advantage—only stands to get worse.
In the case of TikTok, Chinese parent company ByteDance seems to have gotten caught up in two different political fights. The first is the Trump administration's international power struggle with the Chinese government and associated worries that the Chinese government could order companies to turn over data about U.S. users. The second involves the president's personal grievances with the app, after TikTok teens were rumored to have signed up for tickets to a Trump campaign rally and then not used them, leaving the president planning for heavy rally attendance that didn't pan out.
In addition to political threats and competitive pressure, large tech companies—particularly Facebook—are facing demands from progressive activists and identity-based interest groups to crack down on more speech. Earlier this summer, groups including the Anti-Defamation League and the NAACP "helped push hundreds of companies, such as Unilever and Best Buy, to pause their advertising on Facebook to protest its handling of toxic speech and misinformation," as The New York Times reported. Fears about misinformation, and the activism around them, will only grow as the election heats up and the pandemic wears on. So will the pressure on platforms to crack down.
Techlash Could Make Coping With COVID-19 and Documenting Police Abuse More Difficult
If and when that happens, it will be our loss.
Not only does the political obsession with regulating and punishing Big Tech distract from more pressing issues, from police abuse to the economic downturn due to pandemic-related lockdowns, it also makes it more likely that we will struggle to address those problems.
Think about all the ways digital tools and tech companies have ensured access to up-to-date and diverse information during the pandemic. Think about all the online streaming services, interactive video games, e-book purveyors, podcast makers, and apps that have been keeping us entertained. The many kinds of free chat services letting us keep in touch with friends, family, and colleagues. The online educational tools helping to make homeschooling at least somewhat tenable. All the crowdfunding donation platforms, people-powered marketplaces like Etsy and eBay, and gig economy apps from Uber to Patreon that are helping people make ends meet.
It's fair to say this is far from ideal. But without today's technology, it would be so much worse.
Now think about the recent pushback against law enforcement being able to hurt and kill people—especially black Americans—with impunity. Think about the way we've seen police and federal agents react in recent weeks to protesters and the press covering them.
Think about how much of that would still have happened without not just smartphones and digital video but also quick, accessible, and gatekeeper-free ways to share and spread that video.
Without social media, "there are so many important conversations that would have never happened," says Kazaryan, referring to recent protests and activism around police racism and brutality. In addition, she points out, "tech companies have been creating jobs and providing tools for people to do their jobs" while we deal with the coronavirus.
A more immediate consequence of anti-Section 230 efforts will be forcing social media companies—and the rest of the internet along with them—to crack down on user speech and ratchet up content and account removals.
By multiplying the legal hoops these entities must jump through to defend their First Amendment rights, and raising the potential liabilities if they can't, the government will leave online speech platforms with little choice but to severely curtail the amount and types of speech allowed—thereby worsening the social media "censorship" problems that Trump and company say they're out to fix. The only other sound strategy for these companies is to give up on content moderation entirely, so as to avoid any way of "knowing" about bad content—thereby increasing the amount of obscene, violent, harassing, or otherwise objectionable material that anti-Section 230 laws claim to thwart.
So what looked like a moment of redemption might turn out, instead, to be a tipping point in the other direction. "I think that the fundamental forces driving the techlash are going to be made worse, not better, by COVID," says Graves.
"Whatever goodwill that [tech companies] have picked up is either gone or [has] been spent," Rizer says.
The techlash may only just be getting started.
CORRECTION: This article previously described Section 230 co-author Christopher Cox as a former senator; he was a member of the House of Representatives.