Reason.com - Free Minds and Free Markets

Andrew Yang Proposes Making Social Media Algorithms Subject to Federal Approval

The presidential hopeful on Thursday released a plan to regulate tech giants.

Billy Binion | 11.15.2019 9:00 AM

(Alex Edelman/ZUMA Press/Newscom)

Entrepreneur Andrew Yang has run a tech-centered campaign for the Democratic presidential nomination, positioning his Universal Basic Income proposal as a solution to rapid technological change and increasing automation. On Thursday, he released a broad plan to rein in the tech companies that he says wield unbridled influence over the American economy and society at large.

"Digital giants such as Facebook, Amazon, Google, and Apple have scale and power that renders them more quasi-sovereign states than conventional companies," the plan reads. "They're making decisions on rights that government usually makes, like speech and safety."

Yang has now joined the growing cacophony of Democrats and Republicans who wish to amend Section 230 of the Communications Decency Act, the landmark law that shields social media companies from certain liability for third-party content posted by users. As Reason's Elizabeth Nolan Brown writes, it's essentially "the Internet's First Amendment."

The algorithms developed by tech companies are the root of the problem, Yang says, as they "push negative, polarizing, and false content to maximize engagement."

That's true, to an extent. As with any company or industry, social media firms have an incentive to keep consumers hooked for as long as possible. But it's also true that social media does more to boost already popular content than to amplify content nobody likes or wants to engage with. And in an age of polarization, negative content can be quite popular.

To counter the proliferation of content he does not like, Yang would require tech companies to work alongside the federal government in order to "create algorithms that minimize the spread of mis/disinformation," as well as "information that's specifically designed to polarize or incite individuals." Leaving aside the constitutional question, who in government gets to make these decisions? And what would prevent future administrations from using Yang's censorious architecture to label and suppress speech they find polarizing merely because they disagree with it politically?

Yang's push to alter Section 230 is similarly misguided, as he seems to think that stripping those liability protections would somehow end only bad online content. "Section 230 of the Communications Decency Act absolves platforms from all responsibility for any content published on them," he writes. "However, given the role of recommendation algorithms—which push negative, polarizing, and false content to maximize engagement—there needs to be some accountability."

Yet social media sites are already working to police content they deem harmful—something that should be clear in the many Republican complaints of overzealous and biased content removal efforts. Section 230 expressly permits those tech companies to scrub "objectionable" posts "in good faith," allowing them to self-regulate.

It goes without saying that social media companies haven't done a perfect job screening content, but their failures say more about the task than about their effort. User-uploaded content is essentially an infinite stream. The algorithms tech companies use to weed out content that clashes with their terms of service regularly fail. Human screeners also fall short. And even if Facebook, Twitter, or YouTube could create an algorithm that deleted only the content those companies intended it to delete, they would still come under fire for which content they find acceptable and which they don't. Dismantling Section 230 would probably discourage efforts to fine-tune the content vetting process and instead lead to broad, inflexible content restrictions.

Or, it could lead to platforms refusing to make any decisions about what they allow users to post.

"Social media services moderate content to reduce the presence of hate speech, scams, and spam," Carl Szabo, Vice President and General Counsel at the trade organization NetChoice, said in a statement. "Yang's proposal to amend Section 230 would likely increase the amount of hate speech and terrorist content online."

It's possible that Yang misunderstands the very core of the law. "We must address once and for all the publisher vs. platform grey area that tech companies have lived in for years," he writes. But that dichotomy is a fiction.

"Yang incorrectly claims a 'publisher vs. platform grey area.' Section 230 of the Communications Decency Act does not categorize online services," Szabo says. "Section 230 enables services that host user-created content to remove content without assuming liability."

Where the distinction came from is something of a mystery, as that language is absent from the law. Section 230 protects sites from certain civil and criminal liabilities so long as they are not explicitly editing the content; removing content does not qualify as editing. A newspaper, for instance, can be held accountable for libelous statements that a reporter and editor publish, but its comment section is exempt from such liability. That's because the paper isn't editing that content; it can, however, safely remove comments it deems objectionable.

Likewise, Facebook does not become a "publisher" when it designates a piece of content to the trash chute, any more than a coffee house would suddenly become a "publisher" if it decided to remove an offensive flier from its bulletin board.

Yang's mistaken interpretation of Section 230 is likely a result of the "dis/misinformation" about the law promoted by his fellow presidential candidates and in congressional hearings. There's something deeply ironic about that.


Billy Binion is a reporter at Reason.

Topics: Technology, Section 230, Free Speech, First Amendment, Andrew Yang, Election 2020