The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
More from the free speech and social media platforms symposium in the first issue of our Journal of Free Speech Law; you can read the whole article (by Michigan State law professor Adam Candeub) here, but here's the abstract:
Section 230 of the Communications Decency Act gives internet platforms legal protection for content moderation. Even though the statute is 25 years old, courts have not clearly stated which provision within section 230 protects content moderation. Some say section 230(c)(1), others section 230(c)(2). But section 230(c)(1) speaks only to liability arising from third-party content, codifying common carriers' liability protection for delivering messages.
And while section 230(c)(2) addresses content moderation, its protections extend only to moderation of certain types of speech. Content moderation decisions made for reasons not specified in section 230(c)(2), such as removing material deemed "hate speech," "disinformation," or "incitement," stand outside section 230's protections. More important, because section 230(c)(2) regulates both First Amendment-protected and unprotected speech, it raises constitutional concerns, though they may not be fatal.