Reason.com

Expect More Conservative Purges on Social Media If Republicans Target Section 230

Killing Section 230 would only lead web platforms to ban even more speech.

Bipartisan lust to destroy the internet is growing, with calls from both Republicans and Democrats to carve out exceptions to, or simply destroy, the law that lets most of the U.S. internet exist as it does: Section 230 of the decades-old Communications Decency Act. Republicans have recently been rallying around Section 230's demise under the apparent delusion that killing it would protect conservative voices from marginalization and banishment on social media.

Some of this seems to be based on a significant misunderstanding of what Section 230 is and does. Other conservatives clearly just find it good fodder for persistent persecution yarns. On the latter front, Senator-elect Josh Hawley provides a perfect example.

As the current attorney general of Missouri, and one of the cadre of attorneys general obsessed with taking down Backpage—an unconstitutional passion project of state prosecutors that centered on the meaning and application of Section 230—Hawley certainly should know how the statute works. Basically, that's by declaring that "no provider or user of an interactive computer service" should be treated as the speaker of content created by others when it comes to assessing certain legal liabilities.

Yet, as lawyer and The Verge Editor-in-Chief Nilay Patel pointed out this morning, Hawley has been acting as though Twitter is accorded some special status under Section 230 that's conditioned on the social platform taking no sides.

Nothing in Section 230 requires that an entity maintain "a true diversity of political discourse," whatever that means. This is not like the old equal-time rule for broadcast networks. You can be a web publication with an explicitly ideological slant, a social platform with idiosyncratic policies on permitted speech, or a private messageboard for anarchist Amish redheads to discuss the Cubs and Section 230 doesn't care. All that matters, for the most part, is whether a digital platform or service created whatever content is in question.

(Obligatory explainer: The standards are different for content suspected of enabling intellectual property violations, prostitution, or child sexual exploitation, and Section 230 won't shield anyone from liability for federal crimes.)

The law explicitly states that a service's efforts to filter out illegal or undesirable content do not open it to liability. "Any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected," reads 230(c)(2).

"The only reason Section 230 exists is to give platforms the ability to moderate free of liability," writes Patel.

The law was implemented in response to Stratton Oakmont v. Prodigy, a 1995 case where the investment firm now immortalized in the film The Wolf of Wall Street sued Prodigy over message board posts it claimed were defamatory. The court found that since Prodigy moderated its message boards, it exerted the same editorial control as a publisher, and was thus liable for what was published.

Holding Prodigy liable for every post on its message boards was, of course, insane, and Congress responded by including 230(c)(2) in the Communications Decency Act [...]

Patel concludes that "it's baffling" why "Republicans like Hawley and [Ted] Cruz seem intent on getting both the legislative intent and plain text of 230 exactly backwards." But it's not really baffling—Hawley has long railed against Section 230 in his attempts to get a piece of Backpage's profits for Missouri. For others, weakening or demolishing Section 230 could let them more easily censor publications they don't like, or make a point about gun control, or regulate "hate speech," or any number of things. That's why there's such bipartisan fervor among politicians to undermine it.

Weakening Section 230 works as an all-purpose salve toward authoritarian ends.

Assertions from Republicans that Section 230 stands in the way of their getting fairer online treatment are especially ridiculous. Weakening 230 would require platforms and providers to crack down more tightly on all manner of speech to avoid liability. If conservatives think they're unfairly targeted for Twitter suspensions and Facebook jail now, just wait until these sites are facing 50 angry state attorneys general and millions in civil fines if they make a wrong call. Erring on the side of more speech doesn't stand a chance.

Photo Credit: Gary Waters IKON Images/Newscom

Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Report abuses.

  • Unicorn Abattoir||

    I suspect that more conservative purges are coming regardless of what Republicans do.

  • lap83||

    Either way, I think we can all agree it's the Republicans' fault

  • buybuydandavis||

    Not all Republicans. The right wing of the Globalist Uniparty is blameless.

    Only Trump and the Deplorables are to blame. Like always.

  • ThomasD||

    Lay back and take it is the responsible thing to do.

  • ThomasD||

    "Don't petition the Warden, it will only make the guards worse."

  • SlothB77||

    Exactly.

    >"Republicans have recently been rallying around Section 230's demise under the apparent delusion that it would protect conservative voices from marginalization and banishment on social media."

    Nope, completely wrong.

    Conservatives are being purged either way. We have nothing to lose. Getting rid of 230 takes liberal voices and the powerful social media giants with us. Either let both sides speak or neither will. You don't get a monopoly on speech.

  • SimonP||

    I suspect that websites will continue to cherry-pick examples of conservatives being banned for violating ToS so as to create the misimpression that there is some kind of politically-motivated "purge" going on, as a strategy to in fact slant platforms to favor conservative speech - exactly as they've done with the mainstream media and universities.

  • Diane Reynolds (Paul.)||

    Perhaps we can remake net neutrality to force social networks to carry pages they don't want to.

  • Fist of Etiquette||

    The subscriptions to your newsletter just increased tenfold.

  • Longtobefree||

    Well, is there a legal distinction between a cake and a web page?

  • buybuydandavis||

    Serve the damn web page!

  • Uncle Jay||

    May Saint Karl Marx' soul bless these Big Tech operators and owners.
    These Big Tech people recognize that allowing counter-revolutionary thought only brings strife, doubt and questions regarding our ruling elites.
    Freedom of speech is never tolerated in any socialist utopia because the prudent Thought Police know that an open discussion only leads to division, mistrust and possible politically incorrect behavior that will topple the calm and harmony our socialist slavers have so conveniently given us.
    So let us applaud the Big Tech censors so we can all further enjoy the repression needed to have a society in which everyone thinks, acts and talks alike for the sake of freedom and personal uniqueness.

  • chemjeff radical individualist||

    Thank you for the explainer on Section 230. I had a hunch that there was more to the story than what our local Republican colleagues were saying about it.

  • Social Justice is neither||

    Yes, because when I want rational analysis I always go to Vox and their affiliates.

  • chemjeff radical individualist||

    If you have a better explanation of Section 230 then by all means please present it.

  • Longtobefree||

    https://en.wikipedia.org/wiki/Section_230_of_the_Communications_Decency_Act

  • JesseAz||

    Pft. References that link to primary sources don't matter.

  • Sevo||

    That link also gets me a whiny 'pledge window' on how Wiki can't be pure if it takes ads.
    Fuck you, Wiki. You think I can't tell when you're bullshitting 'cause ads, just like you're bullshitting 'cause no ads there?

  • John||

    There really isn't. But since you are incapable of reading with any precision or grasping any kind of complex reasoning, I am sure you read this statute to mean whatever you want it to mean. Actually thinking and interpreting language is just not something you do.

  • SlothB77||

    If the opinions of your side were being censored from all significant internet platforms, while the opinions of the other side, no matter how hateful or inciting, were being allowed, what would you do? Remember, your side would be silenced whether 230 was kept or not.

  • JesseAz||

    Hey everybody. Jeff has now read nearly a dozen paragraphs on 230. This makes him an expert now. Please don't point out other legal analysis that argues differently. His bias is confirmed. We can all move on now.

  • chemjeff radical individualist||

    I'm not an expert, and I never claimed to be. However I do know how to read and I do know how to construct arguments. Instead of snarking maybe you should try constructing some arguments yourself.

  • Rational Exuberance||

    However I do know how to read

    In the sense that you can cobble together letters into words if you read them out loud really slowly.

    Your problem, however, remains that you can't think.

  • JWatts||

    Sorry, but the premise of the argument is a bit uncertain.

    Here's what the actual law says:

    "(2) Civil liability No provider or user of an interactive computer service shall be held liable on account of:

    (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or"

    It's not clear that moderation for political point of view is actually allowed under a strict reading of this law. The only clause that would cover it is "otherwise objectionable".

  • Zeb||

    And I think that does cover it. It's certainly true that many people find other people's political views objectionable.

  • Zeb||

    In any case, if the provider says he found material objectionable, how are you going to disprove that?

  • John||

    It doesn't just say objectionable Zeb. It gives a list of things that that count as objectionable. I don't think it is reasonable to read that statute as giving this long list of things that platforms can ban only to end the list with a catchall term that allows them to ban anything they want. "Otherwise objectionable" was clearly intended to cover things that are similar to the things listed but somehow don't fit strictly into any of those terms.

    For example, if a provider didn't remove nude pictures or consider them lewd, it could still remove revenge porn as "otherwise objectionable."

  • Brett Bellmore||

    Exactly. It's a standard legal principle (ejusdem generis) that lists with catch-all provisions are to be interpreted such that the items falling under the catch-all are the same sort of thing as the listed items.

  • Zeb||

    I don't know how you can interpret it that way when it says "otherwise". That means in different ways. If you want a law to mean something in particular, then you should write it so it says what you mean.

  • Zeb||

    Maybe there is some special legal meaning, but in plain English, it's just saying that anything else that a person running a platform finds objectionable also is fair game.

    If they wanted it to mean things that were objectionable in similar ways to the specific ones, then they should have said "similarly objectionable" or something like that. "Otherwise objectionable" seems to me to include things outside of the conventionally objectionable things listed. I think there is some precedent for this kind of interpretation having something to do with laws against sending obscene materials in the mail. I'll have to try to find that and make sure it isn't something I dreamed about or something.

    I'm sort of not sure what to think here. It would be a terrible thing if all right of center viewpoints were silenced. But I just don't see that happening. I could be wrong.
    What I am fairly sure about is that punishing companies who appear to be censoring people based on point of view is not a good path to start on.
    We need to somehow convince most people that a cultural value of freedom of speech is important and that hearing things you don't like is OK. If that can't happen, we're screwed either way.

  • buybuydandavis||

    "but in plain English, it's just saying that anything else that a person running a platform finds objectionable also is fair game."

    Law is often not in plain english but in terms of art and standardized construction.

    That's how the Law Guild makes sure that the peasants need them.

  • Stevecsd||

    Zeb, read this article on Real Clear Politics

    https://tinyurl.com/RCP230

    The courts have interpreted the "otherwise objectionable" to only be related to the other things listed in that section, not a broad interpretation.

  • Ken Hagler||

    That clause is quite vague enough, though. Obviously the social media companies in question consider not being a communist to be objectionable.

  • Benitacanova||

    And they've decided that "in good faith".

  • Calidissident||

    The phrase "that the provider or user considers" is key there.

  • John||

    No it is not. There is an implied standard of reasonability there. If there wasn't, then the following language would be meaningless. And statutes are to be read in a way that gives the terms in them meaning whenever possible.

    Moreover, even if you believe that "considers to be" covers whatever the provider thinks it does, the statute at the very least requires the provider to declare something to be one of those things before removing it. Providers are not doing even that and just kicking people off and removing content without explanation.

  • Calidissident||

    What exactly constitutes reasonable here? The statute explicitly states that it covers removing material that's constitutionally-protected so it clearly isn't that.

  • John||

    What constitutes reasonable is just that, a reasonable interpretation of the terms. It doesn't have to be to everyone's satisfaction but it has to be reasonable. And that at the very least means whatever terms they apply are applied consistently and have some rational basis such that they are not arbitrary and capricious.

    You seem to be under the illusion that courts can't apply a reasonableness standard in any objective way. And that is just nonsense. The entirety of administrative law, for example, is based on the concept that a government's action is permissible so long as its interpretation of its authority is reasonable and not arbitrary and capricious. It is the same thing here. Courts apply such standards all of the time.

  • Calidissident||

    There's nothing in the law that indicates that websites moderating political content they find unreasonable isn't covered by section 230 regardless of whether you agree with that or not. You're reading it to say what you want it to say, not what it actually says.

  • John||

    Yes there is. The law lists things that providers can remove consistent with having the immunity. Political content the provider doesn't like isn't on the list. The only way you can read it to be on the list is to either read the terms "provider considers.." to mean that the terms that follow have no objective meaning and the provider can do whatever it wants or to read the term "otherwise objectionable" to have no relationship to the other terms such that it becomes a catch all that allows providers to ban anything they want.

    Both ways of reading the statute render the terms listed there meaningless. If you read the statute that way it effectively says "providers can ban any content they want for any reason". And that is clearly not what Congress intended. And indeed, it renders the entire clause superfluous because it effectively means all providers get the immunity regardless of what they do.

    Under your reading of the statute, how could a provider not have this immunity? And if that is what Congress meant, why did it go to all the trouble to list all of those things out and not just say that? That is not how you read statutes. You read them in a way that gives every term meaning to the greatest extent possible.

  • Calidissident||

    It may not be intended to cover everything, but it's clearly a very broad protection and was intended to be. I see no reason to interpret what providers are allowed to consider "obscene, etc." in a narrow manner.

    More to the point, I think that political content is intrinsically inclined to be objectionable and in some cases obscene to those who disagree with it. In some instances this is obvious to 99% of reasonable people - "All Jews should be killed" is a core political position of National Socialism. "All kulaks and reactionaries should be killed" is a core political position of Stalinism. If someone removes this content, should that open them up to liability? Even under your proposed exemptions, I don't see how it should. But if not, my point is that it ultimately does permit discrimination by political ideology, and it opens up a can of worms of the government deciding which political positions and ideologies it's ok for websites to reasonably view as obscene or objectionable and thus censor without losing immunity.

  • John||

    Under your standard, what would a provider have to do in order to lose immunity? I don't see how they could. And if that is the case, you are reading the statute to give universal liability when Congress clearly didn't intend that.

    There is nothing special about political content. If you think political content the provider sees as objectionable is covered by this, than why isn't non political content they find objectionable covered by it? There is nothing in the language or history of that statute that indicates Congress intended "otherwise objectionable" to cover political content but not anything else.

    Basically, you want every provider to have this immunity no matter what they do. And that may be a great idea. But that is not what Congress did here. They clearly intended to restrict the ways in which providers could censor content in order for them to avail themselves of the immunity.

  • Rational Exuberance||

    It may not be intended to cover everything, but it's clearly a very broad protection and was intended to be

    Well, and perhaps that is why Republicans are reasonably considering narrowing that protection and clarifying the law!

  • chemjeff radical individualist||

    Or, it could be interpreted as Congress saying: "Here is a list of things that are commonly cited as objectionable material (lewd stuff, etc.), and we will explicitly grant all platforms immunity for removing those items, regardless of the nature of the platform itself. But since the entirety of what is considered objectionable is really in the eyes of the beholder, we will also grant immunity for anything that the particular online community deems objectionable, with the phrase 'otherwise objectionable'."

  • John||

    If you read the statute that way and any online community can remove anything it finds objectionable, then the other items on the list are meaningless. And that is not how statutes are read. Courts do not interpret statutes in ways that render language meaningless. And that is what your interpretation does. If an online community can ban anything it finds objectionable, then the content being lewd or obscene or illegal is irrelevant.

  • Calidissident||

    Aren't those irrelevant anyway as long as something falls under your definition of what they're allowed to consider "otherwise objectionable?"

  • John||

    Aren't those irrelevant anyway as long as something falls under your definition of what they're allowed to consider "otherwise objectionable?"

    No, because otherwise objectionable has a meaning. At the very least it means the provider is consistent with whatever rule it has and not arbitrary. And it is clear what "otherwise objectionable" means. It is talking about things that go against the mission or nature of the community. If you are a Christian platform, you can kick off any non-Christian content. Sure, you can define your platform however you want. But your rules have to be stated and applied evenly. And that is not the same as the sort of complete discretion you are arguing for.

    Basically you think no provider should ever be liable for anything. And that is not what the statute says no matter how hard you try and torture it to do so.

  • chemjeff radical individualist||

    Sure, you can define your platform however you want. But your rules have to be stated and applied evenly.

    Where does *Section 230* require this?

    A platform's Terms of Service may state this, and if a platform violates its own Terms of Service, then that opens it up to a breach-of-contract claim. But that has nothing to do with Section 230 immunity.

  • Calidissident||

    ^What chemjeff said.

    Also, you didn't address my point in the second paragraph. You really cannot eliminate political bias without having zero moderation, or at the very least allowing everything that's constitutionally-protected, which the law clearly and explicitly does not mandate.

  • John||

    Also, you didn't address my point in the second paragraph. You really cannot eliminate political bias without having zero moderation

    Yes you can. Look at Hit and Run. There is no political bias in their moderation. They moderate all of the time. When some lunatic like Hihn doxes someone or someone puts up illegal content, Reason kicks them off. And Reason in no way engages in political bias in its moderation.

    There is nothing difficult about it.

  • Calidissident||

    Reason's moderation is pretty bare-bones and doesn't go very far beyond eliminating stuff that isn't constitutionally-protected, which the law explicitly is not limited to.

    Can you answer the specific examples I gave? Would a website open itself to liability if it bans a Nazi for posting "Gas the kikes!" or a communist for posting "Kill the rich!"

  • JWatts||

    "Can you answer the specific examples I gave? Would a website open itself to liability if it bans a Nazi for posting "Gas the kikes!" or a communist for posting "Kill the rich!""

    Those would be covered under the "excessively violent, harassing," categories. However, the root question is: Could a self identified Nazi not posting anything objectionable be banned just because they were identified as a Nazi?

    Or could a black person be banned because they were writing in a "black manner" that the provider found objectionable?

  • Calidissident||

    The point is that those positions, as offensive as they are, are undoubtedly political, and thus the standard would allow for political discrimination. It just becomes a question of which ideologies and positions are determined to "objectively" fit into those categories.

    To go off your Nazi example - who is being banned (from major social media sites rather than random ideological political blogs) just for identifying with a certain label and not because some people find what they post objectionable? Milo and Alex Jones didn't get banned just for identifying with a political group. You may disagree on whether what they posted was objectionable, or whether they should have been banned even if you do agree that they did post objectionable content, but they weren't kicked off just for identifying with a particular ideology.

  • JWatts||

    "Milo and Alex Jones didn't get banned just for identifying with a political group. ... but they weren't kicked off just for identifying with a particular ideology."

    Let's take the most recent case:

    "Jesse Kelly, a conservative writer, radio host, and failed Republican political candidate, is no longer welcome on Twitter: The social media site permanently banned him on Sunday, for reasons unknown. ...It's not clear which tweets got Kelly in trouble, or if it was something else. The decision to ban him could have been the result of baseless complaints, or even an error on Twitter's part. Kelly told other conservative writers that he was left completely in the dark,"

    I think he should take the matter to court.

  • JesseAz||

    A better example is the liberal classical feminist who was just suspended for posting a biological fact, men can't become women. Their DNA stays the same. She got banned. Yet tons of posts from trans activists remain up calling for physical harm to what they call TERFs, which they claim this feminist is. So Twitter suspends her for a biological fact based on harassment, but does nothing for calls of violence against those who state biological facts. That would violate 230.

  • Longtobefree||

    But the question is: if you ban a Nazi for posting "gas the kikes", and yet do NOT ban a Muslim for posting "kill the Jews", are you exempt under 230?

  • chemjeff radical individualist||

    Well, sure. Why not? I don't see where Section 230 *requires* objectionable content to be removed.

  • Rational Exuberance||

    Well, and hence we should clarify and strengthen it!

  • Zeb||

    Political content the provider doesn't like isn't on the list. The only way you can read it to be on the list is to either read the terms "provider considers.." to mean that the terms that follow have no objective meaning

    Well, it does do that, by a plain reading. I think it's poorly written. If you want a law to apply only to certain specific things, then don't throw a completely generalized catch-all at the end. It does render the rest meaningless. I don't think the courts should be in the business of excusing the poor writing of lawmakers. Congress can fix their sloppy, imprecise writing if they don't like it.

  • Social Justice is neither||

    So here is an ostensibly libertarian organization actively cheerleading attacks on free speech in defense of their progressive allies.

    Sorry, when "free speech platforms" go into actively curating content in pursuit of ideological goals then they aren't free speech platforms anymore as they are adopting the voices they choose to not suppress as their own.

  • Zeb||

    So what would you do?

    I'm not seeing cheerleading here. Just some analysis. And if doing the right thing helps people who disagree with you, shouldn't you do it anyway? Maybe changing this law is the right thing. I don't know. But I don't think this is (or should be) about taking sides in the ideological debate.

  • JesseAz||

    Removing legal protections is the right move. Twitter is being used by the left to dox, organize, and advertise violence against people like Carlson. The group was only suspended after an outcry despite obvious ToS violations. Nobody commenting on or contributing to the group faced punishment. Make Twitter liable for selective enforcement.

  • buybuydandavis||

    If Twitter is going to behave like a publisher, they should have the liability of a publisher.

  • Zeb||

    I'm not quite sure that they are behaving like a publisher. They aren't behaving like a completely open communication platform either. But the only two options aren't publisher or completely open. This is something where the old categories aren't sufficient.
    I don't like what Twitter and all are doing, but I don't think opening them up to lawsuits is going to be good for open communication or the internet.

  • Rational Exuberance||

    So what would you do?

    Eliminate Section 230. There is no rational reason why online publishers require these kinds of exemptions when offline publishers don't.

  • Zeb||

    Sure, but you are assuming that they are publishers or should be considered as such. I don't think that's obvious. New categories are needed for this kind of social networking platform.

  • Calidissident||

    Where is the cheerleading, and where exactly is the legal analysis wrong here?

    Also, is the libertarian position to mandate what privately-owned companies must allow on their website?

  • John||

    I don't think it is cheerleading but it is a bit of a silly argument. The assumption seems to be that tech companies really do hate conservatives and conservatives fighting back will just make it worse, even though if conservatives do nothing the tech companies can ban them for any reason and since they hate them almost certainly will do so.

    ENB seems to think that conservatives should just surrender and abide by whatever rules the Progs who run these platforms demand of them. Nowhere in the article that I can see does ENB say that she sees this as something bad. The only thing she seems to disagree with is conservatives not doing what they are told.

  • Calidissident||

    It more seems to be that the proposed solution would have an effect that worsens the problem it aims to solve.

    But setting that aside - if "surrender" in this case means "don't change or threaten to change the law to compel companies into allowing content you want them to allow" then yes, they should do that. If you're just talking about conservatives privately pressuring companies to have different policies, or boycotting, etc. then obviously that's fine.

  • John||

    Why can't conservatives change the law? These companies do not have a right to this provision. Why can't conservatives deprive them of it in response to these companies abusing it?

    These companies were given pretty broad immunity from legal liability on the assumption that they were just platforms and would make a good faith effort to only ban obscene or illegal content. They have clearly abused that, so why can't conservatives change the law to deprive them of that immunity?

    I know it must be horrifying for you to contemplate people you don't like petitioning their government for relief, but that is how a democratic republic works sometimes.

  • Calidissident||

    I don't think websites should generally be liable for the content users post on them. I don't think changing that is good for freedom or limited government, and I also don't think it's even good for the end goal sought here for the reasons outlined in the article. I don't think the government determining what people are allowed to believe is "obscene" is good either. If a conservative Christian website removes comments promoting abortion I don't think that should open them to legal liability for everything posted there.

  • John||

    If websites are discriminating against content and removing it, why shouldn't they be liable? It is one thing if you make no effort to police content. Then you are assuming no responsibility for what is there. But once you start saying what can and cannot be on the website, there is no way you can then claim you are not liable for illegal content that is put there. If you want the freedom to police the content, you necessarily should assume the responsibility for that content.

    Providers should not be able to have it both ways where they get the freedom to police their platform without assuming any responsibility for how well they do it. It should be one or the other.

  • Calidissident||

    If everything posted is pre-cleared by the website, then I'd agree with you. If anyone can post anything at any time, I don't think it's reasonable to hold the website responsible for not sifting through everything quickly and accurately enough to delete everything that should be deleted. In some cases where it could be sufficiently illustrated that they were aware of some illegal content and failed to act, then that might be appropriate but not as a general rule. And as Shackford said in the article, the existing law doesn't give blanket immunity as is by any means.

  • John||

    Why is it not reasonable to hold them to that? Is it hard? Sure. But they are the ones that want the discretion to censor content for any reason. If they think doing that is too hard, they can always just not censor content at all.

    The logic behind this immunity is not that because policing content is hard providers should be immune. It is that if a provider just gives other people a forum in the same way paper sellers give people something to write on, they shouldn't be responsible for what is put up on the site. But if you are policing content, you are not just selling blank paper. You are running what amounts to a publication and like every other publication should be held accountable for illegal content in it.

  • Calidissident||

    If a paper seller only sold to select customers should they be open to liability? I'm sorry, but I don't think a publicly-accessible online platform removing particular content makes every comment there the same as the stories a newspaper decides to publish. The New York Times should be responsible for what they put in their paper, but I don't think they're responsible for everything someone says in the comments on their website.

  • John||

    The New York Times should be responsible for what they put in their paper, but I don't think they're responsible for everything someone says in the comments on their website.

    If the New York Times goes to the effort to monitor its content and remove content it doesn't like, then absolutely they should. They are making the effort to monitor the comments. They shouldn't be allowed to ignore illegal content since they are.

  • Calidissident||

    And I disagree with that. I don't think any moderation at all implies responsibility for everything else. And the law pretty clearly agrees with that. The original case the law was written in response to was about a forum being held liable because they moderated comments.

  • JesseAz||

    The New York Times can be held liable for what their writers put into articles... I'm not sure you understand the issue.

  • chemjeff radical individualist||

    So what is your proposed solution, John?

    Repeal Section 230 entirely? I think that would be a bad idea.

    Modify Section 230? If so, how?

  • John||

    I think we should just enforce Section 230. The problem here is not the law; it is the fact that courts are not enforcing it. If these platforms want to arbitrarily kick people off of them or discriminate based on political content, good for them. But when they do that they should then assume the responsibility for all of the content on their platform.

    Maybe the law needs to be written more clearly to say that platforms cannot engage in discrimination based on political affiliation or anything other than the content being obscene, violent or illegal if they want to avail themselves of the immunity.

  • chemjeff radical individualist||

    Why should only discrimination based on political affiliation be forbidden? What about other forms of discrimination? Would those be permitted, and if so, why the discrepancy?

  • Mickey Rat||

    The argument is not that discrimination based on content is forbidden. It is that it removes liability protections for the content that is run.

    You can have an editorial viewpoint, but you are responsible for what is published on your platform if you do.

  • chemjeff radical individualist||

    And Section 230 was designed to remove that liability from the platform *even for* platforms that are editorially biased.

  • Calidissident||

    A problem I see with your solution is the question of whether the moderation of obscene or violent content is done in a consistent manner. With massive websites, it's basically guaranteed this will never be the case even if good-faith efforts are made. And you'll inevitably have cases where someone argues "I was banned for promoting violence, but this other person wasn't banned for promoting violence!" and I just don't see the legal enforcement of that turning out well.

  • John||

    Sure you will Jeff. You have that in every law. The devil is always in the details. But, if you made the language clear, providers would have to at least make a good faith showing that they are not just kicking people off arbitrarily. And that would eliminate a large majority of this stuff. Would there be close cases where courts gave the provider too much deference or screwed the providers by not giving them enough? Sure. But that is the nature of courts and laws. No law is ever enforced perfectly.

  • chemjeff radical individualist||

    It would also deter websites from even establishing or maintaining forums in the first place. Imagine a small blog with a small number of followers, and then along comes a troll who shits all over the forum and makes life unbearable. The blogger bans the troll, but then the troll sues the blogger for 'discrimination'. What blogger wants to deal with that type of headache? Better not to even have a comments section in the first place.

  • Calidissident||

    And to add to your point, moderating would open them up to liability for what everyone posts if the government didn't find their moderation to fit the narrow exemptions in John's proposed modification of the law.

  • John||

    And the result of that Calidissident would be unmoderated forums. I don't see the problem with that. And I think you greatly overestimate the difficulty in moderating forums if all you are doing is getting rid of illegal or irrelevant content.

    BTW, the "otherwise objectionable" language is meant to allow providers to kick out content that does not fit with their product. It allows say a gaming forum to kick out non gaming content.

    All of these rules just require the platform to apply reasonable and uniform rules in a consistent way. That is not difficult to do.

  • chemjeff radical individualist||

    It allows say a gaming forum to kick out non gaming content.

    So why should it not allow a left-wing forum to kick out right-wing content, or vice-versa?

  • John||

    You absolutely should. The problem with what Facebook and Twitter are doing is not that they are kicking out just right wing content. It is that they are establishing rules that claim to be neutral but then not applying them that way. If Facebook wants to come out and say "we only allow leftwing content on here because we are a leftwing platform" that is totally okay under this statute and the "otherwise objectionable" standard. What they can't do is claim to be one thing but actually be another.

  • chemjeff radical individualist||

    It is that they are establishing rules that claim to be neutral but then not applying them that way.

    If that's the case, then that is a violation of their contract with their users, that has nothing at all to do with Section 230. The law doesn't say anything about immunity only being granted to platforms that claim to be neutral.

  • Calidissident||

    Unmoderated forums can be easily ruined by trolls and bad actors. And as chemjeff said, what would really happen is a lot less forums. I don't think it's reasonable to force the choice between no forum and no moderation because you don't like the moderation policies of certain websites. Forums that are still moderated are probably going to be moderated much more strictly in order to avoid liability for anything. And the existing law clearly isn't supposed to be limited to forums just removing illegal content as it explicitly states "whether or not such material is constitutionally protected."

    Can you please provide a citation for your argument in the second paragraph? And what if a website's product involves a political slant? If a website openly identifies with a certain political ideology, is removing content from different ideologies just maintaining the product?

    This proposal is an awful solution that creates problems far greater than the one it aims to solve.

  • John||

    So what Jeff? Freedom is hard. If you don't want the responsibility that comes with it, don't take it. There is nothing that says the law should bend over backwards to make it easy to run platforms anymore than it should bend over backwards to make it easy to run any other business.

    If a website is publishing libelous material for example, the default position isn't that they are not responsible for the harm that it causes. The default position is that they are. And to exempt them from that, they should have to stop being publishers.

  • chemjeff radical individualist||

    Well, John, if your proposed solution leads to forums closing up shop, then perhaps your proposed 'cure' is worse than the disease.

  • newshutz||

    Yeah, if Reason closed its forums, The Hihsain One would have no outlet for its pain.

  • Azathoth!!||

    Also, is the libertarian position to mandate what privately-owned companies must allow on their website?

    This is the standard leftist switch.

    These sites--youtube, Facebook, Twitter and the like all agreed to terms that stipulate that they are not liable because they are not publishers--they are conduits.

    They did this with the express purpose of being able to deliver content that the standard media couldn't--made by creators from everywhere. It was a great idea that worked well--until they screwed it up.

    They are not abiding by the contract they agreed to--and it IS, in fact, a libertarian position that contracts, entered into willingly, should be upheld.

    What they're doing now, editing content outside the terms of service their subscribers signed is clear breach of contract.

  • Calidissident||

    Breach of contract and Section 230 are different legal issues. I think they would also probably disagree with you on what their terms of service does and does not cover.

  • JesseAz||

    What is the libertarian theory about excluding favored groups from liability?

  • Rational Exuberance||

    Also, is the libertarian position to mandate what privately-owned companies must allow on their website?

    No, the libertarian position is to remove corrupt, special-purpose, crony-capitalist exemptions from laws like Section 230 that everybody else is subject to.

    That is, removing Section 230 does not tell anybody what to publish or what to allow, it simply puts online publishing into the same legal situation as offline publishing.

  • Mr. JD||

    Exactly.

    Whenever a person on the Right criticizes Affirmative Action or social welfare, he's required to start with a disclaimer that "some racism does exist" or "some people are truly in need" before he's allowed to explain why the programs are bad on net.

    But can an ostensibly libertarian site even manage the shortest of disclaimers about attacks on free speech being bad before defending social media sites as they lie about the partisanship in their moderation?

  • John||

    Or propose a solution to the problem other than conservatives to shut up and stop being so icky?

  • chemjeff radical individualist||

    attacks on free speech being bad

    But "attacks on free speech" are NOT necessarily bad, presuming you mean more of an "ethos of free speech" rather than direct government censorship.

    For example, if I invite you to a party at my house, and you go on and on about how much you love Donald Trump, I should have every right to throw you off my property based on your speech. I wouldn't have acted in the spirit or ethos of free speech, but my private property rights come ahead of any ethos that may be socially expected of me. In that case, I would argue that this "attack on free speech" is NOT a bad thing, and instead is a GOOD thing insofar as it affirmed the primacy of property rights.

  • JesseAz||

    Your example is terrible and proves you don't understand the issue.

  • chemjeff radical individualist||

    My example is meaningful and your objection to it proves that you cannot think beyond the narrow confines of your own biases.

  • Rational Exuberance||

    Section 230 doesn't protect property rights; rather, Section 230 grants special legal immunity to a small group of privileged publishers, namely online publishers.

    Section 230 is like saying that if you made a few billion dollars by starting a high tech startup, you (unlike everybody else on the planet) are exempt from all personal liability.

  • Azathoth!!||

    You adore this example--


    For example, if I invite you to a party at my house, and you go on and on about how much you love Donald Trump, I should have every right to throw you off my property based on your speech.

    You use it again and again........against issues it doesn't apply to.

    See, it's absolutely correct--it just isn't analogous to the situation with social media.

    Let's stick with the party analogy--

    Say you want to put on a massive party, so you look to rent a place big enough to fit everyone. You find one, but the owner wants you to pay a deposit in case anyone damages anything. You don't have it but you manage to set it up so that every attendee signs something that makes them liable for anything they damage. The owner accepts this—so long as the stuff doesn't get damaged by the attendees doing something YOU have them do. If you start having them do stuff, you're responsible for ALL the damages yourself.

    That's where this party started. That's where your analogy needs to start. Not at a simple house party.

  • Fist of Etiquette||

    You know who else liked purging?

  • Unicorn Abattoir||

    Bulimics?

  • Dillinger||

    Karen Carpenter too soon?

  • Brett Bellmore||

    "here's Hawley acting like Twitter is being accorded some special status under Section 230 that's conditioned on the social platform taking no sides:"

    It was, in fact, tacitly conditioned on that. The idea that they'd up and start censoring political speech was considered so absurd at the time that it wasn't thought necessary to formally condition their, yes, special status on not engaging in political censorship.

  • chemjeff radical individualist||

    The idea that they'd up and start censoring political speech was considered so absurd at the time that it wasn't though necessary to formally condition their, yes, special status on not engaging in political censorship.

    You're kidding, right? There were most certainly partisan biased political forums on the Internet even in 1996 when the CDA was written.

  • Rational Exuberance||

    But the big, influential sponsors of this legislation weren't among those partisan biased political forums.

  • Stephen Lathrop||

    The OP gets the biggest point critically wrong. Nothing in the repeal of Section 230 need expose private publishers to any kind of government speech suppression whatever.

    Publishing without Section 230 protection is the default situation, still today, for ink-on-paper publishers. That doesn't mean in any way that they become targets of government censorship, that they become subject to government pressure about what they publish, or that they risk being turned into criminals for anything they do in the normal course of publishing. What it does mean is that they are subject to liability under civil law, especially for defamation or copyright violations. That, in turn, means they have to read everything before they publish it.

    Pre-internet, that constraint never kept America's publishers from being a model to the world for press freedom. Applying the same standards to the internet now wouldn't be any worse. Indeed, an internet operated under traditional standards of liability would far excel its ink-on-paper predecessor for freedom, because it wouldn't need ink or paper—eliminating the biggest bar preventing the impecunious from becoming publishers. "Gatekeeper" concerns would be reduced accordingly. And all without government speech regulation.

  • John||

    Exactly that. ENB and others on this thread are acting like internet publishers are special and somehow entitled to be immune from the responsibilities that come with free speech and publishing in every other context. And that is just bullshit.

  • Calidissident||

    The point is that most people here would consider requiring every Internet website to read every comment before "publishing" it to be a bad thing for freedom. No one is arguing for a different standard between a print newspaper and an online magazine.

  • Stephen Lathrop||

    Well, the point is misplaced. Because everyone disappointed with the standards enforced by editors they didn't like would be free to become his own publisher, and say what he pleased—subject, of course, to taking responsibility for his own defamations, etc.

    But if the larger point you suggest is that it too much constricts "freedom," to be held responsible for libel (etc.), I take your point, while disagreeing. I do think much of the confusion now creating puzzlement with Facebook and others like it, is precisely that many folks have grown accustomed to the notion of consequence-free defamation, and think they like it. There are plenty of folks who think what is best about the internet is its apparent promise to—for the first time—let them weaponize their own speech, and use it to inflict real damage on others they consider to be their enemies.

    My suggestion is that that attitude is utterly incompatible with speech freedom, and that those two can't long exist together. Either weaponized speech goes away, or free speech does. You can see the strain on the latter by noting how many of the comments here have turned recklessly toward schemes of government speech regulation.

  • chemjeff radical individualist||

    Everyone agrees that actual defamation should be punished. For example, if you posted a defamatory comment about me here at Reason, Section 230 would not prevent me from pursuing charges of defamation against you. Section 230 just means that I couldn't also pursue charges against Reason as well.

    The anonymity of the Internet, along with Section 230's protections, *do* enable a lot of people to turn into complete assholes online. But defamation is still defamation no matter how much of an asshole some people can turn into.

  • Rational Exuberance||

    Section 230 just means that I couldn't also pursue charges against Reason as well.

    Yes, that's what it means. And people are proposing removing that special form of legal immunity.

    So far, you have made no compelling argument why that is a bad idea.

  • buybuydandavis||

    They wouldn't be *required* to review every comment. But they wouldn't receive immunity from liability for their actions either.

    Be a common carrier or be a publisher, complete with the liabilities associated with either.

    They'll pick common carrier the moment they face that choice.

  • JesseAz||

    Your understanding of what most people want seems ignorant. What most here want is an equal application of the ToS instead of targeted application in exchange for liability immunity.

  • chemjeff radical individualist||

    an equal application of the ToS

    Which has nothing to do with Section 230.

    Using a privilege that Twitter would otherwise be eligible for as a political weapon to beat them into compliance with a demand from the state is pretty slimy behavior.

  • Rational Exuberance||

    The point is that most people here would consider requiring every Internet website to read every comment before "publishing" it to be a bad thing for freedom.

    That's because the current commenting model is intrinsically broken, and getting rid of Section 230 would fix this.

    How would that work? Reason would not have its own comment section anymore; instead, people would self-publish comments that refer to the Reason web site and the browser would link all those comments together into something that would look very much like the current comment section. This would be a big improvement over the current privacy-invading, censorship-enabling comment sections.

  • Lester224||

    Sounds good but would have unintended consequences. People are lazy. That would result in many fewer comments. Only those with a lot of free time on their hands and lots of energy would comment. The most frequent commenters here, say. A recipe for insult-wars and not much else.

  • Dillinger||

    a new perpetual argument. and elected people get paid off.

  • Vulgar Madman||

    Shorter ENB: "Stop resisting!"

  • buybuydandavis||

    +1

  • Curly4||

    Social media will find a way to weed out the users that they don't think should have access. I would not be too surprised if that would also include the smart phone as well. That is unless there is a civil right extended to social media and the smart cell phone just like there was to hotel rooms and clubs that gave blacks the right to rent a room just like white people.

  • spork||

    If you don't like Facebook and Twitter shutting down your opinions, go make your own platform. And then they'll shut that down too.

  • buybuydandavis||

    "The only reason Section 230 exists is to give platforms the ability to moderate free of liability"

    Reason stands up for crony capitalism yet again.

    Why should social media companies get special privileges that other companies do not?

    The phone company isn't held liable because they're required to be common carriers. They don't spy on what you say and ban you when they don't like it.

    If social media companies are not going to be common carriers, but curated publishers, they shouldn't get exemptions for the liability that other publishers face.

    Maybe ENB is right and the existing law is pure crony capitalism, giving social media companies the rights of other publishers but exemptions from the liability other publishers face. Or maybe the law does require that they act as a common carrier.

    Either way, their special status, with the prerogatives of publishers, but only the liability of common carriers, should end.

  • Sevo||

    "Either way, their special status, with the prerogatives of publishers, but only the liability of common carriers, should end."

    Yes.
    You can claim to simply forward what is sent, do so, and thereby be nothing other than the conduit with zero responsibility for that content. Don't like the content? Find and contact the provider.
    As soon as you begin *any* selection at all (and I mean ANY) you have now taken responsibility for the content.
    Pick one; you don't get a bit of this and a bit of that.

  • buybuydandavis||

    "Holding Prodigy liable for every post on its message boards was, of course, insane,"

    "Of course, insane"

    An admission that they have no argument.

    If Prodigy acts as a publisher, it's perfectly sane to hold them to the same legal liability standards of other publishers.

    And perfectly sane to hold them to the legal liability standards of a common carrier as long as they act as a common carrier.

  • chemjeff radical individualist||

    Well, Congress thought it was insane too, that is why they passed Section 230 in direct response to the Prodigy case.

  • buybuydandavis||

    Not true. That lawmakers make a law one way simply does not imply they consider it insane to make it another way.

    Just what evidence do you have about what Congress *thought* about the law beyond ENB's legal analysis?
    Did you ask any of them? Read Congressional debates on the topic?

    The actual law
    https://www.law.cornell.edu/uscode/text/47/230

    Tidbits
    (a) Findings
    The Congress finds the following:
    (2) These services offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops.
    (3) The Internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.

    They presumed users have "a great degree of control over the information that they receive", and that these "interactive computer services offer a forum for a true diversity of political discourse".

    Is it "true diversity" when, by your interpretation, they are perfectly free to entirely ban certain political opinions?
    Do I have a "great degree of control over the information I receive" when the information I want is banned?

    If the findings on which the law was justified no longer hold, it's far from insane to think that they would have written different laws under the findings accurate for today.

  • Robert||

    Did anyone explain the cartoon? Bad enough we're not getting alt-text, but when they require an interpreter...!

  • Sevo||

    I don't tweet or book my face, but it appears as if it is a takeoff of the twitter logo swearing at the public in general. Sorta clever, but a laugh-riot, it aint.

  • Longtobefree||

    Will someone with social media accounts please start a 'boycott twitter' movement on facebook, and start a 'boycott facebook' movement on twitter?

  • darkflame||

    Approaching this question from a different angle, the courts said that Trump's Twitter was a public forum... while that case was regarding Trump blocking people, not Twitter blocking accounts, couldn't that same principle hold here? Most politicians use Twitter to some extent, and if it's a public forum, and Twitter is blocking one side of the political spectrum but not the other (like they are), couldn't there be an argument in court made against Twitter?

  • Rational Exuberance||

    Weakening Section 230 works as an all-purpose salve toward authoritarian ends.

    The idea that holding companies responsible for what they publish amounts to authoritarianism is utter and complete b.s.

    Section 230 (along with net neutrality) has allowed companies like Google to create a rich and powerful monopoly for themselves. Of course, that's why statist, corporatist Reason defends it.

  • Freelancelot||

    Does it matter to you, "Reason", that the Twitter CEO lied blatantly before Congress about Twitter's algorithms? Does intense and ubiquitous but deceitful left-wing prejudice in social media platforms, in most major news media organizations, in academia not disturb you even if it is legal?

  • buybuydandavis||

    ' Nothing in Section 230 requires that an entity maintain "a true diversity of political discourse," whatever that means. '

    "whatever that means"

    If ENB had bothered to read the law, she'd have some idea of the relevance of "true diversity of political discourse", which is a phrase in one of the findings within the law.

    The law
    https://www.law.cornell.edu/uscode/text/47/230

    (a) Findings
    The Congress finds the following:
    (2) These services offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops.
    (3) The Internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.

    They presumed users have "a great degree of control over the information that they receive", and that these "interactive computer services offer a forum for a true diversity of political discourse".

    Is it "true diversity" when, by your interpretation, they are perfectly free to entirely ban certain political opinions?
    Do I have a "great degree of control over the information I receive" when the information I want is banned?

    Are "findings" in a law considered binding?

    Even if not, if the findings on which the law was justified are now violated in fact, it's unwarranted to conclude that the legislative intent is fulfilled under those explicit violations of its assumptions.

  • Violent Sociopath||

    This is a stupid article, written by a stupid person, relying upon the "expertise" of another stupid person.

    CDA Section 230 is the product of two forces: first, a line of jurisprudence that emerged in the early- to mid-1990s which treated internet service providers as publishers of content that appeared on their platforms or networks; and second, anti-porn social conservatism, which was dismayed by pictures of titties appearing online. The former resulted in ISPs running to Congress and peddling self-serving lies about how they were just passive conduits for content and that it would be impractical for them to engage in robust editorial control. The latter resulted in one of the greatest self-owns in political history: Congress granting ISPs a broad exemption from liability regardless of how much or how little editorial control they exercised over content, in the hope that the ISPs could be leaned on to censor pictures of titties.

    The intervening years have demonstrated two things. First, contrary to the ISPs' self-serving lies, they are fully capable of exercising robust editorial control over content, and there's absolutely no practical reason to immunize them from traditional liability. Second, while the ISPs have very little interest in censoring titty pictures, they have quite a lot of interest in censoring conservatives.

  • Violent Sociopath||

    Section 230 shouldn't be repealed. It should be modified, such that the liability immunity is expressly conditioned on ISPs and social media platforms remaining content neutral. The law should hold ISPs and social media companies to their own representations. If they're in fact passive conduits for information that do not censor user content, then immunity from liability is appropriate. But if they're publishers -- and it's increasingly clear that's precisely what they are -- then they should be made to assume the legal obligations of publishers.

    The mistake, here, is thinking that this is about "ensuring conservatives get fairer treatment online." No. Section 230 immunity is a special legal dispensation not justified by ISPs' and social media companies' own behavior. It's corporate welfare. Conservatives are under no obligation to continue to support it, particularly when these companies have made clear that they despise us and want to exclude us from their platforms.

    Let them all burn.

  • ThomasD||

    " It's corporate welfare."

    It's ENB. You think she has a problem with that?
