The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
If the Government Bans Viewpoint Discrimination by Social Media Platforms, That Would Also Protect Pro-Terrorist Content
Government action protecting speech must itself be viewpoint-neutral, I think -- and this makes it much less likely that such viewpoint neutrality requirements will indeed be adopted.
There's been a good deal of talk, in President Trump's "Preventing Online Censorship" draft Executive Order as well as elsewhere, about forbidding viewpoint discrimination by social media platforms. (Some call for that outright, while others suggest using the threat of increased liability to pressure platforms to stop such viewpoint discrimination.) I don't think that's consistent with current law (more on that soon), but I can certainly imagine Congress enacting some such statute.
Platforms could respond that they have a First Amendment right not to host speech they disapprove of, much like newspapers may refuse to publish items they disapprove of (see Miami Herald Publishing Co. v. Tornillo). But it's possible that they would be treated more like cable systems; the Supreme Court rejected (by a 5-4 vote) a First Amendment challenge to a facially content-neutral "must-carry" statute requiring cable systems to carry broadcast channels (see Turner Broadcasting Sys. v. FCC).
But, as I understand it, social media platforms routinely engage in one form of broadly accepted viewpoint discrimination: They try to block and to remove pro-terrorist speech (see, e.g., this general Facebook policy). This isn't limited to speech by known terrorist organizations, nor is it limited to constitutionally unprotected incitement of imminent terrorist activity, or solicitation of specific terrorist acts. Even speech that is protected by the First Amendment from governmental punishment, such as a lone-wolf American urging people to generally engage in jihadist violence, or praising people who had recently engaged in jihadist attacks, gets blocked. What's more, as I understand it, the federal government has long appreciated such actions (though it couldn't constitutionally require them).
If Congress were to indeed require social media platforms to be viewpoint-neutral (or content-neutral) in dealing with user-generated content, then they would have to stop blocking content that expresses a pro-terrorist-violence viewpoint. Likewise, if the federal government stops advertising on platforms that impose "viewpoint-based speech restrictions" or "Violate Free Speech Principles," it would have to limit online platform advertising to groups that don't discriminate against pro-terrorist speech, either. (I quote here from sec. 3 of the draft Executive Order, which seems to contemplate stopping advertising on such platforms, though it doesn't on its face prohibit such advertising.)
Likewise, sec. 4 of the draft Order says that "It is the policy of the United States that large social media platforms, such as Twitter and Facebook, as the functional equivalent of a traditional public forum, should not infringe on protected speech." Again, though, much pro-terrorist advocacy (including of the sort that social media platforms try to stop) is protected speech under the Supreme Court's precedents.
Nor do I think that the draft Order could be revised to just have a "but not pro-terrorist viewpoints" limitation. To the extent that Congress can indeed impose limits on social media platforms' editing, those limits must themselves be viewpoint-neutral (and perhaps even content-neutral). Certainly Turner Broadcasting, which upheld Congress's power to require cable systems to carry channels they didn't want to carry, stressed the content-neutrality of that rule. Likewise with PruneYard Shopping Center v. Robins (cited by the draft executive order), which upheld a state's power to require shopping centers to carry speech they didn't want to carry.
The government isn't allowed to discriminate based on viewpoint when it uses its own property to promote a diversity of private ideas. (See, e.g., Rosenberger v. Rector; Matal v. Tam; Iancu v. Brunetti.) By the same logic, it can't discriminate based on viewpoint when it tries to promote a diversity of private ideas on private property, either. Such a grant of immunity from private restraint is as much a government-provided benefit as a grant of money to a wide range of university student groups (Rosenberger) or a grant of trademark protection to a wide range of trademark owners (Matal and Iancu).
The matter is not completely certain, as I discuss at pp. 375-77 of this article; for instance, in Ralphs Grocery Co. v. United Food & Commercial Workers Union Local 8 (Cal. 2012), the California Supreme Court upheld a content-based law that allowed union picketing but not other picketing on employers' private property. In practice, though, even that content-based rule wasn't really viewpoint-based because employers of course could all along speak out in opposition to the union speech on their own property. And in Waremart Foods v. NLRB (D.C. Cir. 2004), the D.C. Circuit held that a similar rule would be unconstitutional precisely because it was content-based. The Supreme Court's plurality opinion in Pacific Gas & Elec. v. Pub. Util. Comm'n (1986) likewise generally condemns "content-based grant[s] of access to private property." (In Turner Broadcasting, the majority seem to endorse the plurality's condemnation of content-based grants of access, by stressing that, "unlike the access rules struck down in [Pacific Gas & Elec.], the must-carry rules are content neutral in application.") And, as I noted above, the general prohibition on viewpoint discrimination in programs aimed at promoting a diversity of private views is broad and strong.
Of course, if a government rule were indeed to require content neutrality—the rule in traditional public fora—and not just viewpoint neutrality, then even more platform restrictions (including some pretty popular ones) would be forbidden. Consider, for instance, some platforms' deleting material (especially on user request) for being pornographic (at least unless it's constitutionally unprotected hard-core "obscenity"), containing nonlibelous personal insults, containing vulgarities, and so on.
Now of course one possible response is that platforms indeed shouldn't be allowed to ban pro-terrorist speech, because the platforms really are "the functional equivalent[s] of a traditional public forum," and the solution for bad speech—on the platforms as well as on sidewalks and in parks—is counterspeech. Another response might be that platform exclusion of pro-terrorist speech is good on its own, but on balance it's better to sacrifice that in order to ban viewpoint discrimination by platforms more broadly. There is much to be said for these sorts of arguments.
But here I just wanted to note that requiring viewpoint neutrality by platforms would indeed invalidate platforms' attempts to suppress pro-terrorist advocacy, and that requiring viewpoint neutrality but excluding some viewpoints would likely be unconstitutional—a result that would make such viewpoint neutrality mandates much less plausible politically.
Is 47 USC 230(c)(2)(A) currently viewpoint neutral? Kind of seems like it's not.
". . . material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected . . . ."
Reasonable people can disagree on the precise boundaries of each of the above terms -- except "otherwise objectionable." That exception has swallowed the rule. Surely it can't be read so broadly that the platform becomes a publisher without any of the liabilities of a publisher.
Ok, but even if some platform moderated in bad faith or outside the listed categories (assuming "otherwise objectionable" is interpreted to mean other content objectionable for reasons similar to those listed, rather than just any and all content), (c)(1) still says an ICS user or provider is not a publisher. It doesn't matter what they do for moderation. So what is the point of (c)(2)? All I can think of is that (c)(1) has another element: the content must be provided by "another ICP." You might argue that an ICS provider or user fails this element if it was the ICP in some way. So then (c)(2) is a safe harbor against that argument.
That brings me back to my initial question. Upon thinking about it further it seems 230(c)(2)(A) is perhaps viewpoint neutral, but is not content neutral.
If the foregoing is correct and "otherwise objectionable" is interpreted narrowly, as would seem necessary to not render the foregoing words meaningless, then it would seem that nearly all of the acts of censorship that Twitter et al engage in which are politically controversial, do not fall under (c)(2)(A). Such moderation is indisputably not done in "good faith to restrict" the statutory types of content.
But that doesn't automatically mean the platforms are now publishers of everything. Actually, I don't know what the heck it means. Maybe, as I mentioned, it means there could be a particular set of facts where these actions could open them up to being treated as a publisher of particular information, if they can be shown to have contributed to its creation or development in some way.
". . . would seem that nearly all of the acts of censorship that Twitter et al engage in which are politically controversial, do not fall under (c)(2)(A)."
Allowing politically controversial choices by a private actor renders a law not content neutral? Come on, dude, that's laughable analysis.
I think what he is saying is that "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable" are terms that limit the scope of permissible moderation. Moderation based upon political viewpoints does not fall within that list and, therefore, does not confer protection from civil liability.
I do not think that automatically makes Twitter a publisher, however.
No, all this talk about Twitter not being a publisher is a distraction, moderation is where the discussion is.
That said, when Twitter appends "fact checks" to posts, this is NOT user-generated content, it is Twitter-generated content, and it seems to me that Twitter unavoidably does become a publisher for the purposes of those fact checks. Facebook has a similar problem with some of what it's doing: when they take user-generated content and append their own commentary to it, they become a publisher.
To the extent it's their content, yes. To the extent they're just providing a hyperlink to someone else's content, not necessarily. But assuming for the sake of argument that we're discussing the first category, yes, which would open them to liability for the contents of that fact check only. In other words, if the fact check said that Trump had a soul, then he could sue for defamation for such an obviously false claim.
"To the extent they’re just providing a hyperlink to someone else’s content, not necessarily."
So if I publish a hyperlink under your billboard, and the page at the hyperlink (not made by me) says you're a crook and nobody should hire you -- no liability for me?
Yes, that’s what I’m saying. Not necessarily. Under traditional defamation law, a distributor of someone else’s content is generally only liable if it had actual notice of defamatory content.
"To the extent they’re just providing a hyperlink to someone else’s content, not necessarily."
Yes, necessarily. It's perfectly analogous to a newspaper publishing a letter to the editor: Legally it's their content, because they had to make an affirmative decision to use THAT content, rather than somebody else's. Choosing to publish "fact checks" from CNN and WaPo, and not National Review and Fox News, is an editorial decision.
I'm not sure, but I certainly don't think it's at all obvious that linking is publication, Brett.
Certainly, the test is not merely making an affirmative decision.
Brett, it’s painful when you try to play lawyer. Please stop.
Sarc, you are conflating some different issues. First, I asked about the issue of neutrality for Section 230. If you had a statute that banned speech that was "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected," it seems to me that would be content-based. It's been a few years since law school, so I could be wrong. However, it might be viewpoint neutral, which is what EV's article is about. Content-based doesn't necessarily mean unconstitutional. Now, Section 230 is not such a statute banning speech. But it is a statute that involves the liability of online platforms, and perhaps, in EV's words above, "the threat of increased liability." Since a viewpoint-based law affecting platform liability would likely be unconstitutional, I am wondering if there is any problem for Section 230 as a content-based law that affects platform liability.
The post you responded to was mostly addressing the separate but related, subsidiary issue of just what Section 230 actually does. When YouTube deletes a video of several doctors making a reasoned presentation evaluating COVID lockdowns, that does not fall under (c)(2)(A). None of the political censorship that has been at issue over the last 3-4 years falls under (c)(2)(A), if we assume that "otherwise objectionable" is construed narrowly. I'm not sure what effect that has under this law. I've long thought Section 230 was a fairly useless and unclear law that didn't actually do much, because I thought social media platforms would not have been publishers anyway. EV's earlier post today explains that some early Internet cases held otherwise.
//Maybe, as I mentioned, it means there could be a particular set of facts where these actions could open them up to being treated as a publisher of particular information, if they can be shown to have contributed to its creation or development in some way.//
And what kind of liability would result? For example, does this cover the creation of a defamatory fact check?
That's because they're not currently required to follow the rules of a public forum because they're not a state actor. That statute says they're free to make whatever decisions they want with regards to the content on their service. The question is whether that should change to make it consistent with the limits of state actors in a traditional public forum.
The Second Circuit did rule that Trump's Twitter feed was a public forum. Does that change your analysis?
Not really because private parties (as opposed to state actors) aren't required to follow the rules of a public forum even if the government (the state actor) is required to follow those rules. The Second Circuit's ruling would restrict public officials but not Twitter.
I would agree it's somewhat sui generis because I can't really think of examples of a private entity being considered a public forum but, if this is an example of that, it doesn't turn the private entity's owners into public officials.
Antisemites are hoping for it.
Interesting. Unexpected (expected??) consequences often slip under the radar. Eugene, is it your sense that the people pushing for a change in legislation are mostly motivated by a non-political desire? By a desire to protect Trump specifically? By something else? Is it your aim to get this information directly in front of legislators, if it looks like it's gaining traction in Congress? Or is this post merely the observations of a First Amendment expert, and you're offering up the information so that others may take up the cause, if they, individually, see fit?
Pro-terrorist content is already being allowed scot-free, if the terrorists agree with the hosting site's slant (example: Facebook will not silence ISIS or Al-Qaeda on its platform, much less Antifa).
Which is to suggest that the supposed terrorism-suppression exception is, in reality, a red herring, as opposed to the more common straw man.
I would fully expect that our law enforcement and intelligence community people watch these terrorist sites fairly closely. Which is why I am not otherwise panicked by letting those sympathetic to terrorism openly converse. Much better, I think, than forcing them underground, where they would be harder to track.
Exactly. Let their social media activity be a honey pot that can be observed and monitored.
That was my reaction: Why would we want pro-terrorist people to hide more effectively? Let them out themselves, so we know who they are.
And then let them start worrying about which of their couple dozen friends is actually an FBI agent. Nothing breaks up a conspiracy faster than paranoia, and when you have a couple dozen people you really don't know, you don't even need an undercover FBI agent to put an end to a lot of things....
From my days involved with the Michigan Militia, the easiest way to identify the infiltrator is to look for the guy suggesting that you rob a bank or shoot a cop. He's your guy, show him the door.
I don't know, maybe that doesn't work for groups that actually ARE planning to do bad stuff.
Under such a policy, you'd also kick out non-infiltrators like Tim McVeigh and Terry Nichols -- and from what I heard long ago, they _were_ booted from the MM for this reason. So it pays off even for non-infiltrators.
It's well established that their media has been a successful radicalizing force, so that's a pretty unwise policy.
One man's radicalizing force is another man's social or political movement, and I don't trust government OR Mark Zuckerberg to make that distinction.
Sure, if you want to be postmodernist about it.
In the real world, we don't like it when al-Qaeda radicalizes people over social media, so we shut it down.
Forget postmodernist, I want to be classical liberal about it, because I really don't trust people to make that decision.
Hah. Rarely am I to your right, but I'm less worried about inchoate tyrannical abuse of power than actual demonstrated terrorist plots arising from online radicalization. (Note: I'm not counting the ones that are pretty entrap-y)
And not even secret black-ops type "let's torture and foment dumb coups" stuff, either; this is in-the-open, unclassified policy.
There is a direct link between terrorist groups on social media and local radicalization. A lot have moved to telegram and the like, but driving them underground lowers access in America, and thus the threat.
Note that this deplatforming has been done by companies on their own, probably because people reacted badly. And it's been going on for about a decade without tyranny, though not without some costs in performative nigh-entrapments of people whose only crime is being dim and angry. That militates toward reform, not abolition.
If you want to argue that the risk of giving the government the power to declare certain groups terrorist organizations is greater than the risk of home-grown radicalization, you can do that. But I think you'll be in the minority there.
I mean, they do all the time, but this issue isn't really about facts anyway.
Well, no, they do some of the time. Really, it's the Antifa they don't silence. I tested that once, found an Antifa page on FB talking about going out and beating some people up, reported it for yucks.
A week later FB got back to me and said they didn't see the problem.
Antifa is not quite on the same plane as AQ and ISIS.
Not yet, anyway.
I cannot speak for your ever fertile imagination, Brett.
They go around looking for people they disagree with to beat up. (This isn't an allegation, that's what THEY say they're up to!) That starts out bad, it doesn't take much imagination to see it going worse.
I"m not saying they don't suck, I'm saying calling them proto-Al Qaeda is pretty overheated.
IANAL and I wonder how this will fare being an executive order, not legislation. I have long thought that most executive orders reach into legislative territory, but often find out that legislation has been written with specific carve-outs for executive action.
Are there Section 230 provisions for modification by executive order, and if not, would this be likely to be knocked down just for that?
I think any FCC regulation would fail because it's not a reasonable interpretation of the statute but an attempt to expand the statute to effect policy. So, yes, Congress can do things an executive order could not.
Parts of the executive order are clearly fine. This one is a very dubious interpretation of a statute based on a belief of what the statute should say.
I think you'd have to see the FCC regulation; Unavoidably, that "good faith" language IS in the statute itself, and it has to have SOME meaning. Currently it's being treated as though it had none.
It does have meaning when it comes to liability for direct actions taken by Twitter, but not when it comes to publisher liability.
"Parts of the executive order are clearly fine. "
There isn't an executive order. There is a "leaked" draft, which as far as I can tell, no one has even attempted to authenticate.
//If Congress were to indeed require social media platforms to be viewpoint-neutral (or content-neutral) in dealing with user-generated content, then they would have to stop blocking content that expresses a pro-terrorist-violence viewpoint.//
Why is that a problem? If terrorists want to out themselves on social media, it makes things a lot easier for law enforcement, no?
You do realize that overseas terrorist groups use social media to try to recruit sympathizers here?
So?
So it doesn't make it any easier for law enforcement.
And that’s the only criterion we use when deciding on public policy issues? Allowing people to converse with each other and to go outdoors also makes it easier to recruit [fill in your choice of bad actors]. Should we ban those activities?
I didn't say we should ban anything. I was responding to someone who claimed that permitting terrorist propaganda on social media would make it easier for law enforcement.
I'm not seeing the problem here.
I didn’t think so at first either. But then I remembered that we’re in a post-COVID world, where the default position on any action that increases a risk is for prohibition. And don’t try using that phoney-baloney cost/benefit analysis stuff.
This may be a regrettable, but necessary consequence of the need for free speech.
Part of supporting free speech is supporting the concept no matter who is making the argument, no matter how repugnant the group, be it Nazis defended by the old ACLU or terrorists allowed to voice their opinions on Twitter.
Allowing governments or government-corporate complexes to effectively suppress free speech is, in general, not a good idea. We've seen this with the recent coronavirus epidemic, as the government of China effectively suppressed early knowledge about human-to-human transmission of the virus. The risks and dangers inherent in allowing suppression by government, or by government and large corporations together, are too great.
If the price for free speech is allowing terrorists to also voice their opinions, it is an unfortunate price, but one that must be paid. Otherwise, all we have is all dissenters being called "terrorists" eventually, and no freedom at all.
Yes, you captured exactly how I see this as well = If the price for free speech is allowing terrorists to also voice their opinions, it is an unfortunate price, but one that must be paid. Otherwise, all we have is all dissenters being called “terrorists” eventually, and no freedom at all.
The Founders were 'terrorists' of their time, looking at it from the British point of view (circa 1776-1787). Yet, here we are today.
I was a bit surprised to see Professor Volokh appear to be voicing the opposite opinion.
It is kind of why I really like VC. Lots of very interesting twists and turns.
You nailed it. I'm fine with terrorists spreading their message. Because then we know who they are, and it also protects regular speech.
"a lone-wolf American urging people to generally engage in jihadist violence"
NO!
I understand that there is a difference between what is and is not an actual "threat" -- I've been down that road with the purgatorial cesspool known as UMass Amherst and the clause "Im ba l'hargekha, hashkem l'hargo," which I have in Hebrew(?), a language I do not speak, so that there is no doubt that it means what a few thousand years of rabbis and rabbinical scholars have written that it means, and nothing else.
But advocating violence is not protected speech, and people who have done so have been held liable for the violence.
Chaplinsky was arrested for calling the police chief "a damned fascist" -- and while that standard wouldn't hold up today (BLM routinely calls cops worse), there are true threats.
But more relevant is the question of would you prefer to be surprised by reading about the terrorists or by reading about what they have done? In other words, cops read social media as well (much to the surprise of a *lot* of criminals....) --- and an officer reading about someone advocating jihad or praising people who had recently engaged in jihadist attacks might conclude that "hey, maybe I ought to check this guy out..."
I like to know who my bigots are so they don't blindside me...
"But advocating violence is not protected speech"
Actually, it can be. Advocating criminal violence isn't protected speech, but not all violence is criminal violence.
Wrong again.
See Brandenburg for the actual description of what's not protected.
I am gay and Jewish. For some strange reason Stormfront keeps deleting my account and my comments.
"they would have to stop blocking content that expresses a pro-terrorist-violence viewpoint"
Really? A court is going to punish Twitter for blocking a "pro-terrorist-violence viewpoint"?
This is a strawman. Revoking 230 does not make twitter into the government subject to 1A concerns.
The executive order's stated policies are intended to do just that. It's intended to prevent moderation by Twitter unless they want publisher liability for all third-party content on their site.
I think you're reading too much into the order. There's no question that Twitter could moderate all day long without offending Trump. All they'd have to do is stop moderating on the basis of politics.
I think a lot of observers would think that all they'd have to do is stop moderating on the basis of politics that cut against the people and issues that Trump favors. Those observers would think that Trump would not only be fine with moderation that hurts his political enemies, he'd be actively encouraging such action.
(But your general point is, I think, largely correct.)
What a joke this whole "social media censoring conservatives" narrative is. Is there actual evidence of this? I mean, has anyone looked into the specifics of the complaints cited in the draft?
This is just one more attempt by Trump to silence his critics, or to play to his oh-so-persecuted base. It deserves to arouse outrage, not serious legal analysis.
//This is just one more attempt by Trump to silence his critics//
How is an executive order whose intent is to permit more speech and less censorship an attempt by Trump to silence critics? What's the logic here?
To be fair, Trump was complaining about Twitter's "fact check".
OTOH, Section 230 would not provide immunity for content Twitter actually originated itself, and the "fact check" was Twitter originated.
That is correct.
However, I think the fact-check was more of a platform based function, not a publisher function, as it simply provided a link to CNN/WaPo articles.
The issue, in my view, is Twitter's manipulation of an original tweet, which seems a lot like somebody running up to a sign you are holding up on the street and writing a contrary message on it with a Sharpie, and then insisting it is an exercise of **their** free speech rights.
I think that's going too far; inasmuch as Twitter is selecting CNN/WaPo specifically, they're not fact-checking based on National Review and Fox.
It's not like they have a feature where anyone who wants to could append a "fact check" to any tweet, or could designate the source for "fact checks" they'd see; THAT would probably legitimately qualify as user generated content. But they don't do that because they want the "fact checks" to reflect their own perspective, which is exactly what makes them a publisher of them.
Just curious here, what would convince you? Would direct testimony from people within the company saying "yes, we suppressed conservative outlets and stories" convince you? Or not?
That doesn't matter, really. Because there are no factual findings in the EO, just a rant.
EOs are supposed to be orders, not "factual findings."
Besides, we know nothing would convince you.
Used to be common practice, before Trump.
Still, if you want to pass rational basis review, it's generally a good idea.
What is the provenance on this "draft" EO? Who leaked it? Has the administration officially acknowledged it as genuine?
Without answers to these questions why does anything the "draft" says matter?
I deleted my Farcebook Account after I got put into Farcebook "jail" for having used the term "tranny bathrooms." Well of all the things I can think of calling them, that isn't exactly offensive.
It's thought control, and they're getting good at it. But they lost my contribution to their empire....
For having used it nearly a year earlier.
They're just searching *everything* looking for anything to the left of Vladimir Lenin....
Don't you mean "to the Right of Lenin..."? If you do not have a typo, then I'm not understanding.
You just put more thought into Dr. Ed’s comment than he did.
There are a few topical initiatives in the draft, but notwithstanding Prof. Volokh's point above about [EO Draft] Section 4, a lot of [EO Draft] Section 2 is a call for such companies to stay consistent with their terms of service: just enforce speech consistent with what you say you will do. The complaint is that the company's censorship is "deceptive, pretextual, or inconsistent with a provider's terms of service." It goes on to identify the problems of unreasoned explanation, inadequate notice, etc.
So this seems like something that could be addressed in ToS, such as - our terms are that "we may randomly, and/or with prejudicial viewpoint discrimination, remove posts from users, with or without notice, with or without explanation." Happy?
Similarly, as to the [EO Draft] Section 4 criticisms about protecting speech that encourages terrorist activities: if such speech were called out in the ToS, it could remain forbidden, no?
Ultimately, as with much federal law, I'm curious how this EO could be enforced consistently. As I read the source draft document, it makes my head swim to think of the administrative overhead in policing such a surgical incursion into [USC] Section 230.
//So this seems like something that could be addressed in ToS, such as – our terms are that “we may randomly, and/or with prejudicial viewpoint discrimination, remove posts from users, with or without notice, with or without explanation.” Happy?//
Why not just do that? The problem with Twitter isn't so much that they do whatever they want, whenever they want, it's that they pretend to be neutral.
If Twitter was honest about the way it conducts moderation, would Twitter be as popular as it is today? Maybe. I think not, however.
That may be Trump's entire effort -- to get Twatter & Farcebook identified as the bastions of leftist bias that they want to be -- and to force them to admit that to their users.
A lot of bright Millennials get most of their news from Farcebook -- having to admit that they are biased would likely change that.
The problem is that you get your news from people who aren't honest. Twitter and Facebook already expressly say in their terms what you're asking why they "don't just do."
Whatever happened to "more speech" being the correct answer to bad speech?
The left discovered that they didn't do all that well if contrary perspectives could be heard.
Um, Trump's entire issue is with more speech he doesn't approve of.
Who defines "terrorist content"?
The last presidential election saw a level of polarization where some called their political adversaries "terrorists." Some of the country's top politicians took up this mantra. A mainstream presidential candidate called her political opponents deplorable and irredeemable. Connecticut Gov. Dannel Malloy called the NRA (and its 5 million members) terrorists, and San Francisco officially labeled the NRA a terrorist organization.
Allowing "terrorist" and other objectionable content also exposes these ideas to public scrutiny, allowing people to make their own decisions as to what is bad and good.