The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
No, a Web Platform's Decision to Restrict Speech Doesn't Strip It of 47 U.S.C. § 230 Immunity
Lots of people have argued that, even if online platforms should often be immune from liability for speech by their users, they should lose that immunity if they decide to restrict some users' speech. Here's a sample of this argument:
In contrast [to platforms that allow people to post whatever they want], here [Twitter] has virtually created an editorial staff … who … spend time censoring [user posts]. Indeed, it could be said that [Twitter's] current system of [editing] may have a chilling effect on freedom of communication in cyberspace, and it appears that this chilling effect is exactly what [Twitter] wants, but for the legal liability that [should attach] to such censorship…. [Twitter's] conscious choice, to gain the benefits of editorial control, [should open] it up to a greater liability than … other computer networks that make no such choice.
Now you might think that's a good argument, or a bad argument. But it is precisely the argument that Congress rejected in passing 47 U.S.C. § 230. That quote is from a 1995 case called Stratton Oakmont v. Prodigy, which held that Prodigy could be sued for its users' libelous posts because Prodigy edited submissions; the references to Twitter in the block quote above are to Prodigy in the original. Congress enacted § 230 to (among other things) overturn Stratton Oakmont.
And in addition to providing immunity to platforms that edit (alongside those that don't), Congress expressly protected their right to edit, notwithstanding any state laws that might aim to restrict that right (not that state laws generally do that):
No provider or user of an interactive computer service shall be held liable on account of … any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected ….
This protects providers' ability to restrict any material that they consider to be, among other things, "harassing" or "otherwise objectionable" (whether or not the court agrees with that view). When a service's operators restrict material that they view as offensive to themselves or to some users, they are restricting material that they consider—in perfect good faith—to be objectionable.
We might think that the service is wrong to consider some ideologies to be objectionable, or unduly narrow-minded, or acting in a way that harms public debate. But if Twitter is censoring some conservative messages, it's doing that precisely because it considers them to be "otherwise objectionable." (One can imagine non-good-faith restrictions, such as if a service restricts messages not because it considers them objectionable but simply because it's competing financially with their authors, and wants to use its market share as a way to block the competition; but that doesn't seem to be happening in any of the recent blocking controversies.)
Maybe Congress erred; maybe § 230 should be revised; I'm inclined to think it's on balance a good idea, but we can certainly debate about whether and how it should be changed. But we should recognize that § 230 does indeed provide immunity to platforms that restrict material they consider objectionable (whether for political or other reasons) as well as to platforms that don't.
Main article appears to have been double posted.
Yes, indeed, but the more the merrier, and in that spirit I would point out that one of the most objectionable aspects of Twitter's conduct in this affair is its failure to suppress the account purporting to be posted by Devin's mother. This is surely an act of criminal mimicry if there ever was one. See the documentation of our nation's leading criminal "parody" case at:
https://raphaelgolbtrial.wordpress.com/
That's a very informative post. Thanks.
You seem to be depriving that "in good faith" of any function. But we have to assume Congress put those words in the statute for a reason: That particular guarantee was only supposed to apply to good faith moderation.
Interpreting "or otherwise objectionable" to include anything the carrier dislikes for any reason whatsoever would render the "in good faith" meaningless.
But see Donato v. Moldow, 865 A.2d 711, 727 (N.J. Super. Ct. App. Div. 2005) ("Whether Moldow knew and disliked appellants is not relevant to the immunity terms of § 230. Selective editing and commenting are activities within the scope of the traditional publisher's function.").
You're mixing up wrongful non-deletion (which is what Donato was about) with wrongful deletion.
But the statute treats them separately. The carrier's immunity for content created by others is absolute; its immunity for deletion of content is qualified.
Where immunity is qualified, it must be possible to forfeit it.
"You're mixing up wrongful non-deletion, (Which is what Donato was about.) with wrongful deletion."
Donato was about alleged wrongful deletion. It's littered in the opinion. For example:
"Therefore, we are unpersuaded by appellants' contention that Moldow's conduct in removing some messages after receiving complaints, but not removing others, transforms him into an information content provider."
In the portion I cited you to:
"Nothing more is alleged here. Whether Moldow knew and disliked appellants is not relevant to the immunity terms of § 230. Selective editing and commenting are activities within the scope of the traditional publisher's function."
Again, Donato was a complaint about the content that wasn't deleted. That they'd deleted other content merely demonstrated that not deleting the content complained about was a choice.
Selective editing and commenting may be within the scope of a traditional publisher's function, but 230 provides online platforms protection beyond that afforded traditional publishers, and that protection is qualified in the context of deletion.
Expressly so. It is not a categorical protection in all cases whatsoever, but only in cases where the deletion was a good faith effort to remove objectionable content.
On the express terms of the law, a deletion that isn't protected is possible.
"Again, Donato was a complaint about the content that wasn't deleted."
So is Nunes's lawsuit (which is the subject of the OP). Specifically, he complains that "Twitter consciously allowed the defamation of Nunes to continue. . . . Twitter permitted @DevinNunesMom . . . to tweet and retweet with impunity throughout 2018." In its Count 1, it claims that Twitter "used its platform and allowed its platform to be used by others as a means to defame Nunes." His defamation suit in Count 2 is directed at Twitter, alleging that it made false statements about Nunes. He is seeking an injunction ordering Twitter to suspend several accounts and deactivate and delete all tweets, retweets, and replies by those accounts. These are, fundamentally, complaints about content that has not yet been deleted.
"On the express terms of the law, a deletion that isn't protected is possible."
Yes, and Professor V has identified the sort of bad faith found by courts ("One can imagine non-good-faith restrictions, such as if a service restricts messages not because it considers them objectionable but simply because it's competing financially with their authors"). There's a Kaspersky case about that somewhere.
The complaint alleges a bit more than that. It alleges that Twitter participated in the development of defamatory material by a combination of acts, including selective censoring and shadow banning.
Right. Just like in Donato, the plaintiff has alleged that the defendant both refused to remove defamatory material, and selectively removed material.
While I disagree that that's all that is alleged, I suspect the case will ultimately fail, although discovery would have been fun.
MKE, I was once a newspaper editor. If I didn't like your stuff, it didn't get into my newspaper. Nobody ever said, "Hey, you have to like my stuff!"
And I had all sorts of reasons for disliking stuff. Some of what I disliked purported to be factual, but wasn't true. Some of it was scurrilous attacks on some private person, with no public importance. Some of it was interesting-looking stuff that made potentially defamatory allegations which I couldn't tell were true. If I was feeling low energy, and didn't want the work of proving the truth, then that stuff didn't get in either.
All sorts of reasons. And sometimes no reason but my own private prejudices. And I never explained to any of those folks why their stuff didn't get in. It just didn't. I guess all that time, I must have been "shadow banning" would-be contributors, without knowing it. But I thought I was just editing. Are you sure "shadow banning" isn't just a scary-sounding term for editing?
Exactly. And that's your real complaint about § 230; all the rest is just smoke: it eliminates the ability of random schmuck at a newspaper to act as a censor.
"it eliminates the ability of random schmuck at a newspaper to act as a censor."
But, the ability of other random schmucks to censor is maintained. For large platforms (e.g., Twitter) the random schmuck exercising censorship by caprice or prejudice is replaced by a writer of schmucky algorithms. Schmucks, no matter how you try, you just can't get away from the schmucks. Like the poor, the schmucks will always be with us.
"Interpreting "or otherwise objectionable" to include anything the carrier dislikes for any reason whatsoever would render the "in good faith" meaningless."
Finding normal, mainstream conservative speech objectionable enough to censor shows a lack of good faith.
True, true, true. It's obviously just their post-modern neo-Marxist philosophy leading to the banning of such fantastic conservative patriots (as if there were any other form of patriotism than conservative patriotism) as Alex Jones and Milo Winealotamuss. Next they'll be trying to ban #flatearth.
I disagree. If Twitter deleted a tweet that it did not regard as objectionable at all, but thought the deletion would irritate the tweeter, and it would be fun to irritate him, then the deletion would not be in good faith, by reference to the permissible criterion - objectionableness. It would be a deletion not to delete objectionable content, but to irritate someone under the false colors of objectionable content.
Yes! We need a Ministry of Good Faith to scour the internet for occurrences of bad faith and to punish them. The Ministry can also handle instances of blasphemy and heresy too. I understand that the Russians have a pale that has fallen into disuse.
"One can imagine non-good-faith restrictions, such as if a service restricts messages not because it considers them objectionable but simply because it's competing financially with their authors, and wants to use its market share as a way to block the competition; but that doesn't seem to be happening in any of the recent blocking controversies."
Why would that be considered not in good faith, but blocking people for strictly ideological reasons would be considered in good faith?
Also, what if it's one and the same? One competitor to Twitter is Gab. Many on the left have denounced Gab and called for it to be de-platformed itself because Gab hosts content they find offensive.
If Twitter were to delete all posts encouraging people to sign up and use Gab, would that be in good faith, because everyone agrees that Gab is an offensive website that offends the sensibilities of the community? Or would it be in bad faith because they are blocking access to a competitor's product?
Good faith can have slightly different meanings in different contexts, and I can't find a case where its use in this statute was definitively construed, but the underlying idea is that of an honest and sincere attempt to do whatever the actor purports to do. Here, that translates into removing or restricting content because the provider sincerely finds it to be objectionable, as opposed to using that as a pretext to remove it for some other reason as in the example Prof. Volokh provided.
The good news though is that you are free to decry the status quo, and demand that the law be changed so the Twitters and Facebooks who so dominate the web be held to a higher standard. And maybe they should.
This works both ways; Twitter most certainly is removing users and restricting content pretextually - it's funny peculiar how the bans and deletions all run in one direction - but I'm sure that they honestly and sincerely believe any expression of centrist or rightist thought IS offensive by its very non-leftist nature.
Chem, I don't understand your comment. What does "pretextually" mean to you?
Full of fecal matter, just like Twitter's justifications.
Reasonable Man says that expression of rightist, centrist, and leftist thought is OK so long as it's not obscene or whatever. Twitter, OTOH, believes in their heart of hearts that only expression of leftist thought is OK, and therefore they pretextually ban the rest as obscene, abusive, whatever.
Just look at their recent conservative-banning campaign based on saying that telling laid-off JournoLists to "Learn to code" was abusive, when those same JournoLists were telling coal miners the EXACT SAME THING after 44*'s environmental regulations destroyed the coal miners' jobs.
That ban, deletion, etc. is wholly pretextual.
Twitter and its favored leftist mob need to be brought to heel. How dare they suggest to the miners that Obama wrongfully abused that their way forward depends on education and adapting to the modern economy, when everyone knows that what we really need is more black lung disease, more mine cave-ins, more impoundment failures, and more acid rain. Make America Great Again: bring back black lung disease!
Yes, indeed. Milo and Alex Jones were banned simply because of their mainstream conservative views. Can you name others who have been banned for such bad faith reasons?
"...it's funny peculiar how the bans and deletions all run in one direction..."
The entire purpose of 230 was to prevent a content host from being sued over its failure to police all comments uniformly. Otherwise, people would be discouraged from exercising any censorship. The "good faith" is a reference to attempts by the person to police their own content, not some requirement that they prove up equal, viewpoint-neutral censorship.
Y'all's weird interpretation of the statute can't prevail because it will result in precisely the conduct that 230 was intended to avoid.
Indeed, you are reading "good faith" out of the law. The law references offensive content, and lists examples of it. "or otherwise objectionable" is just a catchall phrase meant to encompass things of the same nature as those listed.
Banning on the basis of politics on a platform which doesn't explicitly state that it has a particular political orientation is going to be bad faith, on an ordinary reading of Section 230.
"Banning on the basis of politics on a platform which doesn't explicitly state that it has a particular political orientation is going to be bad faith, on an ordinary reading of Section 230."
Exactly this. No one says that you can't make a platform which is politically biased; what is problematic is saying that you're unbiased and yet making editorial decisions on the base of politics. Twitter is trying to say that they're unbiased (and therefore get 230 protection,) but it's clear that they're not.
230 protection is not based on being "unbiased". Biased and unbiased content hosts enjoy the same protection.
Twitter has, as you know, reserved the right to remove content for any reason whatsoever. The universe of bad faith under its own agreement is infinitesimal. Bad faith, per the case law, is going to be more akin to something like anti-competitive behavior (under Sherman Act) used to block out competitors.
The existence and usage of shadow banning by definition removes good faith from the equation.
"The existence and usage of shadow banning by definition removes good faith from the equation."
Literally true. It's right there in the dictionary.
Very informative.
As I mentioned, it seems that the New York Times, for example, could fire and rehire its reporters/content producers as independent contractors, while retaining editorial functions, and thereby become an interactive computer service with Section 230 protection with respect to all of the content published online. No matter how much editorial control the NYT exercised in terms of the selection of which content to publish, they would not be the "information content provider" and could pass the buck for responsibility onto its reporters.
Of course, one may surmise that the NYT would lose (even more) respectability if it didn't take responsibility for its content. Also, apparently case law indicates that if service providers specifically solicit, contribute to or actively participate in the creation or development (not just the publication) of unlawful content, then they may be considered an information content provider.
As a policy, I agree Section 230 seems like a good idea and its application seems OK thus far. But maybe problems could develop with publishers (or, service providers) seeking to distribute potentially defamatory content yet evading liability as a publisher or speaker. But the best answer to these problems is more free speech, as well as the much deserved decline in the reputation of the legacy media which is already well underway, and more public criticism of overly censorious online platforms that are not transparent and that mislead and claim to be neutral.
If, in the future, online platforms for speech or financial transactions gain very clear monopoly power status, I think that would change things and present a serious problem for free speech.
It's a mystery to me why the NYT hasn't already done this as it's obvious that their reason for being is to publish defamatory articles about those whom they oppose without bearing any liability for the defamation.
Well it's not too much of a mystery, as they really face very little threat of liability no matter what they print about a public figure under New York Times v Sullivan.
Devin Nunes's cow is not a public figure but a private bovine. Likewise Devin's mom. If the NYT were to go entirely online and publish only selected comments, it would be able to defame private bovines far and wide, and with impunity! Then where would we be?
Our Fearless Leader has promised to open up the libel laws and I'm sure he has Mitch working on it. If Mitch can't get this done for El Puerco (as the raping Mexicans call him -- and that's defamation right there!) what good is he? At least that pesky John McCain is dead and gone and can no longer stand in the way of Truth, Justice, and Making America Great Again like it was in the 50s. If not for McCain we could have gotten rid of those awful pre-existing condition rules and every American would be able to negotiate directly with the insurance companies without government interference -- just so long as they keep their dirty fingers off my Medicare.
Haha, ok you got me. Yes, the venerable NYT is at this time at least several notches above anonymous Twitter trolls randomly attacking someone's mom. God Bless those beacons of truth, justice and impartial journalistic integrity.
Praise the Lord that we have Breitbart, Infowars, and Lou Dobbs to bring us the truth.
Breitbart and the Washington Post or something are two sides of the same coin. Infowars and ... I don't know, Mother Jones? I've never consumed either of them and I also don't watch TV so I can't say for sure. I will say the right-leaning folks tend to be more up front and honest about their viewpoint orientation and advocacy.
Anyway, polls show like 90% of Americans are well aware that the media reports fake news and is biased. But at the same time, even higher percentages say that their opinions aren't influenced by media bias. That seems high, but the point is one needs to read between the lines.
If you would avail yourself of sources like Alex Jones/Infowars and Whirled Nutz Daily you would learn how to properly gauge the treasonous purveyors of fake news such as the NYT and WaPo. Alex Jones, Jerome Corsi, Roger Stone, Michael Caputo (who was so viciously slandered by being identified as a Putin stooge), Michael Flynn, Jr. (who exposed the story of the child trafficking ring run by the Clintons in the basement of the pizza restaurant) -- these are the people we need to pay attention to for the truth during the coming tumultuous times.
It's deja vu all over again!
It's deja vu all over again!
You can say that again!
I think Eddy time-travelled ten minutes into the future and then went back and took your advice.
You know that deja vu thing? This post is a lot like that.
My read of the complaint is that the plaintiffs intend to argue that Twitter restricted the availability of content to advance the political campaigns of a specific party or candidates -- not because Twitter believed "in good faith" that the content was "objectionable" in the respects that Twitter claims it was objectionable. This is demonstrated by the fact that equally "obscene," "harassing," "lewd," or "violent" content from people supporting one set of candidates was taken down, but left up for people supporting another set of candidates.
Perhaps you're right that Twitter could straight-up remove conservative content because it is conservative. But that is not what Twitter claims to have done. (And if Twitter actually said it was filtering out content because it was conservative, it would be a business and political disaster for Twitter.)
Instead, Twitter is trying to have its cake and eat it too by saying it doesn't discriminate against conservatives -- it's just that all these conservatives being shadow-banned happen to be obscene, lewd, violent harassers! And what the plaintiffs are planning to show is that Twitter's stated grounds for targeting them are not "in good faith" as required under the statute, as demonstrated by Twitter's failure to remove obscene, lewd, violent harassment from another set of posters.
"My read of the complaint..."
Is that the claimant intended to say something about the "good faith" exception without mentioning anything in the complaint about good or bad faith? No, their theory is that Twitter is a content provider because it makes editorial decisions that the plaintiff disagrees with. They will need to plead more to implicate a bad faith exception to immunity.
Actually, yes, the complaint alleges expressly that Twitter's claimed apolitical grounds for censoring some tweets but not others were "a lie," and that Twitter was actually seeking to influence politics:
"Twitter knew the defamation was (and is) happening. Twitter let it happen because Twitter had (and has) a political agenda and motive: Twitter allowed (and allows) its platform to serve as a portal of defamation in order to undermine public confidence in Plaintiff and to benefit his opponents and opponents of the Republican Party. . . . As part of its agenda to squelch Nunes' voice, cause him extreme pain and suffering, influence the 2018 Congressional election, and distract, intimidate and interfere with Nunes' investigation into corruption and Russian involvement in the 2016 Presidential Election, Twitter did absolutely nothing. . . . Twitter represents that it enforces its Terms and Rules equally and that it does not discriminate against conservatives who wish to use its "public square". This is not true. This is a lie."
I would be very surprised if a court held that these pleadings are not enough to support an argument that Twitter was not acting in good faith.
Why do you think having a political agenda is "bad faith" in the first place?
But just to be clear, you didn't find it strange that the plaintiff never alleged bad faith?
We're not talking about a failure to expressly plead an element of plaintiffs' claim -- which, I agree, would have been strange. This is a question of how the plaintiffs will respond to an affirmative defense that Twitter has not yet asserted. The plaintiffs have substantively pleaded bad faith throughout the complaint, which leads me to believe they intend to make a bad faith argument when Twitter cites the statute in its defense. The fact that plaintiffs used other words than "bad faith" in their pleadings is not enough to lead me to believe otherwise.
They'd probably have a 12(c) problem, see 546 F.Supp.2d 605.
It's bad faith to publicly state they have no political bias, as Jack has now done on multiple podcasts and outlets, when they actually do have a political bias. To create terms of service that state content neutrality, and then enforce the rules only per some unwritten rule, i.e. politics, is also bad faith.
"To create a terms of service that states content neutrality..."
The Twitter Terms of Service explicitly states that Twitter can remove content, or users, for any reason whatsoever.
It wouldn't be...save for their statement that they don't censor conservatives. Like private colleges that pay lip service to the first amendment in their mission statements then go on private censorship sprees.
"And if Twitter actually said it was filtering out content because it was conservative, it would be a business and political disaster for Twitter."
Maybe the point of the lawsuit is not to acquire $250 million, but to make Twitter admit publicly that it regards conservative content as per se objectionable.
What Twitter is claiming it's done and what it's actually done are not the same thing. And a company should be judged on what it's actually done, not what it claimed.
This brings up a related issue with respect to the political bias and advocacy of tech companies. Isn't there a campaign finance "in kind contribution" issue here?
Numerous statements and leaks from the likes of Google have shown that major efforts and actions were undertaken with the overt intent to help certain candidates get elected, such as Hillary Clinton in 2016.
Not just Clinton, but 44* was the recipient of MASSIVE in-kind donations from tech/media/social media.
If we had a true FEC, Zuckerberg, Bezos, etc. would be UNDER Leavenworth.
I don't see why you think Twitter's claims as to its deletion rationale are relevant to a potential bad faith claim by plaintiff. If Twitter is truly intending to delete content due to its conservatism, and lying about this, they are likely in *political* hot water; but since they are acting in good faith to restrict availability of otherwise objectionable content, per 230 they can't be held *legally* liable.
I don't see how saying you don't censor conservatives, then censoring conservatives, is in good faith simply because you have an obviously false surface rationale. By their own definition it isn't objectionable.
Unless they want to declare it objectionable, but, good guys they are, they choose not to delete it (but do so anyway.)
That's having your cake and eating it three different ways.
'Doing X in good faith', if it means anything, means really doing X, as opposed to pretending to do X or doing X in some pro forma way that robs X of all meaning. So if Twitter is truly intending to and does delete content because they find it objectionable due to its conservatism, they are doing that in good faith, and that is protected by 230. If Twitter is lying about their intent that certainly speaks to their credibility but has no bearing on the 230 protection.
I'm mildly curious about the intersection of "voluntarily" and deletion bots. Is what your bot does, pursuant to the algorithm you programmed it with, something that "you" do "voluntarily"?
After all, algorithms often finish up in places the algorithm designer did not intend.
If you fire a shot, you're responsible for the bullet.
If you write an algorithm, you're responsible for its results.
Should be good, right?
I'm not sure that your analysis of responsibility covers the point. The statutory exception refers to "actions voluntarily taken" not to "actions for which you are responsible." I'm not sure they're quite the same thing.
If you fire a shot, aiming for your barn door and, because you are a poor shot, you hit your neighbour's horse you're certainly responsible for the damage to the horse. And you certainly fired the shot voluntarily. But you didn't voluntarily shoot the horse, that was a mistake.
And so, but more so, with bots and algorithms. You're responsible for the bot deleting Mr Nunes's tweets (or not deleting tweets about him, or whatever it is that he's complaining about) but it's not obvious that you deleted his tweets (or whatever) "voluntarily."
Doesn't matter if it was a mistake or not, unintended or not. The act of launching a bullet carries liability; so should launching an algorithm.
If Alex is screwing around with a loaded pistol, and fires a shot which strikes Barb, he's still liable. Sure, not as liable as if he had intended to fire on Barb, but still liable. Negligent algorithm designers are no less responsible than negligent gun handlers.
And then we get into the issue of "creative incompetence," it's a sure & certain thing that they'll write their algorithms to ban conservatives and just say, "oops, we missed that."
I think "voluntary" is meant to distinguish from actions undertaken pursuant to a court order. Not to distinguish from inadvertencies.
I'm going to reiterate my question from one of the earlier posts to the author. Do you think the lawsuit by Nunes is more about getting the chance at discovery, and do you think it will be able to survive until discovery under the negligence and good faith standards contained within Section 230?
1. Yes.
2. No idea.
3. But, assuming you're right, Nunes should be trying to find a jurisdiction where he's likely to find a friendly judge. Not that there are Dem judges and GOP judges, obviously.
I have been informed that there are no Democrat judges or Republican judges.
1. No. There's no discovery that would help him here. It's not a legal action at all; it's a fundraising stunt.
2. No.
Don'tcha just hate it when the law and the facts get in the way of a good conservative position.
Just like Ilya preening about the Muslim ban got in the way? May want to save the preening for the actual court result.
A question to the Professor. Is there a limit to 47 U.S.C. § 230? It was passed for a good reason... but it could be abused. Furthermore, there's a question about when selective editing crosses into campaign support, in violation of campaign finance laws.
Let's imagine a series of scenarios.
1. Twitter selectively bans or shadow bans a number of conservatives, because it finds conservative actions objectionable.
2. Google bans the placement of conservative ads, because it finds them objectionable.
3. Google de-lists or removes the placement of polling locations in conservative areas, because it finds conservative actions objectionable.
Are all of these OK under 47 U.S.C. § 230?
Why do you think this would not be the case? Reddit has channels that are moderated on precisely these grounds. Do you think Reddit is not entitled to 47 USC 230?
Sorry...
You think there's nothing illegal about Google deliberately suppressing/censoring the location of polling places in certain more Republican areas?
I just want to clarify that that is your position.
"nothing illegal"? I haven't read the laws in all 50 states and I don't have a fucking photographic knowledge of all federal laws. Tell me which laws you had in mind, and I'll do my best to answer your question. Please keep in mind that 47 USC 230 doesn't impose liability, but grants immunity.
Why do you think Google would be prohibited from suppressing/censoring the location of polling places through its own proprietary system?
I'm not asking you for a photographic memory. I'm not trying to catch you in some bizarre catch-22. I'm asking what you think.
I'm asking if you truly think it would be OK for Google to take such an action. OR, if you think such an action would violate any "good faith" clause and open it to liability.
Who cares what someone thinks.
What does the law require?
If the law does not prohibit Google from suppressing/censoring the location of polling places in certain more Republican areas, then they can do it.
What do you mean OK? Legally? Yes. Would it make me happy? No.
Blocking search results doesn't have anything to do with 230, so the good faith clause you're talking about doesn't apply. But back to what we are discussing: I don't think selectively blocking conservative comments violates the "good faith" clause.
Armchair Lawyer, I'm not a lawyer. That's how I can recognize you aren't a lawyer either. Am I right?
Hey, this is the internet where even non-lawyers are entitled to reach stupid conclusions about the law.
All of these are OK under the first amendment.
Is Google a public platform or is it not? You want bakers to use artistic skills to make cakes they find objectionable... but want Google to be able to do anything? This is even worse than Hobby Lobby, which was a closely held corporation. You really do not have a consistent view on anything.
Exactly! If you want to enforce existing anti-discrimination laws against bakers then you must enforce non-existent anti-discrimination policy against Google and Twitter. Anything else would not be consistent. As for Hobby Lobby, SAVE THE ZYGOTES!!!!
Jesse, you constantly post inane shit on topics that are obviously above your head. You're talking about bakers and cakes (state antidiscrimination laws) and then lumping it in with Hobby Lobby (RFRA case). You have no sense of what David Nieporent, who posts here often, thinks about either case, and I'd suggest to you that he's about the last person on this site you could accuse of intellectual inconsistency. (That doesn't mean he's right.)
If a state antidiscrimination law purported to impose liability on Stormfront for banning Jewish posters, it would be preempted by 230. Since literally nothing we are talking about has to do with a federal requirement, neither Hobby Lobby nor the RFRA is implicated at all. Even if you weren't a mindless partisan hack, it would be impossible to have a discussion with you about adult stuff because you don't have the attention span or the basic educational background to discuss legal things.
It would be, but I also think it would be unconstitutional under Boy Scouts v. Dale and Hurley v. Irish-American Gay, Lesbian, and Bisexual Group of Boston. (Stormfront presents a different issue than Twitter, of course.)
To be fair, there is a nonfrivolous argument that § 230 wouldn't act to immunize either banning Jewish posters or deleting posts by Jewish posters. Section 230 concerns actions "to restrict access to or availability of material," and banning users doesn't fit in that box. Deleting posts does, but only if it is based on the material being (perceived to be) objectionable, not if it is based on the identity of the originator.
To be consistent, Google should also be forced to bake cakes for the gays.
That is an...interesting interpretation.
If anyone is interested in how you might plead around, or fail to plead around, the good faith requirement, at least for 12(b)(6) purposes, see:
e-ventures Worldwide v. Google 188 F.Supp.3d 1265 (sufficient where plaintiff "alleged . . . that Google failed to act in good faith when removing its websites from Google's search results")
Smith v. Trusted Universal Standards 2010 WL 1799456 (sufficient, pro se claimant given some pleading leeway)
e360Insight v. Comcast 546 F.Supp.2d 605 (insufficient even where plaintiff states "Comcast has not acted in good faith" just because Comcast "singl[ed] out" the plaintiff)
Not all of those are 12(b)(6), I now realize.
Wow, it's like deja vu all over again!
This post makes me want to look up how the Backpage criminal case is going. I vaguely remember that in either the criminal case or the last civil case, the government argued they were in fact publishers because they had an algorithm that changed words with connotations of prostitution and sex acts to vaguer words or phrases.
They had successfully beaten back all the civil cases, but in the criminal case the government seized their source of revenue, so there are likely few funds left for effective counsel.
I, for one, am enjoying the doubling-down crowd finding new, but crazier, avenues to shut down the badspeech.
Numerous statements and leaks from the likes of Google have shown that major efforts and actions were undertaken with the overt intent to help certain candidates get elected, such as Hillary Clinton in 2016.
If we had a true FEC, Zuckerberg, Bezos, etc. would be UNDER Leavenworth.
It's bad faith to publicly state they have no political bias.
You want makers to use artistic skills to make cakes they find objectionable... but want Google to be able to do anything?
Well, as our resident expert on bad faith...
Oh, I'm sincere to a fault these days.
So sue me.
Is your twitter handle Devin Nunes Cow?
Well, this is a very interesting related topic.
"New research from psychologist and search engine expert Dr. Robert Epstein shows that biased Google searches had a measurable impact on the 2018 midterm elections, pushing tens of thousands of votes towards the Democrat candidates in three key congressional races, and potentially millions more in races across the country.
The study, from Epstein and a team at the American Institute for Behavioral Research and Technology (AIBRT), analyzed Google searches related to three highly competitive congressional races in Southern California. In all three races, the Democrat won, and Epstein's research suggests that Google search bias may have tipped them over the edge.
The research follows a previous study conducted in 2016 which showed that biased Google results pushed votes to Hillary Clinton in the presidential election. Democrats and Google executives have disputed these findings.
Epstein says that in the days leading up to the 2018 midterms, he was able to preserve "more than 47,000 election-related searches on Google, Bing, and Yahoo, along with the nearly 400,000 web pages to which the search results linked."
Analysis of this data showed a clear pro-Democrat bias in election-related Google search results as compared to competing search engines. . .
According to Epstein's study, at least 35,455 undecided voters in the three districts may have been persuaded to vote for a Democrat candidate because of slanted Google search results. Considering that each vote gained by a Democrat is potentially a vote lost by a Republican, this means more than 70,910 votes may have been lost by Republicans in the three districts due to Google bias. In one of these districts, CA 45, the Democrat margin of victory was just over 12,000 votes.
The total Democrat win margin across all three districts was 71,337, meaning that biased Google searches could account for the vast majority of Democrat votes. Extrapolated to elections around the country, Epstein says that biased Google results could have influenced 4.6 million undecided voters to support Democrat candidates.
Moreover, Epstein's findings are based on modest assumptions, such as the assumption that voters conduct one election-related search per week. According to Epstein, marketing research shows that people typically conduct 4-5 searches per day, not one per week. In other words, the true impact of biased search results could be much higher.