Why Big Tech will lose its Supreme Court case on section 230
I only count two votes to ratify Big Tech's sweeping immunity claims
The Supreme Court's oral argument in Gonzalez v. Google left most observers in a muddle over the likely outcome. In three hours of questioning, the Justices defied partisan stereotypes and asked excellent questions, but mostly just raised doubts about how they intended to resolve the case. I had the same problem while listening to the argument for a Cyberlaw Podcast episode (No. 445) that will be mostly devoted to Gonzalez.
But after going back to look at each Justice's questions separately, I conclude that we do in fact have a pretty good idea how the case will turn out: Gonzalez will lose, and so will Google, whose effort to win a broad victory is likely to be killed – and most enthusiastically by the Court's left-leaning Justices.
First, a bit about the case. Gonzalez seeks to hold Google liable because the terror group ISIS was able to post videos on YouTube, and YouTube recommended or at least kept serving those videos to susceptible people. This contributed, the complaint alleges, to a terror attack in Paris that killed Gonzalez's daughter. Google's defense is that section 230 makes it immune from liability as a "publisher" of third-party content, and that organizing, presenting, and even recommending content is the kind of thing publishers do.
I should say up front that I am completely out of sympathy with Google's position. I was around when section 230 was adopted; it was part of the Communications Decency Act, which was designed to protect children from indecent content on the internet. The tech companies, which were far from being Big Tech at the time, hated the decency part of the bill but couldn't beat it. Instead, they tried to turn the decency lemon into lemonade by asking for relief from a recent defamation ruling that online services that excluded certain content were the equivalent of publishers under defamation law and thus liable for any defamatory third-party content they distributed. Services like AOL and CompuServe pointed out the irony that they were being punished for their effort to build family-friendly online communities -- the opposite of what Congress wanted. "If you want us to exclude indecent content," they argued to Congress, "you have to immunize us from publisher liability when we do that." That was and is a compelling argument, but only for undoing publisher liability under defamation law. To my mind, that's exactly what Congress did when it said, "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
But that's not how the courts have read section 230. Seduced by a transformative technology and by aggressive, effective advocacy, the courts read this language to immunize online providers for doing anything that publishers can be said to do. This immunity goes far beyond defamation, as the Gonzalez case shows. There, Google said it should be immune because deciding what content to show or even recommend to users is the kind of thing a publisher does. Of course, carried to its logical extreme, this means that what are now some of the richest companies in the world cannot be held liable even if they deliberately serve how-to-kill-yourself videos to the depressed, body-shaming videos to the anorexic, and ISIS videos to extremists.
So, why not just correct the error, narrow the statutory interpretation to its original purpose, and let Congress actually debate and enact any other protections Big Tech needs? Because, we're told, these companies have built their massively profitable businesses on top of the immunity they sold to the courts. To change now, after twenty-six years of investment, would be disruptive – perhaps even catastrophic. That in a nutshell is the dilemma on whose horns the Court twisted for three hours.
It is generally considered professional folly for appellate lawyers to predict the outcome of a case based on the oral argument. In fact, this is only sometimes true. Judges, and Justices even more so, usually want feedback from counsel on the outcome they're considering. It's hard to get that feedback without telling counsel what they have in mind. That said, some judges believe in hiding the ball, and some just like to ask tough questions. And in complex cases, sometimes the Justices' initial inclinations yield to advocacy in conference or in drafts circulated by other Justices.
That latter fate could be in store for the Gonzalez case. So there's a good chance I'll end up guessing wrong about the outcome. But considering how muddled the argument seemed, I was surprised how much can be learned by going back through each Justice's questions to see what each of them thinks the case is about. It turns out that most of them were very clear about what rules of decision they were contemplating.
Justice Gorsuch. Let's start with Justice Gorsuch. I believe we know what his opinion will say. He laid his theory out for every advocate. He will again indulge his bent for finding the answer in the text of the statute. Briefly, he noted that Congress defined the entities eligible for immunity to include providers of software to "filter, screen, allow or disallow content" and to "pick, choose, analyze, or digest content." Bingo, he seemed to say, there's your textualist solution to the case: Congress told us what publishers do and thus what should be immune. No one, with the possible exception of Justice Kavanaugh, found this particularly compelling, mainly because it's an extraordinarily broad immunity, protecting even platforms that boost content for the worst of motives -- to harm competitors, say, or to denigrate particular political candidates or ethnic groups. (The notion has serious technical flaws as well, but I'll pass over them here.)
Justice Kavanaugh. Justice Gorsuch's embrace of broad immunity suggests that he sees this case through a business conservative's eyes: The less liability the state imposes on business, the better. In this, he was joined most clearly by Justice Kavanaugh, who reverted several times to the risk of economic disruption if a narrower reading of section 230 were adopted.
Chief Justice Roberts. If you're looking for a third business conservative on this Court, Chief Justice Roberts is the most likely candidate. And he clearly resonates with Big Tech's concerns about unleashing torrents of litigation; he's reluctant to impose liability for content selection where the criteria for selection are generally applicable (e.g., the site just gives the user what she asks for). But he also recognizes that it's the platform that has the power to select what the user sees, and he wonders why the platform shouldn't be responsible for how it uses that power.
The Chief Justice's qualms about a sweeping immunity, however, are muted. They are expressed much more directly by the Justices on the left.
Justice Sotomayor. Justice Sotomayor returns time and again to the idea that the power to select and recommend can be abused – by encouraging discrimination on racial or ethnic grounds, for example. Her hypotheticals include "an Internet provider who was in cahoots with ISIS" to encourage terrorism and a dating app "that won't match black people to white people." She's not willing to narrow the immunity back to what Congress probably intended in 1996 (spoiler: none of the Justices is), but she bluntly tells the Solicitor General's lawyer what she wants: "Let's assume we're looking for a line because it's clear from our questions we are, okay?" She wants an immunity for what could be called "good" selection criteria – those that are neutral, unbiased, or general-purpose – but not for "bad" criteria.
Justice Jackson. If anyone supports the idea of returning to the 1996 intent, it's Justice Jackson, who tells Google's lawyer that "you're saying the protection extends to Internet platforms that are promoting offensive material…. exactly the opposite of what Congress was trying to do in the statute." At another point, she signals clearly that she disagrees with the Google position that any selection criteria it chooses to use are immune from suit. In another colloquy, she downplays the risk of business disruption as just a "parade of horribles." Not all of her questions sound this theme, but there are enough to conclude that she's close to Justice Sotomayor in her skepticism about the sweeping immunity Big Tech wants.
Justice Kagan. Justice Kagan also sees that section 230 doesn't really fit the modern internet. The Court's job, she seems to say, is "to figure out how … this statute which was a pre-algorithm statute applies in a post-algorithm world." She thinks the plaintiff's reading could "send us down the road such that 230 really can't mean anything at all." She's daunted by the difficulty of refashioning the statute to avoid over-immunizing Big Tech:
I don't have to accept all Ms. Blatt's "the sky is falling" stuff to accept something about, boy, there is a lot of uncertainty about going the way you would have us go, in part, just because of the difficulty of drawing lines in this area and just because of the fact that, once we go with you, all of a sudden we're finding that Google isn't protected. And maybe Congress should want that system, but isn't that something for Congress to do, not the Court?
At the same time, she sees, the immunity Google wants would allow Google to knowingly boost a false and defamatory video and to refuse to take it down. She asks, "Should 230 really be taken to go that far?" I'm guessing that she thinks the answer is "no" and that she, like Justice Sotomayor, is just looking for a line that gets her there. For purposes of the count, let's put her in the middle with the Chief Justice.
So far, the Justice-by-Justice breakdown for giving Google the sweeping immunity it wants is a 2-2-2 split between the left and right with the Chief Justice and Justice Kagan in the middle. That sounds familiar. But it's about to get weird. That's because the three remaining Justices are at least as much social as business conservatives. And Big Tech has a long track record of contempt for social conservatives.
Justice Thomas. You'd think that Justice Thomas, who's been grumbling about section 230 for this reason for years, would have been an easy vote against Google. He clearly has doubts about Google's sweeping claim of immunity for any selection criteria. At the same time, his questions show some sympathy for protecting Google's selection criteria, as long as they're generic and neutral. I still think he'll be a vote to limit the immunity, assuming someone finds a dividing line between good selection criteria and bad.
Justice Alito. Justice Alito is the only Justice to show a hint of conservative resentment at the rise of Big Tech censorship in recent years. He notes that Google could label and preferentially distribute what it considers "responsible" news sources and he questions why such curation should be immune from liability: "That's not YouTube's speech?" he asks. "The fact that YouTube put those at the top, so those are the ones I'm most likely to look at, that's not YouTube's speech?" He also raises the specter of deliberate distribution of bad content: "So suppose the competitor of a restaurant posts a video saying that this rival restaurant suffers from all sorts of health problems, it -- it creates a fake video showing rats running around in the kitchen, it says that the chef has some highly communicable disease and so forth, and YouTube knows that this is defamatory, knows it's -- it's completely false, and yet refuses to take it down. They could not be civilly liable for that? … You really think that Congress meant to go that far?"
And, in another sign that Big Tech may have overplayed its claim of an imminent internet apocalypse, his last sardonic question is "Would … Google collapse and the Internet be destroyed if YouTube and, therefore, Google were potentially liable for posting and refusing to take down videos that it knows are defamatory and false?"
By my count, that leaves the Court roughly divided 2-2-4 on whether to give Google a sweeping immunity, with two business conservatives all in for Google (Gorsuch, Kavanaugh), two Justices waffling (Roberts, Kagan), and what might be called a "populistish" grouping of Sotomayor, Jackson, Alito, and (probably) Thomas.
Justice Barrett. Is Justice Barrett a fifth vote for that unlikely left-right alignment? Most likely. Like several of the other Justices, she was puzzled and put off by some of the idiosyncratic arguments made by the lawyer for Gonzalez. She also showed considerable interest, for reasons I don't fully understand, in making sure section 230 protects ordinary users for their likes and retweets. But when Google's lawyer rose to speak, Justice Barrett rolled out a barrage of objections like those we heard from the other four immunity skeptics: Do you really, she asked, expect us to immunize a platform that deliberately boosts defamation, terrorism, or racism?
So there it is, by my seat-of-the-pants count -- somewhere between five and seven votes to cut back the broad immunity that a generation of Big Tech lawyers built in the lower courts.
And what about the folly of predicting outcomes from argument? Well, it's hard to deny that I'm running a pretty high risk of ending up with egg on my face.
There is a real possibility that the Court will dump the case without ruling on Google's immunity. The lawyer for Gonzalez did himself no favors by shifting positions on his way to oral argument. He ended up claiming that thumbnail extracts of videos were really Google's content, not third-party content, and that simply serving users more videos like the last one they watched was a "recommendation" and thus Google's own speech. The Justices struggled just to understand his argument, and they may be tempted to dump the case for that reason, ruling that immunity is unnecessary because Google faces no underlying liability for aiding and abetting ISIS (the question presented in Twitter v. Taamneh, the companion case argued the day after Gonzalez).
But dumping the case without a decision is not a neutral act. It leaves in place a raft of immunity-maximizing cases from the lower courts -- precedents that at least seven Justices find troubling. That law won't go away on its own, so I'm guessing they'll feel duty-bound to offer some corrective guidance on the scope of 230.
If they do, I bet that six or seven Justices will decisively reject the maximalist immunity sought by Google. They may have trouble tying that rejection to the text of the law (as do the immunity maximalists), and whatever limits they impose on section 230 (e.g., immunity only for "reasonable" or "neutral" content selection) could turn out to be unpersuasive or unstable. But that just means that Big Tech, which won its current legal protection by nuking liability from orbit, will have to win some of its protection back by engaging in house-to-house legal combat.
If so, the popcorn's on me.
I don't think Prof. Baker is right about the intention of Section 230. I think it's a little more complicated and a LOT more cynical than that.
What happened is (1) obviously Stratton Oakmont was decided and caused a tech freakout over their filtering decisions becoming a basis for liability, (2) Congress freaked out over internet porn and got behind the CDA, and (3) there became an opportunity to write language to overturn Stratton Oakmont.
And (3), the opportunity, is the key. Because they did draft a statute that overturned Stratton Oakmont, but then they ALSO added another provision that says essentially "you can't treat platforms as a publisher of user generated content". What does that mean? Well it's pretty damned vague, isn't it? Which was EXACTLY the point.
Again, this happens all the time. When there's an opportunity to write legislation, EVERY interest group shoots for broad language they can then take to the courts and get interpreted the way they wanted. I can tell a similar story about the original California anti-SLAPP statute, for instance, which was not proposed because of a perceived need to make it impossible to file a defamation suit in California but was written in such a vague way that of course it could be successfully argued in California courts to have that effect. This is how this sort of politics works.
Given all that, it's hard to have any sort of Platonic view of what 230 applies to. It obviously immunizes filtering decisions, and then it is a question of how far the "treated as a publisher" immunity extends which... the statute doesn't tell us, because it was intended to allow people to go into court and argue it was super broad while also not seeming super broad to the members of Congress who voted for it.
Isn't our system grand?
230 passed in 1996, and Congress (and all of us) weren't fully sure of how the internets would work 27 years ago.
It's not Congress's fault that, back then, they wrote a super broad law.
Does 230 need some tweaking?
Maybe. Maybe not.
It doesn't… but if it did, it ought to be up to Congress to do such tweaking. This is when stare decisis is at its highest: there are decades of cases interpreting it this way, and tons of reliance interests. Congress can abrogate them; courts should not.
I think in general Gonzalez's lawyer did his client no favors by taking a maximalist position that the Justices were highly skeptical of, suggesting Google will likely win this case. And that may have partly obscured the Justices' skepticism of Google's maximalist position, which suggests tech companies' future immunity will likely be pared back from what lower courts have been ruling.
In some respects, both lawyers’ maximalist positions did their clients no favors. If either lawyer had seriously engaged the Justices’ efforts at pragmatic line-drawing, he might have had an opportunity to influence where the Justices will end up drawing their line. But since both lawyers stuck to their maximalist guns, the Justices will have to come up with a line on their own, without any meaningful input from either party.
"assuming someone finds a dividing line between good selection criteria and bad."
Perhaps instead of the hyper-generalized "good" vs "bad" criteria, the justices ought to simply read sec230 for the specific filtering Congress was encouraging.
IOW, stop treating "or otherwise objectionable" as if it mooted the rest of that list, and apply ejusdem generis.
You're using big words you don't understand again. Also, this case has nothing to do with (c)(2), so your comments are irrelevant here.
"assuming someone finds a dividing line between good selection criteria and bad."
Has anybody proposed a workable rule to distinguish between showing me a second cat video after I watch the first, steering me towards good hate speech, and steering me towards bad hate speech?
Cat videos are an abomination and should be banned. At the least. Long jail terms are really what is called for.
To be fair, 230 contains an exclusion for criminal conduct, so it shouldn't pose a problem for your cat video issue.
Only for federal criminal conduct.
Not sure of its impact on the Google case, but it is clear that Congress contemplated broader immunity than from defamation.
Section 230 contains a number of exceptions: federal criminal law, intellectual property law, the Electronic Communications Privacy Act, and (since FOSTA in 2018) sex-trafficking law.
There would be no need for all these exceptions if all that was at issue was defamation. And note the pre-emption of "any State or local law that is inconsistent with this section."
The congressional intent was 'sue the user not the internet-service' - and it was supposed to apply in all cases except where the service refused to censor user posts.
Then they wrote it hilariously wrong, because that's clearly not what it says.
No. No such "except" appears anywhere in the statute.
The Internet is fundamentally changing societies. Not just the USA, but everywhere. We have only decades to make social adjustments that ought to have a century.
Hate speech is harmful (unless you're hating Donald Trump).
Advocating violence is harmful (unless you're advocating helping Ukraine.) What about the mother of a Russian soldier killed in Ukraine; can she sue Google as Gonzales did?
SCOTUS keeps telling us that the remedy for objectionable speech is more speech, but nobody seems to be buying that. IMO there are far too many attempts to suppress objectionable ideas.
'Hate Speech' is attached to identity (race, sex, national origin, religion, etc), hating a specific individual for how they behave is A-OK (whether that is Donald Trump or Alec Baldwin doesn't matter).
Hate speech is also protected by the US Constitution - but that's irrelevant here since government doesn't operate any web forums...
Advocating criminal violence is not OK. Advocating for the US/allies to win a war is (nobody got banned for expressing their desire to see Osama Bin Laden shot in the head & fed to fish)....
And there is no legal requirement for a private-property owner to be 'fair' in what speech they allow, so long as they don't breach any contract they may have with their users...
Whatever SCOTUS decides, Congress should respond by repealing section 230 and writing a new law.
Given the changes in the internet since the original law was passed it is long overdue, but don't hold your breath.
That's a non-starter, because of split control of Congress, and because the Democrats and Republicans want radically different outcomes.
Basically the right is pissed off about being censored, and the left is pissed off that the right isn't being censored enough. Where's the common ground between those two positions?
"Where’s the common ground between those two positions?"
More oversight (aka, opportunities for graft)?
Not until they settle what the 'oversight' will be attempting to accomplish. The two major parties have disjoint objectives here.
Maybe they'll find some minor matters they can agree on, perhaps getting the platforms to stop sabotaging the third party filters they're actually supposed to be informing their users about.
But I doubt it, as the major disagreement will tend to overwhelm any efforts at cooperation.
It is likely the Supreme Court will not "fix" Section 230 because it would open the floodgates of litigation. The Gonzalez v Google case will not fix what ails a law that is well beyond its expiration date.
Congress should amend Section 230 as follows:
(c) "GOOD SAMARITAN" BLOCKING AND SCREENING OF OFFENSIVE MATERIAL.
(1) TREATMENT OF HOSTING SERVICE
No internet hosting service shall be held liable for hosting lawful speech or content that is entirely provided, created, or developed by a third-party author.
(2) CIVIL LIABILITY PROTECTION
A hosting service shall not be held liable on account of—
(A) action uniformly undertaken in "good faith" to restrict access of third-party materials when the hosting service reasonably considers such material is contrary to accepted morality or convention. (e.g., defamation, fraud, incitement, fighting words, true threats, terrorism, speech integral to criminal behavior or conduct, child exploitation, cyberstalking, sex trafficking, trafficking in illegal products or activities, sexual exploitation, or is otherwise unlawful.)
(B) action taken to make available directly or through a third-party software provider the technical means for users to restrict access to any speech or material the user finds objectionable, even when such speech or material is constitutionally protected.
(d) OBLIGATIONS OF HOSTING SERVICE
Hosting services may elect to reject every provision of Section 230 (c) without penalty or disfavor and may rely solely on hosting services terms of service for civil liability protection.
Suggestions: ogee@two-feathers.net
A rare high-visibility case which will not be decided by ideology.
Recall that the hierarchy goes:
1. ideology
2. personal belief
3. precedent
4. what the constitution (or the relevant law) says.
This one seems to be lining up on personal belief, where personal belief can be understood as distinct from ideology. It's better to find a line. It's better to not allow this but allow that. That's the realm of personal beliefs.
Just a tinge of concern for either precedent or what the law says.
The Fuck Cheer case wasn't decided by ideology. They happen more than you might think.
My "rare" claim should have put 1A cases on the non-rare side of the line. Indeed, 1A cases tend to bypass both ideology and personal feelings, and get resolved on precedent and what the constitution says.
Imagine if they all were decided like 1A cases are decided.
That's why everybody tries to shoehorn their rights claims into the 1st amendment, even if some other amendment in the Bill of Rights is solidly on point: It's the only one of them the courts take seriously.
".... essentially “you can’t treat platforms as a publisher of user generated content” ..."
I compare user generated web content to self-published books: the publisher is the user who generated the content.
Suppose that a self-published book listed in Amazon Books contains libel.
Who can be sued over the libel, Amazon or the self-published author who generated the content?
Amazon Books offers self-publishing facilities to authors.
Amazon Books has taken down objectionable self-published books and barred objectionable self-published authors from using Amazon.
Who is the publisher of user generated content?
In a fair world the author alone is liable for user-created content...
The non-existence of that world (User posts something to a Prodigy Online forum accusing a financial-fraudster of fraud. Fraudster sues for defamation, court holds Prodigy liable... Fraudster is subsequently convicted in criminal court, exposing the supposed 'defamation' as truth) is why Sec 230 exists.
It is unfortunate that it has to be asked: why doesn't Congress just fix it, rather than have the Court wrestle with a 25-year-old ambiguous statute that everyone seems to agree doesn't work like it should?
Like I said above, they can't fix it, because there's no longer any common ground between the Democrats and Republicans on this topic.
The Democrats, confident that most of the platforms will continue to be run by their ideological allies, want the platforms to be free to abusively censor all manner of speech dissenting from the left's own opinions. The Republicans, being the target of that censorship, want to put a stop to it.
And those few side issues they might actually agree about aren't important enough to either side to avoid getting sucked into the main fight.
You won't see Section 230 "fixed" until one or the other side has enough control of the elected branches to legislate as it pleases, without any compromise with the other side. I don't see that happening any time soon.
Because there is no actual agreement that it is broken - and if so in what way.
And because broader liability would more or less burn-down public-participation on the web - you can't have comment sections like this one if Reason.com could be sued for what people post.
Remember, that even before Section 230, Reason didn't have to worry about that, because Section 230 was only restating what was already judicial precedent on the topic: User posts weren't the platform's speech.
The point of Section 230 wasn't to allow comments, it was to allow moderation of comments without the platform becoming liable for the ones they left up. Given the minimal moderation regime here at Reason, this is one of the few sites that wouldn't have to worry about Section 230 being repealed, because they're not using its protection in the first place.
Everyone doesn't agree with that!
If Sec230 is curtailed, then open user comments are done for & the internet will be forced back into what it was before social-media:
Private, members-only forums where individuals can actually be banned permanently because they must put up name/address/credit-card to join.
The fact of the matter is, you cannot have any sort of 'open forum' - moderated or otherwise - if the forum-owner can be sued for the behavior of the users.
Let's also remember that most of the caterwauling over Sec230 from the 'right' is aligned with the nonsensical idea that Sec230 was enacted to only protect fora that refused to censor user posts…
Ergo, the ‘Tucker Carlson’ version where Facebook shouldn’t have access to Sec230 because it won’t let you claim COVID vaccines kill people (psst, they don’t)…
Reality is it was explicitly written to PROMOTE censorship and content-moderation, by excusing any site which engages in this from being classified as a publisher based on traditional defamation law.
The major difference, of course, is that printed-ink publishers attracted liability for their contracted authors' work because there was a financial relationship between the two.... The process of collecting, editing & turning material into a bound document attached liability to whoever did it...
This does not equate to the 'process' of operating a comments form on a website, where there is no financial relationship & no actual way to ensure someone does not post content you disagree with (since they can make a new account if you ban them, because... open to the public)....
Psssst, yes some people do die from all vaccines, including the COVID ones.
Not true.
You can pre-clear comments. Many sites do. Go try and comment on the Deseret News and you'll find that they pre-clear every comment before they'll post it up (or at least they used to, they may have changed systems by now).
That is, of course, a lot more work, and cost-prohibitive for larger sites. But "more work" and "cost-prohibitive" aren't the same as "impossible", and it wouldn't be the first time regulations meant that a given business model only worked at certain sizes.
I wish the participants in these Section 230 lawsuits had a clue about the operation of the World Wide Web.
In my new district court complaint, I point out the following.
Aside: How Does Google Search Service Belong to the Class of Message Common Carriers?
89. When a user requests a Google search, Google search creates a document to the user's specification and transmits the document to the user via digital message common carriage. This document is unpublished digital literary property and is a product that is created by an automated process. The procedure is rather like an online purchase from Amazon Whole Foods or Amazon Fresh. Amazon employees prepare the order, which Amazon Delivery delivers via package common carriage. The Google user works ("eyes-on-a-page") or barters for common carriage. (Google collects valuable information from the user's computing device.) One pays either an explicit fee or an implicit fee for Amazon Delivery common carriage service. Google search service makes the product and does not gather together products that originate with other sources. Google argues among other things that it is immune to JASTA liability because the creation of the search page product is automated. Most American manufacturers create a product today via an automated process. Automation is not a defense to manufacturer's liability.[1]
Note
[1] In Reynaldo Gonzalez, et al., Petitioners v. Google LLC, [21-1333], Google seems to argue that automation prevents liability. US manufacturing requires fewer and fewer workers. Products are almost entirely created under algorithmic control.
Google software is buggy. Doesn't Google have liability for harmful results
• that are caused by omitting an obvious software check and
• that provide material support to make ISIS recruiting more effective?
Back in the 70s Joachim analyzed software that controlled a radiation treatment for cancer. The system had fried a patient.
When Joachim looked at the code, he found that when a dosage was entered, the value was checked for sanity. But if that value failed the sanity check and a new value was entered, the new value was not checked again and could fry the patient -- a classic validate-once bug, sketched below.
The defendants in the litigation had to pay a lot of money to the plaintiff. The liability seems completely justified.
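For readers who want to see the shape of that bug, here is a minimal, hypothetical Python sketch of the validate-once pattern described above. The dose limit, function names, and input handling are all invented for illustration; nothing here is drawn from the actual system Joachim analyzed.

```python
MAX_SAFE_DOSE = 200.0  # hypothetical sanity limit, arbitrary units


def read_dose_buggy(entries):
    """Validate-once bug: only the FIRST entered value is sanity-checked.

    `entries` stands in for successive operator keystrokes. If the first
    value fails the check, the re-entered value is accepted blindly.
    """
    dose = entries[0]
    if dose > MAX_SAFE_DOSE:   # sanity check runs exactly once...
        dose = entries[1]      # ...and the replacement value is never checked
    return dose


def read_dose_fixed(entries):
    """Correct pattern: every entry goes through the same sanity check."""
    for dose in entries:
        if 0 < dose <= MAX_SAFE_DOSE:
            return dose
    raise ValueError("no valid dose entered")


# The buggy routine happily returns a lethal 9999.0 on the retry;
# the fixed routine rejects it and keeps waiting for a sane value.
print(read_dose_buggy([500.0, 9999.0]))          # -> 9999.0 (unchecked!)
print(read_dose_fixed([500.0, 9999.0, 150.0]))   # -> 150.0
```

The fix is structural, not cosmetic: the validation has to live inside the retry loop, so that every entry, not just the first, passes through the same check.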
"In my new district court complaint,"
So, how is the old one going?
Still don't understand res judicata, I see. Or Fed. R. Civ. P. 8(a). Or Article III of the constitution. Or the Ninth Amendment. Or the concept of common carriers. Or § 230. Or… well, is there anything you actually do understand?
What Congress meant is less important than what they said, and
" Of course, carried to its logical extreme, this means that what are now some of the richest companies in the world cannot be held liable even if they deliberately serve how-to-kill-yourself videos to the depressed, body-shaming videos to the anorexic, and ISIS videos to extremists."
is both false as to "deliberately" and counter to free speech.
Why isn't this a speech issue? What about the video was defamatory in a way that would make Google liable for its content? Even if Gonzalez wins, they only win the right to sue Google for publishing a particular video. For Gonzalez to win that suit, the video has to somehow be outside of First Amendment speech protections, doesn't it? Has anyone ever successfully sued the publishers of the Anarchist Cookbook?