The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Retweeters Immune from Defamation Liability Under 47 U.S.C. § 230
From the New Hampshire Supreme Court's decision today in Banaian v. Bascom, in an opinion by Justice Anna Barbara Hantz Marconi:
[According to the Complaint,] the plaintiff was a teacher at Merrimack Valley Middle School in May 2016, when a student at Merrimack Valley High School "hacked" the Merrimack Valley Middle School website and changed the plaintiff's webpage, creating a post that "suggest[ed] that [the plaintiff] was sexually pe[r]verted and desirous of seeking sexual liaisons with Merrimack Valley students and their parents." Another student took a picture of the altered website and tweeted that image over Twitter. The retweeter defendants retweeted the original tweet. As a result, the plaintiff was subject to "school-wide ridicule," was unable to work for approximately six months, and suffered financial, emotional, physical, and reputational harm….
Plaintiff sued the retweeters, among others, for libel, but the court concluded that 47 U.S.C. § 230, part of the CDA (Communications Decency Act), precluded the lawsuit:
The CDA provides in pertinent part that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." An "interactive computer service" is "any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet." An "information content provider" is "any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service." "No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with" section 230….
The meaning of "user" in the first element of section 230(c)(1) is the sole issue in this appeal. The plaintiff argues that "[a] person who knowingly retweets defamatory information is not a 'user' of an interactive computer service the CDA was designed to protect from defamation liability." She asserts that "[n]othing in the text of Section 230, or in the legislative history suggests that Congress intended to provide immunity to individual users of a website," and that "[t]he term 'user' of an interactive computer service should be interpreted to mean libraries, colleges, computer coffee shops, and companies that at the beginning of the internet were primary access points for many people." The plaintiff further asserts that "because the CDA changes the common law of defamation, the statute must speak directly to immunizing individual users."
The trial court "recognized that the vast majority of the reported cases that address whether a defendant is immune from suit under Section 230 involve internet service providers …, and not individual users." Nonetheless, cases that have addressed this issue have determined that the broad immunity in the statute extends to individual users. For example, in Barrett v. Rosenthal (Cal. 2006), an individual who posted a copy of an article she had received via email on two newsgroup websites was sued for republishing defamatory information. The California Supreme Court … determined that the term " '[u]ser' plainly refers to someone who uses something, and the statutory context makes it clear that Congress simply meant someone who uses an interactive computer service." … Given that Congress declared that "'[n]o provider or user of an interactive computer service shall be treated as [a] publisher or speaker,'" the court found no basis "for concluding that Congress intended to treat service providers and users differently," and that "the statute confers immunity on both." …
Subsequently, the United States District Court for the Eastern District of Virginia, noting that the CDA does not contain a definition of "user," turned to the plain meaning of the word. Directory Assistants, Inc. v. Supermedia, LLC (E.D. Va. 2012)…. [T]he court found that "a person who creates or develops unlawful content may be held liable, but … a user of an interactive computer service who finds and forwards via e-mail that content posted online in an interactive computer service by others is immune from liability."
We are persuaded by the reasoning set forth in these cases. The plaintiff identifies no case law that supports a contrary result. Rather, the plaintiff argues that because the text of the statute is ambiguous, the title of section 230(c)—"Protection for 'Good Samaritan' blocking and screening of offensive material"—should be used to resolve the ambiguity. We disagree, however, that the term "user" in the text of section 230 is ambiguous. "[H]eadings and titles are not meant to take the place of the detailed provisions of the text"; hence, "the wise rule that the title of a statute and the heading of a section cannot limit the plain meaning of the text." Likewise, to the extent the plaintiff asserts that the legislative history of section 230 compels the conclusion that Congress did not intend "users" to refer to individual users, we do not consider legislative history to construe a statute which is clear on its face.
Despite the plaintiff's assertion to the contrary, we conclude that it is evident that section 230 of the CDA abrogates the common law of defamation as applied to individual users. The CDA provides that "[n]o cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section." We agree with the trial court that the statute's plain language confers immunity from suit upon users and that "Congress chose to immunize all users who repost[] the content of others." …
Congratulations to Adam R. Mordecai of Morrison Mahoney LLP and Debra L. Mayotte of Desmarais Law Group, PLLC, who argued successfully on behalf of the defendants.
Wow. That's a hole Congress needs to plug.
There are some things that the government is not well poised to fix. This is one of them.
Corporal punishment should be allowed in our schools again. To deter.
Sounds like the correct result under the law. But I wonder if we wouldn't be better off if folks were discouraged from passing on gossip on the internet?
Yeah, because prior laws against passing along gossip have worked so well ... uhm ... never?
There's nothing in this ruling pertaining to the original tweeter so presumably the suit against them is allowed to proceed?
Humorously, the immunity may prove to be reciprocal. A would-be libelist need only arrange to have someone re-tweet, and then both may be protected. The re-tweeter as above, and the original because he can blame the re-tweeter for the damages—which would never have happened (or would have been trivial) had it not been for the immense circulation delivered by geometrically expanding generations of re-tweets.
Is that your professional legal opinion, humorously speaking?
Plaintiff sued the retweeters, among others, for libel, but the court concluded that 47 U.S.C. § 230, part of the CDA (Communications Decency Act), precluded the lawsuit:
That guts libel protection completely. It is a dramatic illustration of the unwisdom of Section 230's severance of joint liability, previously shared among publishers and contributors alike.
Many commenters on this blog have shown incomplete understanding of what goes on in publishing, and almost no understanding why published libel is much more damaging than spoken slander.
Publishing is not something a contributor does. Contributors provide content, sometimes for pay, more often without compensation. Publishing is a force multiplier for contributors.
The publisher practices activities which contributors do not. The publisher mobilizes an audience, and curates it, with an eye to making it attractive to would-be advertisers. By ad sales the publisher amasses revenue to pay for the entire effort, to make a profit, and sometimes to pay contributors. By lending its own connection to the internet, and by lending its own curated audience to contributors, the publisher enables contributors the possibility of world-wide, anonymous, cost-free, massive audience access for their contributions.
Once published, those contributions create an indelible record, with unlimited potential reach and circulation. If they prove damaging to some third party, those damaging published contributions will follow that party to his grave, and outlast his living memory. A job applicant under such a falsely-created cloud must reckon for the rest of his life that a prospective employer half-a-world away will push a button on a keyboard, and likely suppose the lies he finds are a valid product of his own research. Ordinary contributors could not inflict any of that on their own. Only by collaboration with a publisher can that happen.
Thus, because publishing is typically a joint activity, practiced among publishers and contributors with differing roles, and unique responsibilities, it has long been customary for the law to hold both kinds of parties responsible for any damaging defamations their joint efforts deliver. That is not only reasonable and just, but also an indispensable support for press freedom.
But it no longer happens. Section 230 heedlessly severed that connection, making only contributors liable, and tacitly making the separate role of the publisher disappear. Thus, as a practical matter, Section 230 ended most prospects for legal recovery of damages—while also disparaging by implication any notion that damages could be more serious than those usually-trivial damages created by spoken slander (If it appears no publisher is involved, only a, "speaker," will be seen.) Internet fans—especially because they do not accurately assess the publisher's contribution—applaud that, supposing it frees them to libel at will, or merely by error, without consequence.
The consequence will come, and it will include widespread public hatred of press freedom. It will bring demands for government control of publishers. That pressure is building already. Absent strong and widespread public support, press freedom will not find the political context necessary for it to endure.
By concentrating on libel—the subject of this thread—I have over-simplified a more-complicated subject. It has to do also with editing prior to publication, and its necessary role to protect the public life of the nation. Other complications touch on inter-relations of publishing as a business activity, with its ability to deliver expressive material to the public—and the indispensable need for that business model to remain viable. Absent private publishing—independent, self-financing, thriving, competing, profuse, and diverse—information necessary for public life, and for national self-governance, will not continue to be available.
Except at the hand of government self-interest.
Consider the situation where Joe on Yelp wants to post a review of Alice's Restaurant saying 'I had dinner there on May 6th, and my soup was cold and the waiter inattentive'.
In your desired world, what should Yelp do in terms of due diligence prior to posting Joe's review, in order to avoid a lawsuit from Alice?
Do you object in principle to sites like Yelp and Angie's List, and Amazon reviews, and want them shut down? Should employees of Yelp/Angie's List/Amazon somehow verify that the soup was cold/the tree trimmer showed up late/the widget from amazon was DOA?
You tend to speak in airy generalities. Can you address the specifics of what you propose for a change?
Absaroka, except for the widget, you are talking about opinions. No lawsuits over opinions. You want to sue? The lawyer you want explains briefly, and shows you the door.
The widget was broke or it wasn't. Not a lawsuit. But if you need to defend, truth protects you.
'The tree trimmer showed up late' is a statement of opinion?
And what about the review of the broken widget?
As a follow on from your days running a newspaper: I doubt you got a lot of requests for classified ads saying 'my soup was cold at the diner'. But I suppose you got a lot of requests for ads saying something like 'low miles, runs great, only driven to church by Granny on Sunday'. Did you send a mechanic out to verify those (frequently inaccurate, IMHE) claims? If you did not, should someone who bought a lemon subsequent to your ad be able to sue your newspaper?
Absaroka, anybody is able to sue your newspaper. Nobody I ever heard of got sued over used car classifieds. I never even thought of it before you asked.
An editor is not a lawyer trying to advise a client. That would be hard. But libel stuff is really not hard, when all you need to do is not get sued. You got doubts? You aren't forced to publish anything.
You learn the difference between facts and opinions.
You don't sweat the opinions—they get a free pass. About 99%+ of what shows up on a typical internet blog is going to be pure opinion. Joe Keyboard is not an enterprising news gatherer.
You ask yourself questions about facts.
Damaging?
Is it an important story?
"Yes," to both questions? You look at it again, from scratch. Damaging facts are special. Everything stops. You need iron-clad evidence. Evidence that for sure will stand up in court. Can you bolster the evidence? Do that. Even if it seems air tight already. Still sure? Bolster it some more. Keep that up. Only when that runs out, go with the story.
Damaging, true, but not important? Set it aside. Don't buy trouble over trivialities.
Set-asides are great. If you don't have a backlog of other stories to check out, you aren't doing the job. A set-aside makes room to work on something better. So make every mistake on the side of caution.
I had a couple like that which made me ache. They were not trivial. I knew they were true. But how I knew I had no appetite to test in a courtroom. No appetite? It's a set-aside.
Stay alert, stay busy, stay curious. ABC. You publish good stories. People will want to sue, people will threaten to sue, people won't sue. Leastwise, no one ever did sue over anything I published.
Doing it like that left plenty of room to build a reputation as an aggressive, controversial, reliable news source. When you went out Thursday morning to the breakfast restaurants, you couldn't see anyone's face. They all had their noses behind the newspaper. We took photos of that. Used them to sell ads. The ad buyers knew the photos weren't fake. A lot of them were in the photos.
"Nobody I ever heard of got sued over used car classifieds."
I've certainly never heard of a newspaper being sued over a false used car ad[1]. But why is that? My sense is that A) people wanted to have classified ads available and B) everyone agrees that it was hardly practical for the newspaper to go out and do compression checks on the cars being listed.
But it seems to me that the same logic applies to Twitter: A) people want to have Twitter (God knows why!), Yelp, and Amazon reviews, and B) it's not practical for Twitter to verify every tweet or Amazon to verify every review.
Why should a newspaper be able to publish my car ad ('oil always changed on schedule') with impunity but Yelp be sued for my review ('I found a fly in my soup')?
[1] If they could be sued for that, they wouldn't have a classified section.
Absaroka, the same logic doesn't apply.
The logic which does apply is:
"Yay, we've got Section 230! Libel anyone you want. Screw your political enemies. In fact, make up the the most outlandish, wicked conduct you can think of, dress it up in some kind of false proof, and hit your enemy target like a bomb."
Think I am making that up? Take a look at QAnon. What kind of crap comes out of them? Nobody believes any of that, right?
Did you read the one in the NYT last week? A Fort Smith Arkansas pastor—in charge of a right-wing super church—gave up in despair. He unknowingly mentioned approvingly some Hollywood figure in a sermon, only to have his parishioners turn on him. Turns out the Hollywood guy mentioned was a QAnon target, and the congregation, "knew," he was a pedophile. Now their already-suspect pastor was endorsing him. It was the last straw. The NYT did not mention it, but I speculate the pastor decided the job wasn't worth the death threats. But that is just me. I'm prejudiced against QAnon.
It works. Judges buy that stuff. EV buys that stuff. Internet fans buy that stuff. Nobody believes anything that crazy, so it can't be real libel. Some buy that because they believe it; others buy it because they think they can't have the internet without it, so it's worth it. Maybe that's you. Put as many of those internet fans as you can on a libel jury, and any libeler probably gets off. Most places, it only takes one holdout.
Meanwhile, if you are the one targeted, and an attacker takes a few small pains to dress up some, "evidence," you could be dead to the world of employment—anything but menial—world-wide, forever. Sometimes that means you lose your family, too. Good times on the internet!
With that last, I have exaggerated a bit—not much—to allow for the not-distant future. Attack bots will shortly arrive, using AI equipped to simulate undetectably actual human prose. At practically no expense, they will be able to paper anyone's past with all sorts of authentic-looking accounts of wrongdoing.
Did you grow up in some small town, served by a little newspaper now defunct? Perfect. Put sleazy info in that newspaper's format. Leave it lying around online, in facsimile, as it were. Too bad about that disastrous high school senior year, when you were briefly jailed for sexual attacks on kids. And then your rich Dad paid off some pols, to make the cops let you go. They had to fire some cops over that. There are other stories about the firings, with cops and pols mentioned by their verifiable names, which you can check out online, also in scanned facsimile. The town was outraged, while you got sent to college far away.
See? The AI bots get trained using the entire internet as their database. They learn what's there, and they learn to use it like they find it, which makes it look authentic. But nothing of the totally made-up text is actually plagiarized. You can't critique it with a Google search. Nothing turns up.
Look it up. It's in the NYT. With astounding examples. Probably cherry-picked, I suppose. The AI work is not quite complete.
So, back to the tale of your destruction. Salt in subsidiary "media," accounts, in other plausible-looking formats—maybe from a defunct magazine, formerly published in a town next over—to make it look like your long-ago bad deeds went viral, in a small way. Even though, thanks to your Dad, the original account can't any longer be found to be verified. But you can still find it online.
See, thanks to the internet, assholes like you can't cover their tracks like they used to. The internet gives everyone the power to do their own research. It gets rid of the gatekeepers. Lies from the mainstream media get found out quick, by Joe Keyboards everywhere.
Maybe computer forensics could untangle a mess like that. Maybe not, if the, "originals," get salted onto servers in Panama, North Dakota, and Kaliningrad, by bots. How do you enforce a world-wide takedown notice? You aren't allowed to sue an online publisher, and you can't sue anonymous bots. Sucks to be you.
Want to head that future off before it arrives? Make internet publishers liable. Do that, and editors will read everything they publish before it goes online. An editor with human intelligence—incentivized to share your interest in accuracy and verifiability—will recognize what's up before it happens. Not because the human intelligence is necessarily more brilliant than Joe Keyboard's, but because the liability encourages the editor to take a keen interest in getting obviously damaging stuff verified before publishing it.
Amazingly, the solution is as simple as that. That is because the cause of the problem is that simple, too. A law to cut out online publisher liability is what created the mess.
Left unsaid, of course, is that the only thing that "created the mess" is Stephen Lathrop's delusions. It doesn't exist. The problem is too much liability, not too little. (Also, repealing § 230 would not in any way prevent bots from creating such material on servers in Panama or Kaliningrad.)
Right Nieporent. Nobody thinks internet publishing is a mess, and no one wants government to do anything about it. Also, compared to ink-on-paper publishing (which shared liability turned into an ornament of civilization), internet publishing liability is practically gone, but far too much. It's paradoxes all the way down. Another also? QAnon is the best thing yet for the public life of the nation. How else could those unique viewpoints make their proper impression on public affairs? Everyone agrees, right? Or is it just you? WTF do you think is the point of your advocacy?
Of course some people think Internet publishing is a mess. Many of those people think that non-Internet publishing is a mess, too.
What is the point of your advocacy, other than special pleading for the jobs of newspaper editors/publishers?
Nieporent, I worry that the public life of the nation has lost capacity to inform self-government by its citizens. I expect that trend to worsen.
I am dismayed by the nation's loss of news gathering capacity, which internet giantism has inflicted. It turns out that news gathering as practiced by professional journalists does not seem to be much fostered by putting publishing power in the hands of vastly more people. I do not think that was predictable, but it does seem to be what experience has proved.
It seems manifest that social dynamics abetted by an unlimited flow of unvetted information have harmed people whose habit has been to derive their opinions mainly from those around them—QAnon is an imposing and baleful novelty which should not pass unexamined. But QAnon only examples that kind of harm. Other examples will follow. It is possible that personal instability and political weakness which ignorance inflicts on its victims is now become—to a greater degree than ever before—a recognized and readily exploitable business model among the unscrupulous. PT Barnum looks quaint and benign by comparison.
Internet giantism concentrates too much power in the hands of only a few publishers. Tens of millions on all sides of all issues have been discomfited by that, and now urge government censorship as a corrective, which would be profoundly unwise. But leaving the situation as it is cannot promise better, nor deliver prospect for anything but increasing political volatility.
Near-impunity for libel is an evil in itself. It is also a political goad to those who have been damaged, to those who know folks who have been damaged, and to those who fear future damage. They too often demand censorship for protection. Those demands find increasing sympathy, especially among law and policy makers at state and local levels of government. That completely understandable political tendency is a growing threat to press freedom.
Long term, I do not suppose press freedom can continue, except on the basis of independent, self-funding private businesses, put by law beyond the reach of government interference. Those private publishers must thrive in profusion, compete with each other, and dynamically exploit every niche developed by shifting trends in public opinion. Internet giantism is delivering the opposite of that scenario, with no present sign of improvement. Business models necessary to correct present shortcomings in internet publishing seem contrary to key business precepts upon which the internet giants rely, such as network effects. So no trend toward self-correction seems in the offing.
"sexual attacks on kids...AI bots...assholes like you...death threats"
So, with all that out of the way, what's your plan for Amazon reviews?
Absaroka, what makes you think anyone needs a plan for Amazon reviews? They are opinions. They are liability free. If one of them isn't (hard to imagine how to make that happen in the review format), take it down. Which you would have to do now anyway.
As for the other stuff, none of it is out of the way. But you can't have utopianism without denying reality, so if you think that's what you need, have at it.
"They are opinions. They are liability free."
"My widget broke in half the first time I used it" is an opinion, not a factual claim?
"If one of them isn't (hard to imagine how to make that happen in the review format), take it down."
I'm interested in your view of the practicalities here. I post a review that says 'My AcmeCorp jet rollerskates didn't work at all, causing me to miss my tasty Road Runner dinner'. AcmeCorp contacts Amazon to say 'that's not true'.
What process do you expect Amazon to follow to judge whether my claim is true? I'm looking for some specificity here, rather than airy generalities.
Absaroka, what process?
You must suppose you are barking up some tree or other, given the hound-dog persistence. What possible significance do you think you smell? Is it because you are trying to impress me that reading Amazon reviews is some undoable gargantuan task, where the quantity of work repays the effort so poorly it is not worth doing? If so, I absolutely do not care. I do not care if you are right. I do not care if you are wrong. I do not care what happens, or what does not happen.
There could evolve a legal custom that Amazon reviews do not have to be read, because people who write them are liars representing competing products, and everyone knows it. Alternatively, the common law could decide people who rely on Amazon reviews are loons, mentally incapable of benefiting from the law's protection. It would not bother me, either way.
It could happen that Amazon, to protect its spectacular sell-everything-to-everyone-everywhere business, decides it must hire 10 million editors to read Amazon reviews before they are published. It would make no difference to me.
Amazon could discover too late, to its horror, that Amazon reviews are the tip of a mighty legal iceberg floating athwart Amazon's course, and after a horrific collision Amazon sinks without a trace. So what?
I do not care what process Amazon follows, nor about any process Amazon does not follow, nor whether Amazon is left free to do what it wants, nor whether Amazon is compelled by government to do something it does not want, with regard to Amazon reviews.
After giving the topic all consideration it deserves, I have decided Amazon reviews are the least interesting, least revelatory, most banal topic in the history of thought. Go to work for Amazon, and assure them I empower you to choose whatever process you think best. It does not matter to me.
"I do not care what happens"
Well, that's fine. But the rest of us do care. Arguing 'change the law to X, and don't bother me with the consequences you care about but I don't' is somehow not persuasive.
Absaroka, I certainly do not expect to persuade you. You appear to be an internet utopian. With a quirky predilection for Amazon reviews.
That is a category, but not one to justify a response like, "But the rest of us do care." You do not speak for the rest of anyone, any more than I do.
You did not give my response enough attention to notice that whatever you want for your pet concern is fine with me. So your retort is misplaced to suggest my carelessness somehow hampers anything you want. Have at it, just as you like. What I told you is that you are worrying an insignificant detail.
You may be in plentiful company. I suspect you are. Most of that company manifestly do not give the issues I mention much thought. I see little evidence they care how internet publishing works, or what its effects on the public life of the nation might be. That appears to be you too.
"With a quirky predilection for Amazon reviews"
Amazon reviews. Yelp reviews. Comment sections like this one. A zillion and one forums with topics from car repair to investing to birdwatching to gardening to photography. Those are genuinely useful to people. Not to you - I get that.
"Most of that company manifestly do not give the issues I mention much thought."
Obviously!
Absaroka, oh, comments sections. Why didn't you say so? Comments sections I do care about.
What is wrong with private editing of comment sections to prevent your publication from libeling people?
Or reverse the problem. What is right about permitting a private publication to libel people with impunity?
If you don't care, feel free to say so.
Comment sections and yelp reviews have the same issues here.
"What is wrong with private editing of comment sections to prevent your publication from libeling people?"
We just discussed this upthread. Nothing is wrong with it - it's a question of practicality.
Consider, say, a forum for Subaru owners. Someone posts a question asking whether anyone knows a good Subaru mechanic in the Quad Cities. Someone responds 'Well, don't go to Joe's Motors, he totally effed up my 2007 Outback'. Joe sends you - the guy running the forum as a sidelight - a denial and demands you take down the post or he will sue. Do you go inspect the car (remember - you run the forum from your home in Vermont)? Do you call up your lawyer and tell him to defend? Do you take down every post that a shoddy mechanic complains about?
I run a forum like that. Here is what I'm not going to do - hire a lawyer to defend against Joe's lawsuit. I'm not even going to spend hours every day reading every single post proactively looking for something someone somewhere might want to sue me for. If I had to do either of those things I just wouldn't be running the forum. I get that you don't view that as a loss, but the people using the forum presumably would.
I moderate for civility - no RALK or AWRP types allowed. I would delete anything I thought was out of line, but I don't have the resources to do a deep dive on every post someone makes. I don't make money on this, I do it as a public service. If Joe can credibly threaten me - and the company hosting the forum (for free!) - with a lawsuit because someone was unhappy with his repair work, you simply wouldn't have forums like that.
Remember, unlike your newspaper, no one is paying me to do this - no advertising, no subscriptions. Maybe that rubs you the wrong way, but lots of people find great value in this kind of thing.
Absaroka, you have written an appeal for a power to libel with impunity.
You want to be a publisher in fact. You do not want to be held to a publisher's responsibilities. You say that if you had to be responsible for what you publish, it would not be practical to do it, and your forum would disappear. You write this:
I get that you don't view that as a loss, but the people using the forum presumably would.
Why would the people using the forum think they got anything worthwhile out of reading damaging misinformation you publish recklessly?
Does that seem harsh to you, to judge your well-meaning, uncompensated effort as if it were worthless, or even harmful? Here is the avowal of irresponsibility you wrote just before that last quote:
I'm not even going to spend hours every day reading every single post proactively looking for something someone somewhere might want to sue me for.
For pity's sake, take the hypothetical post down and thank your lucky stars you are hypothetically getting off easy. The guy you hypothetically libeled would view shutting that forum down as a gain.
I think the reason you, like so many people, remain an internet utopian is that you have not adjusted a traditional frame of reference to a new reality. Previously, stuff showed up all the time on purely local forums, in face-to-face communities, where the posts were ephemeral, and reached tens or hundreds of people, until they were forgotten. You take that bygone reality as analogous to internet posts. It is not.
Instead, the internet posts circulate world-wide, potentially on literally millions of devices, to reach literally billions of people, and remain permanent. For a time interval which will outlast most of us, anywhere in the world, a person interested in anyone mentioned by your little local enterprise, can type a few keystrokes—and bring to the fore whatever misinformation your virtuous-feeling but insistent inattention let slip through.
Those are consequential differences.
If you find yourself inclined to minimize the seriousness of any of that, reflect a bit more about how utopianism works.
"you have written an appeal for a power to libel with impunity."
On the contrary, I should be liable for libel I create.
That liability shouldn't extend to the forum, the ISP, the newsprint supplier, the manufacturer of the computer, etc, etc. You, I think, are the one not adjusting to the new reality. You want to draw the line in one way, but the sovereign people - remember them? - want to draw it somewhere else.
"For pity's sake, take the hypothetical post down"
Again, it's not taking down a hypothetical post - it is taking them all down. The voters just disagree with you on this.
"Guy who repeatedly claims one needs to understand the subject of a law to understand the law… continues not to understand the subject of the law."
That's not publishing and will never be publishing, no matter how many times you invent a fake definition with factors that do not define publishing.
How does it gut libel protection?
You can still sue the original tweet-er who actually wrote the libel.
S230 does not stop that.
What it prevents, is the grand scam of suing everyone remotely connected to the situation and hoping to rack up some settlements.
Or the 'deep pockets' nonsense of suing Twitter because a user who isn't even a paying customer wrote something libelous and transmitted it through Twitter's website.
Section 230 exists for a very, very good reason: to prohibit the 'if you censor user posts you are a publisher' theory of law.
Unlike a traditional publisher - who has a compensated business relationship with the writers providing it content, and thus can police libel by threat of firing or contract cancellation (as well as excluding the writer from their property (and thus from access to their printing equipment) under trespass law) - an online service has no such relationship with its users....
Today's social media - where users are no longer paying subscribers, and the business has no way of verifying identity - is in an even worse position in terms of controlling user behavior.
They can ban your account, you can register a new e-mail address, and get right back to posting whatever you want even though you have been 'banned'. At most they catch you and ban you again, but for a site as big as FB or Twitter that's unlikely...
Under these circumstances it is simply not reasonable to hold anyone except the original author of a libelous post liable for libel.
Dave_A, some of what you wrote is not inaccurate. None of it makes sense in terms of actual publishing, which is an activity you do not understand.
I explained some of the points you overlook above. You should go back and re-read. Unless you want to persist in confusion, there is nothing there you can discount to zero.
But to be fair, I also mentioned that there was more to it. In the interest of not making the post too daunting, I did not cover everything. If I had, it would at least have helped you get rid of your bias to consider everything exclusively from the point of view of an internet poster, which is apparently the only part of publishing with which you have experience.
Internet publishing problems cannot be fixed by prioritizing the perceived grievances of would-be commenters, while ignoring every other aspect of the problem. To think that is utopian.
The publishing issue is not to control the commenters, the issue is to okay the opinions (online, almost everything), verify the few factual comments, and to not publish factual allegations which threaten damage. And to do all that while earning by purely private enterprise enough money to keep doing it. That could actually be accomplished. But not with Section 230 in place.
You vastly underestimate the 'possibility' of moderating every post for libel on a public, free-to-use website.
In an actual publisher-writer relationship, the publisher has power over the writer's income/career (eg, the publisher can fire the writer or end their contract). So they can control libel by controlling the authors of it.
Further, a website that any member of the public can comment on plays none of the roles that a publisher does in terms of interviewing/hiring writers who they publish.
The only means available to control libel posted on an open-to-the-public site is to manually review every post. Which is not practical.
Which makes absolving the website for libelous material written by its 'random non-employee member-of-the-public' users the correct choice.
A defamed individual or business may still sue the user who posted the material - they just can't play the 'deep pockets' game (which is generally a good thing - not just for libel litigation but for the economy as a whole)....
Are classic publishers completely immunized from anything they publish (even in print media) as long as what they publish was at one time transmitted to them (i.e., provided) by "another information content provider," e.g. an author, by means of an "interactive computer system"?
Does it actually have to be transmitted over the interactive computer system? Or is it good enough that, as long as they use an interactive computer system with multiple users for something (making them "users of an interactive computer system"), they are immunized from being regarded as publishers no matter how the "information content provider" provided the information content?
One can imagine a version of the old taxi cases, where publishers create paper shallow-pocket (asset-poor) corporations whose sole job is to be the designated content provider, and hence the only ones who can be sued by potential libel plaintiffs.
It has to be an online service, and the individual posting the information to the online service has to be a 3rd-party (eg, not an employee of the service).
The whole point of S230 is to prevent the likes of Prodigy from being sued as a 'publisher' of libel, under the theory that their engaging in censorship makes them a 'publisher' of their members/users posts.
In short, S230 was written *specifically* to *prohibit* the legal theory (if you censor your users you are a publisher not an information-service) that the likes of Trump are pushing - and to restrict libel suits against online media to situations where the author of the libelous statement is an employee of the online service/media/site/whatever company rather than a customer or user...
You can sue gawker.com for libel if one of their employees posts something libelous.
You can't sue Facebook or Twitter if joe-rando-user posts similarly.
The lawyer in the original case is playing word-games (Claiming Starbucks is a 'user' of Twitter, but you as a Twitter account-holder are not) in an attempt to breathe life into an obviously prohibited case.
Dave_A, you may suppose that to understand the words in a law is all it takes to understand what the law means. But to understand what a law means, you not only have to understand what the text of the law says, but also have to understand the activity the law purports to govern. You have some work to do before you understand the activity of publishing.
Just for starters, libel damages occur when false and defamatory factual allegations get published. In internet publishing, if that is not prevented before it happens, in most cases, no sufficient remedy for resulting damages can ever be imposed. The ruling in the case in point from the OP would make that a universal rule, without any exceptions at all.
Also? You ignore completely what I explained to you about the separate roles of publishers and contributors. The fact is, a publisher's activity is generally far more the agent of damage than a contributor's. Your assertion that a damaged party can be made whole by suing only a contributor is speculatively possible, in a tiny minority of cases. Mostly it is nonsense.
In internet publishing, even suing both the publisher and the contributor, and winning against both, would in many cases prove inadequate, but better. But with that arrangement mandatory, the need to sue would arise far less often, because the publisher, unlike most contributors, would have self-interest in preventing damage before it happened.
Technologically/economically the only practical method to avoid that damage would be to shut down every comments section, every open-to-the-public bulletin board, and every social media site on the web.
And all for what? To allow for one of the worst features of our legal system - the suing of a tangentially connected wealthy party over the actions of a 3rd party that they have minimal relationship with....
As a rule, most aspects of civil litigation in the US allow for a far broader spectrum of plaintiffs than what is reasonable.
Remington should not be liable for the use of its firearms products to commit murder. Cirrus Aircraft should not be liable when a customer violates the FARs and crashes/dies in a Cirrus aircraft. And Facebook should not be liable when someone other than an employee speaking on behalf of the company posts defamatory material.
These prohibitions should be imposed in such a manner that cases are never even filed (punitive sanctions on any lawyer that files one) - to prevent defendants from incurring litigation expense, since that will inevitably be passed to their customers.
It would be an incredible boost to the economy, save for those who make money filing such inherently abusive litigation.
To repeat a libel is a libel. Unless you do it on the internet.
Publish a libel in a newspaper, and the newspaper is liable. Publish a libel on Twitter, and Twitter is immune from liability.
Section 230 is bad enough. Courts, by expanding it way beyond its text and intent, have managed to make it even worse.
It's well past time to repeal this special exemption for Big Tech. If that happens to kill social media and comments sections, the world will be better for it.
Except that's not what any of this is about.
If you are the original author of a libelous post, S230 does not protect you at all.
S230 protects your victim's lawyer from going after the online service you posted to, or your ISP, or their ISP, or some random person who quoted you.
It is needed because the law of libel was developed around holding businesses responsible for the actions *of their employees*. Eg, you sue the NY Times for libel *written by one of their staff reporters*, which they allowed to be published.
But companies like Prodigy (this is 1996, remember - there is no social media, no online 'public square' of any kind - just highly censored, paying-members-only online-services) do not have such a relationship with their *users* and could not reasonably be expected to police user posts for libelous information - especially using 1996-vintage computing power and software (Facebook style algorithmic moderation was not technologically possible in 96).
And so, when Prodigy got sued for libel because of a post by one of their users containing accusations of fraud (which were, amusingly, true - and thus not libelous, although Prodigy as a 3rd party lacked the ability to prove this) and lost due to the claim that their censorship policies made them a 'publisher', we got S230 as a result.
It is good law, and regardless of the new-right spin-machine's BS (S230 was written to PROTECT the right of online services to censor posts without fear of libel litigation - NOT to create a censorship-free public square) it should stay in effect.
Yes, that is what this case is about. If someone posts something libelous on Twitter, Twitter is immune from liability under Section 230. This case says someone who retweets the libel is also immune under Section 230.
In a 2020 interview with the New York Times, Joe Biden said, "The idea that it's a tech company is that Section 230 should be revoked, immediately should be revoked, number one. For Zuckerberg and other platforms. It should be revoked because it is not merely an internet company. It is propagating falsehoods they know to be false."
Who realized Joe Biden was part of "the new-right spin-machine," as you so elegantly put it?
Yes, Twitter should be immune.
They did not interview, hire, or pay the poster.
They have no actual relationship with said poster, nor any means to keep that individual (not the account, but the human being writing the post) from opening a new account on Twitter and continuing to post libel.
Biden's comments aside, the largest constituency campaigning against Section 230 are the Trumper 'new right' types, who seek to revise it in such a way that any company which restricts what users may post is exposed to libel liability - with an eye towards more or less stripping social media companies of their 1A right against compelled-speech (via exposing them to libel litigation if they restrict what may be posted).
The above position is motivated by the fact that most social media sites prohibit discussion of the new-right's favorite conspiracy theories, which the owners of said sites deem misinformation.
There is no "special exemption for Big Tech." Stop lying.
What a bizarre response. Anyone who can read and understand English knows that is the precise purpose of the statute.
Internet publishers are given an immunity other publishers do not have. Publish a libel on Twitter, and Twitter is immune from liability. Publish a libel in an ad or a letter to the editor in a newspaper, and the newspaper is not immune from liability. A bar owner may even be found liable for libel in graffiti left on the bathroom wall of his establishment. Hellar v. Bianco, 244 P.2d 757 (Cal. App. 1952). That last is as perfect an analogy for Twitter as any.
No. Is Reason magazine "Big Tech"? § 230 protects Reason magazine. "Internet publishers" != "Big Tech." The statute was enacted before anything people are calling "Big Tech" even existed. Essentially every website, big or small, is protected from liability by § 230.
The Hellar case stands for the proposition that a bar owner who knowingly allows a particular piece of graffiti to remain after having been informed of it and having been given a reasonable amount of time to remove it can be liable. That's not analogous to Twitter, which has hundreds of millions of tweets per day. The legal regime doesn't scale.
Then change the scale, Nieporent. If Twitter needs to edit hundreds of millions of tweets per day, maybe it needs to hire half a million editors to do it. I'm fine with that. If Twitter finds it can't keep up, then that is a practical limit on the size of its publishing business. I'm fine with that too.
In fact, it is exactly that practical limit which delivered profusion and diversity to publishing throughout the nation prior to the internet. Previously, most publishers could not grow any larger than permitted by their ability to pay for editorial effort out of the proceeds raised by available advertising sales. That left room throughout the nation for other publishers to fill other niches, supported by advertising available where they were. For the public life of the nation, for news gathering, for the quality of published discourse, it was a much better system than internet giantism has proved to be.
Section 230 was the mechanism which permitted internet publishing to escape that previous size limitation. So now, the scale is wrong. Change it back.
Apparently you've never heard of the concept of broadcast media. (Websites don't use the public airwaves, of course, but they're broadcast in the underlying meaning of that phrase.)
Nieporent, you have no idea what you are talking about. You do not understand publishing business models. Traditional print publishers, and traditional broadcast publishers are constrained pretty much alike. Both are based on curating mass audiences, and on delivering mass content. Both are constrained by geographic limitations. Exceptional broadcasters with very large service areas do not add proportionate sales capacity. Too many advertisers cannot make much use of the extra reach. Mostly it is national advertisers which find that useful, but they have choices, and can bid down prices. In any case, for either traditional media type, expansion of that model to larger geographic areas inevitably adds somewhat-proportionate editorial costs as the audience populations served increase.
Internet publishers can escape those cost constraints almost completely, but only if granted liberty to disconnect advertising sales from editorial effort. That is accomplished, as you know, by suspension of liability, which otherwise requires every item published to add a cost increment to pay for prior review. For an internet giant, without a curated mass audience, vast geographic reach requires massive amounts of editorial content—according to many internet fans, too much content to permit practical prior review.
Thus, with its internet business model based not on serving mass content, and not on curating mass audiences, but instead on customized one-to-one delivery of targeted editorial content, and individually-targeted advertising, the internet model is economically unlike the others. The resulting businesses do not work comparably. Their economics are not alike.
Note however, that the advantage of the internet business is not inherent in the medium. Make the internet model compete on terms of equal liability with the traditional models, and the latter, using decentralized multiple businesses, would likely outcompete any consolidated, giantistic internet publishing model. Its editorial costs in that scenario would be disproportionately much higher than the others.
A more interesting comparison would ask what would happen if decentralized models, and equal liability, were featured on both sides. At that point, the inherent internet cost advantages would loom large, and likely outpace the older models easily.
Point out a situation, prior to the internet, where a 'publisher' existed to print - for free - the random comments of members of the public.
That did not happen.
Only those employed or contracted by a publisher could be published by that publisher.
A publisher interviews/vets, employs or otherwise pays, and may take economic action against those whose work it publishes.
Facebook does none of the above, and having to do so would make their business untenable (eg, if you had to work for FB to post cat-pics for your friends on FB.com, how would social media function)....
Further, while sites like NYT, CNN and Reason *do* have such relationships with their staff (which is not covered by 230) they do NOT have such relationships with those who post to the 'comments' section (which is covered).
'Comments' sections that are open to the public (or to anyone who buys a subscription when subscribing is open to the entire public) would similarly be unworkable if libel liability applied.
Internet sites that allow public posting without an employment relationship *are not* equivalent to press-and-ink publishers and should not be treated similarly.
They should be treated akin to a pre-internet retail business that hosts a community 'cork-board' and allows its customers/visiting-public to pin fliers/ads to it - a more accurate comparison than calling them 'publishers' as if they were a newspaper.
No defamation liability should be allowed.
So your objection was a pedantic one to the label "Big Tech". Okay then.
My analogy comment was more of a joke, with Twitter being akin to a bathroom and Tweets being the graffiti. Obviously, a "reasonable amount of time" will depend on the circumstances. No one suggests Twitter should be strictly liable for every libelous Tweet that gets posted, but that it should be judged by normal rules of libel law, considering factors like actual notice and a reasonable time to investigate it, especially since they are relatively quick to delete Tweets or label them as "misleading". If there were no moderation, they could more credibly claim innocence, but when they claim loudly to be arbiters of truth, this suggests tacit endorsement of Tweets they don't censor.
Again, claims of libel, absent their total immunity, would be judged on a case by case basis, as with everyone else.
No, my objection was a substantive one to the common but mistaken notion that § 230 was passed to give special rights to a few big social media companies. § 230 protects everyone on the Internet. Including traditional publishers like the NYT or Reason. It's not special rights for anyone; it's just a different treatment of different technology.
It's not special rights for anyone; it's just a different treatment of different technology.
There is nothing about internet technology which requires impunity for libel. A congress as confused about internet publishing as commenters here typically are, made an unwise policy choice about libel. That is all that happened.
What the congress and the commenters got wrong was that when libel occurs, publishers commit it. Always. Every time. Not just authors.
There is never a libel without a publisher behind it. Less often, the author of the libel and the publisher are one and the same. Usually they are separate. Indeed, absent the self-interested commercial activity of a publisher, many would-be libel authors would prove unable to damage their targets if they tried.
"Guy who repeatedly claims one needs to understand the subject of a law to understand the law… continues not to understand the subject of the law."
Nieporent, you are sort of right in a small way, and really wrong in a big way. It is not written as a special exemption for big tech, true. But it is an exemption which opens the way for publishing tech to get big—orders of magnitude bigger than before the exemption—and buy up or force out competitors. By now, a lot of people seem to understand that. F.D. Wolf seems to get it. Why not you?
Because he's wrong, and you're wrong, and censorship is not the response to "You have a big audience."
Bystanders may need an experienced guide to explain to them what Nieporent means when he says, ". . . and censorship is not the response to 'You have a big audience.' "
Nieporent routinely conflates private editing with censorship, on purpose. I am not sure why he practices that tic. He may suppose it is libertarian advocacy. Where a forthright person, understanding the shape of a controversy, might say, "I oppose private editing, just as I oppose censorship," Nieporent goes instead for deliberate conflation. He may suppose he can find a moral advantage somewhere in the dust cloud of confusion he raises on purpose.
Lathrop routinely conflates private editing at gunpoint with private editing by choice.
Try reading NYT v. Sullivan, and its progeny, even once.
This is just a lawyer trying to play word-games for a client.
The word 'user' meant the same thing in 1996 it does today: the individual account-holder who creates a post.
Trying to claim that it meant 'coffee shops and libraries' is just some underhanded gamesmanship in an effort to subvert S230...