The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
What Should We Do About Section 230?
Thoughts on the Attorney General's workshop.
Yesterday, the Attorney General held a workshop on Section 230 of the Communications Decency Act. The question was whether the law can be improved. Section 230 does need work, though there's plenty of room for debate about exactly how to fix it. These are my mostly tentative and entirely personal thoughts on the question the Attorney General has asked.
Section 230 gives digital platforms two immunities – one for publishing users' speech and one for censoring users' speech. The second is the bigger problem.
1. Immunity for what users say and do online
When Section 230 was adopted, the impossibility of AOL, say, monitoring its users in a wholly effective way was obvious. It couldn't afford to hire tens of thousands of humans to police what was said in its chatrooms, and the easy digital connection it offered was so magical that no one wanted it to be saddled with such costs. Section 230 was an easy sell.
A lot has changed since 1996. Facebook and others have in fact already hired tens of thousands of humans to police what is said on their platforms. Combined with artificial intelligence, content fingerprinting, and more, these monitors work with considerable success to stamp out certain kinds of speech. And although none of these efforts are foolproof, preventing the worst online abuses has become part of what we expect from social media. The sweeping immunity Congress granted in Section 230 is as dated as the Macarena, another hit from 1996 whose appeal seems inexplicable today. Today, jurisdictions as similar to ours as the United Kingdom and the European Union have abandoned such broad grants of immunity, making it clear that they will severely punish any platform that fails to censor its users promptly.
That doesn't mean the US should follow the same path. We don't need a special, harsher form of liability for big tech companies. But why are we still giving them a blanket immunity from ordinary tort liability for the acts of third parties? In particular, why should they be immune from liability for utterly predictable criminal use of warrant-proof encryption? I've written on this recently and won't repeat what I said there, except to make one fundamental point.
Immunity from tort liability is a subsidy, one we often give to nascent industries that capture the nation's imagination. But once they've grown big, and the harm they can cause has grown as well, that immunity has to be justified anew. In the case of warrant-proof encryption, the justifications are thin. Section 230 allows tech companies to capture all the profits to be made from encrypting their services while exempting them from the costs they are imposing on underfunded police forces and victims of crime.
That is not how our tort law usually works. Usually, courts impose liability on the party that is in the best position to minimize the harm a new product can cause. Here, that's the company that designs and markets an encryption system with predictable impact on victims of crime. Many believe that the security value of unbreakable encryption outweighs the cost to crime victims and law enforcement. Maybe so. But why leave the weighing of those costs to the blunt force and posturing of political debate? Why not decentralize and privatize that debate by putting the costs of encryption on the same company that is reaping its benefits? If the benefits outweigh the costs, the company can use its profits to insure itself and the victims of crime against the costs. Or it can seek creative technical solutions that maximize security without protecting criminals – solutions that will never emerge from a political debate. Either way it's a private decision with few externalities, and the company that does the best job will end up with the most net revenue. That's the way tort law usually works, and it's hard to see why we shouldn't take the same tack for encryption.
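To make the arithmetic of that argument concrete, here is a minimal sketch with entirely invented numbers. It illustrates the cost-internalization logic of the paragraph above, nothing more; it is not a real damages model.

```python
# Illustrative only: once a firm bears the expected harm from criminal
# misuse of its encryption, the profitable choice and the socially
# sensible one line up. All figures are invented.
encryption_profit = 500.0  # annual profit from offering encryption
expected_harm = 120.0      # expected annual cost imposed on crime victims
redesign_cost = 200.0      # cost of a technical fix that avoids the harm

# Under liability, the firm pays whichever is cheaper: insuring against
# the harm or engineering around it.
net_under_liability = encryption_profit - min(expected_harm, redesign_cost)

# If the net is still positive, encryption survives and victims are
# compensated; if not, the firm drops or redesigns the feature.
print(f"net profit under liability: {net_under_liability}")
```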
2. Immunity for censoring users
Detecting bias. The harder and more urgent Section 230 problem is what to do about Silicon Valley's newfound enthusiasm for censoring users whose views it disapproves of. I confess to being a conservative, whatever that means these days, and I have little doubt that social media content moderation rules are biased against conservative speech. This is hard to prove, of course, in part because social media has a host of ways to disadvantage speakers who are unpopular in the Valley. Their posts can be quarantined, so that only the speaker and a few persistent followers ever see them, and no one knows that their distribution has been suppressed. Or they can be demonetized, so that Valley-unpopular speakers, even those with large followings, cannot use ad funding to expand their reach. Or facially neutral rules, such as prohibitions on doxing or encouraging harassment, are applied with maximum force only to the unpopular. Combined with the utterly opaque talk-to-the-bot mechanisms for appeal that the Valley has embraced, these tools allow even one or two low-level but highly motivated content moderators to sabotage their target's speech.
Artificial intelligence won't solve this problem. It is likely to make it worse. AI is famous for imitating the biases of the decisionmakers it learns from – and for then being conveniently incapable of explaining how it arrived at its own decisions. No conservative should have much faith in a machine that learns its content moderation lessons from current practice in Silicon Valley.
Foreign government interference. European governments, unbound by the first amendment, have not been shy about telling Silicon Valley to suppress speech they dislike, including true facts about people who claim a right to be forgotten, or charges that a politician belongs to a fascist party, or what they call hate speech. Indeed, much of the Valley has already surrendered, agreeing to use their terms of service to enforce Europe's sweeping view of hate speech – under which the President's tweets and the Attorney General's speeches could probably be banned today.
Europe is not alone in its determination to limit what Americans can say and read. Baidu has argued successfully that it has a first amendment right to return nothing but sunny tourist pictures when Americans search for "Tiananmen Square June 1989." Jian Zhang v. Baidu.Com Inc., 10 F. Supp. 3d 433 (S.D.N.Y. 2014). Today, any government but ours is free to order a US company to suppress the speech of Americans the government doesn't like.
In the long run it is dangerous for American democracy to give highly influential social media firms a blanket immunity when they bow to foreign government pressure and suppress the speech of Americans. We need to armor ourselves against such tactics, not facilitate them.
Regulation deserves another look. This isn't the first time we've faced a disruptive new technology that changed the way Americans talked to each other. The rise of broadcasting a hundred years ago was at least as transformational, and as threatening to the political order, as social media today. It played a big role in the success of Hitler and Mussolini, not to mention FDR and Father Coughlin.
American politicians worried that radio and television owners could sway popular opinion in unpredictable or irresponsible ways. They responded with a remarkable barrage of new regulation – all designed to ensure that wealthy owners of the disruptive technology did not use it to unduly distort the national dialogue. Broadcasters were required to get government licenses, not once but over and over again. Foreign interests were denied the right to own stations or networks. A "fairness" doctrine required that broadcasters present issues in an honest, equitable, and balanced way. Opposing candidates for office had to be given equal air time, and political ads had to be aired at the lowest commercial rate. Certain words (at least seven) could not be said on the radio.
This entire edifice of regulation has acquired a disreputable air in elite circles, and some of it has been repealed. Frankly, though, it doesn't look so bad compared to having a billionaire tech bro (or his underpaid contract workers) decide that carpenters communicating with friends in Sioux Falls are forbidden to "deadname" Chelsea Manning or to complain about Congress's failure to subpoena Eric Ciaramella.
The sweeping broadcast regulatory regime that reached its peak in the 1950s was designed to prevent a few rich people from using technology to seize control of the national conversation, and it worked. The regulatory elements all pretty much passed constitutional muster, and the worst that can be said about them today is that they made public discourse mushy and bland because broadcasters were cautious about contradicting views held by a substantial part of the American public.
Viewed from 2020, that doesn't sound half bad. We might be better off, and less divided, if social media platforms were more cautious today about suppressing views held by a substantial part of the American public.
Whether all these rules would survive contemporary first amendment review is hard to know. But government action to protect the speech of the many from the censorship of the privileged deserves, and gets, more leeway from the courts than the free speech absolutists would have you believe. See, e.g., Bartnicki v. Vopper, 532 U.S. 514 (2001).
That said, regulation has many risks, not least the risk of abuse. Each political party in our divided country ought to ask what the other party would do if given even more power over what can be said online. It's a reason to look elsewhere for solutions.
Network effects and competitive dominance. Maybe we wouldn't need a lot of regulation to protect minority views if there were more competition in social media – if those who don't like a particular platform's censorship rules could go elsewhere to express their views.
In practice, they can't. YouTube dominates video platforms, Facebook dominates social platforms, Amazon dominates online book sales, etc. Thanks to network effects, if you want to spread your views by book, by video, or by social media post, you have to use their platforms and live with their censorship regimes.
It's hard to say without investigation whether these platforms have violated antitrust laws in acquiring their dominance or in exercising it. But the effect of that dominance on what Americans can say to each other, and thus on political outcomes, must be part of any antitrust review of their impact. Antitrust enforcement often turns on whether a competitive practice causes consumer harm, and suppression of consumer speech has not usually been seen as such a harm. It should be. Suppression of speech it dislikes may well be one way Silicon Valley takes monopoly profits in something other than cash. If so, there could hardly be a higher priority for antitrust enforcement because such a use of monopoly strikes at the heart of American free speech values.
One word of caution: Breaking up dominant platforms in the hope of spurring a competition of ideas won't work if the result is to turn the market over to Chinese companies that already have a similar scale – and even less interest in fostering robust debate online. If we're going to spur competition in social media, we need to make sure we aren't trading Silicon Valley censorship for the Chinese brand.
Transparency. Transparency is everyone's favorite first step for addressing the reality and the perception of bias in content moderation. Surely if the rules were clearer, if the bans and demonetizations could be challenged, if inconsistencies could be forced into the light and corrected, we'd all be less angry and suspicious and the companies would behave more fairly. I tend to agree with that sentiment, but we shouldn't kid ourselves. If the rules are made public, if the procedures are made more open – hell, if the platforms just decide to have people answer complaints instead of leaving that to Python scripts – the cost will be enormous.
And not just in money. All of the rules, all of the procedures, can be gamed, and more effectively the more transparent they are. Speakers with bad intent will go to the very edge of the rules; they will try to swamp the procedures. And ideologues among the content moderators will still have room to seize on technicalities to nuke unpopular speakers. Transparency may well be a good idea, but its flaws are going to be painful to behold if that's the direction our effort to discipline Section 230 takes.
3. What is to be done?
So I don't have much certainty to offer. But if I were dealing with the Section 230 speech suppression immunity today, I'd start with something like the following:
First, treat speech suppression as an antitrust problem, asking what can be done to create more competition, especially ideological and speech competition, among social media platforms. Maybe breakups would work, although network effects are remarkably resilient. Maybe there are ways antitrust law can be used to regulate monopolistic suppression of speech. In that regard, the most promising measures probably are requiring further transparency and procedural fairness from the speech suppression machinery, perhaps backed up by governmental subpoenas to investigate speech suppression accusations.
Second, surely everyone can agree that foreign governments and billionaires shouldn't play a role in deciding what Americans can say to each other. We need to bar foreign ownership of social media platforms that are capable of playing a large role in our political dialogue. We should also use the Foreign Agents Registration Act or something like it to require that speech driven by foreign governments be prominently identified as such. And we should sanction the nations that try to drive such speech covertly.
And finally, here's a no-brainer. If nothing else, it's clear that Section 230 is one of the most controversial laws on the books. It is unlikely to go another five years without being substantially amended. So why in God's name are we writing the substance of Section 230 into free trade deals – notably the USMCA? Adding Section 230 to a free trade treaty makes the law a kind of low-rent constitutional amendment, since if we want to change it in the future, organized tech lobbies and our trading partners will claim that we're violating international law. Why would we do this to ourselves? It's surely time for this administration to take Section 230 out of its standard free-trade negotiating package.
Note: I have many friends, colleagues, and clients who will disagree with much of what I say here. Don't blame them. These are my views, not those of my clients, my law firm, or anyone else.
"Often libertarian."
Baker certainly does not claim to be libertarian or Libertarian. After a year or two of seeing his posts get shredded in the comments, I started listening to his Cyberlaw Podcast and now it's one of my favorites. Almost all of his weekly guests disagree with him, and several are clearly leftwing on many issues (certainly social issues). In fact, his friendly (but sometimes pointed) arguments with non-lawyer Nick Weaver of Berkeley (a computer scientist, I believe, and expert on all things hacking and cyber security) make for truly entertaining and informative listening, IMO.
Worth a listen for anyone who hasn’t tried it, even if you disagree strongly with Baker (since you’ll be in good company with many of his other guests). My two cents.
Authoritarian right-wingers are among my favorite faux libertarians.
The Volokh Conspiracy is among my favorite faux libertarian blogs.
As even the dumbest of the Firpo brothers would say, "connect the dots."
Kirklands are among my favorite asshats.
All it would take to win your approval would be some ranting about brown people, the "war on Christmas (and Christians)", militant gays, and the Deep State.
Baker isn't right wing. He's just a straight up statist who only pays lip service to issues about who is in charge right now.
Baker is clueless. His only redeeming value is talking with people who disagree with him. Which is basically everyone.
A rare quality these days, especially since he gets them to talk back.
Statist through and through. People must bend to make the State's life easier. People must bow to the State.
And who controls the State? Not the People, not when the State controls the People. No, it will be the usual self-selected elite.
No thanks.
One of the most infuriating things about this "must allow the state to intercept your communications" stuff is that the original ability to wiretap telephone and telegraph communications was a complete accident – neither Morse nor Edison had on their list of requirements "easily intercepted by a third party without the communicating parties being aware of the eavesdropping."
But now we have to incorporate an emulation of this accidental "feature" into every new means of communication? Bah.
You're showing your age: people still play the Macarena at parties. It's a staple.
Relying on antitrust regulations to deal with speech issues is a bad idea. Even if one likes the idea of antitrust laws they are still ridiculously slow (with suits lasting long after the trust naturally falls apart, if there ever was one) and prone to politically motivated use. Bringing them into an entirely new domain to be explicitly involved in politics would see them abused even more.
Your first argument is incredibly flimsy. One could just as well say the 4th amendment is a dated vestige of a time when it was technologically unfeasible to monitor everyone at all times. Or the 5th amendment from a time when we didn't have space to lock everyone in prison. And let's not forget that old chestnut, the 2nd amendment is a relic of times when people only had muskets.
I wonder, what is the "cost" of encryption that companies should be responsible for? Is it the same as the cost door companies bear for things that happen literally "behind closed doors." How about the companies that made the walls, doors, window coverings, and other privacy-protecting items that allowed a man in Cleveland to hold 3 women captive in his basement for over a decade. Should they not be held responsible for his actions?
They should definitely be held responsible, as they facilitated his owning of a high-capacity basement. Nobody "needs" such a high-capacity basement.
And the idiocy of saying AOL couldn't afford to hire ten thousand censors in 1996 but Facebook already has now, without acknowledging the growth in number of posts to censor. Anyone else would wonder how corrupt those 10,000 censors are, and who censors the censors. But not a statist trying to hide reality long enough to become one of the elite.
It's excuses for statism all the way down.
I don't think we need to change Section 230.
What we need to do is enforce Section 230 as written. The "in good faith" language has basically been written out of the law by treating the catch-all "or otherwise objectionable" clause as licensing moderation on any basis whatsoever, when by normal principles of statutory interpretation it only means objectionable in the same sense as the listed items.
And we need some anti-trust enforcement. There's been some obvious coordination in deplatforming people and sites the left dislikes, which probably should have resulted in anti-trust action. They aren't content to just censor their own sites, they're clearly engaging in some level of coordination to prevent alternative sites such as Gab from being available.
Finally, we need to come down like a ton of bricks on financial services companies that start picking and choosing which legal transactions they'll process. Operation Choke Point has now been taken private; ending it within the government didn't terminate it.
Anti-trust is a farce. The only real monopolies are State-creations. Cartels always self-destruct. Natural monopolies, the few which actually do exist, can't be broken up.
Telling the State to decide who is a monopoly and letting the State decide what to do about it is about as sensible as letting the State define its own limits.
There's nothing wrong with antitrust law. You know nothing about it and are just spewing ignorance.
Your paranoia about financial companies seems to have trumped whatever principles you might have had. If financial companies really do have the stranglehold you imagine, it is because the State is backing them, or more likely, threatening them or commanding them.
More likely, they are just blundering around incompetently, the market will cure them as it cures all other incompetence, and your desire for State action is a sign of abandoned principles. Either you believe in markets or you don't.
Oh, I'm sorry, I didn't realize that I was fantasizing that Gab was systematically chased off the internet by coordinated action by multiple companies.
It's pretty rich to say that you should create your own site if you don't like moderation at the existing sites, when we know what happens to any new site that tries to not moderate in the same way.
For one thing, Gab is still around. For another, it wasn't coordinated: the companies independently decided they didn't want to be associated with a platform associated with a mass shooter and the racism and antisemitism that inspired the shooting. It only took them a few days to find a new host.
Yeah, they didn't admit it was coordinated. But they sure all spontaneously came to the same decision in a remarkably short timeframe, now, didn't they? "Once is happenstance. Twice is coincidence. The third time it's enemy action."
If you think you need to invoke anti-trust laws to correct the market, then you are a statist. If you think anti-trust laws are necessary, you don't believe in markets. If you think monopolies are a real thing needing the State to correct, you do not believe in markets.
And if you think financial corporations are acting as a cartel, needing government intervention, you not only don't believe in markets, but you are so naive that you think government is not behind them, and your idea that government would bring its anti-trust weapons against the cartel is laughable.
Not that I'm not enjoying your taking Brett's paranoia-fueled authoritarianism to task, but lots of economists who like markets note that under certain conditions markets can fail.
Monopoly is one of those conditions.
That's manifestly not happening with Internet publishing platforms, but you can believe in markets and still realize there need to be guardrails.
Insider trading regs are another curb on market failure.
It's not a monopoly, but it's pretty close to one; FB for social media, Youtube for videos, Twitter for short comments. All of these have a seriously large percentage of their respective markets, due to network effects being so important.
But what's really causing the problem isn't that. It's that combined with these platforms coordinating with payment processors, hosting services, and allied infrastructure. GAB didn't get shut down by Twitter or Facebook.
Multiple payment processors stopped processing their payments.
Multiple hosting services stopped hosting them.
Their domain registrar pulled their domain name.
Apple and Google removed their app from their app stores.
Firefox deauthorized their browser extension.
Every time they found a home pressure was applied, and they got kicked off it.
And this started before the shooting people claimed prompted it.
There's a whole digital infrastructure out there working together to prevent platforms from being provided that don't replicate FB/Youtube/Twitter style censorship. If you're going along with the FB/Youtube/Twitter style censorship, you're permitted to compete with the big boys, but if you opt for free speech, you can't unless you build everything from the ground up.
I wouldn't be shocked if, trying to host on your own machines, using your own DNS, you'd find your contracts to buy servers canceled. The enforcement of this censorship is pretty comprehensive.
I have nothing to say to this that has not been said by many others on this thread to you.
Your repetitiveness bespeaks more of an advocacy agenda than discussion. Wrong place for that.
Going through Gab's wikipedia page is a riot.
Their frequent "de-platforming" has mostly been over two things: hate speech and porn.
In most cases, its "de-platforming" was 100% reactive.
And it's not just Gab's users that are horrible human beings! Gab itself has dipped into antisemitic posts from time to time.
Even Republicans have said "woah nelly, this is a bit much, don't you think?" and dropped them.
All of which is to say... there is no conspiracy. It's just that 99% of people want nothing to do with them.
IOW, they didn't censor users. Which is what I said.
I don't think tech companies have neared monopoly/market power level. They are incredibly, viciously biased against and censorious of right wing content. Far more than people are generally aware. That needs to be talked about and exposed more, but there's not much if anything illegal about it yet. What's really funny is that the left and right even in Congress are decrying them in unison, but the right is decrying censorship while the left is screaming that they aren't nearly censorious enough.
Payment processors on the other hand absolutely wield monopolistic market power, this is not even debatable.
"Your repetitiveness bespeaks more of an advocacy agenda than discussion. Wrong place for that."
What kind of a lame response is that? I think this couldn't be more relevant.
The only problem I see with Section 230 is that the "in good faith" language has never been enforced. So the platforms are able to enjoy the special protection Section 230 provides, while engaging in the sort of ideological curation that ought to make them responsible for content on the plain terms of the statute.
But the only reason that's becoming a problem is that the whole digital ecosystem tends to share FB/Youtube/Twitter's ideological leanings, and so is suppressing the development of any competitors that don't abusively moderate in the same fashion. Gab is just a good example of how they've gone about that.
THAT's where the real problem is located, not Section 230.
Your in good faith thing has been addressed and dismantled by multiple posters here, particularly NToJ. And yet you continue to make the exact same argument.
This is some of the stupidest posting I've ever seen on the VC.
It wasn't a remarkably short timeframe considering that it happened within a few days of a mass shooter liveposting his attack on a synagogue and the prep for it using the service.
Yeah, and if he'd used a phone company for the same purpose, you suppose their electricity would have been cut off?
Gab got deplatformed for refusing to engage in censorship. That's the bottom line.
No, because phone and power companies are regulated as utilities, and as such don't have Freedom of Association.
Expanding the industries that are regulated as utilities is certainly a course we could take, but it's not currently being discussed (even by the folks that want to change Section 230).
WTF do you mean "deplatformed"? GAB is still around. You can go talk there.
That's a terrible analogy. The "phone company" isn't a platform or a service to itself, it's a utility.
Gab was "deplatformed" for a few days at most and it wasn't because they "refused to engage in censorship." It's because the company is toxic to be associated with due to their users. If it were due to censorship, they would have been dropped long before. And they were only "deplatformed" by two companies - hardly a conspiracy. They were just denied access to the old subscription model by the other two. Besides, they weren't even deplatformed: they just lost a host (which is a fairly routine thing for small, rapidly changing internet companies). They still had access to the internet and could have put it up themselves but chose to go with another host just a few days later (which is how long it takes to negotiate and set it up).
Ending a business relationship because one partner makes the other look bad is pretty routine. Lots of tech companies dropped Chinese partners well before the government actually started pushing for that. There wasn't any conspiracy to deprive Weinstein of his business opportunities: that was all up to his being a creep.
Be serious. Gab didn't just get kicked off one host, and have to go to another. Better than a half dozen companies cooperated in taking them down. Hosting services, domain registrars, payment processors.
A coordinated effort by people to not do business with another person they don't want to do business with. It's a free country.
In any event, you can go post on GAB right now. It's open for business.
This purported danger that criminals will use "warrant proof encryption" seems to me a red herring. Law enforcement has already enjoyed success in cracking systems such as the iPhone; and if social media were to bow to Baker's demand by using backdoored encryption, nothing would prevent criminals from using their own end-to-end schemes on top of that. So Baker's regulation would fail to achieve its ends, even as it endangers the data of innocent users by making it easier for other third parties to breach.
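To make the layering point concrete, here is a minimal sketch in Python, assuming the third-party cryptography package is installed (pip install cryptography); the key handling and message are purely illustrative.

```python
# Even if a platform's own channel were backdoored, two parties who
# share a key out of band can layer their own encryption on top of it.
from cryptography.fernet import Fernet

# The parties agree on a key through some channel the platform never sees.
shared_key = Fernet.generate_key()
cipher = Fernet(shared_key)

# The sender encrypts before handing the message to the platform...
ciphertext = cipher.encrypt(b"meet at the usual place")

# ...so the platform, and anyone reading its backdoor, relays only this:
print(ciphertext)

# The recipient, holding the same key, recovers the plaintext.
print(cipher.decrypt(ciphertext))  # b'meet at the usual place'
```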
As for content-blocking practices by media service providers -- I trust competition to solve that, provided that the half-dozen giant firms who control the lion's share of all communication industries worldwide aren't allowed to engage in blocking themselves. (If they get to, then really effective free speech will no longer exist.) They need to either be broken up, or forced into the traditional duty of common carriers to accept all traffic and treat it neutrally. Related services such as banking providers (I'm looking at you, Chase, PayPal, and MasterCard) also need neutrality imposed upon them.
I have always held that the car manufacturers, and maybe their parts manufacturers, should be held liable for all traffic accidents, and all robberies where a car is used to get away.
Pin the liability on the car manufacturers? And let the contractors who built the roads off scot-free?
Baker's problem is that he operates from the mindset that the government should always get all the evidence it wants in any criminal case and anyone who interposes any obstacle is obstructing justice.
But that's not really how the Fourth Amendment works. As Orin Kerr likes to say, the Fourth Amendment imposes an equilibrium, a balance between the needs of the government and the rights of criminal suspects.
When viewed that way, "warrant-proof encryption" isn't some new threat, but just the latest way in which criminals will sometimes succeed in hiding evidence. Just as they always have. Evidence is moved offshore, and the government has to seek the assistance of other governments. Evidence is encased in safes or vaults or locked up; the government has to crack or pick them. Evidence is given to third parties; the government has to track them down.
And yes, evidence is sometimes encoded, and the government has to try to break the code.
In the case of encryption, it comes up mostly because of the invention of the networked smartphone, which in general is an adjustment of the equilibrium in law enforcement's direction. Suddenly there's all this information concentrated in one place that didn't used to exist. So it would of course be easier for the government than it used to be if it could just get right into the phone or the network. But that's the point- overall, the equilibrium has to hold, which means it shouldn't be easier for the government than it used to be.
Encryption just puts the government back where they were before widespread use of networked smartphones. They can still get evidence, but they have to either crack the codes or get it in other ways. Baker wants the government to always win; the Constitution prohibits that.
I mostly agree with you here, but I would edit your second paragraph to begin, "But that’s not how the Fourth Amendment IS SUPPOSED TO WORK."
In sad reality -- and I can't remember who to credit for this formulation -- all too many of our law enforcement and justice people look at the 4th and 5th (and similar state provisions) as obstacles to be worked around, rather than as instructions as to how to do their jobs.
Any comment on why the "adverse impact" arguments used for racial discrimination cannot be used against these companies?
Run the statistics on the politics of those sanctioned by media giants against the universe of users and see if maybe, just maybe, there is a statistical 'proof' of bias.
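Concretely, "running the statistics" might amount to a chi-square test of independence between users' politics and whether they were sanctioned. A minimal sketch, with invented counts and assuming scipy is available:

```python
# All counts are hypothetical; real data would have to come from the
# platforms themselves.
from scipy.stats import chi2_contingency

#         sanctioned, not sanctioned
table = [
    [120, 9880],  # users tagged as conservative (invented)
    [40, 9960],   # users tagged as liberal (invented)
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p_value:.2g}")
# A small p-value says sanction rates differ by politics; it does not,
# by itself, say why. Rates of rule-breaking could differ too.
```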
There are no protected classes involved here.
Bias against black people? If your claim is that there is bias against conservatives (or liberals) what difference would that make? There are plenty of digital platforms that explicitly bias against conservatives and liberals (no need to rely on disparate--not "adverse"--impact). As is their right.
But enough negativity. Here's what we should do with Section 230: expand it to all laws. Enough with this piecemeal plodding! Bring back mens rea. That specific law which shields firearms manufacturers? Extend that to everything!
And bring in some accountability. If you waste everybody's time with a losing lawsuit under the new expanded 230, pay their bills!
Speaking of 230 accountability, should not Stewart Baker be held responsible for his trying to destroy the economy in favor of an all-powerful all-knowing all-busybody State? How about we put all of Stewart Baker's life under the microscope? 24x7 360° body cameras for a bare minimum starter package, with full audio, full recording, transcripts available within a reasonable time (say, five minutes), all at his expense of course.
"any government but ours is free to order a US company to suppress the speech of Americans the government doesn't like."
Make complying with foreign censorship illegal, similar to the Foreign Corrupt Practices Act regarding bribes.
I'll keep saying it: force Hollywood to put disclaimers in the front of movies, "Warning: This film is intended for sale in the China market, and as such, may be self-censored so as to not annoy them, exporting censorship beyond their borders."
If you have the right to know, you have the right to know.
The EU and the UK are not comparable to the US. Neither has a First Amendment.
What gets me are EU citizens bragging on censorship as a bizarre badge of honor, even as they sit with living memory of government dictatorship that used it as a flagship power.
We need to bar foreign ownership of social media platforms that are capable of playing a large role in our political dialogue.
How exactly are we to do this?
Some foreigners create a social media platform. It starts small, but is still used by a number of Americans. As it grows more popular, to the extent that someone thinks it is "capable of playing a large role in our political dialogue," we're going to bar it?
That doesn't make much sense to me.
Yeah, given that the internet doesn't stop at the border, I don't see how that's supposed to be possible.
I suppose you could regulate foreign ownership of US based social media companies, but that's about it.
Only because we chose not to stop it. How do you think China's firewall works?
Sure, there's always work-arounds, but you can stop 99% of folks from accessing a site.
Which is to say, it's not a technical question, it's a political question.
OK, I should have said, I don't see how that's supposed to be possible short of something like the Great Firewall. It did cross my mind.
/thumbsup
But someone has to decide that the site is "playing a large role," and then put up a firewall.
I don't think it's a good idea to let government, run by politicians, do that. The dangers seem pretty obvious.
I also don't think it's a good idea.
I also don't think it's a good idea for people to confuse political questions ("should we do this?") with technical questions ("can we do this?")
Fair enough, but I wasn't thinking about the ability to do it in purely technical terms. "Can we do this?" is also a political question.
Doing it involves setting standards, drawing lines, deciding who will have the authority to take action, etc. I don't think that, as a political matter, we actually could do this in the US.
... other than muddying the definition of "can", is there anything I've said you actually disagree with?
You haven't said anything I disagree with. It's not clear what I've said that you object to.
At the moment? I object to you wasting my time.
"Whether all these rules would survive contemporary first amendment review is hard to know."
IANAL but even I know whether all these rules would survive contemporary first amendment review.
And the answer is NO.
What a wishy-washy blog.
You're basically saying, "Some people are being unfair to us so let's do something!"
AND you want the big bad govt to do the "something."
Take a hike.
I really dislike the tech companies censoring conservatives' speech, but I have come to the reluctant conclusion that the cure would be worse than the disease. It's just really unlikely that any regulatory scheme from Congress or the bureaucracy is going to rectify things.
But I do agree with Brett about financial institutions blackballing legal businesses. We already have public accommodations laws that prevent people from imposing their personal moral beliefs on commercial transactions. So I don't see why porn stars, gun sellers, and marijuana distributors can't have the same principle applied to their livelihoods.
Because current law doesn't include "type of employment" in its non-discrimination provisions.
It's possible to add such things (see the attempts to make Law Enforcement a protected class in response to "BlueLivesMatter"), but it's not currently protected.
You appear to be conflating conservative with white supremacist.
No. I certainly don't have a problem with sites having a proviso in their terms of use that bans religious or racial bigotry. But of course they should spell it out.
I'm talking about things like YouTube demonetizing and shadow banning Prager University. Or James Woods' Twitter ban for "'If you try to kill the king, you better not miss.' #HangThemAll", or banning the taunting of laid-off journalists with #LearnToCode.
But like I say, it isn't likely that there is much to be done about that. My own proposal would be for Congress to require platforms to adhere to their terms of service, and have an independent arbitrator decide if they are violating their terms of service, with monetary damages. But it wouldn't be long before the arbitrator was captured by the tech companies anyway.
Ah. So anecdotes about shadow banning, the amorphous thing that's hard to prove but easy to claim victimhood over.
James Woods has been loony on twitter for a long time. Dunno if you want to conflate him with conservative either.
Your own proposal is still some heavy regulation that's well beyond anything anyone on any side of the aisle would be in favor of, were it not for the right-wing victimhood machine going overtime.
Actually, shadow banning is fairly easy to demonstrate. It's not exactly subtle. You're going along getting new views at some rate, they shadow ban you, and suddenly you stop getting new views, like, on a dime. So you have somebody log on with an account that's not already associated with you, and look for your content, and it's invisible.
Bingo, you're being shadow banned. It's a built in feature on many platforms, you know. For instance, it's a part of the moderator controls for FB commenting packages.
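For anyone who wants to try it, the check described above amounts to something like the following sketch; the URL, session cookie, and marker string are hypothetical placeholders, not any platform's real API.

```python
# Fetch the same post while logged in as the author and again from a
# session with no account, then compare visibility.
import urllib.request

POST_URL = "https://example-platform.test/some_author/post/123"  # placeholder
AUTHOR_COOKIE = "session=AUTHOR_SESSION_TOKEN"  # placeholder credential
MARKER = "a distinctive phrase from the post"   # text the page should contain

def post_visible(url, cookie=None):
    """Return True if the post's text shows up in the fetched page."""
    req = urllib.request.Request(url)
    if cookie:
        req.add_header("Cookie", cookie)
    with urllib.request.urlopen(req) as resp:
        return MARKER in resp.read().decode("utf-8", errors="replace")

seen_by_author = post_visible(POST_URL, AUTHOR_COOKIE)
seen_by_stranger = post_visible(POST_URL)

# Visible to the author but invisible to a fresh, unassociated session
# is the signature described above.
print("author:", seen_by_author, "| logged-out visitor:", seen_by_stranger)
```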
And YouTube banning a Prager video on the topic of the 10 commandments for "violence" because "murder" is mentioned? (As in "Thou shalt not murder") Pretty clearly abusive.
Views go up and down for lots of reasons. Search algorithms don't turn up things in the same order for everyone. Easy to see how these two ingredients, plus some confirmation bias, turn every content creator into a shadowbanning victim.
Youtube does some silly bans across the board. Your confirmation bias is showing again.
I think public accommodation laws are stupid, but do you think your pitch is persuasive? Can you think of a meaningful difference between [being born black] and [deciding to commit a crime]? Hint: it has to do with agency.
But we are talking about perfectly legal conduct here, not criminal activity. And public accommodations laws also protect against discrimination based on non-inherent characteristics that implicate "agency," like religion. Choosing to either maintain or change your religion is a personal choice.
Exactly! It’s enough to make you think that public accommodation of religious choice is fucking idiotic!
"Its just really unlikely that any regulatory scheme from congress or the bureaucracy is going to rectify things."
I tend to agree; the basic problem is that half the political spectrum views the problem as being that the tech companies aren't censoring conservatives' speech ENOUGH. And there are enough people with that perspective throughout the government, Congress and the bureaucracy, to block any effort to do anything about it.
Beyond some minor enforcement of the "in good faith" language, which won't do much more than force companies like FB to put some effort into constructing excuses for their censorship, I don't see anything being accomplished on that end.
The real problem here is the suppression of alternative platforms, and the source of that problem is that the right in this country have never appreciated the folly of letting their ideological enemies dominate whole areas of communications technology. It's a self-created problem. And the only real solution is for people who aren't left-wingers to get into the game, too.
The only area on that end that's really suited to any regulatory intervention are the financial services companies. We have to make it clear to them that, if they don't want to be legally responsible for what their customers spend their own money on, they have to stop picking and choosing which legal transactions they'll facilitate.
If the benefits outweigh the costs, the company can use its profits to insure itself and the victims of crime against the costs.
Not really. Because if the government has the power to assess such "costs" against online companies, they will not be calculated based on some supposed rational calculation of harm, but rather with a punitive attitude of "soak them so hard that they will never again dare to do what we don't like them doing, no matter how much they have put aside to protect themselves."
There is and should be absolutely no penalty for using encryption. What you do with your communications is totally up to you. There are all kinds of unbreakable or virtually unbreakable encryption methods. To say that the use of any particular one imposes a cost that anyone else bears is nonsense. I'm really disappointed to read it here on Reason and Volokh.
So much of this is idiotic.
"A lot has changed since 1996. Facebook and other have in fact already hired tens of thousands of humans to police what is said on their platforms."
Yes. For one thing, the immunity has provided digital platforms the required assurance that partial performance of censorship would be protected. Without this protection, the degree of censorship would be prohibitively expensive, either because there isn't enough money in the world for digital platforms to perform this function, or because customers of digital platforms are unwilling to pay for that service. If you take away the immunity, the digital platforms will simply stop performing censorship functions. (Reminder: That was the whole point of the immunity.)
"Immunity from tort liability is a subsidy, one we often give to nascent industries that capture the nation's imagination."
No it isn't. Tort liability is itself a regulation. It's easy to accidentally think of it as non-regulation because most tort examples involve wholly innocent people being harmed by wholly guilty people in car crash hypotheticals. The liability you are suggesting has no relationship to that black-and-white liability. The liability you are contemplating is that A is liable to B for harm C causes to B. All 50 states have placed limits on that liability either through common law duty rules, foreseeability holdings, or statutory changes to tort liability. This is not a subsidy in favor of A. It is a recognition that A didn't do anything wrong. You might as well say not taxing corporations is a "subsidy" or not imposing liability on the parents of mass shooters is a subsidy in favor of the parents.
"For one thing, the immunity has provided digital platforms the required assurance that partial performance of censorship would be protected."
So long as it was done "in good faith". That's the issue as far as I'm concerned: A good deal of their censorship is being done in bad faith. They're claiming to be censoring hate speech, say, when they're just censoring right-wing views.
You repeatedly over-rely on this "in good faith" provision in 230(c)(2)(A). Your interpretation of what that means (bad faith is when anyone does something Brett claims is not what they are really doing) is idiosyncratic.
But even assuming it is a requirement, it doesn't make the provider liable. First, if they censored right-wing views in bad faith, who is suing them, and for what? Your view is that they just lose 230 immunity entirely, but that's not how the statute is written. All it would mean is they can be held liable for that bad faith action. But since no one who was censored has a claim against the digital platforms in the first place, for being wrongfully censored, what's the liability?
Second, you can't prove bad faith even under your treatment of the term, because "or otherwise objectionable" can include things like "right-wing views". (Or "left-wing views".)
Finally, and most importantly, 230(c)(1) isn't limited to "in good faith". If Facebook is not a "publisher" there's no defamation liability even if it does allow defamatory materials to remain on its platform. There may be non-publisher liability you had in mind, but what is it? Breach of contract? Who has a contractual right to sue Twitter for banning them from Twitter? The problem with such a claim is not that it's barred by 230. It's that it's a meritless, shit claim to begin with.
"Second, you can’t prove bad faith even under your treatment of the term, because “or otherwise objectionable” can include things like “right-wing views”. (Or “left-wing views”.)"
No, it can't. This isn't a new issue. It provides a list of types of objectionable things to be moderated; the catch-all phrase has to refer to things of a similar nature.
They COULD justify ideological moderation if the TOS specified that the service was going to be ideologically curated. None of them do.
You still would get the benefit of Ejusdem Generis so long as the provider thought that certain conservative viewpoints were "filthy" or "harassing" or "lewd" or "obscene" or "lascivious". So it doesn't save your position. What would stop the provider from saying "Yes, we targeted conservative viewpoints because we find them lewd"?
You have not addressed the separate problems with your theory. Section 230(c)(1) provides independent protection since "publisher" liability is what Facebook or Twitter would be worried about. And, separately, what liability did you have in mind? Suppose you proved that Twitter "in bad faith" (however defined) was excluding conservative posters. 230(c)(2)(A) is a grant of immunity, not an imposition of liability.
Finally, how will you ever prove bad faith against Twitter, for example, since they reserve the right to suspend or terminate "at any time for any reason or no reason". If Twitter just says "I terminated that account for no reason" how will you get to bad faith?
You know the trick; we've already seen it: "Their CEO once gave a speech in which he said the company isn't biased, but then it banned a conservative, so therefore it's acting in bad faith."
"how will you ever prove bad faith against Twitter, for example, since they reserve the right to suspend or terminate “at any time for any reason or no reason“. If Twitter just says “I terminated that account for no reason” how will you get to bad faith?"
Same way you show discrimination in housing or provision of otherwise public commercial services -- "We reserve the right to refuse service to anyone" doesn't literally mean you really get the right to refuse service to anyone because the law will look behind the refusal and look at the purpose behind the refusal and litigants have the right to discovery to determine the purpose behind the refusal.
The staff at Denny's can't refuse to serve black people because doing so will expose them to liability under the law for unlawful discrimination. If a business does it to one person it may be able to manufacture some bullshit reason, but if it does it to too many persons in the protected group then a jury will be free to find that discrimination occurred. Likewise, if next-to-no leftist provocateurs are kicked from a service that purports to serve all, but 80% of conservative commenters are banished, a jury can easily see a discriminatory intent that belies the forum provider's claims of neutrality.
You're missing the point. The law requires that they not discriminate on the basis of race. It does not require that they not discriminate on the basis of ideology.
"Usually, courts impose liability on the party that is in the best position to minimize the harm a new product can cause."
Yes, and rather obviously that would be the person who actually harmed the plaintiff, as opposed to the digital platform who merely failed to prevent the harm. It is apparent that you don't know "how our tort law usually works" specifically as it relates to nonfeasance versus misfeasance. But more generally you're also ignoring societal concerns with the imposition of tort liability, that go beyond merely deciding who is in the best position to minimize harm. In fact, courts often limit tort exposure even when the defendant is the one who directly causes it, based on these types of considerations:
"Extending theories of liability may not always be the more moral course, especially in such a case as this, where the extension, in the course of awarding damages to unnumbered claimants for injuries that are unavoidably speculative, may well visit destruction on enterprise after enterprise, with the consequent loss of employment and productive capacity which that entails."
"and rather obviously that would be the person who actually harmed the plaintiff"
That's rarely possible anymore since the digital platforms don't bother verifying who the users are. If one uses a TOR/VPN combo they're free to do as they like.
What are you saying? That it's rarely possible for anyone other than a digital platform owner to harm a plaintiff? The idea here is that the person who is performing the illegal act is the one that should be liable to the plaintiff.
The day some anonymous comment actually harms someone will be a first, and "harm" does not mean annoys or upsets or offends or insults. Show me the actual unavoidable financial harm, because you're sure not going to get any actual physical harm from a bunch of bits.
If I knew who you were I could make social media accounts in your name, harass people, message underage people, etc. I could write a blog on blogspot, that's apparently written by you, that will hinder your chance to ever get a job, find a date, or could get you sued. I could have you get arrested, and while charges may never be filed that arrest is going to be there on your record forever. Your mugshot will pop up right away when anyone googles you and the charges will be listed. I can do this all anonymously and leave a trail that is nothing but a dead end in some foreign country.
Keep in mind that the lawsuit that led to CDA 230 being made was the Prodigy one that pretty much had that happen. People made fake Prodigy accounts with a credit card generator and ruined someone's name. His name being in the court case didn't help either.
"Keep in mind that the lawsuit that lead to CDA 230 being made was the Prodigy one..."
Which lawsuit are you talking about? Because the one that gave rise to 230 was about some unknown user on Prodigy's bulletin board posting in 1994 that Stratton Oakmont (of Wolf of Wall Street fame) and specifically Danny Porush were a bunch of fucking frauds. And Prodigy was held liable not because it published or otherwise furthered that claim, but because it (1) posted "Content Guidelines" for users, (2) occasionally enforced those guidelines through "Board Leaders", and (3) used software to remove "offensive language" (entirely unrelated to claims against Stratton Oakmont).
None of which, obviously, had anything to do with whether Danny Porush or Stratton Oakmont were a bunch of frauds, which they were. What actually happened is that Danny Porush was indicted for securities fraud and money laundering 5 years later, for which he served 39 months after paying $200M in restitution. The reason you don't remember Danny Porush from Wolf of Wall Street is because he threatened to sue Paramount Pictures, and so they had to change his name to Donnie Azoff for the film. And here you are defending the result in fucking STRATTON OAKMONT VERSUS PRODIGY.
Whatever fake incident you're referring to with the "Prodigy accounts with a credit card generator" the obvious solution is that the people who are publishing the information "ruin[ing] someone's name" are liable, and not the innocent digital platform that is merely used to send that information. You should hope that imaginary credit card frauds are stupid enough to use Prodigy or its ilk (which can be subpoenaed) rather than just distributing it to the Russian mafia via some black market program hosted entirely in Cyprus, since Prodigy can at least be subpoenaed here in the United States. But that will only happen if 230 exists.
Lunney v. Prodigy Services
Prodigy was not held responsible for defamation because it didn't bother moderating. This ruling led to companies having an incentive not to moderate. As a result CDA 230 happened to correct that. If CDA 230 goes expect this ruling not to be how things will be. They'll fix it again. The law can simply class interactive websites as media companies. Facebook and others know this which is why they are moving business models to a more private experience. Things like email and text are probably going to be able to exist as is.
An innocent platform makes sure it knows who is using its service and by doing so they dodge regulation. We don't have to like that. It's how it is and we shouldn't deny that reality.
If I use my phone to commit a crime I'm going to get caught. If I use my phone and a third-party product to hide my number my phone company is innocent. If I use my ISP to commit a crime I'm going to get caught. If I use a third party product to hide my IP my ISP is innocent. If I use my ISP to create a fake facebook account to commit crimes and hide my IP my ISP is innocent. I can't anonymously sign up with my ISP. I can do that with facebook. The design enables it. ISP and phone companies avoided that possibility.
You're a fucking idiot. The Lunney opinion WHICH HAS NOTHING TO DO WITH 230 came out in 1999. Section 230 was enacted in 1996. The court, in Lunney, never addressed 230 because it didn't have to. Prodigy couldn't be held liable under either current law or pre-230 law BECAUSE it didn't moderate. 230 was enacted to give companies an incentive to moderate. The pre-230 case you're looking for is Cubby v. CompuServe. But distributors (like Prodigy and Barnes and Noble) have never been liable for mere distribution.
You’re a god damn dangerous moron.
Doxxing.
I'm having trouble seeing what novelty has to do with it. Shouldn't Craftsman or Stanley be liable if someone uses a hammer they manufactured to bludgeon somebody? The fact that the basics of the design – a hard, heavy weight at the end of a stick – have been known literally since the Stone Age shouldn't allow them to escape liability for the foreseeable misuse of their product!
/PoesLawMode
"Artificial intelligence won't solve this problem."
There is no problem to solve. Even if you assume that Twitter and Facebook and other social media platforms are disadvantaging conservative views, there are platforms available that don't limit views at all. GAB is one. 4chan is another. If you want a place that doesn't limit the views of conservatives, go to Redstate. Or this website. The "problem" you're trying to solve is that there is a private group out there that doesn't want to hear you, and you insist they must. And now you're suggesting that the law be changed so you get your way.
We're acting like any of this has to exist. It doesn't.
Back in the 90's I had a website and I wrote things on it. Some people viewed it. I'm not an important person and I didn't have anything important to say. As a result, no one cared about it but me. I'm sure if I was important or if I was saying something useful more people would have read my website. Or if I paid for advertising.
We have a right to voice our opinions. Last time I checked, we don't have a right to be heard, and the legalities around anonymous speech haven't been fully tested. On the internet, anonymous speech flies directly in the face of the laws that allow civil litigation; it prevents it.
If you own the website you should be responsible for what's on it. To claim that someone should be immune for running a public-facing website that allows anyone to post is moronic. This means I can own a website, allow others to post, retain no data, create my own posts, claim it's not me, and thereby remove all liability from myself. I can use this, as well as exploit the CDA 230 immunity of others, to amplify my message.
I can impersonate anyone and be quite visible as I do it, but be invisible at the same time. Poor business model.
A telecom company generally connects one person to another and the communications stay between those two parties unless someone chooses to broadcast them. An interactive computer service connects one person to anyone on the internet. The latter should require liability.
Requiring identity verification of users in order to retain CDA 230 immunity could wind up being unconstitutional because it conditions a privilege. Not certain about that, but it could present a problem.
Yanking 230 and saying that if it's on your property you should be liable is probably the best way to go. Everyone who posts on a website is a guest of that site. They were invited onto the property by the owner. Fox News and CNN don't just let anyone onto their broadcasts, and they use a live delay. They own the network. They're responsible for what happens on it. That's the way it should be.
The solution is to own your own network, site, property, etc. No one sees it? Probably because no one cares about you. Sorry.
You lose millions because your business can't function anymore? If you had a better business model you would have survived. Designing your business around a law that says you're not responsible for the horrible shit that happens as a result of your business was not a good long-term strategy. The proper thing to do would be to design a business where horrible shit can't happen rather than one that has it occur and a single law removes liability.
"We're going to create a platform where anyone can broadcast live in front of millions. The lawyer asked what happens if someone shoots up a bunch of muslims, shows a rape, or staples their penis to a tree in a playground in front of kids, but we said we're cool because of CDA 230".
As you can see it's not a good business plan if you think two steps ahead.
"If you own the website you should be responsible for what’s on it."
This sentence alone explains why you're missing the point. Before 230, people who owned websites were not responsible for what was on them, because people who owned websites were distributors of content, not publishers of content. Do you think a bookstore should be liable if it sells a book with defamation in it? Do you expect every bookstore owner in the country to read every book it sells and fact-check the claims in it, to ensure there is no defamatory material, prior to selling? No. That's never been the law.
What happened is that people started hosting websites and deleting content the website owners didn't want. Some plucky plaintiff's lawyer argued this transformed the website owner from a distributor/bookstore into the actual publisher of the third-party content. It would be as if a bookstore that refuses to carry Book A because it is defamatory became immediately liable for all defamatory material in any book in its possession, because something about doing a half-assed job weeding out defamatory content makes you liable for all defamatory content.
That was an idiotic result, and 230 fixed it.
"No one sees it?"
Removal of Section 230 would not mean "No one sees it." It would just mean "No one edits it." Digital platforms that host third-party content would remain free and non-liable so long as they never exercise any role in moderating content. So we'll get 4chan and GAB and that will be it. Fucking wonderful.
I don't think that would be a bad outcome, and I wish for that reason alone that Section 230 had never been written. Instead of modifying the content and becoming "publishers" with faux accountability, websites would have introduced better moderation, and meta-moderation, and the noise would have disappeared fast enough not to be a problem.
Some websites already do this, but not many, and not well, because they have the 230 out. I'd rather markets had been left alone instead of this sorry-ass 230 subsidy we sort of have.
I also realize I am somewhat alone in this regard ....
"...web sites would have introduced better moderation, and meta-moderation, and the noise would have disappeared fast enough to not be a problem."
230 was intended to encourage moderation. I don't know why you think there would be more moderation in a world without 230. The problem was that the market wanted immunity, but a government regulation (pointlessly extended tort liability) risked preventing consumers from getting to choose more moderated platforms, because those platforms wouldn't exist.
From a consumer choice standpoint, we know we will get unmoderated platforms regardless of 230, since we had them before, and we still have them. The issue is whether we will get moderated platforms without 230. And we won't.
I totally get that. The law case that predated 230 was a fuck up and 230 was made to correct that.
Just have the owner be liable. Quick fix. Change interactive computer service to something similar to a media company.
If that's the case then the owner isn't going to allow anything that could get them sued. If someone wants to post that they can get their own site and be held responsible.
That's better than a law requiring all posters to identify who they are.
Your ability to comment on the internet, even in threads like this, isn't necessary, and making you get your own website doesn't hinder free speech at all.
You don't get anything. You're a fucking idiot. If the "owner [is] liable" there will never be digital platforms like the one you are posting on, right now. This consumer good that you enjoy for free will go away. I'm not going to argue with people on a platform that can only exist due to the thing you are arguing against.
WTF do you think it means for "the owner isn't going to allow anything that could get them sued," anyway? Do you think Reason.com or the Volokh editors read your posts before they go up? And if they can't moderate content, do you think anyone will read your comments between the 8,000 posts of spam from a porn site, posted by a robot, which the editors cannot remove for fear that doing so will expose them to defamation liability for any post ever put on their website? Grow up.
" If the “owner [is] liable” there will never be digital platforms like the one you are posting on, right now."
I totally, 100% understand that. That is exactly what I am getting at. If it is removed, I can make my own website. No one is stopping me from speaking. If someone created a business where you walked into a booth with a hole and could get your dick sucked, and it was shut down for whatever reason, I would just get my dick sucked elsewhere. No one is stopping me from receiving fellatio.
If 230 is removed, I imagine something else will go into place. The Prodigy case was a poor legal decision that made non-moderation the incentive. As a result, they fixed that. If they pull 230, they'll fix it again. Either by stating you have to verify users or by saying the company is liable.
YOU DON'T EVEN KNOW WHICH PRODIGY CASE IS RELEVANT. You fucking literally have the purpose of 230 backwards. 230 ENCOURAGES moderation. If 230 is pulled, no one will "verify users" (whatever you think this means?????).
Do your teeth hurt?
I'm generally in agreement with/sympathetic to your arguments on this subject, but I think you're being cavalier here.
Yes, distributors have not been susceptible to publisher liability, but the notion that websites are to be treated as distributors rather than publishers is not the black-letter law you portray it to be. Even in a world with § 230, plaintiffs' lawyers have spent many lazy afternoons thinking up ways to get courts to impose tort liability on websites for user-provided content. Maybe most of them would have lost even w/o § 230, but (a) remember that § 230 is immunity rather than a mere defense; it allows for quick dismissals that would otherwise require full-blown, expensive litigation; and (b) those lawyers only need to win once, not every time, to destroy a business. (Ask Nick Denton.)
For purely passive websites such as message boards, perhaps there wasn't much risk of publisher liability being imposed. But something much more interactive, like Facebook? I don't think a prudent lawyer would have advised anyone to invest money in Facebook's business model without § 230.
"We need to armor ourselves against such tactics, not facilitate them."
It's not even clear what you are proposing we do, besides arbitrarily punish domestic companies for complying with foreign laws. This is just throwing rocks at our own harbors bullshit. It does not fall on Facebook or Google to solve Chinese statism. And it can't be the case that the solution to Chinese statism is for Americans to replicate it domestically.
The harder and more urgent Section 230 problem is what to do about Silicon Valley's newfound enthusiasm for censoring users whose views it disapproves of.
How is this a problem? No one is required to use YouTube or Facebook or whatever.
"No one is required to use YouTube or Facebook or whatever."
Tell that to the masses. They don't know how to live without it. Ever be in a place with no internet service and see how uneasy people get?
Yes, let's take this insanely popular product that is functionally free, and which has brought unimaginably more joy to "the masses" than anything you or your miserable family will ever produce, and... kill it? Why? For spite? What's your fucking problem?
"The sweeping broadcast regulatory regime that reached its peak in the 1950s was designed to prevent a few rich people from using technology to seize control of the national conversation, and it worked."
Heavens, thank goodness it all worked. I guess we can all rest easy that the democratizing effects of massive regulation have solved the problem of rich people using technology to seize control of the national conversation.
Social media is not government. The ability of social media to censor speech is not a violation of free speech. It is an exercise of the right of the owners of a social media network, or any communications network, to publish the speech that they choose to publish.
The author states that he is a conservative. Consider the Wall Street Journal. Its editorial pages are filled daily with the most ultra-conservative rants, and they publish distorted views of any policy, or proponents of a policy, with which they disagree. Those so attacked have no right to respond, and the WSJ gives them no opportunity.
And yet I see nothing in this post about forcing the WSJ to act fairly, to give opponents of their editorial position a say. And that is correct; it is their paper. And the fact that they do not offer the opposing views space does not violate any free speech laws. The opponents can speak as long as they like and publish their own responses, free of any constraints from the WSJ.
One strongly suspects the concern here is not with free speech, but that the author considers policing of speech by the operators of social media to be biased against conservatives. Which it appears it is not, of course.
"In particular, why should they be immune from liability for utterly predictable criminal use of warrant-proof encryption? "
Here we go.
Baker, no you are not a conservative--at least, not one in any recognizable American tradition of conservatism. You're a statist, and a much bigger danger to the republic than many of those whose use of encryption you worry about.
Encryption has been a bogeyman to politicians for basically all of human history (governments are universally perturbed at the idea that their citizens can keep secrets from them), so I'm gonna call "No True Scotsman" here. Every political "tradition", of which "conservatism" is one, has been against encryption.
Every political “tradition”, of which “conservatism” is one, has been against encryption.
Go tell that to "Publius".
Disambiguation needed. I don't know if you're referring to some random Roman, or the Federalist Papers.
That said, I'm gonna hedge my bets and suggest you're playing "No True Scotsman".
Either way though, I'm not interested in the larger point ("every political tradition"), and will instead narrow the scope of my statement to "every major American political tradition".
Get rid of Section 230, and what you get is more censorship, more gatekeepers, less speech, less dialogue.
If you think going after Section 230 is going to stop "deplatforming", you're dead wrong. It would be the biggest "deplatforming" in the history of the internet.
The only reason there would be less speech is that people would choose not to post online. People were always able to buy their own domain. Keeping platforms free of liability so they can make money off people not having to buy their own websites is stupid. The other fix is to regulate what data on each poster is required. Better to have people buy their own websites.
Try signing up for a cell phone or an ISP without providing your name, address, DOB, and SSN. That's not required by law, but they did it to avoid regulation.
Nope.
There would be less because most places, for example, Reason, would just drop their comment sections rather than be liable for what commenters said.
Start-ups built on user-generated content? They wouldn't start up. Everything would have to be pre-screened.
We'd basically go back to the "users" only interacting in the "reader letters" sections of magazines, and never interacting directly except in private forums.
"There would be less because most places, for example, Reason, would just drop their comment sections rather then be liable for what commentators said."
Suppose 230 is repealed. Couldn't Reason.com avoid liability even under pre-230 law by just not moderating comments at all?
Which will work fine... right up until some idiot posts links to child porn and snuff fics again.
Have you read NToJ’s other analysis? No liability for what has become merely a digital platform.
Such a stupid example. A book store can't sell child porn. 230 isn't about child porn. It is about not holding distributors liable for third-party content just because the distributor moderates (including moderating child porn, which is what we all want them to do).
I'm less worried about that since child porn stuff is not frequently posted here. What I'm more worried about is that if Reason.com or Volokh decide it makes no sense to ever moderate comments, this place will get inundated with spam from advertisers for adult porn. And then we wouldn't be able to have this discussion at all, except on 4chan.
"There would be less because most places, for example, Reason, would just drop their comment sections rather then be liable for what commentators said."
Yeah, and you'd probably CHOOSE not to have a website where you write your comments about the stories. That's on you, then. People thinking that their speech has to be provided by private companies is why everyone has their panties in a twist claiming the 1A, when in reality your ability to say what you want has always been there and no one is taking it away. My point still stands. It would be you choosing not to speak, for whatever reason. You still can.
Start-up with user-generated content? Probably best to screen it. Otherwise, that's a shitty business model/idea. Similar to metal-tipped lawn darts. It doesn't take a genius to see the troubles that could happen.
When you have users and don't identify who they are, you have a product that can be used nefariously. No one can deny knowing that now. Your business plan, then, sucks.
Verizon, AT&T, Sprint, Cox, etc. all have a good business plan that protects them and their users. The goal is not to have a law preventing people from suing you, but to create a product where people have no need to sue you. In the event that something happens and police come at you with a warrant, you hand over the user's name, DOB, SSN, address, and entire usage history. You avoid even the desire for a lawsuit because of how much you helped the affected party in their case.
You don't want to give more people that much identifying info? I don't blame you. This is the only site I post on. If it went my life would not be hindered nor would my speech be stopped.
"...and what you get is more censorship, more gatekeepers, less speech, less dialogue."
You may get less dialogue. May not. You'd certainly get less censorship. Why do you think repealing 230 would result in more censorship?
Currently Reason has pretty lax moderation. You have to post some really awful stuff before they'll step in.
If Reason can get sued for the libel/slander that users post here, do you think they're going to censor less or more?
How 'bout Facebook? Currently it "censors" to protect its reputation. It takes down things that it doesn't want to be associated with its site. If it could be sued when a dude goes on a 13-post slanderous tirade about his ex-wife, do you think they're going to censor more or less?
Adding liability to companies encourages risk-aversion in those companies. Risk-aversion, in the case of hosting user-generated content, means rejecting more, not less, user-generated content.
Sites that don't remove things to protect their reputation don't do well. 4chan is dead weight. It's not profitable. Who the fuck of any value wants to advertise there? They can only accept crypto from users as a result of their reputation. Same with every other site with similar content. They don't make money.
I think what you want is government-owned social media, because otherwise we're forcing private businesses to do things like bake cakes they don't want to bake.
If a website wants to write a script that turns every use of the acronym "LGBT" into "gay-ass homo," then I think they should be able to (a trivial script; see the sketch below). I also think they'll lose money as a result, so it's probably a bad idea to do that.
Facebook has learned that what you desire makes them less money. As a result you shouldn't rely on businesses for your speech.
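To make that concrete: the kind of rewrite script imagined above is trivial to write. Here is a minimal sketch in Python, with a placeholder substitution map standing in for whatever words a site owner might actually pick; nothing here is any real platform's API.

    # A trivial sketch of a site-side rewrite script like the one imagined above.
    # The substitution map is a placeholder; the actual words are the owner's choice.
    def rewrite(post: str, substitutions: dict[str, str]) -> str:
        for old, new in substitutions.items():
            post = post.replace(old, new)
        return post

    # Example: a site-chosen mapping applied to a user comment before display.
    print(rewrite("Support LGBT rights", {"LGBT": "[site-chosen phrase]"}))

The point being that such rewriting is purely an editorial choice, and it is the market, not the law, that punishes it.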
"Currently Reason has pretty lax moderation. You have to post some really awful stuff before they’ll step in."
None of which has anything to do with 230, since Reason.com enjoys immunity, at least right now.
"If Reason can get sued for the libel/slander that users post here, do you think they’re going to censor less or more?"
It's a fucking dumb question. If 230 is repealed, the only way Reason can get sued is if they moderate (or "censor"). If they don't moderate, they are still distributors and are not subject to liability. So my response is: less!
"If it could be sued when a dude goes on a 13-post slanderous tirade about his ex-wife, do you think they’re going to censor more or less?"
Less, so long as we understand that Facebook's liability is not for allowing the idiot to post 13 slanderous tirades about his ex-wife, but that it might be held liable if it deletes 1 of the 13 slanderous tirades rather than zero of the 13 slanderous tirades.
"Adding liability to companies encourages risk-aversion in those companies."
Unless the liability--as it was pre-230--is for moderating comments. If you impose liability on a company for moderating, why do you think they will moderate more? The safest option is to just stop moderating.
Again, NToJ, I disagree with your premise. You keep stating this as though it's a doctrine handed down at Sinai (or at least by SCOTUS). Who says that without moderation they are distributors and not subject to liability? You may think that's the best application of the common law of defamation;¹ you may be correct that most courts would have held that way. But you can't possibly claim that this is an unquestionable truth about the law, that no trial-lawyer-friendly court could possibly have held that some websites were more analogous to publishers than to distributors. (Again: even with § 230 we see all sorts of clever pleading about how the website owner helped "develop" the content in some way. Sometimes it even works: Roommates.com.)
Moreover, even under that traditional approach, distributors were liable if they knew, or should have known, that the material they were distributing was defamatory. So all it would take was an allegation that the website owner was on notice, and voila: potential liability.
¹I say that because the context we're talking about is typically defamation. But of course aggressive lawyers have come up with all sorts of other theories of liability to hold websites liable for user-provided content. Why isn't Tinder liable for inadequately screening its users if someone is killed by someone who they met on the site? The distributor/publisher dichotomy isn't the answer.
We agree. I shouldn't speak confidently about what 50 state courts will do. I'm obviously a supporter of 230 immunity for precisely that reason.
In the context of digital platforms that allow third-party content to be posted, my belief is that if 230 is repealed, more digital platform providers will behave like CompuServe rather than Prodigy did in the pre-230 environment, since it will give them a better chance of avoiding liability. The non-moderation will put the platform in the best position to claim Smith v. California status as a distributor rather than a publisher.
Part of the reason I don't see as much potential liability is that I live in a jurisdiction with aggressive duty rules limiting tort liability for defendants accused of merely failing to prevent another person from harming the plaintiff. I understand that the purpose of 230 is to protect people from all potential jurisdictions, not just the ones that get things right (in my view).
The simplest way to fix this 'problem' is to only have one federal regulation: no unlimited internet access.
If everyone actually paid for each and every thing they did online, there would be a lot less noise to have to evaluate.
Since we are fine with requiring a full background check for the second amendment, 'common sense web control' could require a full background check and use of only your real name as your online identifier for the first amendment.
"Since we are fine with requiring a full background check for the second amendment..."
Who is this "we" you are referring to here? You, Nanny Bloomberg, and his paid minions?
You could probably do this, and it would work. You have users sign up for auto-payments through a credit card or PayPal and only allow accounts that use the same information as the credit card/PayPal account. People may not like that, but who cares? They can hang out in the locker room at the gym instead of using Grindr. (The gym also has their name and credit card info. A lot have a photo on file to make sure it is the customer going in and not someone else. It's good to get out of the house, though.)
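A minimal sketch of that signup check, in Python, assuming hypothetical field names; no real payment processor's API is used or implied:

    # Hypothetical sketch: approve only accounts whose signup name matches the
    # name on the payment instrument, so every account maps to a billable identity.
    def normalize(name: str) -> str:
        # Case- and whitespace-insensitive comparison.
        return " ".join(name.lower().split())

    def approve_signup(signup_name: str, cardholder_name: str) -> bool:
        return normalize(signup_name) == normalize(cardholder_name)

    print(approve_signup("John Q. Public", "JOHN  q. public"))  # True
    print(approve_signup("xX_Anon_Xx", "John Q. Public"))       # False

A real check would lean on the processor's own verification (address verification and the like) rather than string matching, but the gatekeeping idea is the same.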
Maybe just eliminate the second half of 230, the part that gives companies a pass on censorship.
So you'd require YouTube to leave up profanity-laden screeds in the comment sections of My Little Pony videos?
Why is this person allowed to blog on Volokh? Just another statist promoting myths about the need for more policing and more government control. But worse than that, he doesn't even seem to understand the relevant law, despite other Volokh contributors like Orin Kerr having recently explained aspects of it.
"In particular, why should they be immune from liability for utterly predictable criminal use of warrant-proof encryption?"
I don't know. Why should the post office be immune from liability for utterly predictable criminal use of the mail system? Because the mail carrier has nothing to do with that criminal abuse of the system, maybe? It's entirely analogous.
"Immunity from tort liability is a subsidy"
Not really in this case. Because they *shouldn't be liable even without section 230*. Making that immunity explicit just means they don't have to bear much of the cost of troll lawsuits. It's like arguing SLAPP laws are a subsidy. No, they're reining in nuisances who abuse the legal system. Section 230's immunity is far better understood as 'there should be no legal theory which holds companies liable for the independent expression of their users,' and is thus a blanket prohibition on any such legal theory.
"jurisdictions as similar to ours as the United Kingdom and the European Union"
Neither of which have strong free speech protections like the US does, so irrelevant. Also, they're in the wrong here.
"In the case of warrant-proof encryption, the justifications are thin. Section 230 allows tech companies to capture all the profits to be made from encrypting their services while exempting them from the costs they are imposing on underfunded police forces and victims of crime."
There's so much wrong with this.
On the one hand, warrant-proof encryption is nothing new. It existed when the country was founded. Examples appeared in the Burr trial. Gentlemen regularly cyphered their correspondence in the 18th century.
On the other hand, if encryption isn't strong, it might as well not be encryption. The whole point is to stop bad actors from reading your messages. If the US government can read it, you can be pretty sure North Korea, China, and Russia can all read it. And the US government is itself frequently a bad actor - something Volokh and Reason expose constantly.
On the third (!) hand, 'warrant-proof' is a misnomer. No warrant is needed to decrypt transmissions if you can crack the encryption. A warrant for access to an encrypted message is a directive to a person to decrypt the message for police. That is just as effective (or ineffective, if the person refuses to comply and serves time for contempt instead) no matter what kind of encryption exists, because it has nothing to do with the strength of the encryption, only with who can be ordered to decrypt (a sketch of this point follows after this list). Mr. Baker's failure to even understand the legal issues involved is appalling given the nature of this blog.
On the fourth (!!!) hand, the right to have your mail not read by third parties is pretty fundamental. It's a federal crime to open someone else's mail for a reason. Encryption is simply a way to secure this right. Failure to acknowledge this by Mr. Baker means his argument is made in bad faith.
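To illustrate the third-hand point, here is a minimal sketch using Python's real `cryptography` package; the scenario, a platform that relays ciphertext but never holds the key, is assumed for illustration:

    # The sender generates and keeps the key; the platform only ever sees ciphertext.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()                        # lives on the sender's device
    ciphertext = Fernet(key).encrypt(b"meet at noon")

    # Served with a warrant, the platform can hand over `ciphertext`, but holding
    # no key, it cannot decrypt it. Only the keyholder can:
    assert Fernet(key).decrypt(ciphertext) == b"meet at noon"

The warrant's effectiveness turns entirely on who can be ordered to produce the key, not on the cipher's strength.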
"Many believe that the security value of unbreakable encryption outweighs the cost to crime victims and law enforcement. Maybe so. But why leave the weighing of those costs to the blunt force and posturing of political debate?"
Because you shouldn't need to pay to vindicate a right.
Mr. Baker's solution is not just to leave it up to private companies, but to aggressively encourage a theory that the company is victimizing people by protecting the contents of people's communications. That latter theory is intellectually bankrupt, and promoting it leads to only one outcome - the lack of strong encryption, from which only criminals (like identity thieves) and government tyrants benefit.
"Today, any government but ours is free to order a US company to suppress the speech of Americans the government doesn't like."
The correct response here is to declare such orders to US companies unlawful and forbid their enforcement. Let the companies decide if they care to comply or not, but don't let them be forced into compliance.
The second correct response for people who don't like restrictions on speech content with a given platform is *create new platforms* and let competition decide how speech is distributed. After all, free speech is a right, but so is being able to choose whether or not to listen. Vote with your feet and wallets. Asking the government to force other people to make the decisions you want is evil.
------
But the bottom line here is ultimately that it's not clear why Section 230 is central to the discussion of either encryption or supposed social media monopolies. The fact of the matter is, people who say things online are the only ones who are responsible for that speech. The desire to go after platforms for what unaffiliated individuals say on those platforms is a clear lawyer special interest, because those platforms have deep pockets for lawsuits (and the individuals generally don't). All the smoke and mirrors Mr. Baker throws up are just that, smoke and mirrors, hiding a nakedly selfish desire to get bigger lawsuit settlements. This is everything that's wrong with the legal profession in a nutshell, and why the average person hates lawyers as a profession.
Well done.
His posts result in the highest quality comment threads here these days.
I am in total agreement (good analysis).
"Why should the post office be immune to liability from utterly predictable criminal use of the mail system?"
Probably not a good analogy.
The post office is allowed to open and X-ray your mail. Everything the post office is allowed to do, and does, makes doing anything illegal through the US Postal Service one of the dumbest mistakes you can make, and the penalties are insane. The system makes it incredibly easy to catch the sender and the receiver once something illegal is discovered. Then throw in all the history between the two addresses, and it's an open-and-shut court case.
When that package is dropped off and inspectors watch the person retrieve it, that person can't then say they didn't receive the package. The person mailing it is on video as well.
Are you retarded?
He can't read.
So there are plenty of factual problems and other shortcomings with your argument. For example, the penalties for criminal use of email (or other internet services) aren't exactly small potatoes. Getting caught distributing child pornography or impersonating a Nigerian prince will land you in prison. (See also: mailing stuff anonymously is actually pretty easy, and so forth.)
But the bottom line has nothing to do with how much trouble you're in when caught, or how likely you are to get caught. It's about liability.
The post office doesn't catch all criminal missives. Even bombs and anthrax have made it through the mail. Some criminal missives are literally just text (various fraud attempts, blackmail, etc...). In no case where the post office has merely transmitted such criminal items has the post office itself ever been held liable, nor any of its employees. And that's the bottom line, and why it's a perfect analogy.
Yeah, Baker is always trying to shoehorn various topics into a discussion of why the government needs to spy on everyone. It's really bizarre. What's this got to do with defamation and Section 230? Not much, it seems, other than seizing on a political moment that is somewhat adverse to tech companies as an opportunity to push a divergent surveillance agenda.
"Because they *shouldn’t be liable even without section 230*"
Right. As I've commented before, Section 230 is intended to merely clarify that where a technology is used by an unrelated speaker/publisher, the provider of that technology shouldn't be treated as the speaker/publisher merely because they provided the technology. The same principle would have held in the 18th century, I would think, in that if someone manufactured a printing press, or even owned one and leased it to someone else to use for the day, that doesn't make them the speaker/publisher.
But on the other hand, there is a hypothetical line-drawing problem, and Section 230 is pretty much completely unhelpful in addressing it. I don't think it's materialized very much in reality, but I offer one example and one hypo:
1. YouTube, a service that heavily curates and censors the content distributed on its "platform," pays certain content providers enormous sums for producing content. At what point are they partnering with certain content providers such that they are lumped in with the speaker/publisher? What if they front some money for new video equipment and offer some production consulting? Etc.
2. What if the New York Times went online-only and made all of its "content providers" independent contractors with less oversight? As I understand it some places like Buzzfeed, Forbes, and Huffpo have done things like this. At what point would they be on par with YouTube?
So, while I am fine with Section 230 for the time being, there are unresolved issues.
You've identified a gap, but it's one that existed before 230 was enacted, and there's plenty of case law addressing it. See FTC v. LeadClick Media, 838 F.3d 158 (2d Cir. 2016). In that case, the defendant was an information content provider (and therefore not immune) because it recruited affiliates to advertise products online, knowing that fake news sites were part of that sort of industry. It specifically edited content (not just made decisions about what to include or remove) on affiliate pages to avoid the really big frauds, while still maintaining that their weight-loss claims were plausible. Put together, the defendant's "role in managing the affiliate network far exceeded that of neutral assistance." To your examples:
1. YouTube is probably going to have liability for content it pays people to produce. Their potential exposure will increase to the extent YouTube exercises editorial control over the specific content (assuming that the editorial decisions are the ones leading to liability).
2. Existing case law will hold an information content provider liable for its contractor's work, depending on the provider's involvement in editing the actual content. Buzzfeed editing articles from its independent contractors and publishing them under the Buzzfeed heading is going to make Buzzfeed liable for that content, if defamatory, etc.
The issues you raise are necessarily fact-intensive, and I don't think it helps to leave the decision to Congress. Even if this was a problem that needed central planning, they just aren't trustworthy enough. I'd prefer Congress to limit its role to dialing back aberrant, bizarre judicial decisions, rather than building the framework from the ground up.
Interesting. I think the potential for earning a few bucks in ad revenue sharing is, for the most part, widely available to anyone on YouTube. Also hadn't thought of implications beyond defamation such as advertising laws.
"I don’t think it helps to leave the decision to Congress. Even if this was a problem that needed central planning, they just aren’t trustworthy enough. I’d prefer Congress’s role to limiting itself to dialing back aberrant, bizarre judicial decisions, rather than building the framework from the ground up."
I agree. That's why I'm not necessarily a big fan of Section 230. I don't know that it's really led to any bad results yet. But just looking at the text of it, it doesn't seem useful. It may either go too far, or it's vague enough that it doesn't necessarily go too far but doesn't do much either.
For example, the infamous (c)(2)(A) clause -- what is even the point of this? Others have zeroed in on "good faith" but I'm looking at "otherwise objectionable" and thinking -- isn't everything conceivably objectionable to someone? If so most of the words in (c)(2) are superfluous.
"Second, surely everyone can agree that foreign governments and billionaires shouldn't play a role in deciding what Americans can say to each other."
What does this have to do with anything? Foreign governments and billionaires today don't play any role in deciding what I can say to other people. I just spoke to several people and neither Mark Zuckerberg nor China stopped me.
What we should all be able to agree on is that people who run digital platforms should play a role in deciding how their digital platforms are used, and that consumers should remain free to select the digital platforms based on a market that is free.
Thanks to NToJ for succinctly revealing, in multiple posts, the numerous flaws in this idiotic post.
"What should we do about section 230?"
Not a damn thing.
"The harder and more urgent Section 230 problem is what to do about Silicon Valley's newfound enthusiasm for censoring users whose views it disapproves of."
The solution for people who do not want other people censoring their choice of expression is to stop trying to use other people's computer systems and equipment to MAKE their choice of expression. If you don't like the terms YouTube imposes, don't use YouTube. Build your own site, and operate it the way you prefer. Or just do without. Or bite the fucking bullet, and accept that when other people let you use their computer equipment, they get a say in how you use it.
The fundamental problem with Section 230 is that it violates the 1st Amendment. It specifically provides tort immunity for "the provider" for suppressing content which it finds objectionable "whether or not such material is constitutionally protected." By permitting the suppression of constitutionally protected material, the US government itself is a party to that suppression. The constitutional violation is by the government itself.
By taking it upon itself to censor constitutionally protected material which it, in its sole judgment, finds to be objectionable, the provider (Facebook, Twitter, whoever) has in fact assumed the role of author of the material which it does permit to be published on its site. It therefore should bear exactly the same liability for defamatory posts as does the person who actually posted it. There is no justification for the government providing blanket immunity for such publications.
Section 230 was intended to protect website owners who are neutral hosts of others' posts. It has become something else entirely. The solution is straightforward: eliminate immunity for sites which censor any content that is not objectively criminal (child pornography, etc.). Otherwise, if a site chooses to act as a moderator of the content which it permits to be published on its site (as all do these days), it should bear legal responsibility for everything which it chooses to allow there.
I think you should die in a fire.
False. Hint: things you read in Breitbart are pretty much always the opposite of true. Section 230 was intended specifically to protect website owners that are not neutral hosts of others' posts.
How can it be a 1st amendment violation? Platforms are not the government.
Only government can violate the 1st amendment by censoring speech. Private individuals and companies are allowed to censor whatever speech they want in venues they own.
Well, it SOUNDED good when he thought it up.
What shall we do with Section 230
What shall we do with Section 230
What shall we do with Section 230
Early in the morning?
Way, hay, and up she rises
Way, hay, and up she rises
Way, hay, and up she rises
Early in the morning
https://www.youtube.com/watch?v=qGyPuey-1Jw
Arrh, ration my rum!
"In particular, why should they be immune from liability for utterly predictable criminal use of warrant-proof encryption?"
How's about because a perfect back door isn't possible and it's fundamentally wrong to mandate sacrificing everyone's security to go after a small number of criminals that could almost always be caught in other ways?
You're either ignorant or intellectually dishonest about the consequences of banning secure communications, or about the feasibility of a back door that couldn't be used by hostile parties. I think it's the latter, since you acknowledge the risks and pretend to want the issue weighed, but politicians passing a law imposing crippling liability for not destroying security shows you already have the answer, and you have no fucks to give about any consequence of putting everyone under a police state for the illusion of security.
Section 230 is indeed a subsidy. It's basically free, but it is a subsidy. And I think we should keep it. It allows discourse like this comments section.
The only amendment needed is a "written justification" amendment. Entities claiming Section 230 protection from libel should have to provide written decisions justifying their censorship choices, citing specific sections of the EULA. This should also create a cause of action in federal court if a person disagrees with the decision, with a classic fee-shifting provision similar to anti-SLAPP legislation.
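For what it's worth, the record such an amendment contemplates is easy to structure. A minimal sketch in Python, with every field name invented for illustration:

    # Hypothetical structure for the proposed "written justification" record.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ModerationDecision:
        post_id: str
        action: str                # e.g. "removed", "demonetized", "labeled"
        eula_sections: list[str]   # the specific EULA sections relied on
        rationale: str             # the written justification itself
        decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    decision = ModerationDecision(
        post_id="abc123",
        action="removed",
        eula_sections=["4.2 Harassment"],
        rationale="Post impersonated a named individual.",
    )

Whether compelling such records would itself raise constitutional problems is exactly what the replies below take up.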
It's worrisome that there are Americans who think digital platforms have to justify their existence to the government through written decisions. But, what exactly would the written decisions say?
Suppose Redstate wants to censor liberal views, or comments supporting Democratic candidates. Why shouldn't it be allowed to do that? And if it should be allowed to do that, why do you think it needs to submit that basis in writing to Congress?
If Section 230 were repealed, digital platforms would still be free to censor. And Twitter would still not be the publisher of its users' tweets.
If Section 230 were repealed, digital platforms would still be free to censor, but if they make any mistake, and let a defamatory statement stay up, they risk liability for it.
Safer to just stop accepting user content, i.e., comments, entirely.
The online site associated with my former hometown newspaper recently decided to drop user comments, and that was with NO risk of legal liability. Obviously, if you added the legal risk back, they'd rush to re-implement user commentary...