The Volokh Conspiracy
"Does the Government Have the Right to Control Content Moderation Decisions?"
My conversation with Prof. Eric Goldman (Santa Clara) via the UCLA Institute for Technology, Law, and Policy.
I enjoyed it very much, and hope you do, too! Here's a rough summary:
As private entities, social media platforms are not bound by the First Amendment, and are free to permit—or block—content and users as they see fit; and 47 U.S.C. § 230 preempts any state statutes that would impose greater limits on such companies. That, at least, is the traditional view.
But some state legislatures are considering statutes that would ban viewpoint-based blocking by platforms; and some scholars are arguing that those laws might prevail, notwithstanding § 230. What are these theories? And what are their strengths and weaknesses?
I should note that my views on the subject are far from settled; I was trying to outline arguments that I think need to be considered, though I'm not certain what policy, ultimately, makes the most sense here.
Thanks to UCLA law student Leeza Arbatman for moderating, and to my colleague John Villasenor for organizing.
UPDATE: Eric has very helpfully posted his notes for the program (though we didn't get to all the questions he had anticipated); I hope to write something up on the subject, but likely not until early April.
Seems like the discussions have been one-sided. If you are provided a shield against liability, I don't think a caveat that in return you do not censor is too much to ask. That is, you are a platform that delivers regardless of viewpoint, just as a utility delivers electricity in the same manner to everyone. The utilities have exclusive rights of way in return.
But is that where this is going?
Note also how Congress considered Russian Facebook bots to be interference in the 2016 election. So why isn't the viewpoint censoring by FB, Twitter, and basically the whole big tech universe also election interference?
Indeed, the very large social media platforms are common carriers as much as FedEx and UPS are. As such, they should be treated as common carriers and be free of said liabilities.
"Indeed, the very large social media platforms are common carriers"
There's a decent argument that they should be, but no, under current law they are very much not common carriers.
You're correct, in law they are not, but as they have all the attributes of common carriers they should be deemed legally as such. The second sentence makes that intent clear.
It is probably going to be somewhat difficult to craft legislation that allows legitimate moderation but restricts viewpoint censorship.
Why not just retaliate against all tech companies with draconian privacy regulations which cripple their revenue streams from selling data? By making them much less profitable you inhibit their ability to afford to censor speech, and you reduce the barriers to entry for competitors.
You might provide for a right to challenge moderation decisions in small claims court, with a mandatory loser pays provision. That would tend to discourage frivolous challenges, but also frivolous moderation.
"Why not just retaliate against all tech companies with draconian privacy regulations which cripples their revenue streams from selling data? By making them much less profitable you inhibit their ability to afford to censor speech"
Don't think it would work out the way you hope. Cut back their profits, and they're going to get WAY more defensive of whatever they have left, and thus far more likely to censor minority viewpoints on any substantial topic, out of fear that their customers (you know, the people and organizations that are actually providing them with income) won't like those.
Should the electric company be allowed to shut off service to Republicans? It is a government sponsored monopoly, and a utility.
Same for these platforms. The government allowed their monopoly. They are quasi-governmental organizations.
Eh, maybe - but probably not.
Utilities are natural monopolies because the sunk cost of providing the final distribution (the power lines from the generator to your house) is so high that once you've built it, no sane business is going to try to set up a competing service. It would be economic suicide to build a competing distribution network in the same area. The same logic worked for wired telephones. That logic broke down for long-distance service because the local monopolies had to rent use of the next locality's network to complete the call - and once you established the principle of renting, we could disaggregate long-distance entirely and let in competitors who did nothing but LD without stepping on the toes of the local monopoly. Then cell phones came along and the disaggregation was complete. Phone service stopped being monopolistic while wire-delivered power still is.
Social media platforms do not have that same cost structure leading to local monopolies. The only "monopoly" power a social media company has is the network effect. MySpace is all the proof you need that network effect is not enough to protect a company from being out-competed.
Facebook, Google and Twitter may be bullies who are getting too big for their britches but they are not monopolies. Competitors exist and will keep the market honest.
It's worth noting, by the way, that the big tech companies are actually in favor of regulation. Even though it will tie their hands in some ways, they know that it will be far more burdensome on those upstart competitors.
Historically, it's been demonstrated that at least cable systems are NOT natural monopolies: Cities that don't grant monopolies end up with several. For example, my neighborhood in a suburb of Greenville, SC has 3 providers of fiber internet with penetration of 87-98%, and one cable internet provider. (All of them provide 'cable' TV service.)
In practice, the obstacle to cable competition was cable companies bribing local governments to grant them monopolies, not excessive cost of providing redundant service.
I'll grant you that having multiple power lines on those poles, too, might be a bit problematic. Might. I wouldn't categorically assume that it was economically infeasible until some communities had actually tried permitting it.
"
Should the electric company be allowed to shut off service to Republicans? It is a government sponsored monopoly, and a utility.
Same for these platforms. The government allowed their monopoly. They are quasi-governmental organizations."
Should Rush (and other conservative radio talk-show hosts) be allowed to just hang up on some liberal jackass who calls in to his show and starts spewing opinions that won't appeal to any of his audience of right-thinking Americans? Or do they get to hear the whole stupid rant because, after all, the radio show goes out over the public radio spectrum?
This is an easy fix. Platforms that moderate content ought to be required to "own" that content for all traditional liability reasons. You can't plead you are merely a communications platform if you are actively curating user-generated content. No more free swings for Big Tech.
Remove any statutory immunity and just let basic tort and contract law apply to those platforms. See how quickly they want to give up moderation when a torrent of lawsuits from every podunk jurisdiction starts flowing in.
If these private companies want to regulate speech on their platform that is fine. No more public protections though. Pretty easy.
" Platforms that moderate content ought to be required to “own” that content for all traditional liability reasons."
That was the state of the law prior to the passage of section 230 and exactly what it was intended to change.
The phone company is not liable if we plan a bank robbery by phone. I agree: carrier, or publisher, but not immune publisher.
Right. He’s arguing Section 230 was a bad idea and should be repealed. I think he has a point.
But not to the extent it did end up changing it, due to the non-enforcement of that "good faith" language. They've had a much freer hand at moderation than was intended.
Maybe. I'm not convinced the "good faith" language means anything remotely close to what you say you think it means.
It ought to be given at least SOME meaning; the language IS in the statute. Shutting down a video about the 10 Commandments on the basis of "violent content" because one of the commandments mentions murder, for instance, is pretty hard to characterize as good faith moderation. Or moderation actions that cite "TOS violations" without specifying what they are.
Well, with one explicit exemption. They should be able to remove any posts or comments that are actually illegal, such as child pornography. But they should cite the law causing the removal.
In addition, it would be helpful to the populace (although not the platforms) if they were forced to make the terms of service specific enough to cite the exact reason for each removal.
I did that by just reading the terms all the way through and deciding the service was not worth it.
But that strikes to the very problem that Sec 230 was intended to solve. Determining whether something is child porn is not as easy as looking at a metadata flag. It requires judgement and sometimes the full determination of a court. Platforms that tried to stay hands-off were prosecuted for "facilitating" the distribution of child porn. But platforms that tried to eliminate it were accused of censorship, sometimes for things that turned out not to be child porn. They were also getting in trouble for attempts to manage spam (admittedly, some of those attempts were pretty ham-handed) and trolling.
I'm all for holding social media companies accountable for their moderation decisions but it should be through the terms and conditions - civil suits taking them to task for failing to live up to the promises that they made to you as a consumer. Government intervention sounds good but will only make the problem worse.
The problem is that cases like Silk Road and Backpage have demonstrated that zero moderation is an untenable position. Your proposal would just make it so anything less than total moderation is similarly untenable. Facebook and Twitter (as well as every other forum and blog site) would become like New York Times letters to the editor; thousands get submitted, only a select few ever see the light of print.
"This is an easy fix. Platforms that moderate content ought to be required to 'own' that content for all traditional liability reasons."
Astounding stupidity. Why should they take ownership of content they didn't want and tried to get rid of?
Should a mommy blogger be able to remove lewd comments from her posts without being liable for the possible libel other commenters leave?
If "yes", then you're A-OK with "viewpoint discrimination in content moderation".
If "no", then you're saying that everyone that wants to set-up their own blog either needs to accept every disgusting, harassing thing someone wants to post on their own site, or not allow any reader interaction.
Individuals, grandmothers included, are liable for the contents of their posts under current law whether they edit comments or not. Nothing being proposed would change that. The only thing being discussed is whether the platform owner should also have some liability. Making the platform owners liable would likely give the grandmothers of this world some relief. People are much more likely to sue a Facebook or a Twitter than your grandmother. They have more money.
In addition, any new federal legislation can easily distinguish between an individual noncommercial user editing individual comments on individual posts and a large commercial platform owner denying service entirely. The Raich case, holding that the federal government’s commerce jurisdiction reaches to a potted plant in a home, suggests it may not have to make this distinction. But it can, and it has often done so in the past. If there is a situation where it would not be fair to apply portions of a new law to users as distinct from platform owners, the law could easily be written to distinguish the two.
But because grandma does have moderation powers over comments, she is the platform owner for her little corner of the blogosphere, and thus would be liable for comments if she chooses to remove any, regardless of her likelihood of being sued (and yes there are absolutely assholes out there who will sue grandma)
Individuals, grandmothers included, are not liable for the responses to their posts under current law whether they edit comments or not.
The changes y'all keep bandying about say that if hosts, grandmothers included, do any moderation, then they would be liable.
" Making the platform owners liable would likely give the grandmothers of this world some relief. People are much more likely to sue a Facebook or a Twitter than your grandmother. They have more money."
So, it's okay to give them liability for something they didn't write/create because they have money? Fuck the rich?
I'm going to go on a completely different tangent here -- what the dot-com crash of ~20 years ago did was make long distance telephone calls essentially free.
They had laid so much long-distance trunk fiber, and so increased the capacity of each fiber strand that the end result is that long distance telephone calls are now essentially free. There's no more waiting until after 11 pm for the discounted night rates nor businesses purchasing phone numbers in adjacent area codes.
I remember paying what would have been $3-$7 a *minute* in today's money for long distance calls, in addition to the $40/month (in today's money) that I was paying for the telephone service. I remember purchasing an additional service that permitted me to make in-state calls for what would be 26 cents per minute in today's money -- a major discount.
People tend to forget that the collapse of Worldcom led to essentially free long distance. And when Twatter and Farcebook inevitably crash, their technologies will enter the public domain as well.
That's a bizarre assertion you have at the end. What proprietary technology do Facebook and Twitter have? It's their network effects that matter here, I would have thought.
"That’s a bizarre assertion you have at the end."
this is Ed, so the only surprise is that the bizarre assertions didn't come 'til the end.
"What proprietary technology do Facebook and Twitter have? It’s their network effects that matter here, I would have thought."
Network effects are the largest element. The value in these companies comes from two things: 1) their ability to gather a large audience, and 2) their ability to monetize it.
The current value of the most profitable "Big Tech" companies comes largely from indirect monetization, from realizing that some previously-disregarded information has value and finding someone who'll pay for it.
Furthermore, in the crash you cite the fiber was actually there in the ground for other people to use.
But if Facebook goes away, who is to say what happens to their server-side source code that is not visible to the outside world?
Both are property, aren't they?
And don't forget that a lot of the fiber in the ground was dark fiber -- it doesn't work well without the tech at both ends.
I'm curious what you think is magical about Facebook's code.
There have been competitors to Facebook, Twitter, Instagram, YouTube and so-on. The reason those competitors haven't taken off is not because of technical challenges, but largely due to entrenched user-bases and word-of-mouth.
For that matter, there's enough churn in programmers in Silicon Valley† that if there was some special sauce in Facebook's code, the others would know about it already and have taken the useful bits.
So yeah, the hardware and data center stuff would get auctioned off, but there's no particular reason to think that would have anything to do with "the next Facebook".
While in some ways digital enterprises are like physical ones, the kinds of assets you are talking about are not one of them.
________
†One of the reasons Silicon Valley is a breeding ground for new technical ventures is (A) California not enforcing non-compete contracts, and (B) the culture of hopping from one venture to another taking the best ideas with you that California's decision enables.
"One of the reasons Silicon Valley is a breeding ground for new technical ventures is (A) California not enforcing non-compete contracts, and (B) the culture of hopping from one venture to another taking the best ideas with you that California’s decision enables."
But the main reason is that Stanford attracts a lot of really smart people interested in learning engineering to the area, and they find productive employment after they've learned to become pretty smart engineers. The Stanford business school also attracts a lot of people interested in starting and operating new businesses, some of whom take their business ideas back home with them and some of whom can't wait to get started, and get those new businesses going right there.
"both are property, aren’t they?"
We see, once again, the great harm to clear thinking caused by the term "intellectual property". To quote the famous philosopher Robert Plant: "sometimes words have two meanings" -- in my previous comment I'm pointing out some significant differences in the two scenarios, and you respond, "But we call them both 'property'!"
If you keep interacting with him, this won't be the last time Ed proves to you that he hasn't a clue what he's talking about.
"But if Facebook goes away, who is to say what happens to their server-side source code that is not visible to the outside world?"
If Facebook falls into bankruptcy, then someone will buy whatever scraps are left that have any value. See, e.g., Yahoo.
Just a variation on Dr. Ed's typical "Remember that…" fabrication.
(To be clear, at one time long distance rates were really high, to the point where one had to wait until after 5 pm or even until after 8 pm to make calls at any reasonable rate. That point ended long before the dot com crash.)
The phone phreaks worked around the problems posed by high telecom rates. Today's young tech-heads don't have those kinds of challenges to build up their skills any more... all they have to keep them busy is figuring out how to access Facebook and porn on their school computers, which is not even in the same order of magnitude of difficulty as what the hackers had to solve back in the day.
What the government can do is (1) through its legislature, enact rules requiring that users of a communications service be dropped only for violating the specific terms of their contract, and (2) through its judiciary, determine whether the terms of the contract have in fact been violated. It can prohibit arbitration if it wants. It can set up a special Article I court to handle these matters to adjudicate more quickly and reduce the burden on Article III courts.
This means that if the contract says users can be dropped for making “indecent” or “racist” posts, judges can evaluate whether the posts actually were “indecent” or “racist,” same as for any other contract term.
They do this all the time. In a non-disparagement clause case, judges evaluate whether speech was disparaging, which is as content-based as it gets.
So government can’t set the terms. But it gets to enforce them. And in enforcing them, it gets to decide what the terms the platform company set up mean.
What would stop Facebook and Twitter from putting in their terms of service that they reserve the right to refuse service to anyone for any reason?
Putting things in your terms of service doesn't necessarily make them enforceable. Popeye's chicken could put in their franchise contracts that franchisees will only sell the chicken sandwich to black folks, but they can't sue to enforce it.
Absolutely. Saying that social platforms can only deny service for violation of specific, non-vague, prespecified contract terms would not violate the First Amendment.
In general, government restrictions on the ability to contract, at least ones that aren’t specifically about speech, don’t implicate the First Amendment at all.
Of course, these ones are specifically about speech, so I'm not sure how helpful your observation is.
"Absolutely. Saying that social platforms can only deny service for violation of specific, non-vague, prespecified contract terms would not violate the First Amendment."
Agreed, since violating the fifth amendment is not the same thing as violating the first.
So before Twitter can ban someone for harassing others, they have to literally take them to court?
That's your "fix"?
Do they have to take you to court to moderate any of your posts too? How many of your posts do they get to moderate before they have to take you to court?
That's going to either wind up as a rubber-stamp court (and thus no different from today, but with extra costs) or make it not worthwhile to moderate anyone, and thus be the "no moderation allowed" option. And who pays for the court? Is social media really such a national resource that we should spend tax-payer dollars to police it?
"So before Twitter can ban someone for harassing others, they have to literally take them to court?"
What's the problem with just not following somebody?
You realize that in a lot of these cases where people demand content be taken down, they actually went looking for something to complain about?
"You realize that in a lot of these cases where people demand content be taken down, they actually went looking for something to complain about?"
And they found it.
If I object to my neighbor planting landmines along the property line, do I have to wait until my kid happens to be standing near one when something sets one off to take him to court to get him to go dig the things up?
I said nothing of the kind. However, a law like the one I suggested would permit people claiming they were denied service arbitrarily (or for reasons other than violation of their terms of service) to go to court and bring this claim. I also said that the government could set up some sort of streamlined administrative procedure to adjudicate claims if it wanted to.
So, under your plan, a rando could go to the court house and file a lawsuit that said "wahhh!!! The big mean tech company won't let me use their stuff the way I see fit! They claim that they get to decide what it gets used for, just because they own it! No fair, no fair, NO FAIR!"
Meanwhile, when an AM talk radio station wants to use the public spectrum to transmit conspiracy theories and other horseshit, the public gets no say in it, and isn't allowed to offer any input to what the radio station wants to send out, right?
"What the government can do is (1) through its legislature, enact rules requiring that users of a communications service be dropped only for violating the specific terms of their contract, and (2) through its judiciary, determine whether rhe terms of rhe contract have in fact been violated."
then they can litigate the fifth-amendment lawsuit and admit that they're taking private property for public use, and pay the previous owner the value of what they've taken.
Fifth Amendment jurisprudence has clearly established that laws like this don’t constitute takings. Many laws restrict the arbitrary ability to break contracts. For example, many states have laws protecting franchisees, dealers, and others whose businesses depend on a contract with a single company from having their contracts and their businesses arbitrarily terminated.
I see no reason why the courts should treat this any differently.
You may think the current state of the law wrong. But it is the current state of the law.
"Fifth Amendment jurisprudence has clearly establisjed thst laws like this don’t constitute takings."
Why would anyone assume otherwise? Just because they ARE takings...
What they could do is refuse to immunize it.
"What they could do is refuse to immunize it."
Or, if they really wanted to do something that would have no effect, they could do nothing.
Eugene seemed to be firm that Twitter could be regulated as a common carrier, but wasn't certain whether they should be. He said Twitter's dominance in the market argues for common carrier regulation. But I was surprised he did not address his prior argument for not regulating them: doing so would prevent them from removing pro-terrorist content.
People have been banned from Twitter simply for saying things like "men aren't women". If you ban people based on their political beliefs, then functionally, how is that any different than banning them based on a protected class like religion? These are arbitrary categories.
saying "men aren't women" is also a religious value.
Saying "men aren't women" is basic science (biology).
Saying "men aren't women" is basic science to someone who doesn't understand basic science, but wants to pretend that they do.
Noted subject-matter expert Ray Davies said:
Girls will be boys, and boys will be girls,
It's a mixed-up, muddled-up, shook up world.
in a song that's been played millions of times.
From what I understand, that user was banned for directing that comment at a trans person during a back-and-forth conversation. It wasn’t just someone making a general statement.
Twitter claimed it violated their harassment policy.
I’m not agreeing with Twitter’s decision, just that they felt that it violated a specific rule given the context.
To the best of my knowledge, no court has found that Twitter, Facebook, or any other "social media" service is a public accommodation for the purposes of federal non-discrimination law (or any state's non-discrimination law).
The closest you get to such a successful case is California suing eHarmony for being anti-gay. But a critical distinction there, and probably why it's damn-hard for lawyers to extend that case to social media in general, is that eHarmony's customers were the people being excluded.
Which is to say... if Twitter refuses to let you buy an ad because you're too white, you might have a case. But Twitter banning you from being a user because you're too white? You're probably SOL.
" how is that any different than banning them based on a protected class like religion?"
Back up. Define your terms. What's a "protected class like religion"?
If I want to allow people to use my property, I can post signs that say "no religious weirdos allowed" if I want to, and can include or exclude any particular religion from my personal definition of "religious weirdo". Why would anyone who ISN'T a follower of Quetzalcoatl want to come up onto the pyramid at sunrise? Why would any non-druids want to attend the summer-solstice fertility festival at my Stonehenge replica? There aren't any "protected classes", religious or otherwise, unless some law creates one or more "protected classes".
Your comment pretty much says “nothing is illegal unless there’s a law against it.”
True, that. But kind of tautological, don’t you think?
My comment pretty much says "you're using a term and it doesn't have the meaning you think it does".
But thanks for your input, irrelevant as it was.
I'm generally opposed to attempts to regulate social media platforms as "common carriers" even though many of them are engaging in pretty-overt (albeit probably legal) viewpoint discrimination. I would support requiring transparency for their terms of service including any "community standards" and rationale for removing content posted by their users. I also think that if they are going to engage in censoring content (which I think they have the right to do), they need to be upfront about it and they should be liable for any content that they do allow on their platform.
I would favor making TOS more transparent and decision making behind enforcement of those terms more so. But I think the best way to accomplish this is by requiring content-moderating platforms to defend their actions in court relying upon those TOS agreements, which would be subjected to normal contract law tenets.
"I would favor making TOS more transparent and decision making behind enforcement of those terms more so."
What you would favor is a requirement that the people that own things you want to use have to let you use them as you see fit, and aren't allowed to tell you "no, go away".
As a matter of commerce law, I would hold as null and void any contract language referencing "community standards" unless those standards came from a group comprised of a majority of customers and had non-overridable authority over what those standards were.
Otherwise, the platforms are just engaging in dishonesty by saying "community standards" when they actually mean "proprietary standards" which are wholly determined by the service provider.
Note I am not calling for proprietary standards to be prohibited, just that they have to be referred to by an honest name in the provisions of a contract.
I’ve never read the terms of service by these platforms. But I still don’t see why they can’t just say they maintain the right to refuse service to anyone for any reason just like many other businesses do.
This obviously wouldn’t protect them from certain laws (like discriminating based on race) but it would allow them to remove people for content they didn’t like.
If you are fluent in legalese, that is what they say.
And do.
(Fortunately, they do not add artistic decoration to wedding cakes)
Facebook, Google, and other social media platforms' customers are typically advertisers (their users are their product). Since some customers spend a lot of money advertising on these platforms while others spend little, would each customer's voting power be weighted by their trailing six-month expenditures on the platform, or something like that?
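(A minimal sketch, purely hypothetical, of the weighting idea floated in the comment above; nothing here reflects how any platform actually operates, and the customer names and dollar figures are invented for illustration.)

def weighted_yes_share(spend_by_customer, votes):
    # spend_by_customer: customer -> trailing six-month ad spend (hypothetical figures)
    # votes: customer -> True ("yes") or False ("no") on some standards question
    total = sum(spend_by_customer.values())
    yes = sum(s for c, s in spend_by_customer.items() if votes.get(c, False))
    return yes / total if total else 0.0

# Invented example: one large advertiser carries 90% of the weight on its own.
spend = {"MegaBrand": 900_000, "MidCo": 95_000, "LocalShop": 5_000}
votes = {"MegaBrand": True, "MidCo": False, "LocalShop": False}
print(f"Weighted 'yes' share: {weighted_yes_share(spend, votes):.0%}")  # prints 90%

Under a simple spend-weighted scheme like this, the largest advertisers would effectively set the "community standards" on their own, which seems to be the tension the question above is getting at.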
Seeing as the SCOTUS-affirmed standard is "I know it when I see it", this hardly seems like a fair standard.
The SCOTUS standard is THEY know it when THEY see it. Your opinion is irrelevant. Of course the chance of any particular case reaching SCOTUS is damned small.
"I know it when I see it" is an attribute of pornography, not commerce law. (Unless you are engaged in commerce in pornography).
The term “community standards” is probably enforceable by a court, given a definition of the relevant community. But then, the court would be the one responsible for determining what “community standards” are.
"The term 'community standards' is probably enforceable by a court, given a definition of the relevant community. But then, the court would be the one responsible for determining what 'community standards' are."
No, the court would be there to determine instances of what the "community standards" aren't.
"I’m generally opposed to attempts to regulate social media platforms as “common carriers” even though many of them are engaging in pretty-overt (albeit probably legal) viewpoint discrimination"
They suppress anything they feel their customers might object to.
Ordinarily no, unless you have an entire oligarchical vertical industry, from ISPs to banking institutions, that acts in unison to effectively make it impossible to have a voice or start a platform without being significantly constrained, constantly persecuted by hack attacks, deplatformed by even high-tier entities, and threatened in its ability to carry out financial transactions. We happen to have this scenario, so some sort of intervention is warranted now.
It's just like public accommodations. I am normally strongly opposed to the concept of mandating which customers a business serves, but if the SSM couple could show there was some coordinated mischief where every cake shop in the state decided not to serve them, and when they tried to bake their own cake the grocery store wouldn't sell them any ingredients, and then when they tried with smuggled ingredients they turned on loud rock music to distract them, I would agree things had gone too far and the government needed to step in.
"It's just like public accommodations. I am normally strongly opposed to the concept of mandating which customers a business serves, but if the SSM couple could show there was some coordinated mischief where every cake shop"
How about if they just hired one cake shop, which then turned around and refused to do the work they contracted for? No liability in your view?
"But some state legislatures are considering statutes that would ban viewpoint-based blocking by platforms; and some scholars are arguing that those laws might prevail, notwithstanding § 230. What are these theories? And what are their strengths and weaknesses?"
If the platforms respond by blocking literally all users in the state, how many of those state legislators will remain state legislators beyond the next election?