The Volokh Conspiracy


§ 230 and the Protecting Americans Against Dangerous Algorithms Act

My testimony today at a House Subcommittee on Communications & Technology hearing on proposed revisions to § 230.

You can see the PDF of my testimony (and the other witnesses' testimony as well), but I thought I'd also blog the text; since I commented separately on five different proposals, I'm breaking it down accordingly. As I noted, my plan was mostly to offer an evenhanded analysis of these proposals, focusing (in the interests of brevity) on possible nonobvious effects. I also included my personal views on some of the proposals, but I will try to keep them separate from the objective analysis.

[IV.] Protecting Americans Against Dangerous Algorithms Act

This bill would deprive platforms of immunity from

  1. claims brought for conspiracy to interfere with civil rights (42 U.S.C. § 1985), failure to prevent conspiracy to interfere with civil rights (42 U.S.C. § 1986), and international terrorism (18 U.S.C. § 2333) when
  2. "the claim involves a case in which the interactive computer service used an algorithm . . . to rank, . . . recommend, [or] amplify . . . information . . . provided to a user of the service if the information is directly relevant to the claim."

Responses to a user's "specifically search[ing] for" "information" are excluded, as are services with 10 million or fewer unique monthly visitors and infrastructure companies that provide hosting, domain registration, and the like.

The bill would also exclude recommendations that come from simple and nonpersonalized algorithms—sorting "chronologically or reverse chronologically," "by average user rating or number of user reviews," "alphabetically," "randomly," and "by views, downloads, or a similar usage metric." But platforms are unlikely to want to use such simple algorithms in place of their usual, more complex algorithms (which turn on, for instance, a user's own viewing history), since those more complex algorithms generally increase user engagement and thus platform profit.

[A.] Pressuring Platforms Not to Recommend Material That Looks Like It May Have Been Put Out by Terrorist Groups

The chief effect of PADAA would be to hold social media platforms liable for recommending material that later turns out to have been put out by foreign terrorist groups (or by people working directly with those groups). The difficulty, of course, is that a platform can't know with any real certainty whether particular material was indeed put out by (say) Hamas employees or associates, or whether it is instead just constitutionally protected expression of support for Hamas. Given this uncertainty, platforms would likely internally flag material that looks like it might have been put out by foreign terrorist groups, and exclude it from any recommendations they offer.

[B.] Pressuring Platforms Not to Recommend Pages That Appear to Involve Conspiracies to Interfere with Civil Rights

The bill's allowing liability for 42 U.S.C. § 1985 violations is likely to have no real effect, because § 1985 basically just covers conspiracies to violate civil rights; to intimidate parties, witnesses, or jurors; to intimidate people to affect federal elections; or to injure people based on their advocacy of federal candidates. A conspiracy requires a specific purpose to promote a shared criminal objective,[9] and platforms are not likely to have any such specific purpose.

But the bill also allows liability for 42 U.S.C. § 1986 violations, and § 1986 imposes liability for failure to prevent others' conspiracies:

  • "Every person who, having knowledge that any of the wrongs conspired to be done, and mentioned in section 1985 of this title, are about to be committed,"
  • "and having power to prevent or aid in preventing the commission of the same, neglects or refuses so to do,"
  • "if such wrongful act be committed, shall be liable to the party injured …."

It's hard to tell for sure, since successful § 1986 claims against private entities are so rare; but in principle, it seems that, under PADAA, once a platform learns of material that looks like it might reflect a § 1985 conspiracy, it would need to exclude that material from any recommendations or face the risk of liability. Such exclusion from recommendations, after all, may "prevent or aid in preventing the commission" of the conspiracy.

What we should think of such proposals to enlist platforms to police potential foreign terrorist advocacy and potential conspiracies to commit various domestic crimes is a difficult question. On one hand, such proposals may indeed make it harder for conspirators, foreign and domestic, to effectively organize and promote their crimes. On the other hand, they are also likely to lead cautious platforms to suppress even constitutionally protected advocacy, since the platforms will have only limited information about who is posting material, why they are posting it, and how the posters and their readers are likely to interpret it.

[9] Ocasio v. United States, 136 S. Ct. 1423, 1429 (2016).