Free Speech

§ 230 and the Justice Against Malicious Algorithms Act

My testimony today before a House Subcommittee on Communications & Technology hearing on proposed revisions to § 230.


You can see the PDF of my testimony (and the other witnesses' testimony as well), but I thought I'd also blog the text; I commented separately on five different proposals, so I thought I'd break this down accordingly. As I noted, my plan was mostly to offer an evenhanded analysis of these proposals, focusing (in the interests of brevity) on possible nonobvious effects. I also included my personal views on some of the proposals, but I will try to keep them separate from the objective analysis.

[I.] The Justice Against Malicious Algorithms Act

JAMAA would sharply limit interactive computer services' immunity for personalized recommendations—for instance, YouTube's recommendations of videos that appear alongside a video you select. (YouTube recommends such videos in large part based on your search history.)

If the recommended material proves to be—for instance—defamatory, then under the bill YouTube could be liable for damages, since defamation often involves "severe emotional injury." (The Act would be limited to recommendations that "materially contributed to a physical or severe emotional injury to any person.") Likewise with Twitter or Facebook recommending posts based on your past interests, and more.

On the other hand, interactive services would remain immune for unpersonalized recommendations—for instance, recommendations of material based on its general popularity, uninfluenced by whether you've shown an interest in such material. And interactive services would be practically protected from liability for recommending mainstream media material: That material is less likely to be defamatory or otherwise injurious, and in any event mainstream media organizations have deep pockets, so the computer services could require that those organizations agree to indemnify the services in case of a lawsuit.

JAMAA would thus create a strong incentive for

  • YouTube, Facebook, Twitter, etc. to stop recommending user-generated content that they think you would find especially interesting and
  • instead to start recommending (1) generic popular material or (2) mainstream media content.

This strikes me as a bad idea. Users benefit from seeing recommendations for things they are especially likely to enjoy: If you like hip-hop, for instance, you'd presumably want to see recommendations for the most popular hip-hop video and not for the most popular material of any genre (which this week might be, say, Adele or Taylor Swift). Indeed, the more personalized the recommendations are, the more you're likely to enjoy them. Why pressure platforms to shift to generic material?

And the public also benefits, I think, from being able to see user-generated content and not just professionally produced mainstream media content. The established professional material already has a huge advantage, because of its existing marketing muscle. Why extend that advantage further by making it risky for platforms to recommend user-generated content (even when their algorithms suggest that such content might be exactly what you would most enjoy), and safe to recommend the professional material?