The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
"Regulation of Algorithms" Panel at Federalist Society Faculty Conference,
featuring Prof. Saurabh Vishnubhakat (Yeshiva), Prof. Gregory Dickinson (Nebraska), Prof. Christina Mulligan (Brooklyn), Dhruva Krishna (Kirkland & Ellis), and me.
I much enjoyed participating, and I hope some of you will enjoy watching. Here's the panel description:
Opaque algorithms shape what news stories you see on social media, dictate how artificial intelligence answers prompts, and can even decide whether applicants get a mortgage or a job interview. Amidst claims of algorithmic race, gender, and viewpoint discrimination, more and more individuals of all political affiliations are calling for greater government regulation of algorithms, while regulatory skeptics worry that government intervention will impede important technological innovation. This panel will explore the wisdom of efforts to regulate algorithms and how best to frame concerns about algorithmic errors and bias.
I worked at a bank using statistical algorithms to avoid expensive (very!!) CFPB fines. One of my enlightening sidelines was investigating how credit is determined, through books like "Fundamentals of Credit and Credit Analysis: Corporate Credit Analysis."
If anything, having outsiders look at this will fail miserably, and bringing in the public will be dumptrucks of mud.
The greatest hurt to the public is in things like what Biden did: taking credit ratings and then penalizing those with good scores. That is the first horror to go after.
The best way to frame concerns about algorithmic curation of a publishing audience is to insist that it should not happen. There are multiple reasons. For instance:
1. As a practical matter, algorithmic audience curation precludes human editing prior to publication. Without that practice, no useful means will be found to control frauds, defamations, election hoaxes, libel-based business models, deep-fake AI generated political disruptions, etc. Only an editor's care to attend to the provenance of a would-be content item can answer that need.
The notion that fact-checking should be a critical part of editorial practice is thus diminished. No one, least of all an algorithm, can judge the veracity of published content presented without provenance. A human editor, by contrast, is capable of recognizing an absence of reliable provenance, and of deciding accordingly. A human editor does not have to know what is true. An editor merely needs to understand the import of the question, "How do I know that?"
The import is that, after reading the item and recognizing whether or not it is pure opinion, the editor confronts a choice. If the editor cannot discern the truth or falsity of its factual assertions, the item may not be worth publishing. Choices made in such cases touch on a range of factors so unpredictable that algorithms with competence short of full human intelligence will not serve.
2. Algorithmic audience curation enables gigantic platforms to grow without limit. That crowds out of the marketplace of ideas smaller competitors run by more diverse proprietors, who are better incentivized to compete with each other on the basis of content quality. Algorithmic content curation, absent a comparative standard established by human editing, will remain incapable of judging content quality. Even with a comparative standard in place, algorithmic audience curation may not be able to accomplish that. It has not yet happened.
3. The effects of 2) above include setting up a marketplace of ideas so impoverished of competition that it becomes too easy for government to control publishing. That process is ongoing now.
4. Audience curation based on non-algorithmic methods, principally editing to maximize audience size and the capacity to facilitate advertising sales, operates under healthy constraint. That method must weigh multiple aspects of all content to be published, to tailor an information product that satisfies the scrutiny of broad audiences, wealthy audiences, skeptical audiences, or audiences full of enthusiasts, whatever characteristics best serve the publisher's business model.
Algorithmically curated content does not work like that. It can and does disconnect audience curation from editing practice. It thus enables presentation of alternative realities tailored to prejudices of each audience member individually.
Algorithmic audience curation thus becomes inherently untruthful. Instead of delivering information which audiences generally judge reliable on the basis of revealed preferences arrived at by consensus, algorithmic audience curation tailors content to deliver mass disorientation, one content consumer at a time.
5. Algorithmic audience curation is vulnerable to commercial corruption, in ways which traditional audience curation is not. One dilemma faced by pre-internet publishers was a tendency for principal advertising clients to demand influence over content choices.
The long-time solution to that publishing dilemma was to sell the client on the commercial advantages of presenting advertising opposite high-quality content, demonstrably approved by a broad audience, as attested by audited circulation. As unlikely as it may sound, that method proved effective so long as claims of publishing virtue remained well-founded. That in turn promoted a virtuous spiral, as publications strove to compete for advertising on the basis of satisfying either large and approving audiences, or smaller specialized audiences, or some other mix made up by a publisher with an eye to experiment.
By contrast, algorithmic audience curation usurps the former editors' prerogative, enabling a click-bait strategy instead. That altogether bypasses content-quality considerations, in favor of promoting as content the prejudices of the managers of advertising clients. Those, of course, may be ordinary commercial advertising clients, explicitly political advertising clients, outright fraudsters, or even government agents overt or covert. Whatever they are, advertising clients gain power from the practice of algorithmic audience curation to turn the platforms they buy advertising from into unedited conduits. They can thus deliver to audience members anything at all, regardless of truth, regardless of public norms, and regardless of respect for the notion of a marketplace of ideas.
Indeed, algorithmic audience curation enables mass attacks to discredit the notion of truth itself. It thus opens a way to discourage belief that public life could even be worth an audience member's attention.