Artificial Intelligence

Can We Trust A.I. To Tell the Truth?

Humanity has always adjusted to the reliability of new information sources.


"Disinformation is by no means a new concern, yet…innovative technologies…have enabled the dissemination of unparalleled volumes of content at unprecedented speeds," reads a November 2022 United Nations report. Likewise, a January 2023 NewsGuard newsletter said "ChatGPT could spread toxic misinformation at unprecedented scale."

The very idea of "disinformation" sounds terrible: poor innocent victims viciously assaulted by malicious liars. At first glance, I'm sympathetic to the idea that we should stop people from saying false things, when we can find truth-authorities able to at least roughly distinguish what is false. Thankfully, many widely respected authorities—including journalists, academics, regulators, and licensed professionals—do offer such services.

But we also have meta-authorities—that is, authorities on the general topic of "censorship," such as John Milton, John Stuart Mill, Voltaire, George Orwell, Friedrich Hayek, Jürgen Habermas, Noam Chomsky, and Hannah Arendt. Most meta-authorities have warned against empowering authorities to limit what people can say, at least outside of extreme cases.

These meta-authorities have said we are usually better off if our truth-authorities argue against false claims rather than censoring them. When everyone can have their say and criticize what others say, then in the long run most can at least roughly figure out who to believe. In contrast, authorities empowered to censor are typically captured by central powers seeking to silence their critics—which tends to end badly.

Some say that made sense once upon a time, back when humanity's abilities to speak persuasively were in a natural talk equilibrium with its abilities to listen critically. But lately, unprecedented new technologies have upended this balance, putting those poor innocent listeners at a terrible disadvantage. This is why, they say, we must empower a new class of tech-savvy authorities to rebalance the scales, in part by deciding who may say what.

Many pundits have spoken gravely of the unprecedented dangers of disinformation resulting from social media and generative artificial intelligence (A.I.), dangers for which they advise new censorship regimes. Such pundits often support their advice with complex technobabble, designed to convince you that these are subtle tech issues that must be entrusted to tech experts like themselves.

Don't believe them. The latest tech trends don't usually make that much difference to what the best policies are. Most of the meta-authorities who have warned against censorship lived in eras long after a great many unprecedented techs had repeatedly introduced massive changes to humanity's talk equilibrium. But as the analysis of these meta-authorities was simple and general, it was robust to the tech of their day, and so it remains relevant today.

Social media and generative A.I. might seem like big changes, but humanity has actually seen far larger changes to our systems of talking, all "unprecedented." Consider: language, translation, reason, debate, courts, writing, printing, schools, peer review, newspapers, journalism, science, academia, police, mail, encyclopedias, libraries, indexes, telephones, movies, radio, television, computers, the internet, and search engines.

Humanity has been generally successful at managing our growing zoo of talk innovations via the key trick of calibration: We make and adjust estimates of the accuracy of different sources on different topics. We have collected many strategies for estimating source reliability, including letting different sources criticize each other and keeping track records that compare source claims to later-revealed realities.

As a result, we sometimes overestimate and sometimes underestimate source reliabilities, but, if we so desire, we can get it right on average. Thus, with time and experience, we should also be able to calibrate the reliability of social media and generative A.I.
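
To make this calibration trick concrete, here is a minimal sketch in Python. It is purely illustrative, not any real system: the class name, the example topics and outlets, and the smoothing constants are all assumptions of the sketch. It keeps a per-source, per-topic track record of claims later checked against revealed outcomes, and estimates reliability as a smoothed hit rate.

```python
from collections import defaultdict

class TrackRecord:
    """Illustrative calibration sketch: estimate a source's reliability
    on a topic from its track record of claims later checked against
    revealed realities."""

    def __init__(self):
        # (source, topic) -> [claims checked, claims that proved true]
        self.records = defaultdict(lambda: [0, 0])

    def observe(self, source: str, topic: str, claim_proved_true: bool) -> None:
        """Record one claim whose truth was later revealed."""
        rec = self.records[(source, topic)]
        rec[0] += 1
        rec[1] += int(claim_proved_true)

    def reliability(self, source: str, topic: str) -> float:
        """Smoothed estimate of how often this source is right on this topic."""
        checked, correct = self.records[(source, topic)]
        return (correct + 1) / (checked + 2)  # Laplace (add-one) smoothing

# Hypothetical usage: after checking a few claims, compare sources on a topic.
tr = TrackRecord()
for outcome in [True, True, False, True]:
    tr.observe("outlet_a", "economics", outcome)
tr.observe("outlet_b", "economics", False)
print(round(tr.reliability("outlet_a", "economics"), 2))  # ~0.67
print(round(tr.reliability("outlet_b", "economics"), 2))  # ~0.33
```

The smoothing is a deliberate design choice: a brand-new source starts near 50 percent rather than at an extreme, and as its track record grows the estimate converges on its true hit rate, which is all that "getting it right on average" requires.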

Our main problem, as I see it, is that we humans are generally less interested in calibrating our sources against revealed physical truths than against social truths. That is, we want less to associate with accurate sources and more to associate with prestigious sources, so that their prestige will rub off on us, and with tribe-affiliated sources, to affirm loyalty to our tribes.

In light of this, it could make sense to ask if any particular talk innovation, including social media or generative A.I., seems likely to exacerbate this problem. But in fact, it seems pretty hard to predict what effects these new techs might have. Maybe social media weakens prestigious sources but strengthens tribe-affiliated ones. It seems way too early to guess which way generative A.I. might lean.

However, what seems clearer is that our most prestigious powers and our most powerful tribes would have big advantages in struggles over the control of any institutions authorized to censor social media or generative A.I. That is, the very act of increasing censorship would probably make the social-influence problem worse—which, of course, has long been the main warning from our meta-authorities on censorship.

Widely respected authorities should tell us if (and why) they think we are over- or underestimating the reliability of particular social media or generative A.I. sources. Then they should let us each remain free to decide for ourselves how much to believe them.