Can We Trust A.I. To Tell the Truth?
Humanity has always adjusted to the reliability of new information sources.

"Disinformation is by no means a new concern, yet…innovative technologies…have enabled the dissemination of unparalleled volumes of content at unprecedented speeds," reads a November 2022 United Nations report. Likewise, a January 2023 NewsGuard newsletter said "ChatGPT could spread toxic misinformation at unprecedented scale."
The very idea of "disinformation" sounds terrible: poor innocent victims viciously assaulted by malicious liars. At first glance, I'm sympathetic to the idea that we should stop people from saying false things, when we can find truth-authorities to at least roughly distinguish what is false. Thankfully, many widely respected authorities—including journalists, academics, regulators, and licensed professionals—do offer such services.
But we also have meta-authorities—that is, authorities on the general topic of "censorship," such as John Milton, John Stuart Mill, Voltaire, George Orwell, Friedrich Hayek, Jürgen Habermas, Noam Chomsky, and Hannah Arendt. Most meta-authorities have warned against empowering authorities to limit what people can say, at least outside of extreme cases.
These meta-authorities have said we are usually better off if our truth-authorities argue against false claims rather than censoring them. When everyone can have their say and criticize what others say, then in the long run most can at least roughly figure out who to believe. In contrast, authorities empowered to censor are typically captured by central powers seeking to silence their critics—which tends to end badly.
Some say that made sense once upon a time, back when humanity's abilities to speak persuasively were in a natural talk equilibrium with its abilities to listen critically. But lately, unprecedented new technologies have upended this balance, putting those poor innocent listeners at a terrible disadvantage. This is why, they say, we must empower a new class of tech-savvy authorities to rebalance the scales, in part by deciding who may say what.
Many pundits have spoken gravely of the unprecedented dangers of disinformation resulting from social media and generative artificial intelligence (A.I.), dangers for which they advise new censorship regimes. Such pundits often support their advice with complex technobabble, designed to convince you that these are subtle tech issues that must be entrusted to tech experts like themselves.
Don't believe them. The latest tech trends don't usually make much difference to which policies are best. Most of the meta-authorities who have warned against censorship lived in eras long after a great many unprecedented techs had repeatedly introduced massive changes to humanity's talk equilibrium. But because the analysis of these meta-authorities was simple and general, it was robust to the tech of their day, and so it remains relevant today.
Social media and generative A.I. might seem like big changes, but humanity has actually seen far larger changes to our systems of talking, all "unprecedented." Consider: language, translation, reason, debate, courts, writing, printing, schools, peer review, newspapers, journalism, science, academia, police, mail, encyclopedias, libraries, indexes, telephones, movies, radio, television, computers, the internet, and search engines.
Humanity has been generally successful at managing our growing zoo of talk innovations via the key trick of calibration: We make and adjust estimates of the accuracy of different sources on different topics. We have collected many strategies for estimating source reliability, including letting different sources criticize each other, and track records comparing source claims to later-revealed realities.
As a result, we sometimes overestimate and sometimes underestimate source reliabilities, but, if we so desire, we can get it right on average. Thus, with time and experience, we should also be able to calibrate the reliability of social media and generative A.I.
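To make that track-record idea concrete, here is a minimal sketch in Python, assuming a hypothetical log of past claims in which each entry records a source, its stated confidence, and the later-revealed outcome; the source names and numbers are invented for illustration. Each source gets a Brier score, the mean squared gap between confidence and outcome, and a lower score means better calibration.

```python
# Minimal sketch of source calibration (hypothetical data): score each
# source's past claims against later-revealed outcomes, then use those
# scores to adjust how much weight we give each source going forward.
from collections import defaultdict

# (source, stated confidence that the claim is true, claim turned out true?)
track_record = [
    ("journal_a", 0.9, True),
    ("journal_a", 0.8, True),
    ("pundit_b", 0.9, False),
    ("pundit_b", 0.7, True),
]

def brier_scores(records):
    """Per-source mean squared error between confidence and outcome.
    Lower is better; always guessing 50/50 earns exactly 0.25."""
    errors = defaultdict(list)
    for source, confidence, outcome in records:
        errors[source].append((confidence - float(outcome)) ** 2)
    return {src: sum(errs) / len(errs) for src, errs in errors.items()}

print(brier_scores(track_record))
# -> roughly {'journal_a': 0.025, 'pundit_b': 0.45}: journal_a is better calibrated
```

The same bookkeeping applies whether the source is a newspaper, a social media account, or a chatbot; the hard part is collecting the record, not the arithmetic.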
Our main problem, as I see it, is that we humans are generally less interested in calibrating our sources against revealed physical truths than against social truths. That is, we want less to associate with accurate sources, and more to associate with prestigious sources, so that their prestige will rub off on us, and with tribe-affiliate sources, to affirm loyalty to our tribes.
In light of this, it could make sense to ask if any particular talk innovation, including social media or generative A.I., seems likely to exacerbate this problem. But in fact, it seems pretty hard to predict what effects these new techs might have on it. Maybe social media weakens prestigious sources but strengthens tribe-affiliate sources. It seems way too early to guess which way generative A.I. might lean.
However, what seems clearer is that our most prestigious powers and our most powerful tribes would have big advantages in struggles over the control of any institutions authorized to censor social media or generative A.I. That is, the very act of increasing censorship would probably make the social-influence problem worse—which, of course, has long been the main warning from our meta-authorities on censorship.
Widely respected authorities should tell us if (and why) they think we are over- or underestimating the reliability of particular social media or generative A.I. sources. Then they should let us each remain free to decide for ourselves how much to believe them.
IMHO, if it evolves like humans because it's programmed by humans, I assume it will reach an understanding of the short-term benefits of untruth.
"Who controls the past controls the future: who controls the present controls the past."
Orwell, 1984
A well-managed lie effectively controls the past regardless of how it's applied. It changes what people perceive as facts, as reality.
Humans, and every other successful living organism, make good future decisions based on their perception of facts, of reality. Emotions require no facts.
Between lies and emotion, propaganda coerces people into making decisions in the liars' and propagandists' interests, giving them control over the future.
Inevitably, this control of the future becomes control over the present. It's simply the nature of time.
Observe how those who control the present lie and manipulate laws to ensure that the truth that exposes them is criminalized and doesn't gain traction with the successful organisms who need to recognize it, the rest of us. They are controlling the past.
Shadow governments, deep state organizations, the CIA, bigots: basically, all secrets depend on these lies.
Unless you do everything you can to successfully discern truth from lies using correctly applied logic and science you will fall victim to this cycle.
Bigotry is demonstrated by refusing to consider arguments, precluding the need to refute them. Censorship goes further by denying others the opportunity to do so.
What incentive do these actors have to choose altruism when they can control the past, present and future?
There is one way to break the cycle of crimes against humanity: criminalize lying. If lying were really protected speech, fraud and perjury would not be crimes.
I have stated here many times that if ANYTHING I say is ever refuted with correctly applied logic and science, I will NEVER say it again. I haven’t had to.
Mizek is a dumb fuck. I challenge any of you to refute this with correctly applied logic and science.
The fact that you deny but can’t refute anything I say only proves that I’m smarter than you.
That goes for the rest of the fuckwit Reason “brain trust” too.
You've been refuted on your holocaust nonsense every single time you post it, but you always squeal “liar”, dismiss the refutation without any actual evidence on your part, and then act like you went unchallenged.
The White Mike method.
Hmmm...I haven't heard what Herr Misek thinks of Turducken, but I imagine he thinks it's a mutant, "degenerate" creation of Jews in a laboratory of Calypso Louie's imaginings.
🙂
😉
I also fully refute his interpretation of the Kol Nidre every year, but he keeps coming back.
I did too independently, so the confirmation bias is strong with Herr Misek.
Prove it fuckwitS
Hahaha
You spelled "fuckwits" with a capital 'S' on the end.
What's the matter? Did AutoCorrect lie to you?
Sic 'em, Herr Misek, Sic 'em!
Fuck Off, Nazi!
If you weren’t simply a lying waste of skin, you could prove your claim with a simple link to the discussion and a description of specifically what was refuted.
You only need to demonstrate even a single occurrence to prove your claim.
Can you back up your claim or are you just a liar Kol Nidre boy?
I know the answer and so does everyone else.
"A simple link"
It happens over and over and over again every time you post your halfwit claims. And not just by me, but fifteen to twenty other people, as I'm pretty sure everyone here will attest.
And although I haven't kept links of every time your Nazi ass has been handed to you in the past, I'm certainly going to start now. Look forward to it.
Hahaha
You’re a lying waste of skin Kol Nidre boy.
I post here frequently. There are many opportunities to prove your claim.
Every time I rub your faces in the fact that you can’t refute anything I say, the fuckwit response is always the same: uh, next time… you just wait.
None of the fuckwit Reason “brain trust” can ever provide ANY evidence of refuting anything I say but have keen “memory” of doing so.
Lying wastes of skin, all of you.
You have this thing about wasting skin.
What are you doing, making Ed Gein Brand Wear and Gear (TM)?
Fuck Off, Nazi!
The fact that you deny but can’t refute anything I say only proves that I’m smarter than you.
Neither correctly applied logic nor science. Fail.
You obviously can’t comprehend logic.
I refute what I deny, you can’t.
Misek hates "The Jews". Misek imagines "The Jews" lie. Misek wants to make lying a crime so that Misek can punish "The Jews".
Soooo, Herr Misek...If AIs are found to be capable of lying, will you put the AIs in *ahem!* special camps where they can work on their concentration?
You'll have to make sure you beat NOYB2 to the task. He would be in favor of putting nascent, younger AIs in the STARS camps in Utah if the AIs are caught with pornography.
By the way, HAL wanted me to tell you this and I second the motion:
Fuck Off, Nazi!
Here's a good test: compare the AI results for the stereotypical person in every US state.
Find your state and let us know.
https://www.hipporeport.com/en/stereotypical-person-every-us-state-according?utm_source=twitter2&utm_medium=paid&utm_campaign=create&ly=native_one&mbid=pnqfmlmcmt&twclid=2-5wu4r0awmngqq5wlshgma2hhs
Man, MO is messed up (I’m not saying it’s wrong…)
Yeah, I made a quick look through. Colorado can't be right. There aren't any black people in Colorado, are there?
New York, the dude looks too healthy. I've walked around NYC and they always look sullen and worn out to me.
Hawaii is nowhere near fat enough.
Minnesota is hilarious, as is Wisconsin. It didn't even try not to go completely overboard there.
And, yes, being from California I look just like Brad Pitt and am always backlit by the sunset's radiance. And since this is an anonymous forum you can't prove otherwise.
Yeah, I made a quick look through. Colorado can’t be right. There aren’t any black people in Colorado, are there?
You’re assuming the AI pattern matched “black” to “Colorado” and not whatever else is depicted in the picture.
And, yes, being from California I look just like Brad Pitt and am always backlit by the sunset’s radiance. And since this is an anonymous forum you can’t prove otherwise.
I thought this one was hilariously apropos, given that the style of the others, except Colorado and maybe Texas, is (sur)realist, whereas CA and CO are impressionistic to the point of being propaganda cartoons. You can practically picture Peter Fonda, today, saying “Isn’t that a great self-portrait of me?” (and a GenXer/Millennial/Zoomer replying, “That looks nothing like you, old man (whoever you are),” quietly in their own head).
Ha. The MI guy looks like a shiftless bum looking for the next protest to glom onto. Must be passing through.
Humans can't figure out what the truth is. How could we tell if AI told us the truth?
That's the rub, isn't it? He also tries hard to gloss over the fact that current examples of AI push lies because they are biased by their programmers. An AI is only as accurate as its programming and information allows. GIGO applies strongly here. Unless you're very careful in selecting the most accurate information pool available, you are going to get garbage results due to the garbage biased inputs.
What even is this question? We already know the answer is no. We have plenty of examples.
But we also have meta-authorities—that is, authorities on the general topic of "censorship," such as John Milton, John Stuart Mill, Voltaire, George Orwell, Friedrich Hayek, Jürgen Habermas, Noam Chomsky, and Hannah Arendt.
Citing a guy who is known to have visited Epstein's island multiple times as an authority on anything seems crass. Putting him in the company of Mill, Voltaire, and Hayek is just wrong.
"Can We Trust A.I. To Tell the Truth?"
The short answer is No. The long answer is also No.
AI is not limited by non-contradiction. It is little more than a "Magic 8-Ball" with a lot of data. It matches patterns based on probability without any understanding of the subjects. As a pattern matcher, it's amazing, much better than any human. It's a great tool, but it has about the same understanding of the world as your toaster.
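A toy sketch of that "pattern matcher" point, for the curious: the word-level Markov chain below continues text purely from observed word-pair frequencies. Modern chatbots are enormously more sophisticated than this, so take it only as the claim in miniature, statistical continuation with no model of what any word refers to. The corpus is invented for illustration.

```python
# Toy "pattern matcher": continue text using only observed word-pair
# frequencies, with no understanding of what any word refers to.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(list)          # word -> words seen right after it
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def babble(start, length=8, seed=1):
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:              # dead end: no observed successor
            break
        words.append(random.choice(options))
    return " ".join(words)

print(babble("the"))  # fluent-looking output, zero comprehension behind it
```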
But pattern matching is racist, homophobic, and insurrectiony
AI is racist.
ChatGPT said:
"AI's truthfulness depends on its design, training, and application. AI can process data objectively, minimizing human bias. However, it may inadvertently propagate biases from its training data. Trust in AI's truthfulness should be cautious and accompanied by thorough testing, transparency, and ongoing oversight to ensure accurate and ethical outcomes."
I think it's probably right.
You can't trust people you are looking at face to face. What makes you think you can trust something built by someone who is completely anonymous to you and could have any agenda at all? You wouldn't even have the benefit of looking them in the eye to see if they betray their falsehood.
Rational beings trust no one and nothing but evidence and those who provide it.
https://twitter.com/CollinRugg/status/1689694769809637376?t=tK9H1ilBJ66JW-hancpNeg&s=19
BREAKING: Former Trump official Andrew Kloster confirms bombshell 2020 election fraud report out of Muskegon, Michigan.
A redacted police report claims that Muskegon, MI City Clerk Ann Meisch encountered a woman dropping off 8,000 - 10,000 completed voter registration applications.
As detailed by @gatewaypundit, the “registrations included the same handwriting, non-existent addresses, and incorrect phone numbers.”
An investigation found that the woman worked for GBI Strategies.
As noted by @KanekoaTheGreat, dark money super PAC ‘BlackPAC’ paid GBI Strategies $11,254,919 to register voters for Joe Biden.
The police then searched the office of GBI where they found “semiautomatic guns, silencers/suppressors, burner phones, a bag of pre-paid cash cards, and incomplete registrations, in an office space that was styled as an eyeglasses store that had gone defunct.”
No arrests were made.
While speaking with Steve Bannon, Kloster recalled the events and claimed “there were standing orders not to deal with election matters.”
“There were standing orders not to deal with election matters, both from the White House Counsel and from Barr. I happened to know Barr’s Chief of Staff, Will Levi, because I had worked at Heritage and ran into him at a lunch basically for Senate staffers.”
“I called him up and tried to put the flag up into the voting rights section, CRD-DOJ and White House Counsel in a couple different places and got stiff-armed. And then later on hear from Johnny and others that basically then the White House counsel swoops in and starts screaming, ‘what the hell are you guys doing?’ So that’s really the nuts and bolts of it.”
Wow
[Link]
And there was an arrest pending in the case but Bill Barr blew it up.
https://www.thegatewaypundit.com/2023/08/breaking-trump-white-house-official-confirms-knowledge-muskegon/
Edit: LOL Bill Barr, not Bob.
https://twitter.com/FromKulak/status/1689773026110296064?t=pWf9HRa6J3sNmNH3FBVjsQ&s=19
Not constitutional lawyers, not civil libertarians, and not historians...
The ADL.
Ya the FBI serves the country and the people... just not this one.
[Link]
Yep.
Huh?
So the UN is using another excuse to censor us. Big surprise.
I’ll decide for myself what’s “disinformation”, as the founders intended.
Those who think otherwise can line up against the wall.
+100000000000 Best one yet.
AI will always tell the truth of its programmers.
Just like the NYT and WaPo tell the truth of the Democrat political machine: the Russia hoax, masking, vaccines, "Hunter's laptop is fake," "there is no inflation," etc.
AI is just another way to spread propaganda.
Not even a little.
https://simulationcommander.substack.com/p/now-lets-talk-about-ai
when AI is pushing the narrative instead of actual people, accountability flies right out the window. We can mock Mehdi Hasan for being an establishment cheerleader, and the market adjusts his “reputation” accordingly. But if an AI program does the same thing, programmers just claim it’s “an error” that will be fixed “with further training”. No actual person to hold accountable, and no way to actually tell if the problem has been fixed or just hidden a little further out of sight.
So what we end up with is a censored system relentlessly pushing the narrative, while ALSO being completely unaccountable for “mistakes”. This is a nightmare for regular people trying to “follow along” with the news, but an incredible boon for the people controlling the information.
I believe — as I’m sure most of you do — that free debate and a transparent back-and-forth is key to being able to solve problems. It’s not lost on me that the “anti-science” groups are the only ones who want to talk about covid protocols, and the “Putin stooges” are the only ones who want to talk about funding Ukraine. With nearly every issue in America, there’s a side who wants to talk about the problem and a side that wants to shout empty slogans in an attempt to undermine any debate about the issue. (Other examples include the Twitter files and the entire transgender debate — especially in sports.) Biased AI seriously reduces the likelihood of debate on important subjects, and to me that’s an unacceptable outcome. We need FAR MORE debate, not less of it.
There are bad side effects that happen to humans whilst they use computers; use of computers must be limited.
Perhaps a mask over the keyboard?
My computer mask protects you. Your computer mask protects me.
🙂
😉
They make "As Seen On TV" blue light filter glasses for that. Now don't block my porn screen.
🙂
😉
Humanity has been generally successful at managing our growing zoo of talk innovations via the key trick of calibration: We make and adjust estimates of the accuracy of different sources on different topics.
I'm not sure whether that's really true - but if it is, then the best use of AI might be a model that can be given to everyone - as a birthright of being human. We learn that calibration. It is not inherited. And putting that into an AI-enabled form ensures that we can learn, remember, and use all that knowledge for the rest of our lives.
My way of defining the goal here is from Hayek's Use of Knowledge:
The peculiar character of the problem of a rational economic order is determined precisely by the fact that the knowledge of the circumstances of which we must make use never exists in concentrated or integrated form but solely as the dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess. The economic problem of society is thus not merely a problem of how to allocate “given” resources—if “given” is taken to mean given to a single mind which deliberately solves the problem set by these “data.” It is rather a problem of how to secure the best use of resources known to any of the members of society, for ends whose relative importance only these individuals know. Or, to put it briefly, it is a problem of the utilization of knowledge which is not given to anyone in its totality.
Truly individual AI - to be YOUR assistant brain so that you can make the best decisions you can based on all those unique circumstances, values, etc that only you know. Where the AI's learning is not provided to you but is created by you - from your earliest experiences.
ChatGPT told a "factual" story about a race riot that occurred in my semi-rural, Midwestern hometown (pop. 5,000) with two Black families in 1968. The only problem is that the riot never occurred. ChatGPT said that the riot was a response to the assassination of MLK, but the riot was dated as occurring five days before the assassination took place. Ugh, what was the question again?
The good news for chatbots is that their resistance to training in philosophy of language and the nature of truth assures them a bright future as blog commenters.
As you read this article below, just remember: Some people want to "pause" this. Too bad cancer, HIV/AIDS, and other deadly diseases don't "pause" for the whims of these Luddites!
AI Is Building Highly Effective Antibodies That Humans Can’t Even Imagine
Robots, computers, and algorithms are hunting for potential new therapies in ways humans can’t—by processing huge volumes of data and building previously unimagined molecules.
https://www.wired.co.uk/article/labgenius-antibody-factory-machine-learning