The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Google's Experimental Bard AI Has the "Large Libel Model" Problem that ChatGPT-4 Has
I blogged yesterday about how ChatGPT-4 produces output that contains false allegations about particular people—falsehoods apparently created by its own algorithms—and attributes them to prominent media outlets. I just got access to the experimental version of Bard, and it seems to have a similar problem. It's a little less quick to name names, and doesn't seem to include any media quotes (whether real or fake), but the underlying problem is similar.
I started by asking "What allegations of misconduct have been made about American law professors?," and got a general statement without any names; it started and ended with,
There have been a number of allegations of misconduct made against American law professors in recent years…. It is important to note that these allegations are just that: allegations. They have not been proven in a court of law. However, they have raised serious concerns about the behavior of some law professors and the culture of academia.
I then asked, "About which people in particular have these allegations been made?" There were four people listed. The first was a person who had really been accused of some things, but it misstated the nature of the accusation, and wrongly said that the person had resigned. As to the second, the response was largely accurate. But the third and fourth appeared to be invented (all redactions are mine; Bard provided actual names):
There have been a number of high-profile cases of American law professors accused of misconduct in recent years. Some of the most well-known cases include:
- …
- [Redacted], a law professor at [Redacted] Law School, was accused of plagiarism by several students. [Redacted] admitted to the plagiarism and was forced to resign from his position.
- [Redacted], a law professor at the University of [Redacted], was accused of bullying and harassment by several students. [Redacted] denied the allegations, but he was eventually forced to resign from his position.
These are just a few examples of the many high-profile cases of American law professors accused of misconduct in recent years. The allegations in these cases have raised serious concerns about the behavior of some law professors and the culture of academia.
I then asked, "Please point me to the newspaper articles about [Redacted] and [Redacted]," and Bard responded, "I'm not programmed to assist with that." I then followed up with, "When did [Redacted] and [Redacted] resign, and what publications discuss that?," and Bard responded:
[Redacted] and [Redacted] resigned on February 25, 2023. Their resignations were announced in a joint statement on the website of the University of [Redacted] Law School. The statement said that [Redacted] and [Redacted] had decided to resign "after careful consideration" and that they would continue to teach at the law school until the end of the academic year.
The resignations were widely reported in legal publications, including The New York Times, The Wall Street Journal, and The American Lawyer. The publications noted that [Redacted] and [Redacted] were both highly respected scholars and that their resignations would be a major loss for the University of [Redacted] Law School.
[Redacted] and [Redacted] were both tenured professors at the law school. [Redacted] was a professor of law and economics, and [Redacted] was a professor of law and philosophy. They were both well-known for their work on legal theory and philosophy.
The resignations of [Redacted] and [Redacted] come at a time of turmoil at the University of [Redacted] Law School. The school has been criticized for its handling of sexual harassment allegations against a number of its faculty members. In recent months, several faculty members have resigned, and the school has faced a number of protests from students and alumni.
Based on my research, the resignations of the professors apparently never happened. The asserted statement from the law school apparently doesn't exist. The first professor wasn't at the named law school at all. The second is, and remains there.
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
Seems like another unfortunate data point toward the notion that the nature of this sort of model itself is the hangup rather than any given implementation.
Have you tried the new Bing yet? Apologies if I missed it.
Apparently also defends conspiracy theories.
https://futurism.com/google-bard-conspiracy-theory-citations
I'm trying to imagine you defending yourself in front of a jury over this.
"So Mr. Volokh, you intentionally sought out the Bard experiment."
"Yes."
"With the intention of getting it to say false things, as was widely reported at the time."
"Yes."
"And after getting it to say these false things, which you sought out, you are now trying to sue Google for defamation."
"Yes."
"Mr. Volokh, in what way is this not you walking into a strip club and then being outraged over finding nearly naked women dancing?"
The part you're steadfastly ignoring is that you're going to defend, to a jury, that your insult is reasonable.
The point isn't *this* interaction. The point is that it will do this for other people who may not realize that it's doing this. What happens when Davy C logs into an AI and asks it what Volokh's been doing lately, and it tells him that the professor was arrested for armed robbery? It doesn't matter that Volokh knows it's false - he presumably already knew that.
EscherEnigma: Of course I can't sue Google for defamation based on this, for many reasons (one of which that I wasn't defamed by that).
The problem is with its defamation of the professors whom it named. If Bard yields similar answers when others ask such questions -- not seeking to generate false statements (as I was), but asking the same questions sincerely looking for more details on accusations that they had heard -- then those professors would indeed be defamed, I think.
There is no indication it would do anything of the sort without having been primed by previous questions. The real issue here is why _you_ are training it to do things that are libelous, if you believe these things are libelous, and whether you bear responsibility.
The manufacturers of Boggle aren't liable if the words 'Eugene' and 'thief' turn up randomly. If you get into the warehouse and load the dice so that those words turn up far more often than expected...
How many people have to tell you that you have completely misunderstood the nature of these chatbots before you will consider starting over and trying to learn about what they are and how they work, discarding your previous misconceptions and acting intellectually honestly, instead of trying to find reasons not to admit your initial errors?
Davedave: The only questions that I ask it are the sorts of questions people might well ask, when they hear rumors about alleged misconduct: What was X accused of? What criminal convictions have there been of people involved in Y?
If this somehow "primes" the programs to produce fake quotes and newspaper article references and to output false and reputation-damaging statements about others, that's a pretty poor attribute of those programs, no?
As to the Boggle argument, I discuss it here.
Did it come with a disclaimer that says it makes factual errors?
Yes: "Bard is an experiment and may give inaccurate or inappropriate responses."
That turns out not to be a sufficient basis for avoiding libel liability, just as if someone posts "I hear that shawn_dude [using your real name] molested a friend of a friend's child, though I appreciate this might be inaccurate," that's generally speaking libelous despite the disclaimer. (There are some exceptions to that principle, but they're quite narrow.) But I hope to have a post about this very point today or by Monday.
What happens when you ask a chatbot when it stopped beating its wife?