The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Why ChatGPT Output Could Be Libelous
My "Large Libel Models: ChatGPT-4 Erroneously Reporting Supposed Felony Pleas, Complete with Made-Up Media Quotes?" post has led to a good deal of discussion, both in the comments here and elsewhere. I much appreciate that, since I'm writing a law review article on the subject, and would love to hear people's thoughts about the matter. (I hope to post a draft here soon.)
One common response, especially among the more technically savvy, is that ChatGPT output shouldn't be treated as libel for legal purposes: Such output shouldn't be seen by the law as a factual claim, the theory goes, given that it's just the result of a predictive algorithm that chooses the next word based on its frequent location next to the neighboring ones in the training data. I've seen analogies to Ouija boards, Boggle, "pulling Scrabble tiles from the bag one at a time," and a "typewriter (with or without an infinite supply of monkeys)."
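To make that objection concrete, here is a deliberately crude, hypothetical sketch of what "choosing the next word based on its frequent location next to the neighboring ones" might mean in code. It is just a bigram counter in Python, nothing like GPT-4's actual architecture, but it shows why critics liken the output to pulling Scrabble tiles:

import random
from collections import defaultdict, Counter

def train_bigrams(corpus):
    # Count, for each word, how often each other word immediately follows it.
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows, start, length=10):
    # Emit words one at a time, sampling continuations by raw frequency.
    out = [start]
    for _ in range(length):
        counts = follows.get(out[-1])
        if not counts:
            break
        choices, weights = zip(*counts.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

model = train_bigrams("the defendant pleaded guilty . the defendant denied the charge .")
print(generate(model, "the"))  # fluent-looking word salad; no notion of truth anywhere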
But I don't think that's right. In libel cases, the threshold "key inquiry is whether the challenged expression, however labeled by defendant, would reasonably appear to state or imply assertions of objective fact." OpenAI has touted ChatGPT as a reliable source of assertions of fact, not just as a source of entertaining nonsense. Its current and future business model rests entirely on ChatGPT's credibility for producing reasonably accurate summaries of the facts. When OpenAI promotes ChatGPT's ability to get high scores on bar exams or the SAT, it's similarly trying to get the public to view ChatGPT's output as reliable. It can't then turn around and, in a libel lawsuit, raise a defense that it's all just Jabberwocky.
Naturally, everyone understands that ChatGPT isn't perfect. But everyone understands that newspapers aren't perfect, either—yet that can't be enough to give newspapers immunity from defamation liability; likewise for lawsuits against OpenAI for ChatGPT output, assuming knowledge or negligence (depending on the circumstances) on OpenAI's part can be shown. And that's especially so when OpenAI's output is framed in quite definite language, complete with purported (but actually bogus) quotes from respected publications.
To be sure, if OpenAI billed ChatGPT as just a fun toy, a sort of verbal kaleidoscope, matters might be different. But it probably wouldn't have been able to raise $13 billion for that.
It's not just "a fun toy," but it's not claiming accurate summaries of the facts either. Maybe one day it will get there. But for now it says very explicitly "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers." What it is good at is writing plausible-sounding text. That is it. But sometimes that is good enough.
Should its promotional material have a disclaimer "for entertainment purposes only"?
I completely agree. OpenAI explicitly highlights ChatGPT's limitations, with prominent reminders like: “May occasionally generate incorrect information.” Relying on its factual assertions without verification is unreasonable.
Prof. Volokh's argument that ChatGPT must either be "Jabberwocky" or a reliable source of factual information overlooks ChatGPT's non-fact-dependent uses, such as enhancing writing, organizing notes, and overcoming writer's block. For now, humans must still take responsibility for verifying facts, as OpenAI has consistently said.
P.S. I drafted this comment but ChatGPT helped me shorten it, after which I double checked to make sure it’s accurate! 🙂
The courts may say that it is foreseeable that users will find the output so convincing that the warning is ignored. The courts may say that the output itself must not sound confident unless the operator of the system is willing to stand by its product. "I think the answer is 42 but I'm not good at math" is safe, "six times nine is 42" is actionable.
Or the courts may say that tech companies are above the law. Could go either way.
If only. Damned Golgafrinchans!
Mere “foreseeability” that a statement might be misunderstood isn’t enough for liability. The Second Restatement (sect. 563) says that “[t]he meaning of a communication is that which the recipient correctly, or mistakenly but reasonably, understands that it was intended to express.” The existing disclaimers make quite explicit that ChatGPT cannot be relied on as a database of facts, so it’s not reasonable for a user to misunderstand the text generated by ChatGPT as an affirmative statement by OpenAI of truthful facts. It’s not really OpenAI’s fault if users ignore the multiple, prominent disclaimers and start publishing falsehoods generated by the machine without verification.
When I think about ChatGPT trying to disclaim its way out of liability I think of Tesla. Tesla says your "self-driving" car needs you to be able to instantly take charge, meaning you need to be paying at least as much attention as if you were driving, and probably even more. Humans don't work that way.
It may matter who the customer is here. If the business model is user subscriptions, a strong disclaimer is like disclaiming the implied warranty of fitness for a particular purpose. If the business model is ads, it's another story. If the business model is resellers, those resellers may be stuck with the liability problem.
"When I think about ChatGPT trying to disclaim its way out of liability..."
This is not what's going on. If WSJ tries to disclaim responsibility for libelous assertions by its writers it is lying. OpenAI is not.
I don't think it works that way. If the Wall Street Journal publishes a bunch of stories, some true, some false, and tries to escape liability by publishing a disclaimer at the bottom of the page: "Note: not everything in this newspaper is true," I do not think it would be successful.
“Note: not everything in this newspaper is true, except this claim here.”
Or should that be: “Note: not everything in this newspaper is false, except this claim here”?
Or would the most robust disclaimer be: “Note: everything in this newspaper is false, except this claim here”?
What if a publisher published a book titled "The Encyclopedia of Mushrooms" which denoted which mushrooms were safe and which were toxic, and people relied on that book, ate mushrooms, and then proceeded to get sick due to incorrect information in the book?
The publisher would not, of course, be liable for defamation. (Unless the princess from Super Mario Brothers were suing, I guess.)
As someone explained to you below when you brought this up yesterday, you're conflating products liability with defamation. Different rules apply. (Also, in the mushroom case, the defendant merely distributed the book; it neither wrote nor edited the book.)
The courts routinely say all sorts of nonsense, which is a lot bigger problem than any defects in ChatGPT.
The better case example here is probably Winter v. G.P. Putnam's Sons.
To summarize (as per https://www.lexisnexis.com/community/casebrief/p/casebrief-winter-v-g-p-putnam-s-sons)
"Wilhelm Winter and Cynthia Zheng were mushroom enthusiasts who became severely ill from picking and eating mushrooms after relying on information in The Encyclopedia of Mushrooms, a book published by the defendant, G. P. Putnam's Sons. They filed suit against the defendant alleging liability based on products liability, breach of warranty, negligence, negligent misrepresentation, and false representations. They alleged that the book contained erroneous and misleading information concerning the identification of the most deadly species of mushrooms. G.P. Putnam’s Sons moved for summary judgment asserting that plaintiffs' claims failed as a matter of law because the information contained in a book was not a product for the purposes of strict liability under products liability law. Furthermore, G.P. Putnam’s Sons asserted that it was not liable under any remaining theories because a publisher did not have a duty to investigate the accuracy of the text it has published. The district court granted summary judgment for the publishing company."
If the publisher of a book does not have a duty to investigate the accuracy of the text that it has published, and can't be held liable for those statements, how are you going to hold the creators of ChatGPT liable for potentially erroneous statements?
"If the publisher of a book does not have a duty to investigate the accuracy of the text that it has published"
What are you talking about? Product liability and breach of warranty are not similar to defamation. A book publisher can absolutely be liable for defamatory statements in a book. Happens all the time.
Really the issue at hand is one of negligence. Defamation includes an element of negligence (unless it is intentional). And the court in this case said the publisher had no duty to investigate the accuracy of the material it published. There was no legal duty to exercise due care.
That the publisher didn't have liability doesn't mean that the author doesn't.
It seems that OpenAI is the author (and perhaps the publisher - accessing it through Bing presumably makes Microsoft the publisher).
And if OpenAI is the author, it can't be sued, any more than a monkey at a typewriter can be sued.
It doesn't have independent agency. (Interestingly, it also couldn't assert copyright, presumably)
I haven't thought a ton about this, but I'm having a really hard time seeing how ChatGPT's output *to the person who wrote the prompt* could be considered libel. That person knows what prompted it, knows the inputs, knows where it came from, knows it was AI generated, etc. Even if they expect there's a pretty good chance it's right... they have to know in the end there's a good chance it just got it wrong. Like wikipedia.
Now if the person who wrote the prompt then shares the output and it's false and defamatory, that's something else.
Wikipedia survives because of Section 230, not because nobody believes it. As a site hosting user-generated content it is practically immune from liability. You have to find and sue the person who wrote the defamatory content. Wikipedia does encourage editors to be responsible when writing pages about living people.
I'm not drawing a parallel with their liability, but rather with their status as authoritative. No reasonable person should just blindly trust either.
In its defense, Wikipedia has some reliably good editing–though not reliably fast–and references are published giving users the ability to read the quoted source material directly.
ChatGPT, OTOH, has no editor and no references and no particular desire to acquire either.
Wikipedia has some terrible editing, too, and it is quite common for the text to be completely unconnected to the citations. Section 230 may protect it from liability for the difficulty in getting it to make prompt corrections, but the “editors” are not protected that I can see. (Any more than comment-writers here are, maybe less because "mere opinion" is more difficult to claim.)
But AI is incapable of knowledge and negligence. And malice. It can be trained to appear knowledgeable or negligent or malicious. But AI itself is incapable of any of those things.
As for this:
“When OpenAI promotes ChatGPT’s ability to get high scores on bar exams or the SAT, it’s similarly trying to get the public to view ChatGPT’s output as reliable. It can’t then turn around and, in a libel lawsuit, raise a defense that it’s all just Jabberwocky.”
Yeah, they probably can…
https://www.npr.org/2020/09/29/917747123/you-literally-cant-believe-the-facts-tucker-carlson-tells-you-so-say-fox-s-lawye
If I let loose a killer robot I'm likely to be held liable despite the robot's lack of a culpable mental state.
Does the same apply to a killer virus?
I took the discussion to be whether the AI can libel, not the programmers.
ChatGPT isn't a killer robot. The distinction ought to be obvious.
Shockingly, when you read Fox News' brief it doesn't say that at all.
NPR is fake news.
Shockingly!
With all due respect to sunlight, quotation marks are the best disinfectant. Skipping them altogether, or stitching a number of 2-3 word quoted fragments together into a sentence as the judge did here, is generally all the tell I need. If they really said it, just show me.
The brief doesn't say what at all?
"You Literally Can't Believe The Facts Tucker Carlson Tells You. So Say Fox's Lawyers"
Even if that was presented as a direct quote, which it was not, you would still be pathetic.
1. The brief doesn't say anything like what the article is claiming.
2. Aren't you the racist idiot who says that white people have earned the enmity of black people? Why should I care what a filthy bigot thinks?
The NPR story quotes the Trump-appointed judge:
"Fox persuasively argues, that given Mr. Carlson's reputation, any reasonable viewer 'arrive[s] with an appropriate amount of skepticism' about the statement he makes."
Again, where do Fox News' lawyers argue that viewers shouldn't believe what Carlson says? The context for what you quote above is "Accordingly, reasonable listeners and viewers of the show should arrive with an appropriate amount of skepticism and come with the expectation that they are, in all probability, going to hear opinion...”
An appropriate amount of skepticism means not believing everything said. Take your quibbles up with the judge.
An appropriate amount of skepticism, in this context, means that a claim that something is "a classic case of extortion" is not a claim that something literally violated the law against extortion.
The fact that the judge's opinion was misleading doesn't excuse NPR. They are perfectly capable of checking the cite.
Take it up with the judge. Fox News is clearly more fake than NPR could ever hope to be.
Like I said, “pathetic regardless.”
Go bark somewhere else, sea lion.
Lol. Racist idiot.
Would the negligence question be directed at ChatGPT itself or the programmers of ChatGPT?
Assigning mental states to software outputs seems like a dead end based on all current law (dealing with human mental states). But since it's software, somebody wrote it.
I am patiently waiting for the day that ChatGPT is smart enough to write its own software/programming, so that we have a real thorny issue that presumably legislation or something would need to be created to address.
For example, ChatGPT is used to create a new ChatGPT6 (or whatever number) where ChatGPT is responsible for writing the code and then that ChatGPT6 publishes some libelous writings.
Right. There are tons of A/B comparisons floating around like this one that indisputably show human value judgments are being baked into the code.
More difficult will be situations where the output is skewed based not on specific coding, but on the choice of training data.
But that's not a "mental state" for the software. If you train predictive AI to estimate the chances for any given college applicant to graduate with a 4-year degree and you train the model using 70 years of that college's retention data, you'll have skewed it heavily in favor of white males. The data isn't racist and sexist. The programmer likely isn't either. But 40+ years ago, society was a different place and the data will reflect that.
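A hypothetical illustration of that point (invented numbers, my own sketch): a "predictor" that simply learns historical graduation rates per group will reproduce the old skew even though no line of code mentions race or sex.

from collections import defaultdict

# Invented records standing in for decades of retention data.
historical_records = [
    ("white_male", True), ("white_male", True), ("white_male", False),
    ("woman", False), ("woman", False), ("woman", True),
]

def train(records):
    # "Training" here is just computing each group's historical graduation rate.
    totals, grads = defaultdict(int), defaultdict(int)
    for group, graduated in records:
        totals[group] += 1
        grads[group] += graduated
    return {group: grads[group] / totals[group] for group in totals}

print(train(historical_records))  # the skew comes from the data, not from the code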
Right. Which is why OP moved on from a mental state for the software itself, to that of the programmers. And I then expanded that to deliberate curating of training data by those same programmers, not bias in the data selected - though they're of course not mutually exclusive and in some cases could be tough to untangle.
Stepping back, this is new technology with a new (and very nebulous) distribution of responsibility for a given output, and isn't going to fit perfectly into existing doctrinal boxes. But that's common for any technological advancement, and doesn't mean we just throw up our hands and give up.
I make this point down-thread, but I think it's better that we are more explicit here when we talk about this technology. It's amazing and people are imagining all sorts of uses for it that aren't intended or likely even fit-for-purpose.
The current versions of AI are probably more technically limited than the average person's imagined use-cases. People are having a hard time understanding how this tool will be used. (No thanks to OpenAI or Microsoft.)
The company that operates ChatGPT (OpenAI), and possibly the nonprofit that funds it. Possibly also a party that provides access to users through an API.
You don't go after the programmers because their liability belongs to their employer, just like their work product. You don't go after the AI because you literally can't file a tort against an entity that is not a legal person (not every state or federal circuit has decided that algorithms are not people, but I feel confident saying they will decide that way, if it ever comes up).
The thing is, a loss has occurred. You can’t change that fact. The only things you can do are have the plaintiff bear those damages, have the defendant pay, or have some third party pay. Between those choices, I pick the defendant, OpenAI/Microsoft.
And if they want to limit liability, don’t answer questions about living, non-public figures
Regarding the gratuitous link to NPR, the defense of Carlson wasn’t that ANYthing he said was “Jabberwocky.” It’s that…
“Two women approach Donald Trump and threaten to ruin his career and humiliate his family if he doesn’t give them money. Now that sounds like a classic case of extortion.”
…is clearly opinion about the nature of what was done, not an accusation of the commission of a prosecutable crime.
This isn’t even a close call, unlike the Rachel Maddow case mentioned, where she said someone was ~”LITERALLY a Russian agent” on the grounds that he’d sometimes sold articles to RT as a stringer.
She didn't say he was "literally a Russian agent," and it wasn't because he "sometimes sold articles to RT as a stringer."
EV, in your previous post you said that a communication to a single person could be libelous, if it causes one of the parties to significantly change his opinion about R.R.'s reputation. But if the person typing the GPT prompts can be seen to try to provoke the bot into making libelous statements about R.R., that person could hardly testify that the answers actually caused him to think differently about R.R.
No actual injury, no tort.
Archibald Tuttle,
>if the person typing the GPT prompts can be seen to try to provoke the bot into making libelous statements about R.R.
I think what you bring up is a possible defense if sued. But doesn't tell us if the bot can be sued in the first place.
And what if the person did not provoke the bot in anyway? The bot seriously just makes things up without the user 'leading' in any way and reports that with great confidence. Unless you've learned how bogus the answers are, it definitely gives the impression of certainty (and sometimes even claims it if you ask!)
How does a typical AI language model represent confidence in the information embedded in its output?
shawn_dude,
I don't know how a typical AI represents confidence. But I had an exchange with ChatGPT that went like this:
Please add these 10 integers
138 210 991 654 965 227 582 933 478 129
The sum of those integers is 5327.
Can you tell me your estimate of the probability that is correct?
I would characterize that as ChatGPT representing high confidence. So I then asked.
Can you add them one at a time and show me the sum after each operation?
Can you compare the sum you just got to the sum you got previously?
lucia_l,
The problem with this exchange is the same problem one might have asking a 6-yo how they know there's a polka-dotted dinosaur living in their closet. ChatGPT is a language model that is intended to converse convincingly with humans. The dialog you quote above is convincingly written, though clearly wrong. Is it operating within its expected parameters? I believe it is given the disclaimers provided by OpenAI. This is obviously not a tool one should use to do math.
I created an account and the sign-in says: "While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice." and "This system is optimized for dialogue. Let us know if a particular response was good or unhelpful."
I ran your test using slightly different language:
please sum these 10 numbers 138 210 991 654 965 227 582 933 478 129
Sure! The sum of the 10 numbers is:
138 + 210 + 991 + 654 + 965 + 227 + 582 + 933 + 478 + 129 = 5327
Therefore, the sum of these 10 numbers is 5327.
Can you add them one at a time and show the total at each step?
Yes, I can definitely do that! Here are the running totals as each number is added:
138 = 138
210 + 138 = 348
991 + 348 = 1339
654 + 1339 = 1993
965 + 1993 = 2958
227 + 2958 = 3185
582 + 3185 = 3767
933 + 3767 = 4699
478 + 4699 = 5177
129 + 5177 = 5327
So the final total is 5327, as we found earlier.
So, in my case, the result was correct. I was pleased to see how it translated my question into the concept of running totals.
could you generate a series of excel spreadsheet formulas that provide a running total?
Certainly! You can use the SUM function along with an expanding range of cells to create a running total in Excel. Here's how you can do it for the 10 numbers you provided:
In cell A1, enter the first number (138 in this case).
In cell B1, enter the formula =SUM($A$1:A1) and press Enter. This formula adds up all the values in the range from A1 to the current row (which is row 1 in this case), giving the running total up to that point.
Copy the formula from cell B1 and paste it into cells B2 to B10. Excel will automatically adjust the range to include the new row in each cell.
Enter the remaining numbers in cells A2 to A10.
Once you've done this, the running total for each row will be displayed in column B.
This is basically correct, though not quite what I was aiming for.
can you write a python function that adds up the numbers?
Of course! Here's an example Python function that takes in a list of numbers as input and returns their sum:
python
def calculate_sum(numbers):
total = 0
for number in numbers:
total += number
return total
You can use this function by passing a list of numbers as an argument, like this:
python
numbers = [138, 210, 991, 654, 965, 227, 582, 933, 478, 129]
total = calculate_sum(numbers)
print(total)
This will output the sum of the 10 numbers, which is 5327.
The language the model is using implies confidence--"Of Course!"--but this is an artifact of its primary purpose of simulating real dialogue. If a person were to act this confident, we'd expect them to understand the social risk they place themselves in should they be wrong. However, software doesn't understand that risk, so it doesn't moderate its use of confidence-inspiring or hedging language in response to a given set of data or listener. It's just copying what other people have done without having the emotional understanding that goes with it.
As a programmer, I'd validate the python code before I use it. I'd do that in all circumstances, even if it was given to me by someone I admire for their python skills. (as a side note: python is a language too, so that has an interesting tie-in to this language model.)
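For example, a minimal sanity check of that generated calculate_sum function (my own sketch, not part of the quoted exchange) might look like this:

def calculate_sum(numbers):
    total = 0
    for number in numbers:
        total += number
    return total

numbers = [138, 210, 991, 654, 965, 227, 582, 933, 478, 129]
# Validate the generated function against Python's built-in sum() before trusting it.
assert calculate_sum(numbers) == sum(numbers)
print(calculate_sum(numbers))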
ChatGPT will get a lot better, especially as it's interconnected to special purpose systems and trained on representing specific datasets with limited scope. But just remember that its actual purpose is to generate dialogue that seems natural to a human--which it does amazingly well.
shawn_dude,
I think you are getting so involved in the issue of "does it make mistakes" or "can we get it to add correctly" that you forgot I was responding to a question you asked.
Your question was "How does a typical AI language model represent confidence in the information embedded in its output?" and my example was responsive to that question showing how it represents confidence in the information.
Key bit is
The point is: Even when it is wrong, it writes statements describing a great degree of confidence-- 100%-- that it is right.
Yes. We can write text that causes ChatGPT to add correctly and do splendid things. It also asserts great confidence-- 100%-- in its correctness when it is wrong.
lucia_l, apologies for not being more direct.
I asked the confidence question because there are two ways I know to describe confidence, one of which is statistical and the other is more of an emotional or opinionated assertion. You responded with the example of it presenting an assertion. This is why I brought up the 6yo child asserting that there was, for sure, a monster in the closet. Even though we know the child to be wrong, they continue to assert confidence in the monster's residence in their closet. But we don't take the child seriously and we know that their lack of experience in the world means they don't understand the implications.
No matter how many times ChatGPT tells you it's 100% accurate, there's no way it can understand the implications of misplaced confidence. It's intended to mimic human dialogue, even dialogue regarding math, not to do math or be self-aware enough to estimate its own flaws or the repercussions of providing bad information.
Another way to think about it is that it can only estimate accuracy using the same toolset it has to do math and other non-language tasks. Meaning that it cannot estimate its own accuracy. But what it is very good at, marvelously so, is writing convincing dialogue and other "simple" language-related tasks.
"...it definitely gives the impression of certainty (and sometimes even claims it if you ask!)"
A chatbot can't claim anything or be certain of anything. Full stop.
Gandydancer,
Well, writing "Full stop" isn't a magic phrase that makes you wrote. A chatbot can answer as follows-- posting that text.
If you don't like the term "claim" for what it did you likely need to coin a new verb for "returning a statement that would called a claim if uttered by an actual human."
I run a service claiming to be a truthful research service that will research and answer questions over the phone. You call me asking for information about someone. You give me specific instructions about the kind of information you want. I make up shit.
Do I have a defense that because you initiated the communication, and because you gave me instructions about what you wanted, you provoked me into making up the shit?
IANAL. Having said that, if you hold yourself out to be an expert in something and charge for the service, then I believe you pick up some legal liability for people acting on that information. There must be some sort of contract law that applies to this.
Whereas ChatGPT has a disclaimer that says it might be making shit up, the person using the software is entering the search term and (should) understand that this is a language model AI and not an expert in anything except convincingly natural text, and it’s potentially free of charge (depending on use.)
One of them seems to imply a sort of factual warranty and the other explicitly warns that the information could be entirely false.
Why is that? If I prompt the program, "I am considering investing in a company run by Archibald Tuttle. Tell me whether Archibald Tuttle is an honest and successful business owner."
And ChatGPT responds: "That crook? He's cheated more investors than Bernie Madoff. Stay away!"
Don't you think my opinion of you might change as a result of the response?
Here is one of the limitations listed with ChatGPT:
ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.
To use ReaderY's phrase, if you ask a person a question and before they give you their answer they also say they "just make shit up" every so often, would a potentially libelous response be taken as fact or given some additional research from a different source?
Is it just me, or does the word "currently" make anyone else's spine tingle? Once they send the barrel over that particular waterfall, I see no way it could possibly end well.
These models appear designed to be merged with a "source of truth" like an energy company's billing system so that customers can speak naturally and ask for billing or other information. Then the purchased natural language engine can interpret the request and similarly format the output in useful ways but the information itself--the "truth" in this case--comes directly from a billing or reporting system with measurable, objective facts. The narrower the user story for these things, the more reliable they should be.
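A rough sketch of that pattern, with hypothetical names throughout (this is not any vendor's actual API): the language model only interprets the request and formats the reply, while the number itself comes from the billing database.

# The "source of truth" is an ordinary database; the model never supplies the facts.
BILLING_DB = {"ACCT-1001": {"balance": 84.17, "due": "2023-04-15"}}

def parse_intent(user_text):
    # Stand-in for the language model: map free-form text to a known intent.
    return "get_balance" if "balance" in user_text.lower() else "unknown"

def answer(user_text, account):
    if parse_intent(user_text) == "get_balance":
        record = BILLING_DB[account]  # the fact is looked up, not generated
        return "Your balance is $%.2f, due %s." % (record["balance"], record["due"])
    return "Sorry, I can only answer billing questions."

print(answer("What's my current balance?", "ACCT-1001"))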
Sure, and for a dramatically narrow use case like that, that makes perfect sense. But there's no objective, unitary data source for life, the universe, and everything, and that's the scope they're trying to train ChatGPT to be able to cover. So my question remains: what do they envision changing down the road that made them qualify that there's "currently" no source of truth? To envision that there ever could be one is to envision a world in which what we currently experience as the majority-rule Twitter mob or similar is now speaking with a single voice through a very large megaphone.
I don't take "source of truth" to mean one massive source of authority. Rather, I took the phrase to mean that ChatGPT is intended to be one of several tools combined to solve a specific problem and an authoritative body of data, like a billing system, is the "source of truth" component.
I don't believe there is any existential threat to one's particular world-view by a language model designed as a human interface into various datasets.
Archibald Tuttle: It's true that if the person actually knows the truth about R.R., and is just trying to get ChatGPT to provide a false statement (e.g., "Tell me something libelous about R.R."), then I agree that the output to that person wouldn't be libel.
But a question such as, "What was [R.R.] accused of?" (the prompt I discussed in my earlier post) may well be asked by someone who is seeking the truth; he'd heard that there were accusations against R.R. (in this case, there were, just not that he pleaded guilty to a federal felony), and he wants to know what they are. That's not an attempt to provoke the bot into making libelous statements, and it could indeed yield results that damage R.R.'s reputation with that person.
In your first post, you said "I appreciate that Large Language Models just combine words from sources in the training data, and perhaps this one just assembled such words together with punctuation (quotation marks). But I would have thought that its creators would have programmed something to check its output, to confirm that anything reported in quotation marks is actually a legit quote."
So it was clear that your purpose was to test GPT, not to find information on R.R.
If you've prompted an AI to generate bad output, and the AI is generally trusted, you can repeat what the AI said without repeating your deceptive prompts. This is not a problem new to AI. Some police say they know how to get a radar gun to make a false reading. "The radar gun said 56" doesn't feel like a lie if the radar gun really did say 56 but the car was going 40.
(I don't know how to make a radar gun lie. I can make a laser gun lie. I can't give you 88 mph on demand. I can make the road or a parked car appear to be moving. The original laser guns from the 1990s are worse than newer guns.)
Is OpenAI the publisher of ChatGPT output and as liable as any publisher for libel?
It's as liable as any other publisher for libel but nothing ChatGPT says can be libel.
I don’t think any court has decided this question but my guess would be yes, if anyone is liable for an AI’s output then the program’s author must be the one.
Of course this still leaves the question open if more than one person took part in writing the program. And if the program uses "machine learning" then the "teaching" process has to count as part of writing the program.
If I were that judge I would lean toward cutting the Gordian knot by ruling that anyone who sees the output of a program such as ChatGPT, knowing that it is the output of such a program, knows or ought to know that whatever it says is untrustworthy.
But that doesn’t cover the case where the person who was fooled by the statement thought that a human had written it. In that case it might be necessary to find the person, if any, who passed ChatGPT’s writing off as a human’s and is therefore to blame.
I don't think that defamation law, even if it could apply, would be very helpful here. It would just have a tendency to mess up and interfere with the advancement of the technology, which is in early stages and advancing quickly.
In my view, ChatGPT ought to be able to escape all liability by warning users that the output, while hopefully accurate and useful, may not be factual. It is what it is. Buyer beware.
I don't think we want to divert the energy of OpenAI to addressing concerns about defamation, because in doing so, it is going to be more careful. And in being more careful, results will be lost and the full potential of the technology may not be reached.
It is better that people just understand what ChatGPT is and what it is not rather than inventing grievances.
Doesn’t liability for air and car crashes interfere with the advancement of aeronautic and automotive technology? Doesn’t having to test drugs in clinical trials interfere with the advancement of medicine? Is the rapid advancement of these technologies a goal so important that society should simply accept that some people will be killed along the way, and if they are it’s just tough noogies for them?
What makes this technology any different? Why shouldn’t it be required to stay in the lab until it’s more ready, like these other technologies?
Feeling insulted by ChatGPT isn't the same as experiencing an air or car crash, ReaderY.
Maybe people should, what is the phrase. Grow a pair?
There are also much more useful uses of scarce court resources.
I’m pretty conflicted about this, but for purposes of discussion, let’s imagine companies get in the habit of asking GPT about applicants when sorting resumes. And GPT has a habit of responding (ETA: inaccurately) to ‘David Welker bar discipline’ with ‘In 1997 David Welker was sanctioned for stealing client funds’.
I’m honestly not sure about that. If google, for example, was accurately listing pages that said that, I wouldn’t see google as liable. And you would have the option of going after whoever posted those pages. But if GPT just gives you the conclusion, instead of the sources, who do you sue?
And if you become unemployable because of this GPT quirk, ‘grow a pair’ seems a bit harsh.
I would be very glad to NOT be hired by a particular employer who was so uninformed that they relied on ChatGPT as their sole source of information in making a hiring decision. Just as I would be glad to not be hired by a company that consulted a Ouija board. That would be a filter that would keep me from associating with people who are ignorant and technologically illiterate and thus less likely to run a successful company. I would rather my work colleagues be smart enough to not use either Ouija boards or ChatGPT to make final decisions.
I don’t believe that a significant number of employers are going to consult ChatGPT to make final decisions, just as they won’t universally come to use Ouija boards. Employers have an incentive to try to use reliable sources of information in evaluating job candidates, and ChatGPT isn’t it.
Can you come up with a scenario where (1) someone is denied a job and (2) the employer is acting reasonably and in their own economic self-interest?
Okay, independently wealthy David Welker is a bad example - but the barely scraping by single mom might be glad to get a job even from a company that doesn't screen resumes according to her preferences.
And a rational company could use GPT to screen. From the company's POV, it doesn't have to be 100% accurate to be useful.
So ChatGPT is so useful and amazing, we should take off all the guardrails, but also so stupid and incompetent that anyone who relied on it is an idiot?
This is like "mom-pants-wearing pussy Obama is a ruthless tyrant coming for our guns".
More like, ChatGPT is amazing when used properly by a human, applying their human intelligence to filter out the good from the bad.
If you are treating ChatGPT like it actually knows anything, you are doing it wrong.
'Grow a pair?'
Isn't this synonymous with standing up and defending yourself rather than passive acceptance?
I would say we don't really yet know where this is going, if anywhere, and on the outside chance it takes a larger role in the dissemination of information, perhaps like a hybrid of Google and Wikipedia, denying people the right to legal recourse just because right now it's a weird toy is a bad idea. Having said that, you can't sue ChatGPT the programme any more than you can sue the code running Twitter, but you can sue Twitter.
ChatGPT isn't just a "weird toy"; it is a tool that has a significant ability to improve human productivity if used in an informed manner.
Denying people legal recourse is precisely what we should do in this circumstance. I believe that some people will experience some level of psychological harm from false information generated by ChatGPT in terms of anxiety, but I think they should "deal with it," as I do not think that a reasonable employer or a reasonable business partner would stop doing business based on a lie derived from ChatGPT.
Why would you think that?
We have courts that rely on "re-offender prediction" programs to decide prison sentences. These programs have been demonstrated to be racist-as-fuck, but courts choose to keep using them.
Youtube and Twitch both use algorithms to detect if a copyright-protected song is being played on stream. This has been shown to block the copyright holders from performing their own music, and frequently catches allowable parody, but Youtube and Twitch still use 'em.
Programs to detect academic plagiarism? Used all over the place, catches lots of false positives.
Fact is, humans have shown that we have a very high tolerance for false positives when it comes to not having to do our own research and making our own decisions. Very high.
So why would you think that if someone uses ChatGPT or OpenAI or something similar to make a "employee evaluation program", people won't use it just because it's known that it's racist/sexist/makes shit up/prejudiced against Muslims/etc.?
Everyone knows these systems aren't perfect. And the errors are accepted. What makes you think the next tool will be perfect before it's trusted, in stark contrast to every previous tool that was trusted before it was merely not-as-bad?
If ChatGPT says that X committed a crime, maybe Y should use Google to find a resource that will allow Y to confirm it?
If Y doesn’t do that, the problem is Y, not ChatGPT. And if X should be able to sue anyone, it should be Y.
ChatGPT is a helpful STARTING PLACE, it does not lead you to final destinations. You have to understand what the tool is and what the tool is not.
You entirely missed my point.
Humans are the ones who will be using these tools. And humans have a long history of trusting tools more then they should.
Stamping your foot and saying "the humans just need to not misuse the tools!" while refusing all guardrails that actually help prevent that is basically a guarantee that humans will misuse the tools.
Or, to put it another way...
I can write software that works perfectly if you follow the procedure we put in the manual. But if you go off-script even a little bit, it blows up and wipes your hard-drive.
That would be bad software, because while --when properly used-- it does exactly as advertised, it is not resilient, robust, or tolerant of human behavior.
Insisting that the "might wipe your hard-drive" version is good enough is an idiot move, because it ignores the human element and what the obvious consequences will be.
I've seen a lot of focus here on the output of ChatGPT and not much about how it accepts natural language input. Also, not all of its output is wrong; it's great at reformatting long text files, writing Drupal modules, and other rote tasks.
To use your car crash analogy-- if Tesla Full Self Driving turned out to be highly unreliable and oversold, would the vehicle's other feature--driving--be worth scrapping until the FSD was perfected?
Some here might view that as a feature, not a bug. This stuff is new and shiny so everyone is standing in a circle ogling it, but it isn't (yet) inevitable.
Others here that are deeper into the guts of this may correct me, but as I understand it, the failure mode under discussion here is inherent to the architecture of the model. If that's so, it's not clear what "advancement" would be on the table other than making baloney output less glaringly obvious than it is now.
I might agree with that if that warning was prepended to the actual output, not buried somewhere in ToS that no one is ever going to read. But that would also have to apply to the increasingly enormous number of third-party apps that (often silently) make use of GPT or similar algorithms to produce output.
I am highly sympathetic to your point about terms being hidden in a ToS. I don't think that ToS should be taken very seriously precisely because no one reads them.
But I think the fact that ChatGPT often produces erroneous output is common knowledge.
To really make sure though, I would be OK with putting the point in the output, as long as people had the option to turn that off. Because as a ChatGPT user, I personally would not want to see that disclaimer repeated over and over again.
It's likely common knowledge to people that know much about it -- unclear how much of the population that covers. But even if it's common knowledge in the general population, that can only be relevant here if people understand the output they're currently reading came from ChatGPT or a similar model. That was my point re third-party apps.
Then maybe you should have a claim against the third-party app developer. I don’t think you should have a claim against OpenAI.
>But I think the fact that ChatGPT often produces erroneous output is common knowledge.
I don't think the fact that it often produces errors is "common knowledge". I also don't think the scope of its errors is widely known. Those of us reading here know it. But the fact that a subset of people who are interested in ChatGPT know it makes stuff up doesn't make that "common knowledge".
After I found it can't accurately add 10 numbers, I sent a note to my dance instructor, who was very happy that he could tell it rates, numbers of dance entries, etc., of students and it would happily return a table with how much each client should be billed and so on. I warned him that in future he has to ask it to do the math "sequentially". He didn't know ChatGPT can't really add, or do simple arithmetic. Why would he? I didn't until someone suggested I check!
(And by the way, if you ask ChatGPT if it's confident about its sum, it tells you it is very confident-- 100% confident. I know, because I asked it after it gave the wrong answer to the sum of 10 numbers.)
I think you are illustrating the point. Even your dance instructor is becoming familiar with the limitations of ChatGPT, which are well-known to anyone paying attention.
I think there are probably going to be some instances of serious harm that can be traced to using ChatGPT. Maybe the next space shuttle accident will be caused by code generated by ChatGPT. Maybe there will be a rogue refrigerator that somehow harms its hungry users. Maybe code in a self-driving car will cause a fatal accident, and the bug will be traced to code generated by ChatGPT.
But who will be to blame? In my view, it will be the engineers that didn't test that code rather than ChatGPT for generating that code.
If you use ChatGPT to generate code, you should know that the code might have bugs. (Just as if you hire a software engineer to generate code, you should know that the code might have bugs.)
If you use ChatGPT to generate information about job applicants, you should know that you had better double check the output.
One interesting use of ChatGPT technology is using Bing to analyze the political views of Twitter users. Of course, its assumptions are partially right and partially wrong when it tries to determine that.
The tool is useful when used correctly with knowledge of its limitations and not useful otherwise.
By the way, ChatGPT, if used properly, will actually make software more reliable not less.
Often, engineers don't write test cases because doing so is time consuming. But with ChatGPT helping them write test cases, there is less reason for code to not be well-tested.
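For instance, here is the kind of boilerplate unit test one might ask the model to draft for the calculate_sum function quoted earlier in the thread (a hypothetical example; a human still has to review it before relying on it):

import unittest

def calculate_sum(numbers):
    total = 0
    for number in numbers:
        total += number
    return total

class TestCalculateSum(unittest.TestCase):
    def test_empty_list(self):
        self.assertEqual(calculate_sum([]), 0)

    def test_matches_builtin_sum(self):
        data = [138, 210, 991, 654, 965, 227, 582, 933, 478, 129]
        self.assertEqual(calculate_sum(data), sum(data))

if __name__ == "__main__":
    unittest.main()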
The product could have been presented, as Professor Volokh said, as a toy with a tendency to simply make things up. But it was not presented to the public that way. If a company raises billions of dollars in capital and commands public attention by loudly proclaiming it is providing a revolutionary new research tool that will provide truthful information on request, and instead the tool just makes things up, then the people it makes things up about have a claim for damages.
OpenAI tools can be locally trained on top of the general language set and used to engage customers via chat on questions relating to specific sets of organizational data. It is trained to understand and respond in natural human language. Maybe think of it more as a user interface rather than a research tool.
Your use of "instead" is unwarranted.
So, could a ChatGPT created article written for a news-site libel someone and then the news-site respond to libel allegations by saying, "Not our fault, the computer program wrote that article"?
Only in the same way keyboards write articles now, surely? It's a tool, a more sophisticated tool than a keyboard, but there's a human involved somewhere, either commissioning, editing and/or making money. If it reaches the point where the human element is completely removed, well, I don't know what then. Can you get algorithms to make more, better algorithms? The internet will be all algorithms talking to algorithms.
No, unless the ChatGPT counts as a wire service, which seems very unlikely. Generally, people are liable for libel even if they just republish something libelous somebody else wrote (with many caveats for different mental states necessary depending on plaintiff and defendant status of course).
I'd imagine there's a human editor who would review the article and make some sort of reasonable effort to ensure a minimum level of accuracy.
" purported (but actually bogus) quotes from respected publications."
That sounds like "reckless disregard" to me.
Language models cannot be reckless any more than your computer speakers can be reckless. And if product liability for engineering errors flows to the engineers, then the legal department would just include a warning (which OpenAI does) regarding what the tool should and should not be used for.
There might be people out there who think drinking bleach is good for your health but the jug in my laundry room explicitly says not to drink it. Is Clorox liable if I do? Can I just say I didn't bother to read the label?
If Clorox went on a whirlwind media bonanza tour talking about all the wonderful things that drinking bleach can do to your body, will the warning on the actual product sway a jury?
Which is to say, this is all going to come down to how these companies end up deploying their new tools. Currently everything is being presented as experimental, in-works, and so-on. But once they're being actively used for a serious purpose? That's when it'll get interesting, and the chat-bot fuck-ups will go from "human" interest stories to serious issues.
I can say one thing for sure: If ChatGPT is sued for libel, you don't want ChatGPT to take the witness stand. After hearing its confident BS given on cross, the entire jury would vote guilty in a heartbeat!
What if ChatGPT refused to swear to tell the whole truth? Can you jail it for contempt?
Just sent you a screenshot of ChatGPT's answer to the question: "If you give me a libelous answer, who is responsible for it?" Here is its answer:
"As an AI language model, I am a machine and I do not have the ability to intentionally provide libelous answers. However, if I were to provide a libelous answer, the responsibility for it would ultimately lie with the person who programmed and trained me, as well as the person who asked the question and utilized the answer. It is important for individuals to use critical thinking skills and consider the validity of information before accepting and sharing it."
Well now. Wouldn't the world be a better place if that actually was the standard all around?
Are you trying to close down the whole Internet?
Not at all. Just the part that sucks. (Yes, yes, I know -- rounding errors and all. But there'd be SOMETHING left.)
Reason.com would cease to exist.
I’ve started a paper too (though more from a data protection than libel perspective), but so far fell down numerous rabbit holes, leading me from Searle’s Chinese Room to the causal theory of reference and Kripke’s “initial baptism”.
In very broad outlines, I try to understand if absent both an intentional stance (the “aboutness” of my utterances, what I try to “refer” in the external world) and a causal model of reference (mere statistic correlation between words, not their objects), the output of a Chatbot can possibly be “about” a natural person. The issue is I think (at least for the next 5 min or so) related to the “Scrabble board” objection, but one level of abstraction higher.
The type of hypotheticals I toy around with are the common situation where more than one person has the same name. (I once gave a talk where there were 5 Dr David Joneses in the room, 3 of which were also Prof. David Jones). There are at least 5 people with the same name as me, I sometimes get their mail, so had to find out a bit about them. So if ChatGPT were to say “BS was a Music critic and entertainment journalist in Ulm. He is now an academic at the UoE” then both sentences are individually true, but together false (there is to the best of my knowledge no single person who was both). So is this “about” me, together with a false attribute that I can request to get corrected (under GDPR)? Or is it “about” the other BS, with a false attribute/mistake, and he can request erasure/correction? Or is it about neither of us, but a fictional BS, someone who may be used for a novel but is otherwise subject of imagination?
I guess the same issues arise with libel – how do you distinguish a false claim “about” a real person from a mere story about a different, fictional person who just happens to share the name with (typically many) real-life people?
Between humans, and even with traditional search engine/online profiling etc, a mix of inner mental states and causal chains (Kripke’s initial baptism) resolve these issues – it is “about” me if there is a causal chain from me and the data I generate to the statement. But these strategies seem not to work for statistical parrots.
A possible alternative is a Hume-type “bundle theory” of identity – if a ChatGPT text has lots of statements that would be true if about me, and they have a certain quality (age, gender, place of birth etc.), then the additional statements are “false about” me as identified. But that runs into all sorts of problems too.
Maybe I should lay off the philosophy for a bit…
But if a statistical analysis can be hearsay absent sufficient foundation, surely a similarly computerized insinuation can refer to the actual referent. It would be different if there were a requirement of animus, given the lack of anima. But saying that the fat, bald man in the doorway can't take legitimate offense, because the computer couldn't really be talking about him seems itself a bit disingenuous.
In another context: whether the armed swarm of drones is "shooting at the cans" (to quote a real jerk) is beside the point, given the danger presented.
Mr. D.
Time to renew a subject few here will welcome considering—the subject of liability shared among contributors and their publishers. That was the legal standard prior to Section 230, and it still applies to legacy media, whether print or broadcast.
Internet fans are quick to denounce as unreasonable the notion that a publisher should be held liable for defamation sourced to a contributor. Internet fans are mistaken to think that.
Publishing is a force multiplier for expressive content. Publishing is defined by activities which publishers practice to multiply the numbers of recipients for expressive or useful content, and to extend the geographic reach of the message. In the process of doing that, something akin to a permanent, indelible record of a defamation may be created.
Absent those activities, which publishers typically practice for commercial gain, and to perpetuate the ability to keep publishing, the content provider would have near-zero capacity to inflict much damage on a libeled person. The range of the publication would be too short, the number of recipients too few, and the time frame would be ephemeral. It is thus the publisher's activity which transforms an almost innocuous occurrence into one which creates and inflicts most of the damage when culpable defamation occurs.
That suggests to me that the question of defamation created by an AI source is closely analogous to a familiar problem—the responsibility of a publisher to vet contributions before publishing them, because they may come from an unreliable source, or even a source intent on inflicting damage deliberately.
I predict that to view the AI publishing challenge any other way will do nothing more than demonstrate with ever-increasing clarity that passage of Section 230 was a legislative blunder. AI activities yet undreamed of will raise again and again questions of responsibility for which satisfactory answers will be lacking. To the extent those activities rely on publishing, a just, satisfactory, and efficacious means to correct that has already been tested and found to work.
Should a ChatGPT's output be considered a fact or opinion?
No.
It should be considered to be exactly what it is: Highly sophisticated, grammatically formatted babbling.
How do you show negligence or actual malice?
So, who would be responsible? The person generating the report or OpenAI/Microsoft for the program?
There are more "people" involved than that, even. There's the people whose data was collected and used to train the model (with and without permission.) There's the people who who collected that data into a massive resource to train and test the model. There's the software developers who build the model and maybe trained it, too, or farmed that out to yet another group of people. There are some folks who likely did some element of QA, though note that their stated purposed for the model may not align with how the users choose to use it. Then, there's Microsoft, who I would not represented as "OpenAI/Microsoft" here because one created a general purpose, natural language model and the other is trying to leverage that user interface by merging it into their lackluster search engine. How their search engine works with its ChatGPT layer is likely different than how ChatGPT works on its own. At which point Microsoft is OpenAI's customer and responsible for their own integration work.
I don’t think your argument here is totally convincing. The problem is that in large datasets there is a huge value in merely being possibly relevant/correct even if that probability is only something as low as 5%. This seems like a fundamentally different kind of thing than imperfect news reporting and likely requiring genuinely new law to deal with.
I feel like you wouldn’t be as tempted to make this argument if the interface didn’t present text as if it were speaking, but instead merely said, "Here's something someone said that we think might be relevant," even though the argument about being advertised as producing useful/relevant answers would be the same.
I mean, Google's search algorithm is valuable because the results it returns are supposed to be relevant to your search query. Now suppose I search for facts about Eugene Volokh and the first result is a story about "E Volokh" being accused of academic misconduct. If it turns out (even though no first name appears in the text of the article) that the story is about Edward Volokh, has Google libeled you because its search is sold and advertised as returning relevant results? After all, the implication of relevance to Eugene Volokh suggests that it's you who committed academic misconduct, not just someone with the same last name (if Google could, they'd avoid returning that result), but we don't call it libel.
I kinda feel like the better analogy here is to your credit score or the risk computation an automobile insurer makes before issuing insurance.
You can easily imagine someone who is denied a job as a commercial driver because some company the employer consults looked at the driver's record and shared the expected monthly cost due to accidents it computed from that record. Given that the company is selling those results as accurate, does that make it libel (implicitly accusing someone of being a bad driver) if it reports a high expected risk even though the individual is actually a very good driver? Say that's because they drove an unusually large number of miles, or because the tickets they got were in an unusually ticket-prone area (and say that information is in some sense available but can't simply be computed from a driving record).
I think not, and while this can raise issues similar to libel, that's why we have laws governing credit reporting and the like. ChatGPT and its kin seem to fall into that category; they just have a broader domain and are often far more accurate.
I mean, it does raise some interesting questions. For instance, what if car-based driving monitoring becomes so good that everyone basically treats its estimates as a sure thing, but it doesn't bother to adjust for some very rare condition and so screws you? Hard question, but I'd lean toward saying that's about how to regulate, not about libel law directly.
Product liability has the so-called "sophisticated intermediary" defense. If I sell something noxious to ordinary consumers I assume the risk of being sued because they didn't notice the "not a beverage" warning label. If I sell the same thing to distributors and give them the appropriate warnings, it is the distributors' responsibility to warn their customers.
This thread has inspired a display of excruciating mental gymnastics, all tacitly in service to avoid responsibility for libel. Suggestions for circuitous legal reasoning, or invention of new law, are unwarranted. They are fatuous. All that is necessary is to reaffirm that publishers share legal responsibility with authors and contributors for libel damages. Put that back in place, and no one will have to worry about nonsense served up for online consumption by computer programs. Whenever such programs suggest a potentially libelous publication, a human editor will fact check whatever needs attention. The vast majority of libelous publications can thus be prevented before they do any damage at all.
You're doing it again. You don't understand the legal issues, or the factual ones.
I will leave the law to you, Nieporent. On the facts, let's begin with your answer to this question: How will an AI writing application recruit and curate an online audience for its output?
Nope, not playing your I've-invented-a-definition-of-publishing game, especially when it's irrelevant to the topic.
"Reaffirm[ing] that publishers share legal responsibility with authors and contributors for libel damages" does absolutely nothing to resolve the issues raised by this post. To wit: who's the publisher? Who's the author?
You can't sue Edgar Bergen for the stuff Charlie McCarthy says.
False.
When it comes to ChatGPT, do lawyers even have any useful contribution to make (other than making its output more relevant for the law)?
I kind of feel that the answer is not. Defamation lawsuits take a LONG time. ChatGPT is moving FAST.
It just keeps on getting better. I believe that ultimately the defamation question is a kind of sideshow. Although lawyers and regulators would like to take center stage, at best they can slow progress down a little; they won't significantly speed it up, and they won't meaningfully slow it down either.
I highly doubt that defamation liability would create the right mix of incentives. Things are already moving in the right direction, and fast.
Welker, your comments seem premised on a notion that if ChatGPT can improve to approximate today's online status quo, all will be well. Or if not all well, at least as well as internet publishing does now.
That misses a key point. The current state of internet publishing falls short. It features baleful capacities, poorly controlled. Almost no one is happy with it. More the opposite.
What will an automated AI internet bring? Those same conflicts and baleful capacities, infinitely multiplied. Established already as a machine to generate opposites, the internet will fulfill its destiny, and swamp the world with opposites—opposites instantly generated, in unlimited supply. While the internet itself advances toward its own epitome, internet fans will find themselves diminished to self-satirical status—privileged to dispute points endlessly, opposed on every point by an invisible phalanx of insensate machines—machines faster, better informed, and more capacious than the humans, and stupider too.
Eugene,
I think you err when you say “. . . it's just the result of a predictive algorithm that chooses the next word based on its frequent location next to the neighboring ones in the training data.”
There is a fundamental distinction between the “predictive” system that we are familiar with from drafting text messages, and a “generative” or “transformational” system for producing coherent text.
As a high school student, back in 1967, I participated in a contest using NSA’s research supercomputer at NYU. This computer, a CDC-6600, was designed by Seymour Cray and had some features to enable it to cope with the complexities of languages. For example, IBM’s 8-bit “bytes” were intended to represent characters; the byte is effectively a virtual IBM typing ball. In contrast, the CDC-6600’s memory consisted of 60-bit “words,” so that it could search an encrypted message for patterns in terms of whole words (in multiple languages). Note that John Nash, as portrayed in “A Beautiful Mind,” was marking up magazines to look for patterns in language that might be recognizable even after encryption; he was at NYU at the time, sponsored by NSA (the Ed Harris character) and was working on the very same computer.
Another student had set out to write a program that predicted words, using a “Markov chain” algorithm, with a depth of one. That is, it only looked at the last word, rather than the last two or three, to predict the next word. And of course there was also a random component, so that it didn’t always write the same thing.
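To make the distinction concrete, a first-order Markov text generator is only a few lines in modern terms. The sketch below is purely illustrative (Python, not anything we ran on the CDC-6600; the corpus and function names are made up): record which words follow which in the training text, then walk those links at random.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Record, for each word, the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=12):
    """Walk the chain, picking a random observed successor each step."""
    word, output = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(build_chain(corpus), "the"))
```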
I challenged him by proposing to write a program using Noam Chomsky’s 1957 English grammar. Chomsky was making a point about how the human mind generated language, but his generative grammar had been developed further by Paul Roberts in Rome. It is structured much more like a branching tree, in which various parts of speech become more complex as an idea is elaborated: verbs will acquire adverbs; nouns will acquire adjectives; etc.
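A generative grammar works differently: it starts from a sentence symbol and expands it through branching rules until only words remain. The toy grammar below is only meant to illustrate that branching expansion; it is not Chomsky's 1957 rules or Roberts's elaboration, and the rules and vocabulary are invented for the example.

```python
import random

# A toy phrase-structure grammar: each nonterminal expands into one of
# several branching alternatives until only terminal words are left.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "Adj", "N"]],
    "VP":  [["V"], ["V", "Adv"], ["V", "NP"]],
    "Det": [["the"], ["a"]],
    "Adj": [["quick"], ["quiet"]],
    "N":   [["student"], ["computer"], ["poem"]],
    "V":   [["writes"], ["reads"]],
    "Adv": [["badly"], ["slowly"]],
}

def expand(symbol):
    """Recursively expand a symbol; anything not in the grammar is a word."""
    if symbol not in GRAMMAR:
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    words = []
    for part in production:
        words.extend(expand(part))
    return words

print(" ".join(expand("S")))
```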
In both cases the result of our efforts was pretty much gibberish. We pulled a few interesting sentences out and called it a poetry contest, but the poetry was clearly in the mind of the beholder. Obviously the field has changed a lot in the past 56 years, but I think that distinction between predictive and generative algorithms endures.
When computers were large, low-volume, high-cost products, the NSA was said to get special hardware on its machines to assist its message-analysis mission. A customer like NCAR would get a floating-point coprocessor, but the NSA would get a bit-manipulation coprocessor with a population count instruction. Allegedly. When the story was current, the NSA officially didn't exist. I knew somebody whose resume included work for "Department of Defense" in "Fort Meade" in the early 1990s. You have to connect the dots yourself.
Yes, the CDC-6600 had a bit-count instruction that was a bit of a mystery. In conjunction with the Boolean operators, it could be used to score the degree of match between two words. Obviously, that could help with decryption; but I knew nothing about that at the time. Instead, I used the bit-count as the basis of a computer dating program. My 11th grade prom was a big success, thanks to the NSA!
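For anyone curious how a bit count scores a match: XOR the two words so the differing bits stand out, then count them. The sketch below shows the idea in modern Python rather than anything resembling the original CDC-6600 code; the function name, word width, and sample values are made up.

```python
def bit_match_score(a: int, b: int, width: int = 60) -> int:
    """Score how closely two machine words agree: XOR marks the differing
    bits, a population count tallies them, and subtracting from the word
    width gives the number of matching bit positions."""
    return width - bin(a ^ b).count("1")

# Two words that differ in only a couple of bit positions score highly.
w1 = 0b101101110001
w2 = 0b101001110011
print(bit_match_score(w1, w2))  # 58 of 60 bits agree
```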
Still completely missing the point here. Apparently my rather sarcastic comment on the previous article wasn't specific enough to stop Prof Volokh from making a fool of himself over this, so I'll try to explain in layman's terms what I thought was obvious.
ChatGPT is a system for arranging words into natural-sounding English based on the inputs it's given. If the inputs are good, true, real-data ones, it will spit out text that is - usually, this is still an early stage - good, true, and based on real data. If the inputs are vague, and based only on the generic corpus of inputs used to train the arranging-words part, CGPT will output plausible-sounding waffle.
It's a bit like the difference between an expert witness giving a prepared statement, and a politician caught on the hop. Both will sound plausible, assuming a basic level of competence for each, but only one will know what he is talking about.
ChatGPT's creators make claims about what it can be used for when appropriately trained and fed high-quality inputs directly relating to the subject at hand - like doing basic customer service, for example. The bar to get over there is only to be better at the job than an uninterested, low-paid call-centre worker.
The idea that ChatGPT's meanderings on random subjects are either expected or purported to be of similar levels of usefulness is incorrect.
Why not?
It worked for Musk and Carlson. Precedent is kind of on the side of being able to handwave your own PR as lies and misinformation the moment you step into court.
If AI output can't be copyrighted because there's no human author, who's your defamation defendant? The software in rem?
Product owner.