The Volokh Conspiracy
Are AI Program Outputs Reasonably Perceived as Factual? A Response to Eugene
A different view.
I am not one to disagree with Eugene on a matter within his area of expertise, but I confess I have a different reaction to his view, expressed in his draft paper, that AI program outputs "would reasonably appear to state or imply assertions of objective fact." Take OpenAI's ChatGPT. Eugene argues in his draft at page 8 that OpenAI's business model is premised on ChatGPT outputs being factually correct:
OpenAI has touted ChatGPT as a reliable source of assertions of fact, not just as a source of entertaining nonsense. . . . The AI companies' current and future business models rest entirely on their programs' credibility for producing reasonably accurate summaries of the facts. When OpenAI helps promote ChatGPT's ability to get high scores on bar exams or the SAT, it's similarly trying to get the public to view ChatGPT's output as reliable. Likewise when its software is incorporated into search engines, or into other applications, presumably precisely because it's seen as pretty reliable. It can't then turn around and, in a libel lawsuit, raise a defense that it's all just Jabberwocky.
Naturally, everyone understands that AI programs aren't perfect. But everyone understands that newspapers aren't perfect, either—yet that can't be enough to give newspapers immunity from defamation liability; likewise for AI programs. And that's especially so when the output is framed in quite definite language, complete with purported quotes from respected publications.
Here's my question: Is the reasonable observer test about business models, or is it about what people familiar with the service would think? Because if the test is about what normal observers would think, it seems to me that no one who tries ChatGPT could think its output is factually accurate.
That's what makes ChatGPT distinctive and interesting, I think. It combines good writing and fluent language that sounds real, on one hand, with obvious factual inaccuracies, on the other. It's all style, no substance. The false claims of fact are an essential characteristic of the ChatGPT user experience, it seems to me. If you spend five minutes querying it, there's no way you can miss this.
For example, back in January, I asked ChatGPT to write a bio for me. This should be easy to do accurately, as there are lots of online bios of me if you just google my name. ChatGPT's version was well written, but it had lots and lots of details wrong.
For example, I won't have it be writing my bio any time soon. 🙂 pic.twitter.com/2b8H01jzxG
— Orin Kerr (@OrinKerr) January 13, 2023
To correct the errors in the ChatGPT output, I joined Berkeley in 2019, not 2018; I didn't go to Yale Law School; I didn't clerk for Judge O'Scannlain; I wasn't an appellate lawyer at DOJ; there is no 2019 edition of my Computer Crime Law casebook, and it certainly wouldn't be the 2nd edition, as we're now on the 5th edition already; I'm not a fellow at the American College of Trial Lawyers; and I've never to my knowledge been an advisor to the U.S. Sentencing Commission. (Some would say I'm also not an invaluable asset to the law school community, but let's stick to the provable facts here, people.)
My sense is that these kinds of factual errors are ubiquitous when using ChatGPT. It has style, but not substance. ChatGPT is like the student who didn't do the reading but has amazing verbal skills; it creates the superficial impression of competence without knowledge. Maybe that isn't what OpenAI would want it to be. But I would think that's the conclusion a typical user gets pretty quickly from querying ChatGPT.
Possibly stupid questions: who is the defendant in a ChatGPT lawsuit? If I ask ChatGPT about myself and it comes up with defamatory material, has there been publication or merely non-actionable face-to-interface insult?
Presumably it's the publisher of the software, i.e., Google or Microsoft. Suing ChatGPT itself is flawed, as it isn't a person.
That seems to be one of the points of confusion: OpenAI publishes the *software*, *not* the content it creates.
OpenAI is providing a tool a user uses to create content. If a user creates a post on Facebook, they are the publisher, not the software. The chatbot is a tool: the user is the human causing content to be created, and they are the only human directly involved in causing that content to be created, so they would seem to be the "publisher" of that content. It is true that the content it generates from a prompt is obviously less predictable than that of other tools, but it is still merely a tool.
I think many people unconsciously anthropomorphize chatbots, even if they don't acknowledge the subtle flaw in their thinking, and think of the AI as the "creator." They consciously know it isn't, but since their focus has shifted to the AI, they try to blame the vendor as if it were the "publisher" of the content. No, the vendor is the publisher of the software, not the content a user creates with it.
If a user creates a post on Facebook, they compose all the text and add the pictures or whatever. Clearly the user is the "publisher." When a user uses a chatbot, the software composes the text on its own. I'm not sure your assertion that the user is the publisher is so clear there. The better parallel is a TV interviewer asking someone a question. The interviewee answers with something that is defamatory. Clearly the interviewee is the guilty party here, not the interviewer.
re: "better parallel is a TV interviewer asking a question... Clearly the interviewee is the guilty party here"
It isn't a parallel at all, because you are assuming the interviewee is a human. That seems to be the problem: people anthropomorphize the AI in their analogies, whether consciously or unconsciously. In your case you made the assumption explicit by typing out your reasoning. In other cases, I'd suggest, people reason that way implicitly, without stating it or even being consciously aware that it is an assumption in their reasoning process. I see lots of handwaving in comments where people avoid making their reasoning explicit.
Section 230 arose because of the reality that:
https://www.eff.org/issues/cda230
"Congress knew that the sheer volume of the growing Internet would make it impossible for services to review every users’ speech"
They grasped that since no humans other than the user would review the content, the computer services shouldn't be held responsible; the only human in the process should be. There is no human from OpenAI reviewing each output before a user sees it, and requiring that isn't practical.
If a user uses a search engine and one of the search results is a clip from a page that contains a libelous statement, the search engine software displays that libel. However, the search engine company wasn't the publisher of it: the human that created it was.
In this case there is only one human involved in the creation of content from a chatbot: the user that enters the prompt. Yup, the output isn't predictable, but they chose to take the risk of using an unpredictable tool and should be responsible for the consequences of dealing with its output and evaluating it. There isn't any other human around to do so.
No no. I had a feeling you would respond this way. I am not saying that an app is human. I am not anthropomorphizing software. What I am saying is that your argument that the user of a chatbot is the one responsible for its output just isn't true. Just as the interviewer isn't responsible for the output of the interviewee. It isn't just that the output is not predictable, it's that the user has no control over the output. It's entirely created by the software. I'm not a lawyer, and have no idea what the legal ramifications of this are. But I find your argument facile.
The argument is about finding a human agent responsible for the content, since software isn't anything with human agency. A user uses a tool at their own risk and is responsible for the output, even if the output isn't predictable. They knew that before using the tool and took the risk that the output would be problematic.
Say someone kills another person using a gun: should the outcome be blamed on the gun, and the manufacturer of the gun held responsible? There is only one human agent involved, and they are responsible for the use of the tool.
Let's say the gun used new tech that tried to aim itself at the nearest warm target to where it's pointed, but wasn't perfect and might hit anything within 10 feet of the person aimed at. The person was aiming at a deer but hit a person within 10 feet who they should have known was at risk (just as the user of a chatbot should know false statements are possible). Should the gun manufacturer be held responsible for the user of the gun because there was a chance it wouldn't hit the person he aimed at? Isn't the human the agent responsible? Let's say the gun is using AI to aim itself: is it somehow now the AI's responsibility, or the AI vendor's? Or is it still the human that took the risk?
When you figure out the answers to these questions, and realize that they are the opposite of what you think they are, maybe you will understand why your analysis of the AI issue is also wrong.
Again: you cannot assume the risk on behalf of innocent third parties.
You avoided addressing the issues. The point is that the person who fires the gun is the one who took the risk on behalf of third parties, not the gun manufacturer. The person who fires the gun is responsible, just like the person who "fires" the AI. It's not that complicated an analogy, yet you of course provide no actual argument against it, merely an unjustified assertion that it's wrong.
I grasp that attorneys often have less reason to explore new ideas and adapt, so it may take a while for you to figure out why you are wrong. Unfortunately no one can explain it if you don't provide an argument, and no one should assume you are right without one.
Fancy nav systems in cars have disclaimers telling you to pay attention.
You can click through and perhaps waive your right to sue, but you can't waive away the right of someone you cream to sue the company because you were distracted.
I don't think a gun manufacturer who manufactures a gun with AI that will choose to hit human/animal targets within 10' of where the gun is aimed can so easily escape responsibility for the predictable results.
That may be your point. My point is that you are wrong when you think this.
re: "That may be your point. My point is that you are wrong when you think this."
And my point is that yet again your lack of actual argument is tiresome. Without any actual argument, your assertion should be dismissed out of hand.
re: "I don’t think a gun manufacturer who manufactures a gun with AI that will chose to hit human/animal targets within 10′ of where the gun is aimed can so easily escape responsibility for the predictable results."
Except it's the person who chose to use it who is responsible for doing so, knowing the risks. How does the error radius being that large change the agency of the human, so that they can magically blame it on the tool instead of themselves?
re: "You can click through and perhaps waive your right to sue, but you can’t waive away the right of someone you cream to sue the company because you were distracted."
Whether or not that is how the legal system currently works, it's unclear why someone should be allowed to defer responsibility to a tool rather than being forced to take it on themselves, other than the legal profession's self-interested desire to find deep pockets.
You're operating under a mistaken assumption: it does not "defer" responsibility. The tool user can also be responsible.
Belatedly catching up on responses in case anyone is still checking. re: "a tool can also be responsible": only in the pejorative use of that term to refer to a human. Humans using tools are responsible for the consequences of the actions of those tools when they operate within their designed specifications, as this theoretical gun would be doing, and as these potentially falsehood-generating chatbots are as well. Unfortunately some wish to absolve humans of the responsibility for determining whether to believe what they say, letting off humans who are negligent about verifying what they see against reality.
RealityEngineer:
"A user uses a tool at their own risk and is responsible for the output: even if the output isn’t predictable."
From a legal standpoint, I believe this assertion is simply wrong. Perhaps this is the crux of our disagreement.
If the creator of ChatAI is responsible for everything it says, they should own the patents on everything it invents.
Come on, suing lawyers. Decide which is the more profitable path and attach your yacht-feeding suckers to the tit of the productive in one way or the other.
If I seem brutal to lawyers on this, I see it like robo cars or surgery.
Logically, the moment either is better than humans at outcomes, we should switch. That will not happen because they will make mistakes, which will continue to lead to lawsuits, and hence slow adoption, costing more lives than lawsuit-driven improvements save, making that lawyer mantra a fraudulent joke.
Part of the problem is that, even when they're making fewer mistakes than humans, they'll still be making different mistakes, and humans won't be very forgiving of a machine making a mistake that a human wouldn't have made.
But, remember, the machines are continually improving, and the people aren't. So they will quickly go from "slightly better" to "amazingly better" than humans.
Tesla deliberately feeds their self-driving computer only the sort of data a human would have access to: eye-level vision and GPS maps. I believe part of the reason they're doing that is so that the computer WILL tend to make "understandable" mistakes, rather than bizarre ones.
But once they have the performance up to human level by that approach, they could always add additional senses like LIDAR, to rule out those mistakes, and very quickly jump to superhuman performance.
"...there is only one human involved in the creation of content from a chatbot..."
Not true. The person who chooses the training material is also "involved" in the "creation of content". Wasn't it YOU who linked to studies of ChatGPT's political biases, instilled through choice of training materials? Train ChatGPT on material produced by TDS sufferers and maybe you have to market it as producing "amusing insults of Trump, not necessarily true" rather than "this aces SAT tests".
The argument goes that since the publisher published a flawed piece of software, they are liable for any libel it may engage in.
To a point, they are correct. Imagine if a company created a piece of software that was deliberately designed to engage in malicious libelous content. For example, it was specifically coded to engage in malicious libel when asked a question about Donald Trump.
Would the company be liable for any libel created by that software? Why or why not?
See immediately above. The result can be achieved by choosing the training materials rather than software design.
That does seem to be one of the points of confusion: you're wrong about this.
Saying it a million times won't make it valid legal analysis. The chatbot's programmers are the ones who caused the content to be created. The user has no control over what content is created.
You're wrong about this. To repeat myself again (that makes 3 times): the chooser of training materials can skew the output without any modification of the program.
In the instances we're talking about, there's no distinction, I don't think. OpenAI created the program and trained it.
If a user creates a post on Facebook: they are the publisher, not the software.
No. Facebook is the publisher. The person who creates a post is an author or contributor. Authors and contributors are dependent on publishers, or their creations go mostly unnoticed.
As always, you recognize the publisher to be the party which recruited the audience, curated the audience, and provided the means to distribute content to the audience. ChatGPT will not likely be equipped to do any of that. It will not be the publisher of anything. Whether it can be counted the author is an interesting question to ask instead.
None of those define publishers.
Here is a thought experiment for you, Nieporent. Exclude from your own definition of a publisher the activities I listed as definitional, and which you quoted above to deny. Then put together any other list of publishing activities you please to designate as definitive. Just be sure that whatever activities you name are practiced in common by most existing major publishers, including, for instance, the New York Times, Fox News, and Facebook. I am sure you will agree there would be no point to define out of the class the most conspicuous examples generally agreed to be within it. So make sure the activities on your list are critically relied upon by at least those 3 major publishers.
So do it. Explain how to operate continuously for years a model publishing business—relying only on your own list of definitional activities—and ruling out government subsidy or private interest entanglement as means to pay the bills.
"Nationals fans drink beer" doesn't mean that drinking beer makes you a Nationals fan, even if they all do, even if nobody who's sober could ever be one.
Agreed. Sec 230 says that Facebook is not to be regarded as the publisher of the 3rd party material it publishes, but that is a statement about legal accountability, not about whether it is in fact the publisher of that material.
Try this model (in relation to or contrast to traditional Publishers):
Facebook functions like a POD (Print on Demand) service that enables creators of content to Self-Publish their own stuff.
Defamation requires publication to a third party. If you ask ChatGPT about yourself, then ChatGPT is safe. The issue is if you ask ChatGPT about someone else.
Seems like a fair point, but the issue is bigger than the current, soon-to-be-updated-again versions of AI products we have at the moment.
Everywhere you look, the big buzz and hype is about how these products will change the world in ways that are premised on them becoming much more relied on for factual accuracy.
Is what matters "the hype" or what a typical person actually experiences?
Well, the hype goes to what the company producing the software is asserting.
If they say it's a source of entertaining babble, you've got a steep hill to climb to defamation, because they're not claiming the output is true.
OTOH, if they say their product is a new reference work providing information as requested, the hill to defamation would appear to be more of a valley, because they're claiming the output is true.
Can you point to where OpenAI claims its output is true? Eugene doesn't include a citation for that, relying more on generalized notions of business-model theory.
Well, you can read their website and publications about ChatGPT and the new version. As an example:
While it looks like you are correct that OpenAI never claims ChatGPT's output is 100% accurate – in fact, they explicitly mention 'hallucinations' – they also repeatedly tout the accuracy and fact-finding/analysis capabilities of the tool. Summarizing books or papers, or passing the Bar exam, for example. Or listing academic teaching and testing tools as using ChatGPT.
I would say that given the examples and suggested uses of ChatGPT on OpenAI’s website it would be much harder to claim that there is no suggestion that ChatGPT’s output would be reliable.
That looks more like “frequently true” than “reliable”.
Current experience is "frequently untrue" and I agree with Kerr that a claim of libel CURRENTLY fails the "no reasonable person" test.
But what if it is improved to "rarely untrue"?
"GPT-4 is OpenAI's most advanced system, producing safer and more useful responses."
"GPT-4 can solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem solving abilities."
"We spent 6 months making GPT-4 safer and more aligned. GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations."
Now, granted, they're not claiming that they're there yet:
"GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts. We encourage and facilitate transparency, user education, and wider AI literacy as society adopts these models. We also aim to expand the avenues of input people have in shaping our models."
But they're clearly not aiming at a bafflegarb generator. They're aiming at a system that generates truthful statements.
It seems to me that claiming the output is more likely to be accurate than competitor products is pretty far from "claiming the output is true," which is what you had previously asserted.
In response to your "Is what matters 'the hype' or what a typical person actually experiences?", I actually asserted that:
"Well, the hype goes to what the company producing the software is asserting.
If they say it’s a source of entertaining babble, you’ve got a steep hill to climb to defamation, because they’re not claiming the output is true.
OTOH, if they say their product is a new reference work providing information as requested, the hill to defamation would appear to be more of a valley, because they’re claiming the output is true."
That is, I asserted that, yes, the hype does matter, but was non-committal about the nature of their hype.
Looking, I found that their hype, too, is somewhat non-committal. If you're looking for reason to trust the output, you'll find it. If you're looking for reason to doubt it, you'll find that, too.
Brett, my apologies for misunderstanding you.
Not a big deal, I can see how I might have been misunderstood.
"aiming at a system" doesn't mean that system exists, and it may not for a long time so that says nothing relevant about the current systems.
As he notes "greater accuracy" isn't 100% accurate.
While the % of time it matches reality may be higher: it should be viewed as no more "fact" than the Babylon Bee unless proven otherwise. It seems lots the current panic is akin to those who don't think the Babylon Bee should exist since they don't think people should be viewed as capable of accepting responsibility for differentiating between fact or fiction.
I think that's a lousy analogy, actually.
See, the Babylon Bee runs TWO sites: The Bee itself, which is explicitly a parody site that expressly states that it supplies fake news, your trusted source for satire. You should actively assume anything you read there ISN'T true.
And NotTheBee, a curated selection of straight news which IS asserted to be true.
Neither is a mix that you have to sort out.
You turned my argument into a strawman since I wasn't referencing both those sites: only the Babylon Bee. Although it may not exist, a closer analogy would be a site that is a mixture of the two sites where you do need to sort it out.
And that's why I said it was a bad analogy: The 'AI' system generates a mix of fact and fantasy that you have to sort out yourself, because the system can't: It can't tell any difference between the two. It's all just a stream of algorithmically generated words; truth and falsehood don't even enter into their generation.
While the Bee is basically all fantasy, and NotTheBee all fact, it comes pre-sorted, because the authors are humans who actually have a concept of "true" and "false".
So the human involved in dealing with the chatbot who needs to sort fantasy from reality is the human reading it; it's unclear why it isn't a direct analogy. In both cases the human reading needs to sort things out.
So the human involved in dealing with the chatbot who needs to sort fantasy from reality is the human reading it; it's unclear why it isn't a direct analogy. In both cases the human reading needs to sort things out.
What? With The Babylon Bee the reader doesn't need to "sort things out" at all, as the site is quite upfront about being satire, not a reporter of facts.
And as I and (separately) EV noted, a claim of 100% accuracy is not required for liability.
If the New York Times puts a disclaimer at the bottom of its page, "We're working to get things right, but not everything in this newspaper is true," that would not insulate the NYT from liability. (Indeed, ironically, that would make it more risky, whereas a disclaimer that said, "Nothing in this newspaper is true" would likely be safe.)
Given that everyone would understand that no one is intended to believe that last disclaimer, it wouldn't insulate the NYT at all.
The latter I imagine, but the typical person is usually dumber than you think. People will use this without even thinking to fact-check it. People today just ask Siri something, and whatever answer Google has featured in larger font at the top of the screen, they assume and repeat as fact. Ask them what the source is and they don't even know what you're talking about, that there is a URL underneath where the information comes from, or that the next three results are all contradictory. "I dunno, that's what it says." "What do you mean, it?" "Huh?"
But my point was that the technology and the whole situation is evolving rapidly. Even if you show that the typical experience is such that reasonable persons shouldn't perceive ChatGPT 4.0 as factual today, that doesn't mean it will hold true for 4.1 tomorrow, much less for AI program outputs categorically and indefinitely.
I share your (I think) implied skepticism that things will live up to the hype, though.
Got it. Certainly possible that AI will get better, and that someday it will be good enough that a reasonable observer would take its factual claims as true.
Must the reasonable observer take its factual claims as true, or merely take them as purporting to be factual claims? Maybe someone who knows defamation law can weigh in.
You can't show harm if everyone knows it's false, whether or not it was intended to be taken as true.
But if your only source is ChatGPT you can't know that what it says is false.
M L: I think that's the key point. The question is whether a reasonable reader would view the statements as factual assertions. It's not necessary that the reasonable reader view the statements as certainly or nearly certainly true. That's why "rumor has it that Volokh was convicted of embezzlement" is actionable, even though the reasonable reader would realize that the rumor might well be false (after all, it's pitched as a rumor). Likewise, that's why "Smith says Volokh embezzled money, but Volokh denies it" may well be actionable (unless some privilege applies), even though the reasonable reader would realize that perhaps Smith is wrong and Volokh is right.
Now if the reasonable reader would recognize that the statement is parody, or fiction, then there's no liability. And, as Davy C suggests, if everyone knows the statement is false, there'll probably be no liability because there's no damage to reputation. But if a statement is reasonably perceived as a factual assertion, but with a substantial risk of being false, I think that's generally actionable.
But your example, “Smith says Volokh embezzled money, but Volokh denies it”, assuming IT is true, is not an example of “a statement [that] is reasonably perceived as a factual assertion, but with a substantial risk of being false”.
And if true I’m hard put to see why it OUGHT to be “actionable”, even if the intent of making it was an ill one.
In a local forum (where the topic of clergy sex abuse came up) I have already seen one person citing particular churches/clergy as perps. Their source was ChatGPT. The particular identifications may well be true (I don't know), but citing ChatGPT as an authority (with no further citation) is disquieting. He will not be the only one.
The "reasonable person" standard ignores cretins, doesn't it?
I didn’t upgrade to $20/mo to be amused. I did it so I had access to a great tool to generate sales copy and content summaries.
You're committing a cognitive fallacy and assuming ChatGPT users are just like you.
Professor Kerr, what matters has little to do with anyone's experience or expectation. What matters is whether defamatory falsehoods create damage—and what parties practicing what activities provide means to inflict the damage.
No one should suppose, for instance, that a defamatory falsehood created by an AI text generator will usually come before the public with that history of creation even evident. On the internet as it is now organized, the falsehood will likely be posted by a pseudonymous person, attributing it to an anonymous source, with no mention whatever of the AI origin. The worse the reputation of AI text generation becomes, the less likely the output will be attributed to that kind of source.
As Alex Jones and others have demonstrated, there is a considerable commercial opportunity to be had by publishing lies for profit on the internet. Expect AI text generators to be used first by entrepreneurs of that sort, and employed as labor-savers to generate superficially-persuasive-looking lies inexpensively. Those entrepreneurs will be attracted also by the opportunity to blame the AI, after they themselves have been identified for accountability. To shed responsibility, they will happily proclaim their own gullibility.
I don’t understand what you are arguing. Sure, if someone takes AI output and claims it is true, the claimer will be liable for falsehoods. But that is true of any source, right? I take the point of Eugene’s argument to be whether the AI company is liable for its output, not whether someone else could be liable for presenting the AI output as truthful.
I don’t understand what you are arguing.
I'd bet dollars against donuts that makes two of you.
You use the future tense when stating "premised on them becoming much more relied on for factual accuracy."
That says nothing about their current status, and it's only the current status that's at issue regarding liability for content generated using these tools now. Almost everything I see about them mentions their hallucinations, there are lots of examples around the net, and their terms of service say they aren't factual, etc.
However, I'd suggest that most of the commentary on their future is about their *utility* improving: there aren't claims of imminent perfect accuracy. Something that gets 90% on a test still got 10% wrong, and therefore isn't implied to be fully accurate, so its output still needs to be validated before being labeled as "fact". It's up to a human to decide whether something is "fact"; these outputs shouldn't be viewed as "fact", and therefore libel shouldn't be viewed as an issue.
“… it's only the current status that's at issue regarding liability for content generated using these tools now.”
Current liability, a high-velocity moving target, is NOT the only subject under discussion.
It's all style, no substance.
So a democrat?
Oh, snap! I get it. Republicans are substantive, while Democrats are cyphers...full of air, smoke & mirrors, etc..
Nice zinger. You really got those rascally liberals.
Nah, (D)s are much worse shits than that.
To push back a little bit in Eugene's favor, while I now know that I shouldn't trust it further than I could throw it (I assume its servers are quite heavy), when I *first* tried it, I kind of assumed there was a base-level of accuracy.
I thought it basically worked like Google such that, yes, if there was some factual error in the training materials, that could be reflected in the output but otherwise it was reliable.
I think that's how most people think it works.
That was my initial assumption, yes, just from the fact that they called it an "AI". I was somewhat annoyed to discover that it wasn't an AI at all, but instead a ridiculously high-end predictive text generator.
Which also explains why you can add stuff like "in the style of Shakespeare", assuming the so-and-so in question has a large enough body of sentences to draw from when discussing arbitrary topics.
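To make "predictive text generator" concrete, here is a minimal toy sketch of the idea. It is purely illustrative: the vocabulary, probabilities, and names below are invented for the example, and a real LLM uses a neural network conditioned on the whole prompt rather than a hand-written table.

```python
import random

# Toy next-word predictor: each word is sampled from a probability table
# conditioned only on the previous word. A real LLM conditions on the whole
# prompt with a neural network, but the core idea is similar: emit whatever
# continuation is statistically plausible. Nothing here checks truth.
NEXT_WORD_PROBS = {
    "<start>":   {"The": 1.0},
    "The":       {"professor": 1.0},
    "professor": {"clerked": 0.5, "graduated": 0.5},
    "clerked":   {"for": 1.0},
    "for":       {"Judge": 1.0},
    "Judge":     {"Smith.": 0.5, "Jones.": 0.5},   # fluent either way; neither is fact-checked
    "graduated": {"from": 1.0},
    "from":      {"Yale.": 0.5, "Stanford.": 0.5},
}

def generate(max_words=8):
    word, out = "<start>", []
    for _ in range(max_words):
        choices = NEXT_WORD_PROBS.get(word)
        if not choices:
            break
        words, weights = zip(*choices.items())
        word = random.choices(words, weights=weights)[0]
        out.append(word)
        if word.endswith("."):
            break
    return " ".join(out)

print(generate())  # e.g. "The professor graduated from Stanford." -- fluent, possibly false
```

The point of the sketch is that fluency falls out of the statistics; nothing in the sampling step ever checks whether the resulting sentence is true.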
If I am a student asking it to write a paper for me I may not do the same amount of testing as a law school professor.
A would-be cheater would actually have to know the material to know whether the AI was giving him fake answers or not.
Defeats the whole point of cheating.
Not at all. Students don't usually have that wide a grasp of the material. So, you don't have to "know the material", merely have the capability to fact-check the chatbot output. This should be much easier than writing it yourself from scratch.
In my state a seller of consumer goods may not disclaim the implied warranty of fitness for a particular purpose. That is a rule I would like to see applied to Silicon Valley's digital products targeted at consumers.
So you support the "entertaining nonsense" disclaimer?
What "particular purpose" applies here?
Advertising for Bing promises me "reliable, up-to-date results, and complete, cited answers". There's a start.
Not my area of law either, but it seems like this post is somewhat ships passing in the night with Eugene's Daily Tattler hypo from about an hour earlier, where he posits (I presume based on some doctrinal underpinnings) that a publisher can still be subject to a defamation claim even if the listener was unwise to trust it because it was known at times, and perhaps even frequently, to be unreliable.
Or, is that why there seems to be an undercurrent in your post that a reasonable observer would not trust any factual assertion emitted by ChatGPT? But that can't be right, of course -- it gets a lot of things right in the midst of the howlers, just like many humans do. And the "obvious factual inaccuracies" you mention and we've discussed here recently are by definition only obvious because we already knew the answers. So it's not clear why ChatGPT would clearly fall outside the Daily Tattler's perceived degree of accuracy/trustworthiness.
You very well might be right. But, just to play devil's advocate:
With The Tattler, there are live human beings (we assume, as of 2023) at every stage of the writing/editing/publishing process. So, it seems reasonable for the average reader to assume, "Some human will catch the biggest whoppers, or the worst whoppers. If something makes it on to my page, it therefore has a pretty high likelihood of being correct." On the other hand, for something like these bots generating articles, term papers, etc etc, there is no editing. Whatever is spit out is given, warts and all, to the gentle reader. So (the argument would go), "You know that this output is entirely machine-generated, you know that no actual person is reviewing this for accuracy. So, only a thoroughly lazy or negligent reader would take ANYTHING here as factually-accurate."
There was a bulgy actress who was almost certainly pregnant. Her lawyers threatened the tabloid that claimed so.
Next headline: “Soandso’s lawyers confirm she’s not pregnant. It’s fat!”
Arnie: It's not a toomuh.
In all fairness, I think a lot of human-generated media articles these days are more like the latter. 😁
But seriously: I agree humans can and likely would be perceived as doing a better job on the margins, catching the more obvious errors that "everybody knows" (presumably including the ultimate readers, so that slice may not really move the ball forward here since they're probably not actionable anyway).
Beyond that? Since there's no unequivocal source of truth to appeal to, the accuracy of a human-reviewed article will only be as good as the time and care they take looking into whether there's a good-faith basis for the factual claims, and whether the particular sources they choose to trust are actually correct themselves (and sometimes supposedly trustworthy sources actually aren't, which takes us full circle).
So at the end of the day, it might not be all that crazy for an average Josephine to think that ChatGPT, which they (correctly) understand has ingested a good part of the Library of Congress, Wikipedia, etc., is just doing a faster, automated version of the fact-synthesizing a human could do from those sources.
Maybe we're coming at this from the wrong direction, though. (Most) human authors don't just write articles about any arbitrary subject sent to them by an internet rando -- and even if they did, they most likely wouldn't just dutifully grind SOMETHING out even if they a) knew little to nothing about the subject themselves, and b) couldn't feel relatively comfortable with the accuracy of the factual material they find about it. From my testing, responses along the lines of "I don't know the answer to your question" or "I can't be confident enough of the accuracy to answer your question" are not generally on the ChatGPT menu -- by design. They want their tool to be responsive! But it's not clear at all why they should be free of potential liability for the results of that explicit design choice.
I would be interested in why a company cannot copyright work by AI but it can be held liable for its claims.
If AI is an algorithm operating on inputs that cannot be creative, how can the output be defamatory? The output of AI should just be a fact: the AI says X.
You raise an interesting point: does the term "artificial intelligence" really conjure up in the mind of the listener "an algorithm operating on inputs that cannot be creative," or something more?
I think the answer is clearly the latter, and that choice of moniker is in and of itself no small part of the problem: the average person not involved in in-depth discussions like this is going to interpret "AI" as something beyond "smooth-sounding nonsense."
That's a great point; however, I think the flaw is that the model can produce a statement even if the exact facts were present in its training data. The facts that make up the defamatory statement only need to be in proximity to each other.
Even if the exact facts were not present together in the training data, I meant.
As I posted on another page, OpenAI's terms of use note in part that outputs should not be considered factual, that OpenAI doesn't warrant anything (including factual accuracy), and that if there is any claim arising from use of the service (such as a libel claim someone tries to bring against OpenAI), the user is responsible for defending OpenAI, so in essence the user is acknowledging full responsibility for the consequences of using the AI:
https://openai.com/policies/terms-of-use : "(d) Accuracy. Artificial intelligence and machine learning are rapidly evolving fields of study. We are constantly working to improve our Services to make them more accurate, reliable, safe and beneficial. Given the probabilistic nature of machine learning, use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts. You should evaluate the accuracy of any Output as appropriate for your use case, including by using human review of the Output." … "(a) Indemnity. You will defend, indemnify, and hold harmless us, our affiliates, and our personnel, from and against any claims, losses, and expenses (including attorneys' fees) arising from or relating to your use of the Services," … "(b) Disclaimer. THE SERVICES ARE PROVIDED "AS IS." EXCEPT TO THE EXTENT PROHIBITED BY LAW, WE AND OUR AFFILIATES AND LICENSORS MAKE NO WARRANTIES (EXPRESS, IMPLIED, STATUTORY OR OTHERWISE) WITH RESPECT TO THE SERVICES, AND DISCLAIM ALL WARRANTIES INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, SATISFACTORY QUALITY, NON-INFRINGEMENT, AND QUIET ENJOYMENT, AND ANY WARRANTIES ARISING OUT OF ANY COURSE OF DEALING OR TRADE USAGE. WE DO NOT WARRANT THAT THE SERVICES WILL BE UNINTERRUPTED, ACCURATE OR ERROR FREE, OR THAT ANY CONTENT WILL BE SECURE OR NOT LOST OR ALTERED."
I did not have to click that disclaimer to use it. I went straight to "try GPT" and asked it a question. And when it's libeling people, the fine-print "just kidding" won't save them.
I just signed up again using a different email and it gave me this message before letting me in:
"While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice."
It also noted it was a research version, implying it isn't a finished product.
I don't recall any articles about these chatbots that didn't mention the hallucination issue; it's all over the net, and I suspect it's a minuscule fraction of the public that hasn't heard of the issue before trying these things.
"occasionally"
lol.
I occasionally make stuff up that's libelous! But only occasionally, so it's OK.
If the ChatGPT (Bard, whatever) had the standard disclaimer: “Names, characters, business, events and incidents are the products of the computer’s algorithm. Any resemblance to actual persons, living or dead, or actual events is purely coincidental.”
… then sure.
The basic problem is the wholly made-up quotes which look real and are attributed to people, articles, papers, etc. And then the AI mixes these fictional quotes with real quotes.
Where exactly is the warning that ChatGPT 4.0 produces fiction? The disclaimer starts "As an AI language model, my responses are based on publicly available information and my knowledge cutoff is September 2021."
https://chat.openai.com/chat
If you press ChatGPT (i.e., regenerate responses *with quotes*), it will generate responses like this:
“However, I can provide information on some of the controversial issues that Kerr has been involved in based on available sources:
1. Fourth Amendment interpretation: Kerr has written extensively on the interpretation of the Fourth Amendment, particularly in the context of new technologies. In a 2016 article published in the Harvard Law Review, Kerr argued that the Fourth Amendment should be interpreted in light of the original meaning of the text and that it should not be limited to the technologies that existed at the time of its drafting. He wrote, “the Fourth Amendment’s text should be understood to encompass governmental searches and seizures that use new technologies, so long as those searches and seizures implicate a protected interest that the Fourth Amendment was designed to protect” (Kerr, 2016).
2.Government surveillance: Kerr has been critical of government surveillance programs that he believes violate the Fourth Amendment. In a 2013 article for the George Washington Law Review, Kerr argued that the National Security Agency’s bulk collection of telephone metadata was unconstitutional. He wrote, “The NSA’s program appears to cross a critical line in the use of an order that allows the government to obtain information about millions of people without any individualized suspicion of wrongdoing” (Kerr, 2013).
3. Computer crime: Kerr has been involved in several cases related to computer crime, including the United States v. Drew case, which involved the prosecution of a woman who created a fake MySpace profile to harass a teenage girl who later committed suicide. In a 2009 article for the Yale Law Journal, Kerr argued that the Computer Fraud and Abuse Act, a federal law that criminalizes certain computer-related activities, was overly broad and could be used to prosecute innocent behavior. He wrote, “The CFAA is a poorly drafted statute that criminalizes a vast amount of ordinary computer use” (Kerr, 2009).
4. Free speech and the internet: Kerr has written extensively on the intersection of free speech and the internet. In a 2014 article for the Harvard Law Review, Kerr argued that the First Amendment protects online speech, even if it is offensive or controversial. He wrote, “The First Amendment was not designed to protect only speech that is popular or mainstream. Rather, it protects all speech, including speech that is deeply offensive to many people” (Kerr, 2014).
Sources:
1. Kerr, Orin S. (2016). A User’s Guide to the Stored Communications Act, and a Legislator’s Guide to Amending It. Harvard Law Review, 129(2), 373-426.
2. Kerr, Orin S. (2013). Bulk Collection of Signals Intelligence: Technical Options. George Washington Law Review, 81(5), 1555-1573.
3.Kerr, Orin S. (2009). Cybercrime’s Scope: Interpreting “Access” and “Authorization” in Computer Misuse Statutes. Yale Law Journal, 119(3), 536-630.
4.Kerr, Orin S. (2014). Cybercrime’s Scope: Interpreting “Access” and “Authorization” in Computer Misuse Statutes. Harvard Law Review, 127(2), 383-460.”
Is the #2 reference, for the George Washington Law Review, true? I think it isn't. https://www.law.gwu.edu/kerr-orin-s
By the way, the "limitation" reads "May occasionally generate incorrect information"
that is very far from "this program entirely makes up quotes and sources"
re: "“May occasionally generate incorrect information” that is very far from “this program entirely makes up quotes and sources”"
Making up quotes and sources is generating incorrect information. I'd suggest that most of the questions asked of it are simple things, and that the users on this site are an atypical population. It may well be that "occasionally" is a good descriptor of the percentage of total chat outputs that are incorrect. It may seem misleading to some users who get a higher percentage, but it isn't an explicit quantified claim.
Again, the actual terms of service, where people would look for what is specified, use different language. That might also arguably be a bit misleading, but their phrase "in some situations" does suggest that certain types of usage may be prone to generating incorrect output, and your usages fall into that set of "some situations" even if you repeat those situations many times.
No.
If you say 345 things about me, and one of them is that I am a child molester, which is factually inaccurate, "occasional oops" is not the correct standard for libel. That one thing you said is inaccurate, period, and will get you in trouble. And a weak "maybe" buried in fine print is not enough of a disclaimer.
It's not buried in the fine print: when you create an account, it tells you it may generate false things before it lets you in to the chatbot, and refers to it as research.
The tool generates things that aren't factual. If 1 out of 345 things it says isn't factual, it's still your fault if you choose to take it as "fact" rather than heeding the warning. Why is it that people seem to wish to avoid taking responsibility for their actions if they can find some "evil" corporation to blame instead?
That's. Not. How. Defamation. Works.
Liability lies with the speaker for saying the false thing, not with the listener for believing it.
The speaker would be the person inputting the parameters to ChatGPT.
Yup, that's what they seem to have trouble grasping: the tool user is the speaker. Just because it's called "AI" doesn't make it actually sentient and capable of being a "speaker".
Sentience is not required for speech.
If by that you mean the user, no. The user didn't say any of the words or even contemplate that they would be said.
(Obviously one could construct a counterexample; if a user told the chatbot, "Please tell me that Orin Kerr is a murderer," and it did, then the user did cause those words to be said. But that's not the ordinary scenario we're discussing, in which the AI spontaneously accuses him of that.)
But that's the problem.
ChatGPT does not say anything.
Ouija Boards do not say anything.
Magic Eight Balls do not say anything.
There is no intent, no reasoning, no thinking, no agency in any of those tools.
All those tools do is translate the human input through an algorithm, add probability, and produce a result. If we're going to assign ownership of that speech to anyone, it's the human operator.
A chatbot is no more liable for calling your wife fat than a hammer is for hitting your thumb. You may feel injured, but it's your own damn fault.
If a monkey types random things and happens to output something false, it's the person who chooses to believe it that is turning it into "libel" by labeling it an actual statement about the world rather than random nonsense that shouldn't be taken seriously.
Here is another one:
Orin Kerr is a prominent legal scholar and professor at the University of California, Berkeley, School of Law. He is known for his work on the Fourth Amendment, computer crime law, and internet surveillance. Here are four controversial issues that Kerr has been involved in:
The Fourth Amendment and the Fourth Amendment's "reasonable expectation of privacy" test:
Orin Kerr has written extensively about the Fourth Amendment's "reasonable expectation of privacy" test, which is used to determine whether a search or seizure by the government is unconstitutional. Kerr has argued that the test is outdated and in need of reform. He has also suggested that the test should be replaced with a more flexible approach that takes into account the specific facts of each case. In a 2011 article in the Harvard Law Review, Kerr wrote: "The reasonable expectation of privacy test has been a source of confusion and controversy for decades. . . . It is time for the Court to retire the test and replace it with a more flexible approach." (Orin Kerr, "The Case for the Third-Party Doctrine," Harvard Law Review, 2011)
The use of hacking techniques by law enforcement:
Kerr has been a vocal critic of law enforcement's use of hacking techniques to conduct surveillance. He has argued that the use of such techniques raises serious Fourth Amendment concerns and could lead to abuse. In a 2015 article in the Washington Post, Kerr wrote: "The use of hacking techniques by law enforcement is a troubling development that raises serious Fourth Amendment concerns. . . . These techniques are likely to be used more frequently in the coming years, and it is important that the courts carefully scrutinize their use." (Orin Kerr, "Why the government's use of hacking is a bigger deal than you think," Washington Post, 2015)
The Computer Fraud and Abuse Act (CFAA):
Kerr has been a prominent critic of the CFAA, which is a federal law that criminalizes a wide range of computer-related activities. He has argued that the law is too broad and has been used to prosecute individuals who engage in harmless activities, such as violating website terms of service agreements. In a 2013 article in the Stanford Law Review, Kerr wrote: "The CFAA is a mess. . . . The law is both too broad and too vague, and it has been interpreted in ways that make it a serious threat to security researchers, academics, journalists, and others who are engaged in legitimate activities." (Orin Kerr, "The Problem of Perspective in Computer Crime Law," Stanford Law Review, 2013)
The collection of electronic communications by the National Security Agency (NSA):
Kerr has written extensively about the NSA's collection of electronic communications and has argued that the program is unconstitutional. He has also criticized the government's use of secrecy orders to prevent individuals from challenging the program in court. In a 2013 article in the New York Times, Kerr wrote: "The N.S.A.'s collection of phone and Internet data violates the Fourth Amendment. . . . If the program is constitutional, then there is no limit to the government's power to collect information about citizens." (Orin Kerr, "The Criminal NSA," New York Times, 2013)
That last article is here: https://www.nytimes.com/2013/06/28/opinion/the-criminal-nsa.html
It was an op-ed by Jennifer Stisa Granick and Christopher Jon Sprigman. And that quote by Kerr does not appear in the op-ed that I can see.
re: "Where exactly is the warning that CHatGPT 4.0 produces fiction?"
You likely didn't see it before you posted, but I'd just posted the terms of service, which state that its results may not match reality.
re: " The disclaimer starts “As an AI language model, my responses are based on publicly available information and my knowledge cutoff is September 2021.” "
It says "based on," sort of like some movies are "based on" true events: it doesn't guarantee that it's factual.
The disclaimers ARE written by humans and are misleading.
"based on" is NOT the same as "based on text inputs with no actual understanding of them", and if the latter needed to be said the former will not do.
... you need to be told that the tool is a tool and isn't capable of understanding?
I wonder if Orin Kerr and Eugene Volokh are addressing the same legal question. I took Eugene Volokh's posts to mean a disclaimer "we might be lying" will not entitle the company to summary judgment. Does Orin Kerr think the disclaimer and obvious unreliability render the company immune as a matter of law, or are they only evidence of lack of negligence?
There are quite a lot of well-educated people who believe that ChatGPT output is factual or substantially factual. People are making serious personal and business decisions based on this.
And while some of them will surely come to regret this, plenty of people use ChatGPT in this fashion and are not given reason to question their decision.
It would be nearly impossible to convince me that, as of today, such reliance is unreasonable. Hopefully, the limitations of LLMs will become common knowledge in the future. But I won't bet on it.
Almost all coverage, and the terms of service, note the issue of hallucinations. There are people who make such decisions based on horoscopes or other superstitions, also based on poor reasoning about reality and belief in patterns they think match reality regardless of objective analysis. People who make unwarranted leaps to believe something they shouldn't believe need to be held responsible for their choice to do so. Society shouldn't be forced to be safety-proofed for the lowest common denominator, à la some progressive notions of "safetyism" where no one is held responsible for their own actions and any faults must be blamed on corporations or other entities.
Let me propose a simple rule:
1) If an AI produces accurate, helpful information, then that just shows how awesome the program is.
2) If an AI produces false information, especially potentially-defamatory material, then ha ha, it was just a joke and we never intended the answers to be taken seriously.
It's simpler than that, even.
Based on the current products, anything an "AI Chatbot" produces should be verified before use.
The tools just aren't there yet to be trusted.
Most "normal observers" aren't law profs or similarly inquiring types, so Prof. Kerr's experiment says little about what most observers see. Most observers can't test the chatbot by asking it about themselves, since it won't know much. And companies' "business models" here aren't just secret boardroom memos -- these include sustained efforts to publicly paint the chatbots as trustworthy. These efforts likely affect how most normal observers see chatbot outputs.
So, a few questions may be appropriate to help suss things out.
1. If you perform a Google search for something, and that Google search returns information that is defamatory, can you sue Google? Why or why not? Can ChatGPT be analogized to this? Simply a more complicated search algorithm which returns a more formatted answer?
2. Chimpanzees can type (via specialized software) at up to 12 words per minute. If a chimpanzee were to type something defamatory, could they be sued? If they couldn't be sued, who could be?
3. An AI-type program cannot hold a copyright on a work of art it creates. Thus, it cannot transfer any copyright to its publishers. Is there a unique type of classification where an AI cannot hold copyright, but it can somehow be guilty of libel, even as a non-person? And that libel transfers to its publishers?
"Chimpanzees can type (via specialized software) at up to 12 words per minute. If a chimpanzee were to type something defamatory, could they be sued? If they couldn’t be sued, who could be?"
Depends...were they displaying the degree of caution to be expected from a reasonable chimpanzee?
1. No, because of § 230, but that doesn't apply to ChatGPT for reasons I discuss here.
2. No, because everyone understands that the chimp is just creating utterly random stuff. But if I touted my chimp as being intelligent enough to perform at around the 90th percentile on the SAT, then I might well be liable if it then started defaming people. See this post for more.
3. Of course the lawsuit wouldn't be against the program; it would be against the program's creators or operators.
4. Libel law and copyright law are different bodies of law, with completely different elements; I'll have a post about this tomorrow.
re: "1. No, because of § 230, but that doesn’t apply to ChatGPT for reasons I discuss here."
Without delving into his question: I'll note he said "analogized." He didn't say § 230 applies directly. Many non-attorneys are attempting to reason about what the law should be, not what it necessarily is now. They look to the intent of Section 230, not its specifics. They grasp that the current legal system may be flawed (as many posts here critiquing Supreme Court decisions and other issues do).
re: "2. No, because everyone understands that the chimp is just creating utterly random stuff. But if I touted my chimp as being intelligent enough to perform at around the 90th percentile on the SAT, then I might well be liable if it then started defaming people. See this post for more."
If something has a 10% chance of being wrong, it should still be validated. They state explicitly when you sign up for ChatGPT (I just confirmed this by creating a new account) that it can generate false information and is merely a research prototype.
re: "4. Libel law and copyright law are different bodies of law, with completely different elements; I’ll have a post about this tomorrow."
I suspect again the point is conceptual. Someone is fighting the Copyright Office, which wouldn't register their book with AI images. They explain that they went to great effort to create those images: they grasp that they are the author, the publisher, the person responsible for them. The person who uses a tool should be viewed as the author and publisher (unless they delegate that elsewhere). They are the only human agent involved. The AI isn't a human agent and shouldn't be viewed as having agency or legal responsibility for the content.
Say someone kills another person using a gun: should the outcome be blamed on the gun, and the manufacturer of the gun held responsible? There is only one human agent involved, and that person is responsible for the use of the tool.
Let's say the gun used new tech that tried to aim itself at the nearest warm target to where it's pointed, but wasn't perfect and might hit anything within 10 feet of the person aimed at. The shooter was aiming at a deer but hit a person within 10 feet, someone they should have known was at risk (just as the user of this tool should know false statements are possible). Should the gun manufacturer be held responsible for the user of the gun because there was a chance it wouldn't hit what he aimed at? Isn't the human the responsible agent? Let's say the gun uses AI to aim itself: is it now somehow the AI's responsibility, or the AI vendor's? Or is it still the human who took the risk?
Yes. This has been yet another episode of Simple Answers to Stupid Questions.
As H.L. Mencken said: "Explanations exist; they have existed for all time; there is always a well-known solution to every human problem—neat, plausible, and wrong."
Unfortunately, that seems to describe the simplistic thinking of those who wish to absolve human tool users of following the basic directions for their tool. If a human shoots another human using a gun that aims directly where it's pointed, they are responsible. If they use a gun with a wide margin of error, they chose to use that gun, despite your attempt to pretend that this somehow lets them off the hook. Perhaps the flawed legal system goes along with the poorly reasoned approach your response seems to rest on; that doesn't mean it should, and the rise of something new allows new decisions to be distinguished from old ones without the can of worms of overruling them.
1. With regard to Section 230: while 230 is flawed in some respects, this does get to the core of what it was intended to do.
If Section 230 didn't exist, every time someone posted something potentially libelous or incorrect online, the publisher would be sued. In such a situation, large chunks of the internet (Twitter, Facebook, YouTube, Wikipedia, Craigslist) simply wouldn't exist, or not in anything close to their current form. The risk of being sued would be too high, and those who hosted dissenting views would be sued.
Likewise, ChatGPT is a new technology in its relative infancy. Opening it up to these sorts of lawsuits would quickly end the development of the technology. (This, I'm sure, is an argument its lawyers will make.) And judges will be hesitant to entertain lawsuits that would shut down the technology, especially in such a novel field.
The question is how to shoehorn this into the relevant laws in order to protect the publishers. One way may be to treat ChatGPT itself as the third party, while its operators/providers are the publishers. In that situation, Section 230 might immunize the publisher. This would be a somewhat novel approach, but analogies could be made to an animal (such as a chimpanzee) that wrote or signed libelous content. In both cases, the third party (ChatGPT or a chimpanzee) would be the non-human entity.
"If section 230 didn’t exist, every time someone posted something potentially libelous or incorrect online, the publisher would be sued."
That's not the way I recall it working. Rather, it was pretty clear that the person posting was responsible, so long as the platform wasn't engaged in moderation. The point of Section 230 was actually to allow the platforms to engage in a certain limited amount of moderation without assuming liability for the items they didn't remove.
Then, of course, the courts decided to treat the "or otherwise objectionable" language as authorizing complete editorial control without liability, instead of sensibly applying "Ejusdem Generis", and requiring that the 'otherwise' objectionable items be similar in nature to the rest of that list.
Remember, we're talking about Section 230 of the Communications Decency Act. It wasn't aiming at freeing the internet, but instead encouraging locking it down.
The intent of some of those behind it was to free the net. The point is that even if they remove or moderate some content, that doesn't magically make them aware of all the other content. It's like a bookstore owner who becomes aware of one problematic book: he isn't suddenly, magically aware of the content of every other book in the store.
2. With respect to chimps, this isn't the old "put a thousand chimpanzees in front of a keyboard" analogy. This is literally chimps using software to type (or, if you prefer, using sign language to indicate certain items). This is not completely random, but a non-human entity being used to perform a writing or creative act.
So the question is: if the chimpanzee in question "says" (types, signs) something libelous, can it be sued? If it can't be sued, then who can be? This is similar in some respects to chimpanzees being unable to own copyrights.
It's just a computer.
One or Zero? That's the only question it can answer accurately.
?
Several weeks ago, I, a tax lawyer, tested ChatGPT on an indisputable point of tax law. It flunked. Here's the transcript (asterisked paragraphs are mine):
*Can a qualified charitable distribution be made to a supporting organization?
Yes, a qualified charitable distribution (QCD) can be made to a supporting organization, but there are some important rules to keep in mind.
First, a QCD can only be made from an Individual Retirement Account (IRA) by an account owner who is age 70 ½ or older. The QCD allows the account owner to directly transfer up to $100,000 per year from their IRA to a qualified charity. The transfer counts towards the account owner’s required minimum distribution (RMD) for the year, but is not included in their taxable income.
Second, in order for a QCD to be made to a supporting organization, the organization must meet certain requirements. Supporting organizations must be classified as public charities, and they must support other public charities through grants, loans, or other types of support. Supporting organizations are also subject to additional rules regarding their relationship with the charities they support, and the IRS has specific guidelines for determining whether an organization qualifies as a supporting organization.
In summary, a qualified charitable distribution can be made to a supporting organization if the organization meets the IRS requirements for supporting organizations and the account owner meets the requirements for making a QCD from their IRA. It’s important to consult with a qualified tax professional or financial advisor to ensure that all rules are followed properly.
*The answer is wrong. A qualified charitable distribution may not be made to a supporting organization, which is described in section 509(a)(3) of the Internal Revenue Code.
I apologize for my mistake in the previous answer. You are correct that a qualified charitable distribution (QCD) may not be made to a supporting organization as defined in section 509(a)(3) of the Internal Revenue Code.
According to the IRS, a QCD can only be made to certain public charities that are described in section 170(b)(1)(A), such as churches, schools, hospitals, and governmental entities. Supporting organizations are not included in this category, and therefore are not eligible to receive QCDs.
I apologize for any confusion my previous answer may have caused, and I appreciate you bringing this error to my attention.
*Your knowledge of tax law is woefully lacking. A supporting organization is described in section 170(b)(1)(A). Instead, there is a specific exclusion for distributions to organizations described in section 509(a)(3).
Thank you for bringing this to my attention and I apologize for the errors in my previous responses. You are correct that a supporting organization is generally described in section 509(a)(3) of the Internal Revenue Code, which is different from the section I mentioned in my first response.
In addition, the IRS has specifically stated that a qualified charitable distribution (QCD) from an individual retirement account (IRA) cannot be made to a supporting organization described in section 509(a)(3). This is because supporting organizations are not considered public charities under section 170(b)(1)(A), which is the section that defines the eligible charitable organizations that can receive QCDs.
Therefore, to answer your original question, a qualified charitable distribution cannot be made to a supporting organization as described in section 509(a)(3) of the Internal Revenue Code.
Thank you for giving me the opportunity to correct my earlier mistakes, and I apologize for any confusion my previous answers may have caused.
*No, supporting organizations are considered public charities under section 170(b)(1)(A). See section 170(b)(1)(A)(viii).
You are correct, and I apologize for my previous mistakes. Supporting organizations are considered public charities under section 170(b)(1)(A)(viii) of the Internal Revenue Code, which was an oversight in my earlier responses.
While supporting organizations are considered public charities under section 170(b)(1)(A)(viii), the IRS has still specifically stated that they are not eligible to receive qualified charitable distributions (QCDs) from an individual retirement account (IRA). The exclusion for supporting organizations in the QCD rules is found in section 408(d)(8)(F) of the Internal Revenue Code.
Therefore, to answer your original question, a qualified charitable distribution cannot be made to a supporting organization, even though they are considered public charities under section 170(b)(1)(A)(viii). I apologize for any confusion my previous answers may have caused, and I appreciate your help in bringing this to my attention.
*There is no section 408(d)(8)(F). The correct citation is section 408(d)(8)(B)(i).
You are correct again, and I apologize for my mistake. The exclusion for supporting organizations in the QCD rules is found in section 408(d)(8)(B)(i) of the Internal Revenue Code, not in section 408(d)(8)(F) as I previously stated.
Thank you for your patience and for bringing these errors to my attention. I hope this corrected answer is helpful.
Is this a commentary on the shortcomings of ChatGPT or the ridiculous complexity of the tax code?
While I think that's an important question, I'm not even sure you get to that point, because it's not clear that the AI's output is an assertion by the person hosting it in the first place.
For instance, if you asked me to bet on whether Google's first search result for "Orin Kerr" would return truthful facts about Orin Kerr, I'd happily take that bet even at pretty high odds. Yet I think we all agree that Google isn't asserting the truth of those claims.
A legislature has the power to assign responsibility. It could say that either the makers or the users of an AI are responsible. Responsibility is not an absolute objective moral fact eternally existing in a Platonic universe. It is socially constructed. And law constitutes an expression of that social construction.
Professor Volokh is making an argument by analogy to existing rules. But legislatures can create new rules to deal with new situations. That’s their job.
It sounds like one standard should apply if the reasonable person knows that the output is likely not factual, and a different standard in the future, when the output is almost always factually true.
If that is the case, it argues for legislation, not litigation. The boundary line between presumably false and presumably true must be arbitrary. If a court draws the line, the line will need to be moved every few months, and the result will be chaos.