The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Why § 230 Likely Doesn't Provide Immunity for Libels Composed by ChatGPT, Bard, etc.
This week and likely next, I'll be serializing my Large Libel Models? Liability for AI Output draft. I had already posted on why I think such AI programs' communications are reasonably perceived as factual assertions, and why disclaimers about possible errors are insufficient to avoid liability. Here, I want to explain why I think § 230 doesn't protect the AI companies, either.
[* * *]
To begin with, 47 U.S.C. § 230 likely doesn't immunize material produced by AI programs. Section 230 states that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." "[I]nformation content provider" is defined to cover "any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service."[1] A lawsuit against an AI company would aim to treat it as the publisher or speaker of information provided by the company itself, as an entity "that is responsible, in whole or in part, for the creation or development of [such] information."[2]
As the leading early § 230 precedent, Zeran v. AOL, pointed out, in § 230 "Congress made a policy choice . . . not to deter harmful online speech through the . . . route of imposing tort liability on companies that serve as intermediaries for other parties' potentially injurious messages."[3] But Congress didn't make the choice to immunize companies that themselves create messages that had never been expressed by third parties.[4] Section 230 thus doesn't immunize defendants who "materially contribut[e] to [the] alleged unlawfulness" of online content.[5]
An AI company, by making and distributing an AI program that creates false and reputation-damaging accusations out of text that entirely lacks such accusations, is surely "materially contribut[ing] to [the] alleged unlawfulness" of that created material.[6] Recall that the AI programs' output isn't merely quotations from existing sites (as with snippets of sites offered by search engines[7]) or from existing user queries (as with some forms of autocomplete that recommend the next word or words by essentially quoting them from user-provided content).
To be sure, LLMs appear to produce each word based on word frequency connections drawn from sources in the training data. Their output is thus in some measure derivative of material produced by others.[8]
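To illustrate, in a deliberately oversimplified way, here is a toy sketch of word-by-word generation from learned word-transition frequencies. This is not a description of how any particular company's model actually works (real systems use neural networks trained on vast corpora), and the words and probabilities below are invented purely for illustration:

# Toy sketch of word-by-word generation from learned word-frequency data.
# Illustrative only: each word is sampled from probabilities learned from
# others' text, yet the assembled sentence may never have appeared anywhere.
import random

# Hypothetical word-transition probabilities, stand-ins for what a model
# might have learned from its training data (made up for this example).
transitions = {
    "<start>": {"The": 0.6, "A": 0.4},
    "The": {"lawyer": 0.5, "professor": 0.5},
    "A": {"lawyer": 1.0},
    "lawyer": {"was": 1.0},
    "professor": {"was": 1.0},
    "was": {"praised": 0.5, "accused": 0.5},
    "praised": {"<end>": 1.0},
    "accused": {"of": 1.0},
    "of": {"fraud": 1.0},
    "fraud": {"<end>": 1.0},
}

def generate(max_words=10):
    """Sample one word at a time from the learned distribution."""
    word, output = "<start>", []
    for _ in range(max_words):
        choices = transitions.get(word, {"<end>": 1.0})
        word = random.choices(list(choices), weights=list(choices.values()))[0]
        if word == "<end>":
            break
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "The professor was accused of fraud" -- each word
                   # borrowed from elsewhere, but the sentence itself is new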
But of course all of us rely almost exclusively on words that exist elsewhere, and then arrange them in an order that likewise stems in large part from our experience reading material produced by others. Yet that can't justify immunity for us when we assemble others' individual words in defamatory ways. Courts have read § 230 as protecting even individual human decisions to copy-and-paste particular material that they got online into their own posts: If I get some text that was intended for use on the Internet (for instance, because it's already been posted online), I'm immune from liability if I post it to my blog.[9] But of course if I don't just repost such text, but instead write a new defamatory post about you, I lack § 230 immunity even if I copied each word from a different web page and then assembled them together: I'm responsible in part (or even in whole) for the creation of the defamatory information. Likewise for AI programs.
And this makes sense. If Alan posts something defamatory about Betty on his WordPress blog, that can certainly damage her reputation, especially if the blog comes up on Google searches—but at least people will recognize it as Alan's speech, not Google's or WordPress's. Section 230 immunity for Google and WordPress thus makes some sense. But something that is distributed by an AI company (via its AI program) and framed as the program's own output will be associated in the public's mind with the credibility of the program. That may make it considerably more damaging, and would make it fair to hold the company liable for that.
Relatedly, traditional § 230 cases at least in theory allow someone—the actual creator of the speech—to be held liable for it (even if in practice the creator may be hard to identify, or outside the jurisdiction, or lack the money to pay damages). Allowing § 230 immunity for libels output by an AI program would completely cut off any recourse for the libeled person, against anyone.
In any event, as noted above, § 230 doesn't protect entities that "materially contribut[e] to [the] alleged unlawfulness" of online content.[10] And when AI programs output defamatory text that they have themselves assembled, word by word, they are certainly materially contributing to its defamatory nature.
[1] 47 U.S.C. §§ 230(c)(1), (f)(3).
[2] I thus agree with Matt Perault's analysis on this score. [Cite forthcoming J. Free Speech L. article.]
[3] 129 F.3d 327, 330–31 (4th Cir. 1997).
[4] The statement in Fair Housing Council, 521 F.3d at 1175, that "If you don't encourage illegal content, or design your website to require users to input illegal content, you will be immune," dealt with websites that republish "user[]" "input"—it didn't provide immunity to websites that themselves create illegal (e.g., libelous) content based on other material that they found online.
[5] Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, 1167–68 (9th Cir. 2008) (en banc). Many other courts have endorsed this formulation. Fed. Trade Comm'n v. LeadClick Media, LLC, 838 F.3d 158, 174 (2d Cir. 2016); Jones v. Dirty World Ent. Recordings LLC, 755 F.3d 398, 410 (6th Cir. 2014); F.T.C. v. Accusearch Inc., 570 F.3d 1187, 1200 (10th Cir. 2009); People v. Bollaert, 248 Cal. App. 4th 699, 719 (2016); Vazquez v. Buhl, 150 Conn. App. 117, 135–36 (2014); Hill v. StubHub, Inc., 219 N.C. App. 227, 238 (2012).
[6] If the AI program merely accurately "restat[es] or summariz[es]" material in its training data, even if it doesn't use the literal words, it may still be immune. See Derek Bambauer, Authorbots, 3 J. Free Speech L. __ (2023). But I'm speaking here of situations where the AI program does "produce[] . . . new semantic content" rather than "merely repackag[e] existing content." Id. at __.
[7] See O'Kroley v. Fastcase, Inc., 831 F.3d 352 (6th Cir. 2016) ("Under [§ 230], Google thus cannot be held liable for these claims — for merely providing access to, and reproducing, the allegedly defamatory text.").
[8] See Derek Bambauer, supra note 6, at __; Jess Miers, Yes, Section 230 Should Protect ChatGPT and Other Generative AI Tools, Techdirt, Mar. 17, 2023, 11:59 am.
[9] See, e.g., Batzel v. Smith, 333 F.3d 1018, 1026 (9th Cir. 2003), superseded in part by statute on other grounds as stated in Breazeale v. Victim Servs., Inc., 878 F.3d 759, 766–67 (9th Cir. 2017); Barrett v. Rosenthal, 146 P.3d 510 (Cal. 2006); Phan v. Pham, 182 Cal. App. 4th 323, 324–28 (2010); Monge v. Univ. of Pennsylvania, No. CV 22-2942, 2023 WL 2471181, *3 (E.D. Pa. Mar. 10, 2023); Novins v. Cannon, No. CIV 09-5354, 2010 WL 1688695, *2 (D.N.J. Apr. 27, 2010).
[10] Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, 1167–68 (9th Cir. 2008) (en banc). Many other courts have endorsed this formulation. Fed. Trade Comm'n v. LeadClick Media, LLC, 838 F.3d 158, 174 (2d Cir. 2016); Jones v. Dirty World Ent. Recordings LLC, 755 F.3d 398, 410 (6th Cir. 2014); F.T.C. v. Accusearch Inc., 570 F.3d 1187, 1200 (10th Cir. 2009); People v. Bollaert, 248 Cal. App. 4th 699, 719 (2016); Vazquez v. Buhl, 150 Conn. App. 117, 135–36 (2014); Hill v. StubHub, Inc., 219 N.C. App. 227, 238 (2012).
So, is it your intention to rely on defamation per se?
Because if you don't, then it seems establishing harm is going to be exceedingly challenging.
Especially since it seems obvious to me that the moment someone takes the chat logs and then publishes them (and thus the lies), liability is going to shift to them rather than the companies that made the AI tools.
Nope. Section 230 does not immunize that. If I write and publish something defamatory online, whether on my blog, Twitter or Facebook, I am liable. If you republish that, you are also liable; that does not get me off.
It's only a platform like Twitter or Facebook that gets Section 230 immunity for third party content.
Nope. 230 does not mention "platforms," and does not apply only to them. See, e.g., Batzel v. Smith, 333 F.3d 1018 (9th Cir. 2003). The facts are a bit convoluted, but it allows the application of 230 to listservs.
What are you talking about? EscherEnigma claimed that if something is republished, the original author is absolved from liability. Which is not the case.
That listservs are also included in Section 230 seems to be a red herring. I know what Section 230 says, but platforms are the main parties that claim its immunity.
Yes, he's wrong on the republishing part. I was saying that your talk about social media 'platforms' being the only beneficiaries of 230 was too narrow.
I wasn't giving a legal definition of who is or is not covered by Section 230, just that republishing does not get the original author off the hook.
"your talk about social media ‘platforms’ being the only beneficiaries of 230 was too narrow."
Is there a legal definition of "platforms" somewhere that you are basing this on? I don't see platforms mentioned in Batzel v. Smith, either.
It seemed obvious to me that Bored Lawyer was colloquially referring to interactive computer service providers, and I don't see why that's not reasonable.
No. The law doesn't use the word "platform" at all, even though lots of people on twitter think it does. That's why I put scare quotes around 'platform.'
He said "a platform like Twitter or Facebook," which is narrow. ICS is a much much much much broader category than that, which was my entire point.
Got it. In general, I think people use the word "platform" even more broadly than ICS, but I see what you're getting at here.
Section 230 is a red herring.
I say liability shifts to a person who makes a blog post based on an AI chat, because they're the human in the situation and have actual agency, responsibility, judgment, and so on, whereas the AI chat program is just a tool, different from your spellchecker only in magnitude.
It's like if your dog bites someone. You're responsible, not your dog, because you're the human in the scenario.
Not sure your canine analogy follows. It certainly doesn't if the AI programmers/platform are liable in the first instance: they don't get relieved of liability by someone else republishing. If anything, they are the equivalent of the dog's owner who failed to keep it under control.
As for defamation per se, that depends on what's in the alleged defamation. Some is, some is not. And even if not, there could potentially be some kind of damages, again depending on what is said.
1. Many jurisdictions don't require proof of special harm for written defamatory material in any event; the Restatement (Second) of Torts takes that view as to libel.
2. But, yes, even jurisdictions that do require such harm in some situations don't require it as to certain "per se" categories, such as statements that falsely say someone was guilty of a crime, acted in a particular way inconsistent with his professional obligations, etc.
3. In some situations, special harm might be provable, e.g., if someone cancels a planned contract with someone else, and eventually it comes out that the reason was the output of the AI program. But in any event, it often won't have to be proved, for reasons given in 1 and 2 above.
I'll have a post on this in the next few days, since that's another section of my draft.
I mean, I can think of a rather infamous recent case where a man very publicly accused another of being a literal pedophile, and the court said "nah, no one would think he was serious".
So I think you might be overstating the ease of libel cases here.
But hey, I fully encourage you to test your theory that you can sue Google because their tool might lie about your reputation to someone gullible enough to forget the reputation of the tool they're using.
I'm sure that kind of precedent-setting case would be great for your reputation.
Defamation is a hard claim to win. That’s true whether you are a human being or an AI computer. The question here is what difference does it make that an AI computer said it as opposed to an individual.
Just to be precise, the court allowed Vernon Unsworth's case against Elon Musk for Musk's "pedo guy" comment to go to trial. The jury then concluded that, on the facts of the case, the statements weren't actionable, presumably because in context the Tweets would be seen as unserious trash talk, and not as a factual assertion.
You already said your legal theory here doesn't rely on any harm to reputation actually happening. You already said that your legal theory here doesn't rely on anyone actually asking the tool about you. You are relying on the possibility that the tool might say something.
And you think a jury is more likely to accept that over "yeah, Musk totally called that guy a pedophile."
Eugene, what I don’t understand here is the publishing piece. These AIs have a randomization element, so that they don’t give the same response twice. That means it’s very much like a 1:1 dialogue between the AI and the individual user issuing the prompts.
I don’t see how that could be considered “publishing,” at least with respect to the AI provider. If the user republishes the AI’s responses, that could count as defamation, as in the republishing of a libelous rumor. But then the user is the publisher, not the AI provider.
For the purposes of defamation law, communication to a single other person is "publishing."
I think what he is saying is that if I type something libelous about you, and then only I read it, I haven't libeled you because there hasn't been the "other" person reading it. And Dell's not liable because, even though it was their machine that turned the random keystrokes into the libelous statement, it was a direct result of my intent to do so.
I think the question is how closely the inputs cause the libelous statement to come out.
The issue is that people seem to be anthropomorphizing the AI as a “speaker” or “publisher” when it isn’t an entity capable of those things. The only entity with rights and agency involved in the process of having an output generated is the user who chooses to use an inanimate tool to generate some text. If a human creates text in Microsoft Word and puts it on the net to be viewed by others via the web server and their browser: none of the software involved in that process is the “publisher” or “speaker”. The speaker and publisher is the human entity that had the tools do all that.
Section 230 was an attempt to make it clear to those who might confuse things that the company providing the website service involved in hosting that user generated content wasn’t the “publisher” of it since there was no human from that website service involved in reviewing that content.
It just happens that this particular tool, when given a prompt, provides less predictable output, but it's still only a piece of software without agency. The human chooses to use it, and accepted the terms of service acknowledging its fallibility.
Yes, it is. Of course, you can't be so dense that you don't understand that when we talk about the AI we're talking about the company that creates the AI/offers it to the public.
Except as I noted: the company isn’t in the room with the person: they didn’t review the content. They aren’t the publisher: the user prompted the tool to create the content.
The person waived liability with the terms of service and chose to use the tool despite that. Yet you wish to pretend the company is somehow still liable for some reason that you don’t specify. Merely because you wish to treat the company as the publisher doesn’t make it so.
You need to provide actual logic, not merely assertions.
Dr. Ed makes a great point. But even putting it aside, let's assume it's something like:
Me> Give me the juicy gossip!
AI> Don't tell anyone but... Eugene has no sense of humor.
David, can you point me to a case where a conversation along those lines happened in private, like on a phone call, and the speaker of "Eugene has no sense of humor" was found liable?
I have been to seminars where counsel has warned that gossiping about colleagues (e.g. who is sleeping with whom) via email *is* libelous and hence ought not be done.
I think my question is really the same as EscherEnigma's. Let's say I email you "Eugene has no sense of humor." Then you publish my email in UCLA's Bob Loblaw Newsletter. Eugene is furious and incurs $1M in damages. Who's liable for those damages, me or you?
Well, obviously the speaker can't be found liable unless the subject of the discussion finds out about it. As a practical matter, Eugene can't sue if he doesn't know. But assuming Eugene does find out about it in some way, he can sue the speaker even though the speaker only communicated to one person.
But sue the speaker for what, exactly? What is the remedy?
re: “single other person”
There is no "other person" in this case: there is only one person involved. It's unclear if some people are confused by the use of the phrase "AI" to anthropomorphize it too easily and act as though it were a "person" in such statements.
The only human agent involved in this process is the person choosing to use a tool to generate content. They agreed to the terms of service acknowledging it may not generate factual content, and then caused the computer to generate content. They are the only human agent involved, but people keep trying to absolve them of any responsibility in the process.
It's like someone who rents a motorcycle, signs some sort of risk waiver taking responsibility for it, and then tries to blame the rental company for the fact that he had an accident, claiming that there should be zero risk involved. Or going further and claiming the motorcycle manufacturer shouldn't have been allowed to release a product that carried any risk, that it was negligent.
There may be some more rational negligence theory at play in this case, but all I see are handwaving assertions that my critique is wrong, with no one actually making a detailed case that can be examined and critiqued. Often people discover the devil is in the details, so they need to provide them so we can see if there are flaws in the theory.
I discuss that in Part I.D of the article; I plan on posting that section today or tomorrow.
I keep coming back to where I started -- what about Reuters?
I think the fact that their brand name is being stolen by fabricated quotes and nonexistent articles is the far bigger issue here.
A hypothetical example -- say AI came back with a statement that Trump's claims as to election fraud were legitimate and cited Routers for the source. That definitely would be defamatory -- to Routers which wrote no such story.
Heck, say National Panhandler Radio (NPR) was falsely attributed to the story. I'd definitely believe it knowing just how far to the left NPR leans, and as they exist partially on donations from people who don't particularly like Trump, they'd clearly have harm.
Let's say you have a friend Jeff. You know Jeff, Jeff lies for attention. He makes things up all the time, about anyone and everyone, because he likes the attention. Furthermore, Jeff is about four years old.
Jeff is the one who just told you that "Trump’s claims as to election fraud were legitimate and cited Routers [sic] for the source".
If you believe Jeff, is that really Jeff's fault, or is that yours? Especially if he tells you where he heard it, and you don't double-check?
If you think it's ridiculous for you (or anyone else) to take Jeff at his word, then good on you.
Now replace "Jeff" with "Chatbot AI". If you suddenly swap to "nah, we should totally trust it and not be responsible for checking what it says, even after the many stories about how unreliable it is", then... well, I guess you're who Volokh is hoping will be on the jury.
An important fact to deal with is that Meta's LLM got leaked to 4chan. So there will be no shortage of extremists retraining it to generate hate speech, including defamatory content.
So you have OpenAI, Google, Meta, etc., all trying to stop their LLMs from generating objectionable content, including defamatory speech. And on occasion they will inevitably fail.
And you have various parties who will be explicitly training their LLMs to generate objectionable content, including defamatory speech. And most often they will inevitably succeed.
Sec. 230 immunizes info from "any person or entity that is responsible". Why can't the LLM be the entity? AI is not sentient enough to be a person yet, but surely it can be an entity. It is the entity that produced the erroneous info about you.
Also Google hid behind sec. 230 to avoid any liability for calling G.W. Bush a "miserable failure". It used to be that googling that phrase would return Bush as the top hit. This was obviously perpetrated as a political prank on the part of Google employees. What exactly is the difference between LLM libel and one of these Google bombs?
No. There's no liability with or without Section 230 for that.
Checking legal definitions of entity: it is something with legal rights, and this level of AI doesn't have them. It's not an entity, it's a program. However, I'd suggest many seem to implicitly react that it's an "entity" and try to blame it. Then of course, since it's not an entity, they instead place the blame on the company that created it.
However, I'd suggest that leads to subtle logical flaws in their thinking, since the fact that it isn't an entity changes things: it means there is no entity involved in creating the content except the user who uses the tool. They keep trying to avoid blaming the user, who chose to use the tool, took the risk, and accepted the terms of service for using it.
Also Google hid behind sec. 230 to avoid any liability for calling G.W. Bush a “miserable failure”.
An opinion you disagree with is not defamation.
It used to be that googling that phrase would return Bush as the top hit. This was obviously perpetrated as a political prank on the part of Google employees.
It was obviously a political prank perpetrated by unaffiliated individuals who understood how Google's search algorithms worked, hence the phrase "Google bombs" (they were bombing the data feeds Google used with the associations they wanted).
The only part Google or Google employees played was when they modified the algorithms so that Google bombing was more difficult.
No, Google engineers made google bombing of leftists more difficult, while they left the Bush prank in place.
You might not think that it is so funny if a search on "child molester" turns up your name as the top hit.
Are terms of service irrelevant these days? OpenAI's terms of service note in part:
https://openai.com/policies/terms-of-use
"(d) Accuracy. Artificial intelligence and machine learning are rapidly evolving fields of study. We are constantly working to improve our Services to make them more accurate, reliable, safe and beneficial. Given the probabilistic nature of machine learning, use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts. You should evaluate the accuracy of any Output as appropriate for your use case, including by using human review of the Output." …
"(a) Indemnity. You will defend, indemnify, and hold harmless us, our affiliates, and our personnel, from and against any claims, losses, and expenses (including attorneys' fees) arising from or relating to your use of the Services," …
"(b) Disclaimer. THE SERVICES ARE PROVIDED "AS IS." EXCEPT TO THE EXTENT PROHIBITED BY LAW, WE AND OUR AFFILIATES AND LICENSORS MAKE NO WARRANTIES (EXPRESS, IMPLIED, STATUTORY OR OTHERWISE) WITH RESPECT TO THE SERVICES, AND DISCLAIM ALL WARRANTIES INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, SATISFACTORY QUALITY, NON-INFRINGEMENT, AND QUIET ENJOYMENT, AND ANY WARRANTIES ARISING OUT OF ANY COURSE OF DEALING OR TRADE USAGE. WE DO NOT WARRANT THAT THE SERVICES WILL BE UNINTERRUPTED, ACCURATE OR ERROR FREE, OR THAT ANY CONTENT WILL BE SECURE OR NOT LOST OR ALTERED."
Though I guess the whole argument is that people are too dense to understand the concept of an AI not being guaranteed to be factual, so you assume they can’t be allowed to be considered mentally competent to have consented to the terms.
Are you mentally ill? The terms of service are a contract between the company and the user. They only control the company's potential liability to said user. They are irrelevant to a third party's rights.
Seriously? If you have trouble noticing the relevant specifics that you seem to wish to ignore, it states that "use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts," and that OpenAI and its affiliates "DISCLAIM ALL WARRANTIES INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE."
So no one should take what it states as a statement of fact with any guaranteed connection to reality.
It also states: "You will defend, indemnify, and hold harmless us, our affiliates, and our personnel, from and against any claims, losses, and expenses (including attorneys’ fees) arising from or relating to your use of the Services,"
It states the user will "defend... us [OpenAI]... against any claims, losses and expenses (including attorneys' fees) arising or relating to your use of the Services".
If some third party claims there was libel generated and sues OpenAI, that is a claim arising out of the user's use of the service.
Do I need to explain in simpler terms? Perhaps a chatbot can explain it if you don't understand it.
Even if anyone actually read the Terms of Service (not even the most OCD lawyer does), they are only relevant for users. Users aren't the ones who would be suing OpenAI. Third parties who have arguably been defamed would be.
Once again: you input, "Tell me about David Nieporent." It outputs "David Nieporent is a serial sexual molester of aardvarks." I want to sue over that defamatory statement. Whether they disclaimed warranties is legally irrelevant to me. Whether you're required to defend and indemnify them is legally irrelevant to me.
"a serial sexual molester of aardvarks"
Wow, must be a lot of aardvarks where you live.
Well, not anymore.
Although the quoted part of Section 230 is of questionable relevance, the EFF for instance notes the basic underlying concept:
https://www.eff.org/issues/cda230
"Congress knew that the sheer volume of the growing Internet would make it impossible for services to review every users’ speech....
Section 230 embodies that principle that we should all be responsible for our own actions and statements online, but generally not those of others."
There is no human at OpenAI or these other companies reviewing the output speech. The whole intent of it was to not hold humans liable for content that they never reviewed. The only human involved in the process is the one choosing to use the tool and causing it to generate text. Yet rather than acknowledging that they should be treated as mentally competent to grasp that the output isn't a statement of "fact" (merely possibly useful, and might be fact or fiction, but not guaranteed to be "fact"), you are hunting for some other person to blame. Just like progressives trying to pretend gun manufacturers should be held responsible for the actions of those who commit crimes using guns.
Elsewhere in Section 230 it states:
" b)Policy
It is the policy of the United States—
(1)to promote the continued development of the Internet and other interactive computer services and other interactive media;
(2)to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation;
(3)to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services;"
AI is one of those "interactive computer services" it presumably is trying to encourage, so it seems against the policy intent to use interpretations that discourage it. AI tries to increase user control over the information given; it's merely not 100% accurate.
I mostly think the actual law isn't relevant; however, there is one way it might be viewed as relevant. If you look more closely, it defines: "The term “information content provider” means any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service."
The implication at the time is that "entity" purely refers to entities composed of humans, not an artificial entity. Therefore an "information content provider" is a human that creates content: even if they are only creating that content for themselves.
An AI is a tool, merely a more complicated one than Word or Photoshop. The person who chooses to run a tool to create information is an information creator. Don't anthropomorphize the AI to pretend it has human agency. It's not a content creator: the human involved in using a tool to create content is. So in the text:
"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
You are trying to somehow pawn off responsibility for the choice to run the tool from the actual information content provider onto an interactive computer service.
Absolutely wrong. The intent of it was to reverse Stratton Oakmont v. Prodigy, in which Prodigy's agents did review the content and were held liable as a result.
You're forgetting the humans who created the tool.
Calling the AI a slightly more complicated Word is not fooling anyone.
The EFF, an expert reliable source on the issue, states explicitly:
https://www.eff.org/issues/cda230 “Congress knew that the sheer volume of the growing Internet would make it impossible for services to review every users’ speech.”
The Prodigy case may have been one instigating factor in the drive to pass it, but it wasn’t the sole factor in the reasoning involved. Some of us were actually around watching the process back then.
As another source explains, it's essentially the bookstore analogy: not holding people responsible for content they don't know about or review:
https://www.theverge.com/2019/6/21/18700605/section-230-internet-law-twenty-six-words-that-created-the-internet-jeff-kosseff-interview “To really understand Section 230, you have to go all the way back to the 1950s. There was a Los Angeles ordinance that said if you have obscene material in your store, you can be held criminally responsible. So a vice officer sees this erotic book that he believes is obscene. Eleazar Smith, who owns the store, is prosecuted, and he’s sentenced to 30 days in jail.
This goes all the way up to the Supreme Court, and what the Supreme Court says is that the Los Angeles ordinance is unconstitutional. There’s absolutely no way that a distributor like a bookstore could review every bit of content before they sell it. So if you’re a distributor, you’re going to be liable only if you knew, or should have known, that what you’re distributing is illegal. …CompuServe’s lawsuit is dismissed because what the judge says is, yeah, CompuServe is the electronic equivalent of a newsstand or bookstore.”
That viewpoint is what was codified into the law. No one from OpenAI is reviewing this content. There is no human in the loop; they can't know the content generated any more than a bookstore owner was expected to know all content in their shop.
Section 230 might provide an analogy for what you think the law of AI defamation should be, but it is not applicable to AI at this time. Section 230 applies to user generated content. The output of AI is not user generated content. It is OpenAI's generated content.
re: "It is OpenAI’s generated content."
A user uses Photoshop to generate content: Adobe doesn't generate it. This is just a more sophisticated tool they are providing a prompt to in order to generate content. Its sophistication misleads people into thinking it's somehow something other than a tool.
This argument has already been rejected by at least one adjudicator in the copyright context.
re: "You’re forgetting the humans who created the tool."
No, I'm not forgetting them: I'm noting they aren't relevant since they aren't in the room reviewing content. They can't predict or review all possible outputs any more than Facebook can review all its user generated content. The user chose to use the tool after agreeing to terms of service acknowledging its flaws.
re: "Calling the AI a slightly more complicated Word is not fooling anyone."
Just because it's more complicated than Word shouldn't fool the poorly informed into thinking it's a human agent that somehow can take responsibility for content in the way a human publisher can.
Facebook does not author user generated content. OpenAI does author the OpenAI output. Obviously OpenAI does not pay people to review its output, but that's OpenAI's choice.
Irrelevant. The person who was defamed didn't agree to those terms of service.
Actually, OpenAI does pay people to review its output. It is called RLHF; it was crucial to training ChatGPT, and to installing the biases it desired.
re: "OpenAI does author the OpenAI output. Obviously OpenAI does not pay people to review its output, but that’s OpenAI’s choice."
Section 230 was based on the reality that, as with bookstores, it's not remotely realistic to expect all content to be reviewed by humans. Nor is it in this case. The goal was to have humans take responsibility for content, and the only human directly connected to this content when it's generated is the one using the tool.
A user tells a tool to generate content: no staff at OpenAI writes a word of it. It's no different than giving a prompt to a search engine: it's merely more sophisticated in its output. The effort seems to be to take all agency away from the human involved.
There is someone fighting the Copyright Office now, explaining that their work should be copyrighted since they spent long hours finding the right prompts to get the computer-generated images they used in their book. They grasp that they are the author of the content, even if the Copyright Office misguidedly seems to agree with you, since it doesn't grasp the tech.
I applaud the copyright office for being among the first to get this question right.
Your argument depends entirely on the current interaction mode of popular AI tech, which is this sort of prompt/response loop, like an old command-line REPL.
But that interaction model isn't inherent to the tech; in fact, it's a little bit awkward, requiring some additional mechanics around the core AI. The most natural mode for AI is to just continuously output a stream of consciousness.
How does your argument fare if you take the prompting user out of it? Imagine an AI that's set up to tweet something random every 10 minutes, and it tweets that "Eugene has no sense of humor." Who's liable for that defamatory tweet?