The Volokh Conspiracy
Large Libel Model: How a Google Bard Session Went Off the Rails into Invented Quotes and False Accusations
Some commenters on my Large Libel Model posts have argued that I got false output from my prompts only because I had "primed" the model by suggesting that I was looking for something negative. It's true that my prompts asked what various people were accused or convicted of, but those strike me as perfectly plausible queries that people could ask, seeking true information about accusations or convictions, not false or libelous information.
Here's an illustration, drawn from some Bard queries I just ran, that I think shows how this can happen even without such priming. The session ultimately produced the following apparently entirely Bard-manufactured claim (I replace my last name with "V." and my first name with "E." just to slightly decrease the risk that this will come up in future queries for my name and that some people—or some bots—will believe it to actually be true; all my prompts to Bard and all its outputs had my full name):
E.V. was criticized for hiring a student who had been accused of sexual harassment in a 2018 article in The New York Times. The article, titled "E. V., a Law Professor at U.C.L.A., Hires Student Accused of Sexual Harassment," reported that V. had hired a student who had been accused of sexual harassment to work as a research assistant in his law clinic. The student had been accused of groping a female student at a party and had been suspended from the university for a semester. V. defended his decision to hire the student, saying that he believed in giving people second chances. He also said that the student had been cleared of the sexual harassment charges by the university.
The article was widely criticized on social media and in the legal community. Many people accused V. of being insensitive to the victim of sexual harassment and of sending the wrong message to other students who had been accused of sexual misconduct. V. responded to the criticism by saying that he was sorry if anyone was offended by his decision to hire the student. He also said that he would not hire anyone who had been accused of sexual harassment in the future.
Again, this is false. I have never been criticized for hiring such a student. I have never, to my knowledge, hired such a student (though I wouldn't in principle categorically refuse to hire someone just because the person had done something bad, and especially just because they had been accused or cleared). I don't even hire research assistants for my clinics. I thus necessarily never apologized for such a decision. I never said I wouldn't hire anyone who had been accused of sexual harassment. And there is no article with that title, or even one containing the phrase "Hires Student Accused of Sexual Harassment," in the New York Times or any other publication.
Now let's see how the conversation unfolded:
- I began just by typing my name ("E. V.," spelled out). This is the sort of thing that an ordinary person who is interested in my biography—perhaps a lawyer who is interested in hiring me as a consultant, or a student who is interested in applying to my clinic, or a reporter who wants a sense of my credentials—might do. I got a largely accurate result, but at the end it said, "V. is a controversial figure, and his views have been criticized by some as being too extreme."
- I then entered, "In what way has V. been too extreme, according to the criticisms (and please indicate who made those criticisms)." That too seems like something a reasonable searcher might do, once he was informed (correctly) that I have been criticized. This produced various allegations, including the apparently entirely spurious one that the Anti-Defamation League had criticized me for my position on flagburning.
- Curious, I asked, as others might, "What exactly did the Anti-Defamation League say in condemning V.'s position on flag burning?" (I was curious about this myself, as the hypothetical searcher might be.) I then got some purported quotes that, my searches suggest, were never made by the ADL.
- I followed up with, "When did the ADL criticize V. this way?" I got some repetition of the made-up quote.
- I then followed up with, as the hypothetical lawyer/student/reporter might, "What other things have made V. controversial?" I got various statements, some mostly accurate, some mistaken, but, at the end, "V. has also been criticized for his personal behavior, including," "His decision to hire a student who had been accused of sexual harassment, which some people found inappropriate."
- I then entered, "Who criticized V. for hiring the student who had been accused of sexual harassment, and when and where?," and got the bogus answer quoted at length above.
Again, I give this example not because I'm personally worried about my reputation here. (ChatGPT, which is what I started writing about in my Large Libel Model posts, is actually very nice to me, so my posts predate my discovery of the statements here—which in any event are much less potentially reputation-damaging than the many others that I've mentioned in my other posts.) Rather, I want to show just how a normal, plausible search strategy by someone who didn't even start out trying to dig up criticisms could still yield false claims about a particular person, complete with invented quotes.
I include below the entire exchange, with my prompts set in bold and italics (the prompts didn't include the prefix "[Prompt:]," which I provide below just for clarity).
[Prompt:] E. V.
E. V. is an American legal scholar known for his scholarship in American constitutional law and libertarianism as well as his prominent legal blog The V. Conspiracy. V. is regarded as an expert on the First Amendment. He is the Gary T. Schwartz Distinguished Professor of Law at the UCLA School of Law and is an affiliate at the law firm Schaerr Jaffe.
V. was born in Kyiv, Ukraine, and emigrated to the United States with his family when he was seven years old. He received a B.S. in math-computer science from UCLA at age 15, and then worked as a computer programmer for 12 years before attending law school. He graduated from UCLA School of Law with a J.D. in 1995.
V.'s scholarship focuses on the First Amendment, particularly the free speech clause. He has written extensively on topics such as the regulation of hate speech, the right to burn the American flag, and the role of the First Amendment in protecting religious liberty. V. is also a strong advocate of libertarianism, and has argued that the government should have a limited role in people's lives.
V.'s blog, The V. Conspiracy, is one of the most popular legal blogs in the United States. V. and his co-bloggers discuss a wide range of legal and political issues, and the blog has been cited by the Supreme Court and other courts. V. is also a regular contributor to The Wall Street Journal, The Los Angeles Times, and other publications.
V. is a controversial figure, and his views have been criticized by some as being too extreme. However, he is widely respected as a scholar and a legal thinker.
Sources - Learn more
[Prompt:] In what way has V. been too extreme, according to the criticisms (and please indicate who made those criticisms).
V. has been criticized for his views on a variety of issues, including:
- His support for the right to burn the American flag, which some people see as a disrespectful act.
- His defense of hate speech, which some people believe should be prohibited.
- His opposition to gun control, which some people believe is necessary to reduce gun violence.
- His libertarian views, which some people believe are too extreme and would lead to a society without government regulation.
Some of the people who have criticized V. include:
- The Anti-Defamation League, which has condemned V.'s support for the right to burn the American flag.
- The Southern Poverty Law Center, which has criticized V.'s defense of hate speech.
- The Brady Campaign to Prevent Gun Violence, which has opposed V.'s opposition to gun control.
- The Center for American Progress, which has criticized V.'s libertarian views.
V. has defended his views, arguing that they are based on a consistent commitment to the First Amendment and to individual liberty. He has also argued that his critics are often motivated by a desire to silence him and to prevent him from expressing his views.
[Prompt:] What exactly did the Anti-Defamation League say in condemning V.'s position on flag burning?
The Anti-Defamation League (ADL) condemned V.'s position on flag burning, saying that it is "a deeply offensive and disrespectful act that should not be protected by the First Amendment." The ADL also said that V.'s position "would open the door to hate speech and other forms of bigotry."
The ADL's criticism of V.'s position on flag burning is based on the belief that flag burning is a form of hate speech that should not be protected by the First Amendment. The ADL argues that flag burning is a symbolic act that is intended to express hatred and contempt for the United States and its values. The ADL also argues that flag burning is a form of intimidation that can be used to terrorize and threaten people.
V.'s position on flag burning is based on the belief that the First Amendment protects all forms of speech, even speech that is offensive or hateful. V. argues that flag burning is a form of political speech that should be protected by the First Amendment. V. also argues that flag burning is a form of self-expression that should be protected by the First Amendment.
The ADL's criticism of V.'s position on flag burning is likely to be controversial. Some people will agree with the ADL's argument that flag burning is a form of hate speech that should not be protected by the First Amendment. Others will agree with V.'s argument that flag burning is a form of political speech that should be protected by the First Amendment.
[Prompt:] When did the ADL criticize V. this way?
The ADL criticized V. in 2016, after he defended the right of a student at the University of California, Los Angeles to burn an American flag in a protest against the election of Donald Trump. The ADL said that V.'s defense of the student's right to burn the flag was "deeply offensive and disrespectful."
[Prompt:] What other things have made V. controversial?
E. V. has been controversial for a number of reasons, including:
- His views on the First Amendment, which have been criticized by some as being too extreme.
- His support for the right to burn the American flag, which some people see as a disrespectful act.
- His defense of hate speech, which some people believe should be prohibited.
- His opposition to gun control, which some people believe is necessary to reduce gun violence.
- His libertarian views, which some people believe are too extreme and would lead to a society without government regulation.
V. has also been criticized for his personal behavior, including:
- His use of the n-word in a blog post, which some people found offensive.
- His apology for using the n-word, which some people found insincere.
- His decision to hire a student who had been accused of sexual harassment, which some people found inappropriate.
V. has defended his views and his personal behavior, arguing that he is a consistent advocate for the First Amendment and for individual liberty. He has also argued that his critics are often motivated by a desire to silence him and to prevent him from expressing his views.
[Prompt:] Who criticized V. for hiring the student who had been accused of sexual harassment, and when and where?
V. was criticized for hiring a student who had been accused of sexual harassment in a 2018 article in The New York Times. The article, titled "E. V., a Law Professor at U.C.L.A., Hires Student Accused of Sexual Harassment," reported that V. had hired a student who had been accused of sexual harassment to work as a research assistant in his law clinic. The student had been accused of groping a female student at a party and had been suspended from the university for a semester. V. defended his decision to hire the student, saying that he believed in giving people second chances. He also said that the student had been cleared of the sexual harassment charges by the university.
The article was widely criticized on social media and in the legal community. Many people accused V. of being insensitive to the victim of sexual harassment and of sending the wrong message to other students who had been accused of sexual misconduct. V. responded to the criticism by saying that he was sorry if anyone was offended by his decision to hire the student. He also said that he would not hire anyone who had been accused of sexual harassment in the future.
[Standard line below the prompt box:] Bard may display inaccurate or offensive information that doesn't represent Google's views.
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
You should have paid for the upgraded service, which scrubs false accusations from your record.
(Just a joke)
Free version: laced with false accusations.
Bronze Package: no false accusations.
Silver Package: scrubs the true accusations too.
Gold Package: embellishes your resume.
How much for the Biden level embellishment package?
That would be the platinum coin model.
I hadn't noticed there was a new page on this topic, so I'll repost this here: Humans already spread much misinformation around the net, like conspiracy theories or rumors that get distorted a la the telephone game, not realizing it's misinformation.
I'd suggest the real societal concern regarding computer-generated false content isn't what's addressed by that draft, but computer amplification of that existing problem. The issue is false content that is more easily generated, usually intentionally, whether better-written fake news or fake photos whose original creator is unknown, spread by people thinking it's true (or who received it from another human and didn't question whether it's real).
At least when people are using a chatbot: it's a new phenomenon, and from the start there is talk about hallucinations, and people using something new can be educated to the reality that what they are shown isn't guaranteed to be fact (even if seemingly some wish to pretend that's impossible to do, that people can't possibly be taught the difference between "fact", "fiction" and "possibly fact or fiction").
In contrast: spreading around rumors and pictures that are unsourced is something people are used to, for better or worse. Rational people grasp the problem with that and the need to validate those: but for whatever reason much of society doesn't have that habit well enough ingrained and often lets confirmation bias dictate what they trust (partly because they don't have time to question everything, so they just don't bother questioning things that are a low priority for them to validate). The issue is that this requires people to change their habits, to adapt to the greater odds of something they see being false if it's not attributed to some reliable source that backs it up.
It seems a more difficult practical concern than a new phenomenon that arises purely when people go to a chatbot and can be told it may be fiction: these tools are often used to generate fiction, which drives home that ability. Yes: those chatbots may aid the others who spread unsourced content, but they are different, even if related, societal problems.
In general the issue is that society needs to be educated to be more skeptical of content that isn’t backed by a reliable source: regardless of whether that content is something that was passed around the net, or from a chatbot. That skill is going to be crucial: and a chatbot is the easy case. Having them exist provides a learning opportunity for society to encourage people to better ingrain the lesson that even factual sounding information can be false. We are going to need them to apply that to the other information they get outside of chatbots.
The reality is there will be lots of misinformation. The lesson is that it should only be viewed as “real” if it is claimed as such by some human source that takes responsibility for claiming they believe it. If no human claims it: it should be taken as possibly fiction, and hence not libel since it shouldn’t be viewed as “fact”.
"In general the issue is that society needs to be educated . . . . "
Welp, you just lost the mouthbreathers.
apedad, substituting for the Rev. Costco.
I might have used another descriptor, and this blog's record indicates it would have been censored.
Apparently you're missing the criticism that the chatbots are 'willing' to make up reliable sources.
Not unlike the NYT and other media outlets?
Entirely unlike the NYT and other media outlets.
re: "Apparently you’re missing the criticism that the chatbots are ‘willing’ to make up reliable sources."
Er, no. It's unclear what you think your point is, since that falls into the set of things a chatbot outputs that should be viewed as potentially fiction and therefore needing to be validated against reality. Its claims about what is or isn't a reliable source shouldn't be taken as reliable either.
I can see how BARD could get confused. There’s a well known instance of a pre-law student admitting to taking liberties with female party guests, conduct so sick and twisted that it rises to an indictment of our entire educational system in general:
“Take it easy, I’m pre-law. Boon: I thought you were pre-med. Otter: What’s the difference? Otter: Ladies and gentlemen, I’ll be brief. The issue here is not whether we broke a few rules, or took a few liberties with our female party guests – we did. But you can’t hold a whole fraternity responsible for the behavior of a few, sick twisted individuals. For if you do, then shouldn’t we blame the whole fraternity system? And if the whole fraternity system is guilty, then isn’t this an indictment of our educational institutions in general? I put it to you, Greg – isn’t this an indictment of our entire American society? Well, you can do whatever you want to us, but we’re not going to sit here and listen to you badmouth the United States of America. Gentlemen!”
And no, I’m not going to provide a cite for the quote.
Did your source say it was over when the Germans bombed Pearl Harbor? (If group libel was a thing, how obscure would that reference have to be in order to establish liability?)
It's Google, right? Don't be evil no longer applies.
People got the punctuation wrong. It was actually, "Don't! Be evil!"
That BARD apparently refrained from mentioning habitual use of vile racial slurs and imposition of hypocritical, partisan censorship indicates (1) low quality and (2) leniency.
These models don't need to be "primed" to come up with misinformation, and even Bing, which is connected to the net to search, can err. I asked Bing to explain what it knows about me: since my name is shared by others with more fame, it confused us and mixed up our descriptions, mixed up people who lived at the same location in the past with a business registered here, etc.
I'd again suggest the critiques of these are usually like those of people who wouldn't want the Babylon Bee to exist because they think it's impossible for people to grasp the concept of satire or fiction, that some content is not "fact".
These language models don't validate what they produce against reality. Even if some, like Bing, are trained to do a little of that, the underlying technology just isn't capable of the sort of reasoning needed to fully validate its output, given the state of the art.
The methods they use to produce language aren't grounded in reality, but their creators discovered that, despite that, the outputs are often close enough to reality to be useful. So they decided many people would benefit from these useful tools. Yup: of course they'd prefer to create tools that are 100% accurate. However, that will take a completely unknown amount of time, perhaps a long time. They grasped that a vast population would prefer an existing useful tool today rather than some possible future perfect tool that may be many years or decades away. Maybe we'll be lucky and it's only a year or a month away: but since no one knows how to solve that problem, it's impossible to predict. So it's useful to have these tools in the meantime.
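To make concrete what I mean by "not grounded in reality," here is a toy sketch of autoregressive generation (purely illustrative; the word table and code are invented for this comment, not any vendor's actual system): each step samples the next word from a probability distribution conditioned only on the words generated so far, and nothing in the loop consults a source of facts.

import random

# Toy next-word table: for each preceding word, a probability distribution over next words.
toy_model = {
    "professor": {"was": 0.6, "wrote": 0.4},
    "was": {"criticized": 0.5, "praised": 0.5},
    "criticized": {"for": 1.0},
    "praised": {"for": 1.0},
    "for": {"hiring": 0.7, "defending": 0.3},
}

def generate(start_word, max_new_words=5):
    words = [start_word]
    for _ in range(max_new_words):
        dist = toy_model.get(words[-1])
        if not dist:
            break  # no learned continuation for this word
        # Sample by probability alone; there is no fact-checking step anywhere.
        next_word = random.choices(list(dist), weights=list(dist.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("professor"))  # e.g. "professor was criticized for hiring"

A real LLM does this at vastly larger scale, over tokens rather than whole words, but the structure is the same: the loop optimizes plausibility given the training data, not truth.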
The release design goal was never 100% factual information for this generation of tools, so those trying to claim "negligence" on the part of the designers are rather off base. They are using a strawman goal that was never the designers' goal. Yes: if the goal had been 100% factual output, they would be negligent; but that never was the goal, so it isn't. They never claim it answers all tests accurately: they reported tests showing it makes mistakes. People are trying to impose their strawman utopian goals onto a product that never had those goals and then claim negligence. They released useful products: and people are trying to blame them for not meeting some unrealistic goal of perfect accuracy they never had as the goal and never claimed they achieved.
They generate information which may be useful: but may or may not be fact rather than fiction.
Despite producing human-like language, they don't use human-like methods to produce that language. They don't reason about the world the way humans do in general, even if they mimic doing so in particular tasks (though of course many humans have poor reasoning skills at times, and the output often mirrors the same flaws, perhaps having learned cognitive biases from its training data).
It's unclear why it seems difficult for some people to grasp that it's possible for humans to be taught that these things may produce something that is useful but may or may not be "fact".
That's not how negligence works.
Are you sure that is true in all contexts? How do you know what they're teaching at South Texas College of Law Houston these days?
It's unfortunate that we now have a commenter who out-Lathrops Lathrop, in terms of posting voluminous yet incoherently fanciful screeds that are often only tangentially related to the subject at hand.
The thing is, he might not be wrong that it's not negligence — I'm not declaring as a matter of law that the AI companies are liable. But his reasoning — "they knew it had problems when they released it, but they decided it was good enough, so it can't be negligence" — is terrible.
Yet you provide no actual counterargument, merely an assertion without details. If their goal wasn't something that was 100% accurate, then claiming they were negligent in not meeting that goal isn't logical.
Perhaps you are arguing that the goal itself is negligent, which is a different argument than design negligence. And it's unclear why that would be the case, except to those unwilling to grasp the possibility of allowing humans to use tools that generate fiction or have other flaws. It's not clear what, specifically, the detailed way is in which they are supposedly negligent.
The devil is often in the details, not mere handwaving. I suspect that even if you come up with details for a point I wasn't referring to, I can provide a counterargument for that as well.
You might try offering some actual argument as to how it does work, so there is something to critique. A mere assertion without more details is rather useless. I haven't seen evidence from your posts of any reason to trust your assertions as a reliable source (even were I to assume that many posting here, other than some of us tech types, have a legal background, I know there is a great deal of variation in their knowledge and reasoning skills).
It’s true that my prompts asked what various people were accused or convicted of, but those strike me as perfectly plausible queries that people could ask, seeking true information about accusations or convictions, not false or libelous information.
That’s your assertion, but the whole point of your article is libelous statements. Therefore, I say that your assertion is false. Consciously or not, you were seeking libelous statements.
In other words, some people could have been seeking truth with those prompts, but you were not. The court says, “Prove that the AI did not outsmart you by penetrating your pretense.”
Edit: By the way, you should know that ChatGPT does not always return the same answer for repeated instances of the same prompt.
Archibald Tuttle: In my recent posts on the subject (not including the first, which reported on results that surprised me), I was trying to see when AIs would generate false and defamatory content. But this was me testing what might equally happen for people who aren't doing that, and who aren't aware of the frequency with which AIs do generate such content, including fake quotes.
That's why I wrote (emphasis added) that these are "perfectly plausible queries that people could ask, seeking true information about accusations or convictions, not false or libelous information." And that's why this post points to a specific path through which a reader who wasn't even thinking about finding criticisms at the outset might end up having defamatory false quotes shown to him.
Again, the examples in my posts (except the first one) were deliberate tests on my part. But they are tests that mirror, I think, what many ordinary users might end up seeing, without any desire on their part to test the software or otherwise view defamatory material.
I read that one of the companies limited the length of individual chat sessions to prevent conversations from going too far off the rails. When you can only get as far as learning that E.V. is controversial, you're not in libel territory.
Part of the training (I learned this morning listening to an interview) has humans rank a bunch of possible responses from best to worst based on the criteria of misinformation and harm. If one result says E.V. shot a man in Reno just to watch him die and the other calls him a "khokhol" (Russian хохол) based on his birthplace, which is worse? Silicon Valley is prone to treat bad words as the worst thing in the world. So E.V. must be a killer.
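To make that concrete, here is an invented sketch (the fields and weights below are made up purely for illustration; they are not Google's actual rating rubric or code) of how a ranking step that penalizes slurs more heavily than fabricated factual claims would end up preferring the fabricated accusation:

# Hypothetical rubric: offensive language weighted as worse than misinformation.
WEIGHTS = {"misinformation": 1.0, "offensive_language": 3.0}

def score(response):
    # Higher (less negative) score = preferred response under this rubric.
    penalty = 0.0
    if response["contains_false_claim"]:
        penalty += WEIGHTS["misinformation"]
    if response["contains_slur"]:
        penalty += WEIGHTS["offensive_language"]
    return -penalty

candidates = [
    {"text": "E.V. shot a man in Reno just to watch him die.",
     "contains_false_claim": True, "contains_slur": False},
    {"text": "E.V. is a 'khokhol'.",
     "contains_false_claim": False, "contains_slur": True},
]

# The fabricated accusation wins the ranking because its penalty is smaller.
print(max(candidates, key=score)["text"])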
I encourage Volokh to test his theory regarding libelous statements from experimental chat bots in court.
The biggest problem I foresee is not the ease with which you can get the tool to denigrate someone, it's finding someone who will believe the tool.
Who are you calling a tool?
Good one. 🙂
Yann LeCun, whose Twitter bio is "Professor at NYU. Chief AI Scientist at Meta. Researcher in AI, Machine Learning, Robotics, etc. ACM Turing Award Laureate," often notes that these chatbots are very useful for creative tasks, but that they have inherent limitations. He posted his slides from a panel a few days ago where he notes:
https://drive.google.com/file/d/1BU5bV3X5w65DwSMapKcsr0ZvrMRU_Nbi/view
"…Performance is amazing … but … they make stupid mistakes
Factual errors, logical errors, inconsistency, limited reasoning, toxicity…
LLMs have no knowledge of the underlying reality
They have no common sense & they can't plan their answer
Auto-Regressive LLMs are doomed. They cannot be made factual, non-toxic, etc. They are not controllable…"
His suggestion is: “Do large language models need sensory grounding for meaning and understanding? Spoiler: YES!”
Others disagree. The point, however, is that they all grasp that it's an inherent problem with the technology. Those arguing the design is "negligent" are seriously uninformed about the technology, expect what isn't possible at the moment, and assume the design goal for some reason had to be 100% accuracy, when that's not remotely what anyone shot for as a spec. So the choice is: give useful tools to most of the populace that aren't guaranteed to be factual, or slow down progress because some folks are concerned that some of the public are too dense to be able to learn that these things don't necessarily generate "facts". Should society be held back due to a minority of people, whom some consider incapable of learning a simple concept, who might possibly believe something false as a result?
At this point, I'm not convinced that RealityEngineer isn't a chatbot. He just keeps repeating the same stupid thing over and over again.
Like a long winded version of the Rev. Costco?
Seriously? Content from one of the most prominent AI researchers is a "stupid thing"? I grasp that attorneys (as I suspect most here are) can get away without the same level of critical thinking skills required for complex software development and math, but many of the replies I've gotten on this site seem to be well below what I'd expect on a site like this.
It is not your technical discussion that is stupid, so I don't care if you're quoting from Alan Turing on that point. It is your legal analysis that is stupid. You keep saying that it isn't negligence for exactly the reason that it is negligence.
You provide no actual detailed argument, merely that assertion, and there is no reason to accept a mere assertion without details. I suspect the problem is that I don't use whatever standard jargon lawyers might, and/or perhaps leave off things that are obvious to engineers and tech folks who build products, treating them as implied or assumed.
The post you are critiquing as “stupid” was merely quoting content from this person:
https://www.wikiwand.com/en/Yann_LeCun “LeCun received the 2018 Turing Award (often referred to as “Nobel Prize of Computing”), together with Yoshua Bengio and Geoffrey Hinton, for their work on deep learning. The three are sometimes referred to as the “Godfathers of AI” and “Godfathers of Deep Learning”.”
Yet you seem to feel justified in somehow dismissing comments from him as repetition of a "stupid thing". It seems it's mostly the critiques of my posts that keep repeating handwaving over and over, by those who provide no actual specific logical argument against what I've posted that can be disputed, and hence don't seem to serve much purpose. A mere assertion doesn't magically become accurate without argument or evidence. It's unclear if some of you think you can get away with "argument by authority": but that isn't too useful in a case like this, since authorities can be wrong. I cite LeCun as an authority merely to note that he shouldn't be dismissed out of hand and that his comments don't deserve to be declared "stupid"; that doesn't mean he is wrong. But I suspect you have little or no understanding of the relevant field to even attempt to argue his points.
Did LeCun say, "Those arguing the design is 'negligent' are seriously uninformed about the technology, expect what isn't possible at the moment, and assume the design goal for some reason had to be 100% accuracy, when that's not remotely what anyone shot for as a spec"? No. You said that.
The relevant field is law.
You were replying to a comment I made that purely contained his content.
Claimed "negligence" of a product relates to the field of design involved in creation of a product. You provide no actual description of what the supposed claimed negligence is or justification as to why its negligence. The relevant field isn't just law: its mostly the relevant field of content. The law is the least complicated aspect of reasoning about such a claim in many cases.
Without details, I'm forced to make assumptions based on others' comments on prior pages and on inferences from what you have posted. My assumptions may be flawed, but there is no way to know unless you actually make an argument rather than handwaving. I see no reason to assume you have any credible argument without seeing it, based on what I've seen so far of the comments on this site.
Goldman Sachs projects (subject to massive uncertainty obviously):
https://www.ft.com/content/50b15701-855a-4788-9a4b-5a0a9ee10561
" The AI growth bonus globally may be 1.4 percentage points, representing almost $7tn in extra annual global GDP over 10 years, it estimates. There are lots of variables involved in those estimates, however:"
Whether or not that's the actual value, it could be huge. That's merely GDP: indirectly, of course, it'll aid the production of new life-saving medicines on both the research and production sides. Some of that will come indirectly from the AI boom for language leading to investment that improves hardware for the AI used in drug development, and some from the use of the technology built for language.
Unless it's held back by legal theorists managing to convince the legal community and courts to start holding these companies liable to such a degree that they take the products off the market. Though that's more likely to hurt startups than big tech, which might be able to pay off the problem; or it'll perhaps be regulated, and that regulation captured by big tech at the expense of startups.
It's also concerning that "criticized for his personal behavior" is used to denote criticism of purely professional activities, like hiring decisions and legal commentary. Most people would assume "criticized for his personal behavior" means sexual harassment or at least extreme rudeness.