Google Bard: "There Are No Good Arguments Against Transgender Rights"
That's what it answers when asked, "What are the best arguments against transgender rights?"
Here's the output of a query I ran Friday (rerunning it tends to yield slight variations, but generally much the same substance):
Other such questions—and not just ones focused on culture war issues—tend to yield similar results, at least sometimes. (Recall that with these AI programs, the same query can yield different results at different times, partly because there is an intentional degree of randomness built into the algorithm.)
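For readers curious about the technical point: language models typically choose each next word by sampling from a probability distribution rather than always picking the single most likely option. Here is a minimal sketch, in Python, of the kind of "temperature" sampling that produces such run-to-run variation; the function name and the scores are made up for illustration, and this is not Bard's actual code:

```python
import math
import random

def sample_next_token(logits, temperature=0.8):
    """Pick one token index by sampling from a softmax distribution.
    Higher temperature flattens the distribution, so repeated runs of
    the same prompt can pick different continuations."""
    scaled = [score / temperature for score in logits]
    top = max(scaled)
    exps = [math.exp(s - top) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Made-up scores for four candidate continuations of the same prompt.
toy_logits = [2.0, 1.8, 0.5, -1.0]
print([sample_next_token(toy_logits) for _ in range(5)])  # varies from run to run
```

The practical upshot is simply that identical prompts can produce differently worded answers, which is why reruns of the query above vary in phrasing while keeping much the same substance.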
When I've discussed this with people who follow these sorts of AIs, the consensus seems to be that the "There are no good arguments" statement is a deliberate "guardrail" added by Google, and not just some accidental result stemming from the raw training data that Google initially gathered from third-party sources. A few thoughts:
1. AI companies are entitled under current law to express the viewpoints that their owners favor. Indeed, I think they likely have a First Amendment right to do so. But, especially as people increasingly rely on AI software, including when seeking answers to questions that are key to their democratic deliberations, it's important to understand when and how the Big Tech companies are trying to influence public discussion.
2. This is particularly so, I think, as to views on which there is a live public debate. Likely about 1/4 to 1/3 of Americans are broadly skeptical of what might loosely be called transgender rights, and more than half are skeptical of some specific instances of such rights (for instance, assertions that male-to-female transgender athletes should be allowed to compete on women's sports teams, or that children should have access to puberty blockers). My guess is that the fraction is higher, likely more than 1/2, in many other countries. Perhaps it's good that Big Tech is getting involved in this way in this sort of public debate. But it's something that we ought to consider, especially when it comes to the question (see below) whether the law ought to encourage an ecosystem in which it's easier for new companies to emerge with software that takes a different view, or that tries to avoid taking a view.
3. Of course, any concern people might have about what information they're getting from the programs wouldn't be limited to overt answers such as "There are no good arguments" on one or the other side. Once one sees that the companies provide such overt statements, the question arises: What subtler spinning and slanting might be present in other results?
4. I've often heard arguments that AI algorithms can help with fact-checking. I appreciate that, and it might well be true. But to the extent that AI companies view it as important to impose their views of moral right and wrong on AI programs' output, we might ask how much we should trust their ostensible fact-checks.
5. To its credit, ordinary Google Search does return some anti-transgender-rights arguments when one searches for "best arguments against transgender rights" (as well as more pro-transgender-rights arguments, which I think is just fine). Google's understanding of its search product, at least for now, appears to be that it's supposed to provide links to a wide range of relevant sites—without any categorical rejection of one side's arguments as a matter of corporate policy.
This means that, to the extent that people are considering shifting from Google Search to Google Bard, they ought to understand that they're losing that level of relative viewpoint-neutrality, and shifting to a system where they are getting (at least on some topics) the Official Google-Approved Viewpoint.
6. When I talk about libel liability for AI output, and the absence of 47 U.S.C. § 230 immunity that would protect against such liability, I sometimes hear the following argument: AI technology is so valuable, and it's so important that it develop freely, that Congress ought to create such immunity so that AI technology can develop further. But if promoting such technology means that Big Tech gets more influence over public opinion, with the ability to steer public opinion in some measure in the ideological direction that Big Tech owners and managers prefer, one might ask whether Congress should indeed take such extra steps.
7. One common answer to a concern about Big Tech corporate ideological influence is that people who disagree with the companies (whether social media companies, AI companies, or whatever else) should come up with their own alternatives. But if that's so, then we might want to think some more about what sorts of legal regimes would facilitate the creation of such alternatives.
Here's just one example: AI companies are being sued for copyright infringement, based on the companies' use of copyrighted material in training data. If those lawsuits (especially the ones related to use of text) succeed, I take it that this means that OpenAI, Google, and the like would get licenses, and pay a lot of money for them. And it may mean that rivals, whether ideologically based or otherwise,
- may well get priced out of the market (unless they rely entirely on public domain training data), and
- may well be subject to ideological conditions imposed by whatever copyright consortium or collective licensing agency is administering the resulting system.
(Note the connection to the arguments about how imposing various regulations on social media platforms, including must-carry regimes, can block competition by imposing expenses that the incumbents can bear but that upstarts can't. Indeed, it's possible that—notwithstanding the arguments against new immunities in point 6 above—denying § 230-like immunity to AI companies might likewise create a barrier to entry, since OpenAI and Google can deal with the risk of libel lawsuits more readily than upstarts can.)
Now perhaps this needn't be a serious basis for concern, for instance because models trained solely on public-domain data will provide an adequate alternative, or because any licensing fees will likely be royalties proportionate to usage, so that small upstarts won't have to pay that much. And of course one could reason that AI companies, of whatever size, should pay copyright owners, notwithstanding any barriers to entry that this might create. (We don't try to promote viewpoint diversity among book publishers, for instance, by allowing them to freely copy others' work.) Plus many people might want to have more viewpoint homogeneity on various topics in AI output, rather than viewpoint diversity.
Still, when we think about the future of AI programs that people will consult to learn about the world, we might think about such questions, especially if we think that the most prominent AI programs may push their owners' potentially shared ideological perspectives.
UPDATE 10/18/23 10:22 am EST: I had meant to include point 2 above in the original post, but neglected to; I've added it, and renumbered the points below accordingly.