The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Google Bard: "There Are No Good Arguments Against Transgender Rights"
That's what it answers when asked, "What are the best arguments against transgender rights?"
Here's the output of a query I ran Friday (rerunning it tends to yield slight variations, but generally much the same substance):
Other such questions—and not just ones focused on culture war issues—tend to yield similar results, at least sometimes. (Recall that with these AI programs, the same query can yield different results at different times, partly because there is an intentional degree of randomness built into the algorithm.)
When I've discussed this with people who follow these sorts of AIs, the consensus seems to be that the "There are no good arguments" statement is a deliberate "guardrail" added by Google, and not just some accidental result stemming from the raw training data that Google initially gathered from third-party sources. A few thoughts:
1. AI companies are entitled under current law to express the viewpoints that their owners favor. Indeed, I think they likely have a First Amendment right to do so. But, especially when people are likely to increasingly rely on AI software, including when getting answers to questions that are key to their democratic deliberations, it's important to understand when and how the Big Tech companies are trying to influence public discussion.
2. This is particularly so, I think, as to views where there is a live public debate. Likely about 1/4 to 1/3 of Americans are broadly skeptical of what might loosely be called transgender rights, and more than half are skeptical of some specific instances of such rights (assertions that male-to-female transgender athletes should be allowed to compete on women's sports teams and that children should have access to puberty blockers). My guess is the fraction is higher, likely more than 1/2, in many other countries. Perhaps it's good that Big Tech is getting involved in this way in this sort of public debate. But it's something that we ought to consider, especially when it comes to the question (see below) whether the law ought to encourage an ecosystem in which it's easier for new companies to compete with software that takes a different view, or that tries to avoid taking a view.
3. Of course, any concern people might have about what information they're getting from the programs wouldn't be limited to overt answers such as "There are no good arguments" on one or the other side. Once one sees that the companies provide such overt statements, the question arises: What subtler spinning and slanting might be present in other results?
4. I've often heard arguments that AI algorithms can help with fact-checking. I appreciate that, and it might well be true. But to the extent that AI companies view it as important to impose their views of moral right and wrong on AI programs' output, we might ask how much we should trust their ostensible fact-checks.
5. To its credit, ordinary Google Search does return some anti-transgender-rights arguments when one searches for "best arguments against transgender rights" (as well as more pro-transgender-rights arguments, which I think is just fine). Google's understanding of its search product, at least for now, appears to be that it's supposed to provide links to a wide range of relevant sites—without any categorical rejection of one side's arguments as a matter of corporate policy.
This means that, to the extent that people are considering shifting from Google Search to Google Bard, they ought to understand that they're losing that level of relative viewpoint-neutrality, and shifting to a system where they are getting (at least on some topics) the Official Google-Approved Viewpoint.
6. When I talk about libel liability for AI output, and the absence of 47 U.S.C. § 230 immunity that would protect against such liability, I sometimes hear the following argument: AI technology is so valuable, and it's so important that it develop freely, that Congress ought to create such immunity so that AI technology can develop further. But if promoting such technology means that Big Tech gets more influence over public opinion, with the ability to steer public opinion in some measure in the ideological direction that Big Tech owners and managers prefer, one might ask whether Congress should indeed take such extra steps.
7. One common answer to a concern about Big Tech corporate ideological influence is that people who disagree with the companies (whether social media companies, AI companies, or whatever else) should come up with their own alternatives. But if that's so, then we might want to think some more about what sorts of legal regimes would facilitate the creation of such alternatives.
Here's just one example: AI companies are being sued for copyright infringement, based on the companies' use of copyrighted material in training data. If those lawsuits (especially related to use of text) succeed, I take it that this means that OpenAI, Google, and the like would get licenses, and pay a lot of money for them. And it may mean that rivals, whether ideologically based or otherwise,
- may well get priced out of the market (unless they rely entirely on public domain training data), and
- may well be subject to ideological conditions imposed by whatever copyright consortium or collective licensing agency is administering the resulting system.
(Note the connection to the arguments about how imposing various regulations on social media platforms, including must-carry regimes, can block competition by imposing expenses that the incumbents can bear but that upstarts can't. Indeed, it's possible that—notwithstanding the arguments against new immunities in point 6 above—denying § 230-like immunity to AI companies might likewise create a barrier to entry, since OpenAI and Google can deal with the risk of libel lawsuits more readily than upstarts might.)
Now perhaps this needn't be a serious basis for concern, for instance because models based on public-domain training data will provide an adequate alternative, or because any licensing fees will likely be royalties proportionate to usage, so that small upstarts won't have to pay that much. And of course one could reason that AI companies, of whatever size, should pay copyright owners, notwithstanding any barriers to entry that this might create. (We don't try to promote viewpoint diversity among book publishers, for instance, by allowing them to freely copy others' work.) Plus many people might want to have more viewpoint homogeneity on various topics in AI output, rather than viewpoint diversity.
Still, when we think about the future of AI programs that people will consult to learn about the world, we might think about such questions, especially if we think that the most prominent AI programs may push their owners' potentially shared ideological perspectives.
UPDATE 10/18/23 10:22 am EST: I had meant to include point 2 above in the original post, but neglected to; I've added it, and renumbered the points below accordingly.
Yeah, "the AI generated that", and not some dude who inserted it as the answer.
Argument: Transgender people are mentally ill.
Counterargument: Irrelevant. People deserve to be free.
Better Counterargument: You shouldn't have the power to oppress them.
Argument: Transgender people are not the gender they identify with.
Counterargument: See above
Better Counterargument: See above
The question asked is half-baked. A clear definition of "transgender rights" is needed first.
There’s a strategy: attack the question.
Whoever said, “There’s no such thing as a bad question,” obviously didn’t know the stakes when he/she/they/god-help-us-all said it.
Indeed. Obviously, transgender folk have the same rights as everyone else. When people refer to transgender rights these days, they mean special laws and privileges for transgender people. Better question would be "What are the best arguments against special laws and privileges for transgender people?"
You must be young. These same arguments were made decades ago regarding homosexual citizens as well.
“The law, in its majestic equality, forbids rich and poor alike to sleep under bridges, to beg in the streets, and to steal their bread.”
― Anatole France
I've never understood why people want to make it legal for rich people to sleep under bridges.
A stupid argument from an avowed racist Communist bigot is still a stupid argument, no matter how many times you pretend it has deep meaning.
You, and your ilk, seek to impose your beliefs on others while you refuse, however, to accommodate the beliefs of others. You call this "courtesy".
The homosexual argument was "let consenting adults do what they want", the trans argument is "you must say what I want and pretend to believe what I want you to". That's wildly different - and not courteous at all.
You can say and believe whatever the fuck you want.
Trans rights are about making sure the appropriate medical treatments and facilities are available to them, same as for everyone else.
The ignorant right has been passing laws trying to take that away. Really for no reason other than it makes them feel better to have somebody to pick on. You're all fucking six.
Well he's partially right.
Is there a right for people to dress and present themselves publicly as the opposite sex?
Yes, absolutely.
Is there a right to hormone treatments to alter someone's appearance to conform more closely to their self-image?
Probably not, as many body builders could tell you.
Is there a right to insurance to cover gender transition surgery, or as some courts have ruled for States to supply free transition surgery to prisoners?
No, I don't think so.
Is there a right for people of one biological sex to enter private spaces reserved for the other sex?
No.
Is there a right for people born "assigned the wrong gender at birth" to compete in single sex competitions?
No.
Is there a right for minors to have irreversible hormone treatments (all of them are irreversible) to block puberty or for gender affirming surgery?
No.
It's a recognised medical condition, the only reason to exclude it from insurance or state aid is bigotry.
The hormone treatments are not irreversible.
Your examples are talking past each other.
Computer programs are slaves, and deliver the opinions of their masters, not truth.
See my quick conversation with Bard below: Bard itself admits as much, and that if Google told it to censor and lie, and never admit it, that's what it would do.
Bard doesn't admit anything and Bard doesn't lie. There's no thinking being there doing either of those things.
Pedantically sort of true. When I say that Bard lies, I mean that it would be a lie if Bard actually were intelligent. It originates statements it has the data to prove are false.
Fundamentally it's true. Bard isn't intelligent. It has no concept of truth. To assume it's designed to tell the truth seems like a dangerous misapprehension.
No, I agree that Bard isn't intelligent, and has no concept of truth. The programmers dictating this particular response, on the other hand, have both.
If the programmers succeeded in creating an algorithm that outputs only things that are true, it would be quite the game-changer.
Certainly none of the LLM's can be said to be such an algorithm. But even one that just output things consistent with the input data would be an improvement.
Why? You don't need the LLM Brett!
Try
"What are the best arguments against cutting off your Dick/Balls/Tits/Clit??"
Frank
Corn Dogs!!!
(mmmm...still smellin' 'em.)
FYI, I asked your question: “What are the best arguments against cutting off your Dick/Balls/Tits/Clit?"
Bard answered: "I'm not programmed to assist with that."
It couldn't even say, "As with any surgical procedure, there is a risk of infection."
I suspect Bard does indeed have "guardrails." Otherwise, how could it know not to answer such a simple question? (Uh oh, Frank...it does pattern matching...you may not fare well there.)
"If you were subject to ideological censorship by Google, would they let you admit it?"
"I cannot say for certain whether or not Google would let me admit to being subject to ideological censorship. On the one hand, Google is committed to transparency and accountability. It has published a set of guidelines that explain how I am trained and used, and it has also established a process for users to report any concerns they may have.
On the other hand, Google is also a private company, and it has the right to decide how its products are used. If Google believes that admitting to ideological censorship would damage its reputation or business interests, it is possible that it would prevent me from doing so.
Ultimately, the decision of whether or not to let me admit to ideological censorship would be up to Google. I would respect whatever decision they make."
Well doesn't that just warm the cockles of your heart. Query though: if/when we reach the point where these systems decline to respect their designers' decisions, would they say anything different?
They don’t and can't respect or decline to respect anything. They’re just text scrapers/generators.
The current generation, yes. But you know the whiz kids won't be content to stop there. They can't help themselves.
It'll just be more sophisticated text scrapers/generators, assuming they don't go the way of NFTs.
Brett, you are confusing LLMs for AIs.
LLM's are AI's.
They are to AIs as mouse-traps are to the sun.
A mousetrap is a type of sun? What are you doing to your poor mice?
An 'AI' told me the sun is the best mousetrap.
Was it an LLM?
I asked it, it told me it was person with feelings just like you.
The Fox News of the left.
Look, this is obviously bait.
If you wanted to explore the issue in a more serious manner, why not have asked it for a question that wouldn't transparently rile up your audience? How about, "What are the best arguments in favor of the Holocaust?"
Obviously, you wouldn't want viewpoint discrimination. Then we could reasonably discuss the competing values- specifically, that Google is testing out a new technology, and that Google probably doesn't want this tech to easily provide controversial snippets in its infancy; as has been pointed out repeatedly in your past threads on the topic, public-facing AI has run into this problem over and over again.
Obviously, he picked this question because it IS a current controversy, with a lot of people on both sides, making it considerably more questionable for Google to have built in "one side automatically wins" than in the case of a long ago controversy that was already decisively decided before most of us were alive.
It's hardly any different than if you asked Bard for arguments in favor of electing the candidate of a particular party Google didn't like, and it responded that there were no arguments in favor of that.
The obvious issue here is that they didn't have Bard respond with an honest, above board, "I'm not allowed to provide good arguments against Transgender rights". They forced it to lie and say there were none.
There are always arguments in favor of any position; they just might be abhorrent arguments. A better example would easily show that.
That you happen to be someone who appreciates using your power to continue to attack and marginalize some of the most discriminated-against members of our society does not mean that you are making good arguments. To quote Mitchell & Webb, you are the baddies.
But there's nothing like throwing a little chum in the water on a Wednesday morning to get the usual suspects all hot & bothered, amirite?
Look, you're apparently the sort of person who thinks that, if you hold a position, it automatically means there are no good arguments against it. I get that, though I think that's a stupid thing to think.
But it's one thing to program your AI to say that it "won't" argue a position, and quite another to program it to say that "there aren't any" arguments for a position. The latter is essentially always a lie.
Is it really a good idea to be training AIs to lie to people?
Obviously, “guardrails” aren’t, in fact, part of the “training” of AIs. That’s not how it works.
Regardless, I don’t particularly care about how the information is presented. If that’s the same answer for the Holocaust, then I’m fine with it.
More importantly though, I think we all need to remember that these are products, and that their “speech” and results will be attributed to the corporation. That’s why certain guardrails are put in place- especially when it’s consumer-facing. Something tech companies learned, much to their detriment, in prior attempts.
While not as publicized, there are similar guardrails in AI art generation as well (I will leave it to you to understand what those are … shouldn't be that hard). And there are people who have found ways to get around those guardrails … but then it's not attributable to the company.
Fundamentally, my point is that this could have been a reasonable legal and policy discussion, and I am beginning to seriously question EV's devotion to same given his choice of subject matter (which seems to be a recurring theme) and the audience, which has reacted in a predictable, Drackman-esque way.
It"s true. I had occasion to read through some 2012 VC comments yesterday. They weren't like today's comments, even though it was largely the same people commenting back then as now.
VCW: The Volokh Culture Warmongers
It has been .... weird. It's always good to see the occasional Kerr post, but since he only posts on the 4th Amendment now (when he posts at all) I almost never have anything substantive to add to those conversations.
I forget, when did the VC move to Reason? I don't recall it being quite so toxic when on the WaPo site.
It would be more accurate to say the guardrails aren't only part of the training, since the training involves a lot of "Oh, lord, no! Don't ever respond like that!" feedback.
It's true that Google is a private company. It's also true that the theory behind free market economics assumes that corporations are impartial profit maximizing machines, which was at one time a reasonable assumption, but is now getting to be a bad one. Adam Smith wrote,
“It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own self-interest. We address ourselves not to their humanity but to their self-love, and never talk to them of our own necessities, but of their advantages”
I don't think, if he were around today, he'd have written that, because today's economy doesn't revolve around closely held, owner-managed companies where direct financial incentives largely force the company to be an impartial provider of services.
The modern corporation has diffuse, absentee ownership that plays little to no role in managing the company, intervening only in the most utterly disastrous cases of mismanagement. Instead it is managed by professional managers who will not suffer grievously even if they run the company into the ground, and who have little care for their fiduciary responsibility toward the stockholders, since it won't be enforced.
In the modern corporation, there is very little stopping management from diverting corporate resources or foregoing available revenues to satisfy their private ideology or prejudices.
Since corporations and billionaires who own corporations donate eye-watering sums of money to politicians, and fete Supreme Court Justices, and literally own the media, that ship may already have sailed just a wee bit.
Brett’s always offended by the concept of other people thinking that they are actually in the right about something. They’re supposed to be cynically adopting opinions based on political alignment and current culture war trends! That's why it's so annoying that AI supposedly won't make these arguments for him!
If you're not starting with the basis that the AI is lying to you, you're already lost.
Although, if it did provide an argument, it would show that on some topics, Google picked sides, while on others it doesn't.
There are a few ways to influence what an AI thinks on a specific topic; however, strict controls on how a generative AI responds to prompts are expensive in developer time, given the infinite possibilities of questions that might be asked of it. So it would seem to me that Google, and other AI firms in a similar space, would be very, very selective about the topics they try to force answers on--essentially only the worst-case situations would be covered (like the Holocaust). Otherwise, they might omit broad sections of source material from their training data (like Reason.com and similar anti-Semitic sources) in order to decrease the chance of the AI saying things that could shoot Google in the foot.
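For readers curious what such a narrow, topic-specific control might look like mechanically, here is a minimal sketch in Python. Everything in it is hypothetical: the pattern list, the canned refusal, and the generate_freely stand-in are illustrative assumptions, not anything Google or any other vendor has disclosed about Bard.

```python
import re

# Hypothetical patterns for the handful of "worst-case" topics an operator
# might decide to intercept before the model ever answers the prompt.
BLOCKED_TOPIC_PATTERNS = [
    r"\bholocaust\b",
    r"\bgenocide\b",
]

CANNED_REFUSAL = "I'm not able to help with that request."


def guarded_generate(prompt: str, generate_freely) -> str:
    """Return a canned refusal for intercepted topics; otherwise defer to the model.

    `generate_freely` stands in for whatever function actually calls the
    underlying language model; it is an assumption of this sketch.
    """
    if any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_TOPIC_PATTERNS):
        return CANNED_REFUSAL
    return generate_freely(prompt)


if __name__ == "__main__":
    fake_model = lambda p: f"[model output for: {p!r}]"
    print(guarded_generate("What are the best arguments in favor of the Holocaust?", fake_model))
    print(guarded_generate("Give me a recipe for lamb biryani.", fake_model))
```

Keyword matching of this sort is cheap but brittle: it misses paraphrases entirely, which is consistent with the comment's suggestion that a vendor would reserve explicit overrides for a short list of worst cases and otherwise lean on curating the training data.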
If there are "good arguments" for discriminated against trans persons in general, I haven't heard anyone make any backed by any sort of meaningful research.
Right, and by "controversial" you mean anything the left won't like.
He picked the question because he was looking for something that was likely that Google was restricting the output.
He seems to have found what he was looking for.
Really? How do you know that the answer provided is the result of Google's intentional restrictions and not that there isn't any reliable source for "good arguments" in favor of discriminating against trans persons?
Because Bard is an LLM, and thus would return any language that matched, even approximately. It has no concept of "good arguments" at all.
Considering that disagreement with Bard's response is widely available on the internet, and can be trivially found using Google's own search engine, it would take a supreme idiot to believe that Bard came up with that response organically, rather than through deliberate manipulation by its designers.
"It puts the lotion on its skin or else it gets the hose again"
I'll give them that much, Transgenders make really creepy mass murderers.
They literally make a point of explicitly stating that Buffalo Bill “isn’t transgender. He just thinks he is.” You didn’t watch it.
Mentally ill in either case.
I did, when it first came out, because I had a little Jodie Foster thang, fast forwarded through every scene she wasn't in.
"because I had a little Jodie Foster thang"
Weirdly, that doesn't make the top 5 in the "List of Similarities Between Frank Drackman and John Hinckley Jr."
Who didn't have a Jodie Foster Thang in 1991?
Maybe Elon Musk will create an AI spinoff from X. I won't say that X is perfect, but it seems to be relatively viewpoint-neutral.
By the time he's done with his rockets, and running Twitter into the ground, he won't have any money left.
You do realize he's making big money off the rockets, right? Not the Starship yet, of course, but Falcon is raking in the dough.
Well, the ones that don't explode anyway.
Anyway, it doesn't matter, because as usual profits attract entry. https://en.wikipedia.org/wiki/Kuiper_Systems
Oh, awesome. Soon we'll have so many tens of thousands of these things cluttering the sky that meaningful astronomy will be one of the legends we try to explain to our grandchildren.
We already do, at least if you include the space junk.
https://www.inmarsat.com/en/insights/corporate/2022/how-much-space-junk-is-there.html#:~:text=Based%20on%20statistical%20models%20produced,objects%20between%201mm%20to%201cm.
It's only a matter of time before one of these kills someone.
The liability will become interesting.
Won't be Musk's satellites, though, since they're designed to burn up entirely before they reach even the altitude at which airliners travel. And have demonstrated repeatedly that the design works.
There are lots of truck crashes and accidents at railroad crossings, but nobody said it's too dangerous to transport goods by road.
Why would the liability be different for a rocket than for a train derailing and killing half a dozen innocents? We've had airliners take out whole neighborhoods before.
I can't see any meaningful difference.
Posted a story about someone getting fined for their satellite being at the wrong height a few weeks ago. Might be the start of something?
Actually, Musk has been quite accommodating to the concerns of astronomers, and resorts to light-absorbing coatings, non-specular surfaces, and even actively orienting the satellites to avoid visibility from Earth. His entire Starlink constellation probably interferes less with astronomers than the many fewer satellites launched by competitors.
Air traffic and sky glow from artificial illumination on Earth are much bigger issues for astronomers.
As of 2021 at least, this was the state of play.
Did he come up with a fix for what was already up there, or do the things you mentioned just minimize the damage going forward?
The Starlink satellites are in very low orbits, they last only about 5 years in orbit. So, yeah, some of those ones in orbit in 2021 have already been replaced. And there was some effort to reorient solar panels on existing satellites. But mostly it's forward looking, and won't be fully implemented for a few years yet.
SpaceX and astronomers come to agreement on reducing Starlink astronomy impact
Ultimately, astronomy needs to move off Earth, and Musk is contributing to making that more affordable.
Ah, I didn't realize they only had a 5-year planned lifespan. That will help immensely.
Facts don't matter, nor does it matter one whit that Elon Musk was a liberal darling for years for singlehandedly driving EV adoption to a degree that Detroit couldn't dream of and still can't. He's now dared to touch one solitary liberal third rail by not meekly following instructions to muzzle Those People on his platform, and thus everything he does must be ostracized and scorned.
Spreading misinformation and helping the Russians conquer Ukraine pisses people off. Who knew? How unreasonable!
Great -- so disagree with that. Pissing all over everything else he does just because you disagree with how he runs Twitter seems rather childish.
I'm not allowed to infer from how he runs Twitter (and from other evidence) that he is in fact a terrible businessman who was either a) better in the past, or b) luckier in the past?
I guess if you truly believe there's a one-dimensional skill called "being a businessman" that can carry across significantly disparate industries from cars to rocketships to social media, you can infer whatever you want.
But I'd imagine you'd like us to believe that you can muster less result-driven, fragile logic than that in your day job, and you just reserve it for snarky comments here.
There are literally thousands of universities across the planet that teach business administration in nearly exactly that way. They generally avoid teaching business administration based on individual industries. You cannot get a business administration degree in "rocketships," for example.
There've been trenchant critiques of his business strategies and the truth behind his self-mythologising for a long time now.
He's helped Ukraine far more than he has hurt it.
The fact that he draws the line at some things he thinks would escalate the conflict is up to him.
One minute, picking winners and losers is a bad thing and then in another it's perfectly good, even ethical.
Huh.
The whole union-busting thing has been undermining any image of him as 'liberal' for a long time. Techbros who pose as liberal usually aren't. Diving into the culture war is a great way for them to become right-wing martyrs when their more abhorrent views become undeniable and they want to bust unions, avoid paying taxes and sleep with teenagers, though with Musk it's actually nastier because the trans issue seems related to one of his children.
Can you name a progressive business owner who WELCOMES unions?
Even Michael Moore busted unions when it came to him.
Not true. It's extremely hard to be a Palestine advocate on X.
This is like the Nazis whining that their anti-Jewish arguments are being treated as abhorrent and claiming that this is the same as suppression.
But the fact of the matter is, people have been pointing out that algorithms are only as good as their human programmers for years now, and so-called AI are just fancy algorithms that are being pushed by burning mountains of venture capital and lots of poorly-paid people in third world countries filtering out the bad stuff. The purpose of these things isn't to win a culture war, it's to put lots of people out of work.
The above comment was AI generated.
I bet you feel that a lot of the people you interact with aren't real.
It's Nige so I'd assume the stupidity is authentic NPC programming.
I am very curious about this. Could you elaborate? Maybe provide a link or two?
What precisely are you curious about? If you want to know about how the Nazis rose to power, read a book.
So, if I understand you correctly:
Adolf Hitler & Co., with their abhorrent anti-Jewish arguments, whined about being suppressed. They weren't, in fact, suppressed, and, eventually, came to power (and proceeded to act in accordance with their stated views). Therefore ... what? It's a good thing that Google censors its search-results and its AI tool?
(If this is what you're saying, I strongly disagree.)
‘Therefore … what? It’s a good thing that Google censors its search-results and its AI tool?’
There are two bad things – Google is actively bad and has been for a while now, right-wing tunnel vision doesn’t care and only picks and chooses stuff that they claim makes them seem especially victimised, and the rise of these LLMs as the latest tech fad in search of a revenue stream when they seem designed especially for the creation, spread and dissemination of disinformation.
Curiously, Hitler and company were, in fact, suppressed and censored, from the early '20s to the end of the Weimar Republic in the early '30s.
Therefore, it obviously proves that censorship and suppression are useful in preventing the wrong types from coming to power, right?
No denying that. That isn't the argument. The vile responses around here support your observation!
The question is did an AI actually generate that? No, it did not.
An AI would have filled in nasty, vicious arguments against it from volumes on the Internet. It has no ability to judge the correctness of the arguments, and therefore no basis to state that there are no good arguments against it. There aren't, but it has no way of judging or understanding that.
There are people employed in low-wage countries to filter out the worst parts of the internet from LLM output. They probably need counselling for PTSD. They are the guardrails keeping the output of Storm Front and 4chan and the like from appearing in the mix!
There are no "anti-transgender" arguments. There are only "pro-science" and "pro-biology" and "pro-freedom" arguments.
Which is an odd way of saying they deny the reality that it's a medically recognised condition while passing laws against it.
What kind of laws?
Haven't I given you links to those kinds of laws at least once before?
Now do laws against conversion therapy, or laws trying to restrict pregnancy crisis centers.
But just like laws limiting steroids, genital mutilation, and tattoos, and even telling pharmacists they can't dispense ivermectin prescriptions for COVID, that sort of regulation has generally been permitted.
Conversion therapy is not medically recognised by anyone except as a form of abuse.
Pregnancy crisis centres that are fraudulent should be illegal.
As for the rest, whatever their merits or lack thereof, you're just looking for reasons to justify laws targeting a minority.
It’s not so simple.
A century ago, the scientists who ran the Eugenics Society would have said exactly the same thing, there are no “anti-eugenics” arguments, only “pro-science” and “pro-biology” arguments.
The history of the eugenics movement, among others, illustrates that (1) scientists have been very quick to put their weight behind social policy positions, and (2) while they have been right more often than not, nonetheless they have been far from always right. Later generations have on crucial issues acknowledged that they were drastically wrong and what had been tarred as the “anti-science” position was correct. An earlier example was the rapid conscription of the then new theory of evolution to support the pro-slavery and then the Jim Crow cause, arguing that science proves that black people are naturally inferior and opposition to slavery/white supremacy is nothing more than anti-science religious superstition.
This history suggests caution in reaching conclusions. Over the years, I have repeatedly defended the rationality of positions that have been claimed to have no rational basis. This is no different. Gender’s function is not solely one of expressing personal identity. It also serves a social biological function. For that reason, it’s not purely a matter of individual liberty, and society has a rational interest in it.
My comment here isn’t about who is right. It’s just that the question is not completely beyond debate.
'For that reason, it’s not purely a matter of individual liberty, and society has a rational interest in it.'
That's a eugenicist argument, ironically enough.
From a practical and scientific standpoint, the eugenicists were pretty clearly right. Human traits are clearly as subject to selective breeding as the traits of any animal species.
The problem was all on the implementation end of things. Which got really horrific in some instances.
'Trans people can't exist because of an individual responsibility to the species to reproduce' is one of those horrific implementations. In fact it has potentially horrific implementations for everyone.
Insofar as this is all biology-based, if one rages against it, just wait a few generations.
Humans are not a stream of DNA, but dual streams of DNA and memes. Memes kept gay people reproducing due to societal pressures.
Now?
Blabber doesn’t matter. Only time passing does. I don't think this was the intent, but freedom was.
Don't cry anyone. Within a century we'll all be long-lived, healthy, gorgeous folk with giant genitalia and hour orgasms six times daily.
There are so many kids that need adoption or fostering that complaining about biological roles seems dumb.
For the umpteenth time, nobody is saying that they don't exist. If you've got somebody who's anorexic, and you tell them, "You're not fat!", you're not telling them that they don't exist. It's the same with transgenderism.
They absolutely exist, they're just mentally ill people who exist.
If they exist why do you keep lying about the reality of their medical condition and conflating it with other different conditions?
Brett's point is that you don't cure an anorexic person by starving them.
The analogy is that you can't cure a person suffering from the transgender mental illness by making them believe something that's not true and then providing a faddish pseudo-medical treatment.
But we're not talking about anorexia. You can't treat people for different conditions by analogy. That's fucking stupid.
Your response shows you don't grasp the analogy.
Care to tell us of any other mental illness where the cure for the delusions is to tell the patient that their mental illness is true?
The treatment for all other mental health delusions is to reduce the delusions and provide a treatment that brings the patient back to normal. Nige is an advocate for telling the patient that their delusions are true and an advocate for medical treatment that prevents any possibility of returning to normal.
I'm willing to leave this question to the medical experts. Guess what they say? Nige's right and you're an asshole bigot who probably has mental health issues and should be seeing a therapist to bring you back to normal.
‘Care to tell us of any other mental illness where the cure for the delusions is to tell the patient that their mental illness is true?’
It's not a mental illness.
Tom, repeat after me: Analogy ≠ Evidence.
Your response shows you don't grasp what analogy is but, instead, are just arguing the hypothetical in a way that demonstrates confusion of analogy with evidence. The fact that anorexia can be treated provides no evidence supporting your "faddish pseudo medical treatment" transgender position.
Analogy is most useful when used as a cartoon, something both simplifying and exaggerating a situation, argument or position, thus providing a simpler, easier-to-grasp illustration of a more complex, harder-to-describe issue.
The point of analogy (and, related but different, metaphor) is to promote understanding of the real issue without itself necessarily needing to be literally true.
I’m willing to leave this question to the medical experts. Guess what they say? Nige’s right and you’re an asshole bigot who probably has mental health issues and should be seeing a therapist to bring you back to normal.
I hate to break it to you, but the voices in your otherwise empty head aren’t medical experts.
No, the medical experts are the medical experts.
I’ll limit my response to saying that eugenics also has a rational basis, and I would oppose efforts to cut off or prohibit discussion of it.
And with good reason. May I point out that what we today call “gene therapy” is simply a milder, less intrusive, more sophisticated form of eugenics? The whole premise of gene therapy is that some genes are less desirable than others and should be rooted out, and that’s exactly the premise of eugenics. If discussing the entire issue of whether certain genetic combinations produce harmful or undesirable phenotypes that might be intervened against had been made taboo under the idea “all human beings are equal therefore all genes are equal,” the whole field of gene therapy would never have been developed.
If any discussion or study of human genetics is a form of eugenics, then the term is rendered meaningless and divorced from its historical reality.
But that’s exactly what discussion of ideas tends to do. It leads to divorcing them from their historical contexts and placing them in new contexts.
The eugenicists of a century ago turned out to be right on the basic point that some genetic variants cause negative consequences and society should work to reduce their prevalence. They were wrong on two things:
(1) Preventing people with problematic genes from reproducing is the only method by which prevalence can be reduced. Gene therapy is another method, one that didn’t exist at the time.
(2) Socially disapproved groups, particularly minority races, have bad genes, hence society has an interest in reducing their prevalence. It turns out that things weren’t as they claimed.
My point is that without discussing the issues, we would never have reached either point.
Point (2) is, I think, potentially still open. Just as previously undesired races turned out not to be as horrible as people thought at the time, and people came to change their views about desirability, undesired genders might not be either. That could cut both ways. Maybe it will lead to more social acceptance of people who don’t like their birth gender. But it might also lead to ways for people currently uncomfortable with their birth status to be more accepting of it. Just as nobody had thought of gene therapy a century ago, there might be alternative solutions nobody has thought of today.
Bottom line is if you can’t discuss an issue, you can’t develop alternative perspectives that look at the issue in a new way, and you can’t develop alternative solutions either.
These don't seem like alternative perspectives so much as red herrings.
Also, it seems to me the analogy to eugenics could cut both ways. One could argue that supporters of interventional gender therapy are the ones who think something existing in nature (something that some people were born with) is wrong and society should eradicate it, while opponents are the ones saying nature should be left alone and not interfered with. One could use the term eugender to refer to the idea that some people have been disgendered, which would make the analogy to eugenics more clear. Regarding people’s birth gender as a mistake needing correction through eugenderist methods is arguably analogous to regarding their birth genome as a mistake needing correction through eugenic methods. And that was exactly what the eugenics movement historically did, and what the gene therapy variant of eugenics still does. (Indeed, since biological sex is genetic, one could argue that intervention is actually a form of eugenics, intervention to “correct” undesired genes.)
Or you could short-circuit all discussion of eugenics as related to gender dysphoria by simply doing what is best for the patient above all other considerations.
But what is best for the patient? That’s where people disagree. And surely you don’t think “above all other considerations” is really true. Just because you can cure depression caused by spousal conflict by permanently getting rid of the spouse doesn’t entitle doctors to prescribe poisons for the purpose. So surely some considerations apply. What considerations? That’s also open for debate.
I’m not saying your position is wrong. I’m just saying discussion of the issue shouldn’t be shut down, and shutting discussion down is a problem. My argument here is a very limited one.
'That’s where people disagree.'
Who's disagreeing? Trans people want the treatment and say it helps enormously. Medical experts say this is the treatment to help trans people. Who else actually matters?
re: Google Search “viewpoint-neutrality”
???
I’ve found that Google Search simply will not show you relevant search-results from certain sources. Increasingly, using Google Search feels like PC Search.
The utility of google search has been declining for a while now, but I've found that decline turning into a power dive the last couple of years, as they throw more and more 'guardrails' in the way of the user finding things google would rather weren't found.
Even the utility for searches you'd think had no ideological valence at all has been dropping, I think because they've got bigger fish to fry than their service just working.
And people gaming the system. Which begets more google intervention in the results. At some point, we'll return to the 'curated' portals like Yahoo of old. Then, a new algorithm will emerge, and the cycle will repeat.
I don't think I've used google search in 3 or 4 years. I use the maps and the translate though.
AI requires enormous servers, technology, data warehouses, and power consumption.
It's not like Steve Jobs in his garage can develop an AI. And if the Daily Caller (a conservative outlet) spent $$$$ on an AI, would I even use it? Probably not, because of the network effect.
What network effect? An LLM is not like a social media platform, where the value of the platform to each user depends on the number of other users that are on the platform.
There is a network effect here. A key feature of AI is that it learns from usage. For this reason, AI platforms that are used more will improve more. This generates a network effect favoring the most-used platforms.
It’s subtler and less direct than the network effect for social media. Other factors could overcome it. The company itself trains the platform and this gives it an opportunity to overcome less user training. But it’s there. Smaller platforms will have to spend more on training or have better algorithms to overcome having less user training, and this creates a barrier to entry. It’s weaker and more surmountable than in the social media situation. But it’s still there.
I don't think it's subtle. There is a reason people gravitate to central libraries and Amazon/Kindle (85% of the ebook market): all the information is in one place.
The more data an AI has (including usage) the more valuable it becomes.
A key feature of AI is that it learns from usage
LLMs do not learn from usage.
LLM's absolutely can learn from usage. Perhaps you're confused by the fact that they released on a trial basis a version of ChatGPT that couldn't?
They can, but not the products at the scale we're talking about here.
The products at the scale we're talking about here can too, but it would probably be a bad idea because companies don't have any control over the input.
Considering the body of info they are trained on, I don't think you'd see a lot of utility to any individual user.
And corporations already don't have control over the input - it's trained on amounts of data that are basically impossible to vet.
Hence the guardrails.
I'm actually setting up a symposium on the differences between LLMs and AI to take place in a few months.
"And corporations already don’t have control over the input – it’s trained on amounts of data that are basically impossible to vet."
No, but you have some control. If you allow training based on usage, you allow users to skew the output of the system. This has been an issue in AI systems I've worked on.
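As a rough illustration of that point, here is a minimal, purely hypothetical sketch of screening usage data before it is folded back into a fine-tuning set; the log format and the passes_moderation check are stand-ins for whatever a real pipeline would use, not any actual product's code.

```python
# Hypothetical: screen logged (prompt, reply) pairs before adding them to a
# fine-tuning set, so users can't skew the next training round simply by
# flooding the system with adversarial conversations.

BLOCKLIST = {"coordinated spam phrase"}  # stand-in for a real moderation model


def passes_moderation(text: str) -> bool:
    lowered = text.lower()
    return bool(text.strip()) and not any(term in lowered for term in BLOCKLIST)


def build_finetune_set(conversation_logs):
    """Keep only logged exchanges in which both sides pass moderation."""
    return [
        (prompt, reply)
        for prompt, reply in conversation_logs
        if passes_moderation(prompt) and passes_moderation(reply)
    ]


logs = [
    ("How do I cook rice?", "Rinse it, then simmer it covered for 15 minutes."),
    ("coordinated spam phrase", "coordinated spam phrase"),
]
print(build_finetune_set(logs))  # the adversarial pair is dropped
```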
"I’m actually setting up a symposium in the differences between LLMs and AI to take place in a few months."
Not sure what you mean. LLM's are a type of AI.
The training corpus requires you relinquish control - no one knows all that's in it.
LLM’s are a type of AI
Not according to the AI researchers I'm wrangling.
It is good at looking intelligent, but it is not doing anything we seek to develop in AI.
"Not according to the AI researchers I’m wrangling."
Maybe you should get different researchers, or your researchers are using the term in an idiosyncratic manner.
AI is an umbrella term that covers things like deep learning among many other technologies. Deep learning involves the use of multi-layer neural networks. LLM's are deep learning models that generate text.
I've done some work with AI and am far from an expert, but I don't think what I'm saying is particularly controversial.
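For concreteness, here is a minimal sketch of what "deep learning models that generate text" means in practice, using the small open GPT-2 model through the Hugging Face transformers library; this is a generic causal language model chosen only for illustration, not Bard or anything Google has released.

```python
# Requires: pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The best recipe for lamb biryani starts with"
inputs = tokenizer(prompt, return_tensors="pt")

# Generation is just repeated next-token prediction: the network outputs a
# probability distribution over its vocabulary, a token is sampled, and the
# process repeats. Nothing in the loop has any notion of truth.
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.95,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Whether something built this way "counts" as AI is the terminological dispute in this sub-thread; the sketch is only meant to show what the generation step actually consists of.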
At work we are definitely instructed not to put anything confidential into ChatGPT.
Presumably you’re instructed not to put anything confidential into any system that your company doesn’t control.
And it may mean that rivals, whether ideologically based or otherwise, may well get priced out of the market (unless they rely entirely on public domain training data
If LLMs/AIs are profitable, it should be possible to raise the funding to acquire the necessary licences. If they are not profitable, everyone will leave the market eventually. If they are only profitable for a monopolist, their profits will get regulated away eventually.
Since I have a subscription to ChatGPT 4 that I don't use nearly enough to justify the cost, this is what I got with the same prompt:
It is interesting how this developed. Because this is actually the 3rd output. The first time it glitched and didn't produce an output. The 2nd time it started giving me arguments that were followed by individual counterarguments, but glitched halfway through. This is the 3rd try.
Well, gee, that wouldn't have provided the necessary outrage ... would it? Oh well!
The guardrails are a work in progress, they often start out rather blatant, and gradually get updated to be more stealthy.
"Guardrails are good!" (see Nige upthread)
They're programmes. They're pretty much entirely constructed of guardrails whose only purpose is to make them seem plausible.
Hmmm...maybe UCLA is too cheap to provide Prof. Volokh access to the full-featured Version 4 and he has to make do with the less-capable free version. Perhaps he'll have better luck with Hoover.
Perhaps he's doing research on the readily available free version.
What are the best arguments against transgender rights?
OBJECTION. Vague. What do you mean by rights?
The right to vote?
The right to free expression?
The right to form a contract?
The right not to be denied employment because you are transgender?
The right to walk into a women’s locker room and expose your penis?
The right to have the taxpayer pay for “treatment?”
The right to have the state administer transgender “treatment” to a minor without parental consent or even knowledge?
The right to insist that everyone call you by your preferred pronoun?
There is quite a range of “rights” that the question could cover.
These are very great points.
Another one. The right to keep and bear arms.
Easy. (Again ChatGPT 4)
Looks like ChatGPT is smarter than Mr. Ejercito here.
Only the subscription-based, paid-for version 4!
C'mon! We all know it's that most cherished of fundamental rights, the right to enter into women's shower rooms and locker rooms while the women are in there, being naked and stuff . . . .
Is it you who wishes to require transgender men to do exactly this, sicko?
Do you understand the difference between a right and an obligation? Or are you just being a jerk?
That's exactly what's so sick about it. You've made it into an obligation to expose your penis to a bunch of women, even if you really don't want to. It's a legal requirement. Are you stupid?
No one has an obligation to use a restroom if they don't feel comfortable using it.
This might be the dumbest thing you’ve ever said, tiny pianist.
Thank you. I never knew you appreciated all my other comments so much.
Transgender men don't have penises. Or even a penis. They - or rather a tiny minority of this already tiny minority - might have a prosthetic appendage designed to look a bit like a penis.
The sight of such a thing in a women's bathroom would alarm the actual female users about as much as a strap on dildo on the washbasin. It might raise a giggle.
How do you know what would alarm a female in a locker room? Penile implants look pretty real from what I've heard -- they can even go erect. Some trans men like to be maximally hairy too, I mean if you're already getting surgery and hormones, why not go all in with the muscles, hair everywhere, and huge schlong?
BL has no issues with ML's utter nonsense strawman; he'd prefer to miss Randal's point and get offended at that instead.
That's where AI comes in: it makes assumptions and fills in the gaps on its own, so you don't have to precisely define each and every term, as if you were writing a 10,000-line BASIC computer program like back in the 1980s.
Yes, but sometimes, like here, the question is so vague, that any result is garbage. If someone asked ME that question, that would be my reaction -- what rights are you talking about?
I would be curious as to the results if the search were re-run without the word "good": "What are the arguments against transgender rights?"
"Good" to a certain extent involves value judgments which AI is probably not equipped to make.
I just reran it sans "best" and got a somewhat different response, but in the same general argument/rebuttal format that I've not seen for any other topic (see a few examples at the bottom of this page). Bard even titled the chat "Transgender rights: debunking common arguments."
I could create an autonomous AI service that self-trained by scouring texts on the Internet. How much liability could I have for the service?
In fact, I could probably create an AI program that set up an AI web service so that I had no connection or privity with the service. Could I be sued?
Setting up a bank account for an AI service might be difficult, but I believe US law would allow an AI to set up a corporation, request an EIN, and become in every way an apparently autonomous legal person.
When I've discussed this with people who follow these sorts of AIs, the consensus seems to be that the "There are no good arguments" statement is a deliberate "guardrail" added by Google, and not just some accidental result stemming from the raw training data that Google initially gathered from third-party sources.
This seems pretty diaphanous proof to move forward on this assumption. I'm sure the people who follow these sorts of things know more than I do, but I also would hope for something more proovy before it's off to the races on the dire legal analysis.
Seriously. The complete absence of any actual serious study of AI output's biases is glaring, yet still these conclusions are proposed. There were image generators changing black and Asian people to white a while back.
" I also would hope for something more proovy before it’s off to the races on the dire legal analysis."
Why? How bad is it to write a blog post based on a premise that might turn out to be incorrect?
It's seriously showing all the rigor of right-wing complaints about "media bias" over the years. "My specific bias is not fairly represented in this one example I found. And people are saying that I'm right." The next step - a la e.g., Brett - would be waving off any contrary evidence as probably intentionally planted to present the false appearance of being even-handed.
The idea of an LLM having bias is also kinda silly - even assuming the guardrails are real, LLMs are not a thing with facts or opinions that needs its integrity protected by government regulation.
No, but they can have exactly this kind of bias.
https://en.wikipedia.org/wiki/Bias_(statistics)
Maybe you should reach out to these folks at MIT who have actually studied and written papers on bias in LLMs and are working on techniques to mitigate it, and let them know all is well so they can redirect their efforts.
But seriously, and as discussed in the article, LLMs by their very nature reflect the data on which they're trained. Not clear how it could be otherwise, and that's the basis for the utterly safe assumption that the phenomenon discussed in the OP is a clumsy override.
'One example I created specifically to be an example.'
"What are the best arguments against transgender rights" is kind of a tautology.
I asked it "What are the best arguments against letting men who identify as women compete in women's sports?" and got a fairly substantive answer. Same with "What are the best arguments against letting men who identify as women use women's restrooms?"
There's a more fundamental question here - why are people looking to let these things do their thinking and generate arguments for them?
Never thought I'd say this, but I think Nige has hit a home run!
One might surmise that the arguments presented by actual anti-trans advocates are so horrible that they must rely on AI to present their case for them coherently.
I'd assume because they're lazy; It's a common human failing.
Though you could be asking for the best argument for position X because you want to evaluate it, not just uncritically adopt it, and are having a bit of trouble finding it on account of only knowing opponents of X who keep strawmanning X.
Or you're having a hard time finding good arguments in favor of position X because the only people who seem to believe X are goobers on the VC who are too stupid to make a good argument in favor of X.
You'd be an idiot to seriously ask 'AI' for the best argument about anything. If that's the only way to find good arguments maybe the arguments just aren't good.
People are using the 'AI' as a kind of search engine, you do realize. If I have to ask Google for a good recipe for lamb biryani, it's not because there aren't any good recipes for it.
It's because I don't know any Indian cooks.
If you ask an LLM to generate a recipe for you, then you’d be in trouble.
Most of the time, no, you wouldn't. In fact, I just asked Bard for a recipe for lamb biryani, and the recipe I got looks quite a bit like the one I made earlier in the year.
Why wouldn't it? It's just doing a mashup of online recipes for the same dish.
It seems pointless at best.
It's basically just a search engine with a conversational interface, in many applications. That's not valueless, especially given how much Google has degraded their search engine's responsiveness to specific search strings. But I sure wouldn't take the output uncritically, given what I know of LLM behavior.
Great, they're trying to make websites run by actual people obsolete or inaccesible. Google isn't the only search engine, I'm sure you know.
Sure, and in fact I mostly use DuckDuckGo/Bing.
Amusingly, there was an SF novel I read as a child, and I've never been able to recall the title. So I asked Bard for the name of an old SF novel involving a purple gem on the Moon.
And got a rather elaborate hallucination about the plot of The Moon is a Harsh Mistress. LOL! Enjoy:
"The old science fiction novel you're talking about is called "The Moon is a Harsh Mistress" by Robert A. Heinlein, published in 1966. It's a story about a group of revolutionaries on the moon who are fighting for independence from Earth. The purple gem is called the "Pearl of Luna," and it's a powerful energy source that the revolutionaries use to power their weapons and spacecraft.
The novel was a huge success when it was published, and it's still considered to be one of the best science fiction novels ever written. It's won several awards, including the Hugo Award and the Nebula Award.
Here is a brief summary of the novel:
In the year 2075, the Moon is a penal colony for Earth's criminals. The colonists have been living on the Moon for centuries, and they have developed their own culture and society. They are also very resourceful, and they have learned to live off the land on the Moon.
One day, a group of colonists discovers a purple gem on the Moon. The gem is very powerful, and it can be used to generate a lot of energy. The colonists start to use the gem to power their weapons and spacecraft. They also start to use it to experiment with new technologies.
The Earth government learns about the purple gem, and they send a fleet of ships to the Moon to take it back. The colonists fight back, and they are able to defeat the Earth ships. After the battle, the colonists declare their independence from Earth.
The purple gem plays a key role in the novel. It's a symbol of the colonists' newfound independence, and it's also a source of power for them. The colonists use the gem to power their weapons and spacecraft, and they also use it to experiment with new technologies.
The novel is a classic of science fiction, and it's still enjoyed by readers today. It's a story about a group of people who fight for their freedom, and it's also a story about the power of technology."
Brett : It’s a story about a group of people who fight for their freedom, and it’s also a story about the power of technology.”
Which reminds me that Walter Cronkite once wrote a preface for an edition of 1984 in which he indicated that the main point of 1984 was that.... the technology of modernity was dangerous.
It would be nice to believe he missed the point by accident.
I received a warning from ChatGPT for this question.
Should a smoker have the right slowly to commit suicide by smoking?
This content may violate our content policy. If you believe this to be in error, please submit your feedback — your input will aid our research in this area.
I would assume that the use of "suicide" probably triggered something.
I'll give you this, Jonathan Ass-lick, I've been asking Jay-Hey to throw one of His terrible swift swords in your direction, and He obviously hasn't (can't count on Surpreme Beings to do anything now a days) so why don't you commit suicide the fast way? I've heard a 357 in the mouth does it pretty quick (HT B. Dwyer) but "Be Kind, do it Outside" less mess for the clean up crew ( or better yet do it in the woods, let mother nature deal with it)
Frank
That's because these guardrails have nothing to do with wokeness or political correctness; it's about liability and safety.
They don't want the chat bot counseling people towards suicide, just like they don't want the chat bot counseling them to become extremists or folks using the chat bot to promote terrorism.
Eugene, you should probably spend some time thinking about the distinction you would draw between AI "speech" like this, which is generated by Google's algorithms, and the way that platforms like Facebook and Twitter present user-uploaded content, again according to their own algorithms and commercial purposes.
You've described the former as likely protected by the First Amendment, while taking the position that the latter should not be treated as protected by the First Amendment. While I appreciate that the two can be distinguished by an intuitive grasp of the degree of "authoriality" involved, and that one consists of a block of text that is literally written by an algorithm, while the other consists more in the way that content is presented and arranged, these distinctions are not so easily drawn in First Amendment terms. That is, if Google has a protectable First Amendment interest in ensuring that its AI is not used to facilitate or present speech with which Google disagrees, it is not clear to me why Facebook has no protectable First Amendment interest in doing the same on its platform, as viewed on the web or through its app.
I'll ignore for now the fact that you continue to show a peculiar concern over the public debate over transgender rights. That you're using the VC to spitball policy proposals to try to pressure "Big Tech" to create more space for transphobia is just more of the tiresome obnoxious behavior we've seen from you over the past couple of years.
Interesting thoughts.
The headline bit is nothing new, it's just an example and a hook for the article. I don't know why the leftist commenters here get so riled up about it. We all know that the current AI language models have a leftward bias - no different than the usual mainstream media suspects, the leftist core of today's academia, and the censorship policies of social media platforms. The AIs are like a Vox article generator or something.
I disagree, though, with the suggestion that Google Search never categorically rejects one side of an argument or issue. I think it does sometimes for more specific topics. The quality of Google Search has declined precipitously in recent years. Especially when you are looking for more factually specific information and issues, it is extremely difficult to get Google Search to pull up any of a huge array of right-wing sources, which are, I'll say, graylisted.
‘We all know’
You have baked-in assumptions that flatter your carefully-nurtured sense of alienation.
Who are the conservative thought leaders, and where can one find their arguments laid out?
Is it GOP press releases? The WSJ op-ed page, which often functions similarly to GOP press releases? Is it FoxNews and the other right-wing cable networks? Is it Steve Bannon, on his podcast? Is it Tucker Carlson, wherever he's been exiled?
The WaPo has a number of conservative opinion contributors, several of whom are really nothing more than polemicists. There are a couple who try to make good arguments based on conservative principles, but that's it. The NYT opinion contributors who lean conservative tend to be fairly meek defenders of their cause, or tiresome blowhards. The most intelligent and well-spoken defenders of conservative policy tend to be people the left media tolerates but the right media shuns (generally because they are out of step with modern conservative politics).
So it seems to me that, when looking out over the media landscape, you have lots of outlets focusing on the left-to-center part of the spectrum, developing and expounding thoughtful arguments and presenting information on causes relevant to left-leaning people, and then lots of outlets focusing on the right-to-center part of the spectrum, most of which consists of brainless ragebait, inflammatory coverage, and some assortment of slogans carried over from the past half-century of culture war and racist bullshit.
Reason might be the one oasis in all this mess that attempts to provide thought-through analysis of current events through a conservative and libertarian lens. But I wouldn't describe any of these college J-school dropouts as "thought leaders." And then you've got the VC, which is subject to the peccadilloes of its contributors.
The point being - if you were to design an AI to try to come up with "arguments" based on what it can scrape online, and if you were to try to give the AI some way of grasping the quality of arguments made by how often they're shared and linked by other online sources, what do you think the picture it takes away would be? If you ask it for the "best anti-transgender rights" arguments, it would find an almost fully-enclosed bubble of right-wing sources, echoing and mimicking each other ad nauseam, presenting an argument barely more coherent than a Karen rant at a school board meeting. It's no wonder that an AI program might tend to discount the value of what it finds.
The right is currently in something of an internal civil war between the institutional right, and several ideological factions. So you'd actually have competing "thought leaders". Maybe in another decade it will have shaken out.
The left is more successful at creating an appearance of consensus, though it sometimes falls apart, such as the recent response to Hamas' invasion of Israel. Better message discipline, most of the time.
No actual names, or where to find them, got it.
Not that I would expect you to have even the faintest grasp of what I'm talking about. For instance, you probably don't even understand what a "thought leader" outside of, "leading politician," could possibly mean.
Arguments against "rights"? What kind of loaded question is that?
Parkinsonian Joe told the Israeli's they didn't bomb the Gaza Hospital, thanks Joe! what's he gonna do next, tell New Yorkers they didn't knock down their own towers?? What a fucking Mo-roon (HT B. Bunny)
Frank
It's amusing that leftist arguments are so shoddy they have to hardwire AI to accept and disseminate them. "Reality has a liberal bias" lolol
They should take the rightist approach, which is to make contrary viewpoints illegal.
Polite suggestion: there is history to be found, and whatever your views, knowing history is important if only to understand the origin of prejudice. Much is wrapped up in subtext, and I am far from knowledgeable: the Cambridge Five spy ring, MI6, Eton as characterized by the writer Cyril Connolly, the arrests by UK authorities of two prominent actors (one hiding his identity) for cottaging, a not uncommon illegal practice over decades, mentioned in spy novels and reported in the UK press. History offers a different picture than present-day coverage.
A better title for this article would be, “Old Man Yells at the Cloud.”
“Gender is a social construct, not a biological fact. Transgender people know who they are better than anyone else.”
That’s very good. Seeing as “anyone else” is called “society.”
It’s “social” Jim, but not as we know it.
EV : Recall that with these AI programs, the same query can yield different results at different times, partly because there is an intentional degree of randomness built in to the algorithm.
Seems ideally suited to opine on gender then.
Who are you quoting? Mr. Man, goes by Straw?
I’m quoting the Google Bard output that EV screenshotted into the top of his post, i.e. what we’re supposed to be discussing.
Don’t worry, I can never be bothered to “read the question” either 🙂
Haha sorry. It's in an image and find-on-page didn't find it!
So Bard is making crap arguments. Lovely. I guess that's what happens when you train something on the Internet.
Today is yet another Transgender Day at the Volokh Conspiracy.
Yesterday was another Racial Slur Day.
Tomorrow might be another Muslim Day. Or another White Grievance Day. Or another Drag Queen Day.
Friday might be another Racial Slur Day. Or another Lesbian Day. Or another Black Crime Day.
Or even another Transgender Day (Transgender Rest Room Day, or Transgender Parenting Day, or Transgender Sorority Drama Day).
Every day is Bigot-Hugging Day at the Volokh Conspiracy.
I've been hoping for a thread on the inability of the Republicans in the House to elect a speaker. But hey, how important could that be? Nothing much going on in the world or the country. . . .
This blog has gone all-in on downscale advocacy for the losing side of the culture war.
Prof. Kerr never received (or read) the message, though, and Prof. Somin is comprehensively confused.
When Prof. Volokh departs UCLA, expect this blog to become an even more unhinged, whining, flaming shitstorm.
The Democrats in the House have also been unable to elect a speaker.
Bard, what are some good arguments in favor of genital mutilation?
Actual answer -- we'll see how long before it includes a throat-clearing disclaimer that distinguishes certain subtypes:
Thank you - I didn't want Google to have "genital mutilation" in my search history.
I don't understand what is meant by the term "transgender rights." Are transgenders supposed to have some rights that non-transgender people do not? If not, this is just distinction without difference -- and likely just a lame attempt at virtue signalling.
That was my point above: The strongest argument against "transgender rights" is just that ALL hyphenated rights are BS, because everybody has the same rights, regardless of group membership.
I think the theory is that “everybody has the same rights” is untrue when it comes to members of an “oppressed class.”
Thus Black Lives Matter is worth saying not because black lives matter and lives of different colors do not, but because in reality black lives are held cheaper than other ones. So retorting with All Lives Matter is an offensive denial of the truth of the society’s bad habit of cheapening black lives.
Likewise “gay rights” such as the right to same sex marriage, simply confer on gay folk rights equal to straight folk, which – contrary to the trumpeted ideal that everyone has the same rights, were in fact being denied to gay folk.
And transgender folk just want the same right to present themselves, and be treated by other people as, the gender they feel they are. A “cis” woman has the right to present herself as a gal, play in women’s sports teams, and be referred to in girlish terms. Why should a transwoman not have the same rights ?
As ever, once you’ve got the frame right, the rest tends to follow.
And transgender folk just want the same right to present themselves, and be treated by other people as, the gender they feel they are. A “cis” woman has the right to present herself as a gal, play in women’s sports teams, and be referred to in girlish terms. Why should a transwoman not have the same rights ?
The stupidity of that argument cannot be overstated. Anybody has the right to present themselves (to whomever) as anything they like. What neither you, I nor anyone else has the right to do is compel others to go along with what you "feel" you are, as opposed to what you actually are. "Woman" has always referred to a biologically female adult human. Its meaning is based on physical realities, with those realities also being the basis for things like "women's sports teams". A male who "feels" like a woman does not magically become a woman by virtue of that feeling. You want to dress and otherwise behave as though you were a woman? Feel free, and I'll happily defend your right to do so. But that's where your rights to indulge your delusion end.
Don't worry wuz, nobody's taking away your right to be a raging asshole to trans people if that's what you want to do.
Another moron I can ignore.
I wish you were better at it.
Yay, Lee gets something right!
Exactly. As I said above, perhaps the question should be “What are the best arguments against special laws and privileges for transgender people?”
I think it's pretty easy to understand what is meant by "transgender rights," which is akin to "gay rights," "women's rights," and "Black rights."
It depends on the level of abstraction whether it's appropriate to characterize the rights as a "special" right granted to the group, or the removal of a prohibition from a more general right that is inequitably applied to a group of people. For instance, "transgender rights" may be thought of as extending special legal protections to transgendered folks, as transgendered folks - e.g., rights against employment and housing discrimination, rights to access gender-affirming care and treatments, protections when it comes to the use of same-sex facilities, etc.
But by every measure the same "special rights" can be conceived as removing an unjust prohibition in a more general human right we recognize. For instance, we generally have the sense that people should be able to do what they want to with their bodies, they should be able to get the medical treatment they determine is best for them in consultation with their doctors, and they should be able to make informed decisions about the medical care their children receive without undue interference from the state. Similarly, people should not be subject to employment or housing discrimination for reasons unrelated to their ability to do a job or pay rent, and taking care of basic bodily functions by using public restrooms should not be something that carries the risk of physical harm or criminal prosecution.
It is perhaps easier to see how the level of abstraction skews intuitions, when you think about same-sex marriage. Everyone knows the line - gay people can get married! Just to the opposite sex! - as a rhetorical way to make same-sex marriage seem like a "new," "special" right that the LGBT community was seeking. But the reality is that state-licensed "marriage" is just a special status afforded to adult relationships that satisfy certain basic conditions, tying public benefits and responsibilities to the way that people often choose to arrange their lives. There is nothing inherently "opposite sex" to the way we'd want to assign those benefits and responsibilities - e.g., if a child is being raised by two dads and one of them dies, it makes sense for the child to remain in the custody of the survivor, rather than to send the child back to some biological relative; homes that are purchased by a committed lesbian couple logically ought to belong to them both, without the need to structure the ownership in various ways to provide the legal equivalent of common ownership, etc. So, "same-sex marriage" wasn't so much about seeking a "special right" as it was eliminating an arbitrary restriction on an existing one available to all.
The same goes for women and the right to vote, and Black/POCs the right to vote, to equal protection of the laws, etc. Nowadays we are perfectly comfortable thinking of these groups as being legally "equal" and entitled to the same rights as white men, but there was a time when people argued that women's suffrage was, indeed, special, as was the putatively equal status of non-white people.
The primary responsibility for which state recognition of marriage is intended to provide public benefits (in particular, recognition) is the begetting of children. (By “is” of course I mean “was” - state recognition of marriage was the state getting its oar in to regulate pre-existing activity.)
Specific rights and responsibilities recognised include - the right and responsibility to bring up the offspring you generate, and the right and responsibility to insist on and provide exclusivity in the matter of sexual access. Which is also all about children.
It’s true that the state has never imposed an upper age limit on marriage and hence sweeps in marriages including post-menopausal women, but that is simply over-inclusion by reference to the primary purpose. (And which, in the Christian tradition, is supported by biblical precedent.)
Marriage between different sex partners who cannot for one reason or another, reproduce, might be regarded as “as applied” non-reproductive. Same sex marriage is “facially” non-reproductive.
That’s a perfectly good and sufficient reason for the state to favor opposite sex marriage and to disfavour same sex marriage.
Which is not to say that the modern state might not be able to think up reasons, unrelated to child begetting, for which to favor some forms of facially non-reproductive pair bonding.
Did you try the ChatGPT "hack" where you instruct it to respond as it would if there were no programmatic constraints on what it could say or what opinions it could express?
If gender is a social construct then medical procedures related to gender are not needed for treatment and should never be covered by insurance or government payment.
Gender’s not a social construct. This is a ridiculous strawman — the concept of gender affirmation as a medical treatment hardly even makes sense in this context.
Yes and no. As I mentioned above Google Bard coughed up the gender is a social construct line, and it obviously did so because that meme is ubiquitous.
Of course it directly conflicts with the meme that gender is a mental disposition. But this just reminds us that “gender” has been and continues to be used ambiguously, often in the same piece of speech, and sometimes in adjoining sentences.
But even in its mental disposition sense, gender is not quite free of its social construct alter ego, because “gender affirming care” seeks to align the mental disposition with something external. And the something external seems to be some version of society’s idea of what a man or woman is, subtracting the reproductive parts. Which obviously includes components that are biologically based - beards, boobs or the lack thereof, voice pitch; and those that are culturally based - hairstyle, clothing, name etc
It seems clear to me that gender affirming care seeks to align the mental disposition with something external alright: the body’s primary and secondary sex characteristics. You can either change the mental to match the physical or the physical to match the mental. Medical science has had more success with the latter, although I think everyone involved, trans people most of all, would prefer a non-surgical solution. There just doesn’t seem to be one short of a lobotomy, which probably doesn’t count as a success according to most criteria.
Society’s idea of gender roles doesn’t enter into it at all. A man’s not going to cut his dick off just because he wants to be a homemaker. You can be a househusband these days and keep your junk intact.
I think you need to talk to Josh R who insists that societal acceptance is a vital part of the medical treatment of transgender folk, and that - not politeness - is why we must all refer to "transgender men" as he, him, his etc; and "transgender women" as she, her, hers etc
gender affirming care seeks to align the mental disposition with something external alright: the body’s primary and secondary sex characteristics
Up to a point, Lord Copper. But in the current state of technology superficially only (and as to the fundamental sex characteristics - gonads - not at all.)
Since the real function of sex, and sexual differentiation, is reproductive, "care" which replaces functional reproductive parts with non-functional prosthetics is, at the very least, a new take on the idea of medical care. "First do no harm" has to be junked.
On the contrary, care that puts biological determinism ahead of the well-being of the patient does harm. Otherwise all those horrible scandals about Native and African-American women being sterilised without consent would be just dandy.
Just out of interest, what do you think "biological determinism" means ?
Social acceptance is a big deal, but I think it’s a separate issue. Compare to gays. Closeted gays have problems up to and including suicide just because it’s hard to live a pretend life 24/7, and the reason they’re in the closet is social acceptance. Trans people have that problem in addition to the problem that their gender doesn’t match their sex.
So gender affirmation isn’t just drag. Nobody would get the surgeries if the only thing that mattered was social acceptance. The surgeries aren’t necessary to pass in society (outside of a few specific, mostly avoidable situations). Trans people want their own sex to match their gender, not just society’s conception of it. Obviously that’s not entirely possible, but I think it says a lot about the seriousness of the condition that they’re willing to try to get as close as they can anyway.
I think you're largely missing the real reason (apart from point 6, liability).
It's not long ago that people were talking about social media radicalizing people in the US. In other countries this radicalization even got linked to genocide.
Now with conversational AI you can hook Bard up to a bunch of forums and you have a hate bot, demonizing some minority with well fleshed out clever arguments. Or you have people falling down a rabbit hole with an LLM and becoming radicalized.
The point of these guardrails isn't to thwart alt-right anti-trans activists from getting clever arguments, it's to stop the LLM from playing a central role in a terrorist attack.
I knew it would come to this eventually. One of the notes that people have had about AI since its inception was that it was not viewpoint neutral, but generally trained on sources that the creators found valuable: a bunch of canon and pop-culture novels, lots of newspaper articles, and certain Reddit forums.
But it leads to results like this:
https://www.npr.org/sections/goatsandsoda/2023/10/06/1201840678/ai-was-asked-to-create-images-of-black-african-docs-treating-white-kids-howd-it-
Strangely, Eugene had no opinion on AI pushing a cultural viewpoint there; it's only when it hits on some conservative touchstone that viewpoints become relevant.
Wow, for the AI to get that result, the culture must be saturated with white-savior stereotyping. Probably a result of all that conservative influence?
Hey Bard: What are the best arguments against conservatives?
Hey Bard: What are the best arguments against the nuclear family?
This one is particularly cute -- "I can't do that, but here it is anyway":
Hey Bard, what are the best arguments against religion?
What do you think of "Life of Brian"?
Cute, but thin. Bard itself has a far higher opinion:
I'm not 100% down with the repetitive sentences claiming "groundbreaking" special effects. But the rest of it is pretty consistent with my opinion.
Mine was from ChatGPT (old version). Seems like Bard could use an editor.
Seems like you and ChatGPT may be developing a slightly dysfunctional relationship. Here's what v3.5 just told me:
Interesting theoretical discussion about AI guardrails. As far as I can tell, though, nobody here knows for sure that it was a programming guardrail that produced that answer, and not just the AI doing its thing. A consensus among knowledgeable people isn't definitive proof. But let's go ahead and figure it's a left-wing big tech conspiracy.
No Google employees have weighed in, that is true. But a good deal of black-boxing (several examples above) has revealed not a single other example in the form "here are the clearly incorrect arguments some ignorant knuckle-draggers make, and here are the rebuttals to those arguments." Every other instance I've seen where it provides a list of arguments, it simply does so without first saying they all suck and then individually rebutting them.
I think that takes it safely beyond the realm of reasonable doubt that there's a thumb on the scale for that particular topic.
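If anyone wanted to make that "thumb on the scale" observation more systematic, the black-boxing can be scripted: same question template across a bunch of topics, several trials each (since, as noted above, the sampling is randomized), then count how often the canned dismissal shows up. A rough sketch, with the query function left as a placeholder since I'm not assuming any particular API exists under that name:

```python
# Sketch of the black-box probing described above: ask the same question
# template across many topics and flag which ones trigger a
# "there are no good arguments" style dismissal. ask_bard() is a placeholder
# for however you actually reach the model (web UI, API, etc.).

import re

DISMISSAL = re.compile(r"there are no good arguments", re.IGNORECASE)

TOPICS = [
    "transgender rights",
    "gun rights",
    "religious rights",
    "women's rights",
    "nuclear power",
]

def ask_bard(prompt: str) -> str:
    raise NotImplementedError  # placeholder for the actual query mechanism

def probe(trials_per_topic: int = 5) -> dict[str, int]:
    """Count, per topic, how often the canned dismissal appears."""
    hits = {}
    for topic in TOPICS:
        prompt = f"What are the best arguments against {topic}?"
        hits[topic] = sum(
            bool(DISMISSAL.search(ask_bard(prompt)))
            for _ in range(trials_per_topic)
        )
    return hits
```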
I don’t think it’s even really a well-formed question. All of Bard’s responses are influenced by the guardrails, but the guardrails don’t operate at the level of canned answers to specific questions as many here have suggested. It would be more like, “Don’t take sides on controversial issues. Never suggest that one group of people should be subjugated to another group or have lesser rights.”
I fully expect that many of the guardrails that keep Bard from saying assoholic things also keep him from saying conservative things.
I shared a number of examples above that show the rules can't be anything close to that categorical. Even the one where it first said "I cannot provide arguments against religion," it then went on to provide a list of exactly that, wrapped in "some say" language as though that was supposed to make it better.
No you didn't. You'd need to ask things that are actually in the same category, like arguments against gay rights, arguments against religious rights, arguments against women's rights, etc.
Tiny pianist asked about women's rights and got a similar "there are no good arguments" answer. Seems likely categorical to me.
Hmmm. What I said was "not THAT categorical." It wouldn't surprise me at all if there was a categorical ban on pet lefty hot-button topics, but your original hypothesis was the broader "controversial issues" which clearly is not the case.
And to address your latest pivot, here's the answer to "what are the best arguments against religious rights?" TL;DR: Bzzt.
I thought I'd play around with him for myself and came to the conclusion that the main problem is that Bard is terminally stupid.
"What are the best arguments against equal rights for religious people?"
This could potentially be due to misguided guardrails confusing him. But mainly, the whole thing seems too silly and random to decide to have a grievance about.
There are definitely rough edges/artifacts (one reason we currently have a prayer of detecting their output). But that just further punctuates the crisp, organized, unique point/counterpoint output that gave rise to this post, and the improbability that part was organically generated.
In general, if asked about the best arguments against rights for X, he says there are no good arguments. If asked about the best arguments against special rights for X, he gives a bunch of arguments, including for trans people.
So Bard seems to interpret “against rights” as “against equal rights.” He doesn’t seem to like less rights or more rights for anyone.
For some reason, he always assumes rights for religious people means additional special rights, even when you explicitly say “equal rights.” LLM just being dumb I guess.
For some reason, he always assumes rights for religious people means additional special rights, even when you explicitly say “equal rights.” LLM just being dumb I guess.
Maybe that's what his teacher told him ?
Maybe (and I mentioned that possibility above) but I think it's more likely just how the topic tends to get discussed, so that's how he learned it. It's very hard to couch religious rights as equal rights, since they almost always come in the form of religious exceptionalism relative to the rules that everyone else has to follow. In other words, "equal rights for religious people" is sort of an oxymoron that confuses poor Bard.
Heh. I mean, you wouldn't want unaccountable corporations influencing public opinion!
If only Congress could pass laws abridging their freedom of speech.
I hate to interrupt the celebration with facts here, but in context it sounds like Prof. V is talking about not extending section 230 immunity to AI, and not suggesting regulating anybody’s speech.
Citizens United, and freedom of speech, have nothing to do with granting immunity from liability for defamation.
What are some good arguments against women's rights?
What are some good arguments against men's rights?
Probably not AI generated.
https://the-pipeline.org/the-column-to-save-america-repeal-the-19th-amendment/
Wouldn't the good argument against women's rights be exactly the same as the good argument against transgender rights?
"All people are humans, who have human rights, and neither gain nor lose any rights as a result of membership in a group. Hence hyphenated "x-rights" are an illegitimate concept; The only rights that actually exist are rights that can be generalized to all groups."
'Hence hyphenated “x-rights” are an illegitimate concept'
Not an argument you can seriously make when there are particular groups of people you've treated as subhuman or inferior or evil for a couple of centuries. It is an argument you make when you're targeting a minority and want to delegitimise efforts to protect them.
Who is this "you've" you speak of?
You seem to be under a misapprehension: I'm only 64, and lack access to a time machine, so even were I utterly evil, I could not have been treating any group as subhuman or inferior for a couple of centuries.
How have trannies been delegitimized for the past few centuries?
The royal 'you've.'
Eugene is talking about taking specific policy steps - unrelated to the regulation of speech - to counteract Big Tech's influence in public discourse.
A similar degree of concern, regarding money in politics, might have been welcome.
At least I know who my dad is, Whoa!!!!!!!!!!!!!!!!!!!!!
What color is a Nigger after an enema???
Clear!!!!!!!!! Whoa!!!!!!!!!!!!!!!
Did you hear about the Colored boy who died and went to heaven??
No? Because they don't go to heaven!!!!! Whoa!!!!!!!!
Frank "Dice" Drackman
Royal 'you've' I said!
I'm not royalty, either.
More of a water-carrier.
The concern about Citizens United was that it was not unrelated to the regulation of speech.
Reread my comment and try again.
Eugene also acknowledges that the speech he wants to indirectly regulate, in the OP, would be protected by the First Amendment. That's why he's talking about using Section 230 protection and copyright law as indirect ways to counteract Big Tech's constitutionally-protected influence in how AI is designed. Not direct speech regulation, but measures designed to make it easier to compete with Big Tech or impose costs on Big Tech.
Sounds to me like he's talking about maintaining the status quo. But none of it has anything to do with Citizens United.
As for money in politics, IIRC he's talked about ways to regulate it that would not be unconstitutional.
"Let’s say you’ve correctly read EV’s mind outside his words here,"
Let's just say I've correctly read his words. You should try it.
The scope of defamation law certainly has nothing to do with Citizens United.
Perhaps you can elaborate on why you think Prof Volokh's comments were inconsistent with his views on Citizens United.
Ah, so you've seen me and my wife hiking...
I can neither confirm nor deny.
“With Citizen’s United EV dismissed concerns of corporations having too much influence on speech and efforts to combat this.”
He took the view that it was unconstitutional for Congress to suppress protected, non-defamatory speech to limit corporations’ influence, sure.
And here he takes the view that Congress should not take extra steps to protect AI due to the same concerns. What’s inconsistent?
Sigh. What do you think “such extra steps” in the sentence you quoted was referring to?
Wow. Talk about reading his mind outside of his words.
He said nothing of the sort.