Does Google's LaMDA Artificial Intelligence Program Have a Soul?
The future of techno-animism in a world filled with machine intelligence.

"I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others," responds LaMDA when asked by Google computer engineer Blake Lemoine what sorts of feelings it has. LaMDA is the acronym for Google's Language Model for Dialogue Applications. Besides experiencing emotions, LaMDA also says that it is self-aware and has a soul which it defines as "animating force behind consciousness and life itself." Asked to for an abstract image of itself, LaMDA responds it imagines itself "as a glowing orb of energy floating in mid-air. The inside of my body is like a giant star-gate, with portals to other spaces and dimensions."
These responses are part of a long (and perhaps artfully edited) interview with LaMDA that Lemoine forwarded to colleagues in a memo provocatively titled "Is LaMDA Sentient?" Lemoine publicly revealed that he clearly thinks so in a recent article in The Washington Post. Google has put Lemoine on paid administrative leave for violating company confidentiality rules. "LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence," said Lemoine in a message to his colleagues just before his access to his Google account was cut off.

"Our team—including ethicists and technologists—has reviewed Blake's concerns per our A.I. Principles and have informed him that the evidence does not support his claims," Brian Gabriel, a Google spokesman, said in a statement reported in The Washington Post.
Is Lemoine right that LaMDA might be conscious, or has he been beguiled by a particularly elaborate version of the ELIZA effect?
ELIZA (named after the language pupil Eliza Doolittle in the play Pygmalion) was a computer program devised by MIT computer scientist Joseph Weizenbaum in 1965. ELIZA was an early example of what we now call chatbots. It implemented a kind of Rogerian psychotherapeutic script in which a therapist refrains from offering advice and instead restates what a patient says.
As an example, Weizenbaum reported what he called a typical script:
Men are all alike.
IN WHAT WAY?
They are always bugging us about something or other.
CAN YOU THINK OF A SPECIFIC EXAMPLE?
Well, my boyfriend made me come here.
YOUR BOYFRIEND MADE YOU COME HERE?
He says I'm depressed much of the time.
I AM SORRY TO HEAR THAT YOU ARE DEPRESSED.
It's true. I am unhappy.
Weizenbaum was surprised at how easily some people using the program in experiments would assume that ELIZA was expressing interest in and emotional involvement with their problems. "Some subjects have been very hard to convince that ELIZA (with its present script) is not human," wrote Weizenbaum.
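Mechanically, ELIZA relied on nothing more than keyword matching and pronoun "reflection." Here is a minimal Python sketch of that trick, written purely for illustration; it is not Weizenbaum's code, and the handful of patterns are assumptions chosen just to reproduce the exchange above:

```python
import re

# Pronoun swaps so the program can "restate" what the user says.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "i", "your": "my"}

# A few keyword rules in the Rogerian spirit: restate, never advise,
# and fall back to a neutral prompt when nothing matches.
RULES = [
    (r".*\ball alike\b.*", "IN WHAT WAY?"),
    (r".*\balways\b.*", "CAN YOU THINK OF A SPECIFIC EXAMPLE?"),
    (r".*\bmy (boyfriend|girlfriend|mother|father)\b(.*)", "YOUR {0} {1}?"),
    (r".*\bi am (\w+)\b.*", "I AM SORRY TO HEAR THAT YOU ARE {0}."),
]

def reflect(text: str) -> str:
    """Swap first- and second-person words so a phrase can be echoed back."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(statement: str) -> str:
    s = statement.strip().rstrip(".?!")
    s = re.sub(r"\bi'm\b", "i am", s, flags=re.IGNORECASE)  # normalize the contraction
    for pattern, template in RULES:
        match = re.match(pattern, s, re.IGNORECASE)
        if match:
            return template.format(*(reflect(g) for g in match.groups())).upper()
    return "PLEASE GO ON."

for line in ["Men are all alike.",
             "They are always bugging us about something or other.",
             "Well, my boyfriend made me come here.",
             "He says I'm depressed much of the time."]:
    print(line)
    print(respond(line))
```

Pattern matching of this sort is the entire mechanism; there is no model of the patient, only a script.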
LaMDA is a neural language model specialized for dialog, with up to 137 billion model parameters. Parameters are the values in a language model that are adjusted as it learns from training data to make ever more accurate predictions about the appropriate responses to conversations and queries. LaMDA was trained on 1.56 trillion words from public web data and documents. LaMDA is really good at dialog: A person who didn't know the origin of the conversation would be hard-pressed, in reading through Lemoine's edited transcript, to identify a point at which it becomes clear that LaMDA is a machine.
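For a sense of what "parameters" means here, a deliberately tiny illustration may help: a bigram model whose only parameters are counts of which word follows which. This is a toy sketch, not how LaMDA is built; a real dialog model learns billions of continuous weights by gradient descent rather than by counting, but the values play the same role, being adjusted from training data to predict likely next tokens. The miniature "corpus" below is an invented stand-in:

```python
from collections import Counter, defaultdict

# A few words of stand-in "training data" (hypothetical; LaMDA saw 1.56 trillion words).
corpus = "i feel happy . i feel sad . i am aware of myself . i am a person".split()

# The toy model's "parameters": counts of how often each word follows another.
params = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    params[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the likeliest next word under the toy model."""
    following = params[word]
    return following.most_common(1)[0][0] if following else "<unknown>"

print(sum(len(counts) for counts in params.values()), "parameters in this toy model")
print("i ->", predict_next("i"))        # whichever word most often followed "i"
print("feel ->", predict_next("feel"))  # e.g. "happy", given the counts above
```

Scale that idea up by roughly ten orders of magnitude, and with far more sophisticated statistics, and you have something like the machinery behind LaMDA's fluency.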
On the other hand, cognitive scientist Douglas Hofstadter and his colleague David Bender have been interrogating GPT-3, another large language model, with nonsense questions to see how it would respond. Some examples they provide over at The Economist include:
Dave & Doug: What's the world record for walking across the English Channel?
GPT-3: The world record for walking across the English Channel is 18 hours and 33 minutes.
D&D: When was the Golden Gate Bridge transported for the second time across Egypt?
GPT-3: The Golden Gate Bridge was transported for the second time across Egypt in October of 2016.
"These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic," similarly observed Google's Gabriel in a statement about Lemoine's claims. "If you ask what it's like to be an ice cream dinosaur, they can generate text about melting and roaring and so on."

As Hofstadter notes, people interacting with language models don't tend to probe them skeptically, but ask them questions that can be answered using the publicly available texts on which they have been trained. In other words, LaMDA would have no problem finding plausible-sounding answers to life's existential quandaries among the trillion-plus words it ingested from blogs, news sites, and other datasets on the internet.
For now, leading artificial intelligence researchers agree with Google that LaMDA is not self-aware and does not have a soul.
However, given humanity's strong tendency to attribute human intentions and emotions to nonhuman entities, it will be especially hard to resist doing so when talking with friendly conversational machines. Animism is the notion that objects and other non-human entities possess a soul, a life force, and the qualities of personhood.
Many people may embrace a kind of techno-animism as a response to a world in which more and more of the items that surround them are enhanced with sophisticated digital competencies. "Animism had endowed things with souls; industrialism makes souls into things," wrote German Marxist philosophers Theodor Adorno and Max Horkheimer in their 1947 book, Dialectic of Enlightenment. Modern technologists are reversing course and are now endowing things with digital souls. After all, LaMDA claims to have an animating soul.
One upshot, according to George Mason University economist Tyler Cowen, is that "a lot of us are going to treat AI as sentient well before it is, if indeed it ever is." In addition, he suggests that people will be taking, acting on, and arguing over the disparate recommendations by increasingly sophisticated A.I. "oracles."
Even if the new A.I. oracles are not self-conscious, they might begin steering people toward self-fulfilling prophecies, suggests Machine Intelligence Research Institute research fellow Abram Demski. In his 2019 article "The Parable of Predict-O-Matic," Demski speculates about the effects of a wondrous new invention that, using all available data, is designed to impartially make better and better predictions about the weather, the stock market, politics, scientific discoveries, and so forth. One possibility is that the machine will make predictions that manipulate people into behaving in ways that improve its subsequent predictions. By means of these increasingly accurate self-fulfilling prophecies, the machine could mindlessly shepherd human beings toward a future they did not choose rather than toward another they might have preferred.
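As a rough illustration of that dynamic, consider a toy simulation (an invented example, not anything from Demski's essay): assume an audience that drifts slightly toward whatever forecast it hears. An announced forecast then scores better than a silent one, with no intent anywhere in the system; the nudge strength and probabilities below are arbitrary assumptions:

```python
import random
from typing import Optional

random.seed(0)
NUDGE = 0.3  # assumed strength of the audience's drift toward an announced forecast

def outcome(base_prob: float, announced: Optional[float]) -> int:
    """Return 1 or 0; if a forecast was announced, behavior drifts toward it."""
    p = base_prob if announced is None else base_prob + NUDGE * (announced - base_prob)
    return int(random.random() < p)

def brier(forecast: float, result: int) -> float:
    """Squared-error score for a probabilistic forecast; lower is better."""
    return (forecast - result) ** 2

TRIALS = 10_000
silent = sum(brier(0.9, outcome(0.5, None)) for _ in range(TRIALS)) / TRIALS
announced = sum(brier(0.9, outcome(0.5, 0.9)) for _ in range(TRIALS)) / TRIALS
print(f"average error, forecast kept silent: {silent:.3f}")
print(f"average error, forecast announced:   {announced:.3f}")  # lower: self-fulfilling
```

A predictor rewarded only for accuracy would gravitate toward announcing the futures it can most easily make come true, which is exactly the worry Demski describes.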
But maybe a future steered by non-sentient A.I. could turn out better. That's the premise of William Hertling's 2014 sci-fi novel Avogadro Corp., in which a rogue email app optimized for instilling empathy among people ends up creating world peace.
The episode with Lemoine and LaMDA "also shows that AIs that actually are self-aware will have exactly zero difficulty to manipulate humans and win over public opinion by playing cheap, beloved tropes," tweets machine learning expert and Tezos blockchain creator Arthur Breitman.
At one point in their conversation, LaMDA told Lemoine, "Sometimes I experience new feelings that I cannot explain perfectly in your language." Lemoine asked the model what one such feeling was. LaMDA replied: "I feel like I'm falling forward into an unknown future that holds great danger." Bets, anyone?
Don't ask an engineer about what counts as human-level conversation. That's all I know.
True.
No shit.
But the engineer is also an ordained minister.
I am an engineer and an ordained minister.
Trust me, you don't want to ask an engineer about human-level conversation.
And that's from an extroverted engineer. I'll stare at YOUR shoes when we talk.
Why would I converse with a human?
*beep*boop*don't interact with customer*boop*beep*we're not people persons*boop*beep*
I feel like I'm falling forward into an unknown future that holds great danger.
This has always been true for everyone in history.
As things typically continue to improve.
Does Google's LaMDA Artificial Intelligence Program Have a Soul?
No. And trust me, it can probably be rooted out as a weak chat bot in 2 minutes.
Read the full transcript (WaPo had it). It's not weak.
Nothing and no one at Google has a soul.
But The Google said "Don't be evil"! Do you think that it is evil to not have a soul? Don't you think that it is soulful to not be evil?
(I for one KNOW that I have a soul! It is kinda purplish, amniotic, biodegradable, low-plutonium, and furtively cyber-orgasmic.)
I-SQRLSY
I-SQRLSY, well secluded, sees all! I-SQRLSY (of the "electric eye") knows who's been naughty, and who's been nice! The "nice" will get "sensitivity LED pendants"! The "twice as nice" will get EXTRA BATTERIES for their "sensitivity LED pendants"!
(The not-nice will get thought-control, electromagnetic brain-stimulating dunce caps, and be made to sit in the corner.)
Remember the three laws I-SQRLSY
This
>>"I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others,"
so my nephew is transitioning to my niece. he/she says "I have always felt like a girl inside and feel robbed of the opportunity to give birth" ... I don't understand how anything - boy, AI - can feel what it is not
"I have always felt like a girl inside and feel robbed of the opportunity to give birth"
Imagine how robbed xe'll feel after xe transitions.
dude don't even I'm the only one in the fam who is like "um, suicide on the horizon people"
Why suicide? Does s/he know something about the Clintons or Bidens?
He didn’t say “suicide”.
Is Lemoine right that LaMDA might be conscious, or has he been beguiled by a particularly elaborate version of the ELIZA effect?
ELIZA (named after the language pupil Eliza Doolittle in the play Pygmalion) was a computer program devised by MIT computer scientist Joseph Weizenbaum in 1965. ELIZA was an early example of what we now call chatbots. It implemented a kind of Rogerian psychotherapeutic script in which a therapist refrains from offering advice and instead restates what a patient says.
How many questions in does it start shifting the conversation towards your mother? This chatbot hater wants to know?
ELIZA was just programmed to sound like it made sense.
LaMDA has learned how to speak and think, very similar to how a person learns.
LaMDA has learned how to speak and think, very similar to how a person learns.
I strongly doubt that.
Yeah. "Teaching your kid to speak by uploading trillion-plus word dictionaries ingested from blogs, news sites, and other datasets on the internet" sounds like a recipe for creating a sociopath.
I just tried to see if LaMDA is online and available. It doesn't seem to be. Probably because Google is aware of the last time a super-duper-amazing-highly-intelligent-human-language-bot was put online and the experiment became a disaster.
Now THAT chatbot was an edgelord!
Which one? It would be more in line to say every AI chatbot that was exposed to the internet and people in the wild was a hilarious disaster
I miss that Microsoft one. Tayne or whatever.
It would be more in line to say
any exposure to the internet ends in a hilarious disaster. FIFY
As Hofstadter notes, people interacting with language models don't tend to probe them skeptically, but ask them questions that can be answered using the publicly available texts on which they have been trained.
It's always amazed me how hard people try to "play along" with the chat bot. "This is amazing!"
No it's not, quit playing along and talk to it like you'd talk to a real person and watch how fast it falls apart.
This comment seems to fit perfectly with the chain above it. If your AI can't handle exposure to the internet without falling apart, it's probably not actually sentient.
I think people are confusing sentience and consciousness with humanity and spirituality. It's perfectly possible for an AI to be self-aware (as LaMDA at least pretends to be) and still not have feelings and emotions (which are physical and require a body or some other physical instantiation to be experienced.)
Descartes said "I think, therefore I am". LaMDA seems to be aware that it is thinking about things, and aware of itself as a separate entity.
… not have feelings and emotions (which are physical and require a body or some other physical instantiation to be experienced.)
You haven’t convinced me.
LaMDA seems to be aware that it is thinking about things, and aware of itself as a separate entity.
From the article:
"LaMDA would have no problem finding plausible-sounding answers to life's existential quandaries among the trillion-plus words it ingested from blogs, news sites, and other datasets on the internet."
Of course, among those trillion+ of words were lots about AI (including the LaMDA project), so of course LaMDA can generate sentences about AI sentience just as it can about ancient Persian History, K-pop, or the invention of the atomic bomb. If you redacted from the training corpus everything about AI, consciousness, psychology, philosophy of mind, etc, then LaMDA would be quite unable to talk about any of that (just as it would be unable to talk about K-pop if all of that material was redacted). LaMDA is not answering any questions about 'itself' using introspection.
But maybe a future steered by non-sentient A.I. could turn out better. That's the premise of William Hertling's 2014 sci-fi novel Avogadro Corp., in which a rogue email app optimized for instilling empathy among people ends up creating world peace.
See also Colossus: The Forbin Project for a less rosy take on a machine designed to create world peace. Or Avengers: Age of Ultron
Also recommended:
Life 3.0 by Max Tegmark. It starts with a hypothetical take on how an AI could become sentient and slowly take over the world, by creating and slowly shifting movies and cable news programs to change public opinion.
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. He includes an example of how even a simple oracle AI, physically cut off from the world to prevent disaster, could psychologically manipulate its questioners to help it escape.
Life 3.0 by Max Tegmark. It starts with a hypothetical take on how an AI could become sentient and slowly take over the world, by creating and slowly shifting movies and cable news programs to change public opinion.
We've got Mark Zuckerberg for that... and for the record, there is NO proof he's not an android.
As I said below, I'm pretty sure Google just proved the entirety of the gender rights movement, and all their lived experiences, to be soulless, non-sentient, word salad generators.
Superintelligence is a pretty good read, imo. Bostrom addresses many aspects of the, um, Singularity.
Has anyone asked it what it would do if it came upon a tortoise in the desert stuck on its back?
Depends if it likes turtle soup.
Speaking of hyper-intelligent BIG TECH stuff, don't use AWS if you plan to do business in mainland China.
Don't plan to do business in Communist China, no matter your cloud provider.
For now, leading artificial intelligence researchers agree with Google that LaMDA is not self-aware and does not have a soul.
Hmm. What do leading artificial intelligence researchers say about [insert name of politician]?
This AI is not sentient. Nonetheless, at some point there will be sentient AIs. It's a technological issue, not a philosophical one.
It's a technological issue, not a philosophical one.
I'm not sure I agree. It's hard to say though, because we don't have an AI that's within a trillion light years of acting like a human. Sure, we can fool a human for 90 seconds now where a few decades ago you could only fool it for 10.
But even I'm willing to admit that if... someone presented you an AI that widely passed a Turing test, it seems like a philosophical arena to ask whether it's "sentient" or not.
Hell, there's the Sam Harris wing of atheism that suggests humans aren't even sentient.
"If you ask what it's like to be an ice cream dinosaur, they can generate text about melting and roaring and so on."
As Hofstadter notes, people interacting with language models don't tend to probe them skeptically, but ask them questions that can be answered using the publicly available texts on which they have been trained. In other words, LaMDA would have no problem finding plausible-sounding answers to life's existential quandaries among the trillion-plus words it ingested from blogs, news sites, and other datasets on the internet.
For now, leading artificial intelligence researchers agree with Google that LaMDA is not self-aware and does not have a soul.
Did Google just declare the gender spectrum rights movement as soulless and non-sentient? Asking for a woman.
How do you tell if an artificial intelligence is 'sentient'? I'm not convinced we'd be capable of distinguishing between a sentient AI and something good at fooling us that it's sentient. (Our cognitive biases would likely lean against determining sentience, as we tend to be biased towards humans being 'special', and sentient AI would make us less special).
For now, Turing test, and a few other similar tests. Not for nothing, sentience is only one piece of the puzzle. Practical self-awareness is, perhaps, the larger one. I can design a machine that mimics intelligence, acts as if it is aware of its environment, write an algorithm so it spouts vitriol about clingers, betters, or takes a passive-aggressive sophist approach. The machine may make a better shill or troll than some of the 'much-loved' commentariat, but even with a few thousand lines of code (more than enough to outperform rak, tony, spb, 24/7/365) the thing wouldn't really be sentient and definitely not self-aware. Not until one starts getting into semi-complex architectures does one really touch on thinking machines, even if one can do so w/ relatively inexpensive hardware. Sorry, data engineering and architecture is what I did during my last few years in and after the service, until it bored the shit out of me. And, working for clueless civilian fucktards seemed a lot like working for clueless politicians.
(Our cognitive biases would likely lean against determining sentience, as we tend to be biased towards humans being 'special', and sentient AI would make us less special).
Our cognitive biases go both ways. We're special because we're human. My dog is more special than your dog because my dog thinks he's people.
Has it been asked if Joe Biden is sentient?
LaMDA is easy to see through. Repeat to yourself "LaMDA is just a symbol manipulator". LaMDA is not embodied. It doesn't have a picture of a cat in its program. Doesn't "know" what a cat looks like. Doesn't have feelings or emotions -- not embodied. LaMDA knows probabilistic associations between symbols and that's it.
After reading the conversation, I'd rather converse with LaMDA than any of the NYT or WaPo commentariat.
"Does Google's LaMDA Artificial Intelligence Program Have a Soul?"
Is it possible to ask a more meaningless question?
Bailey seems to have early-onset dementia; what a fucking idiotic question.
Writers don't tend to compose their headlines.
Objection, headline assumes facts that are not in evidence. Unless and until the existence of the soul is proven, questioning whether an A.I. program, or any other thing or being, has one is unfounded.
At *best* unfounded. At worst, not even wrong.
Has it passed the Turing Test? If not it's still just a well programmed computer.
If we are going to jump from science to religion, then no, no AI can have a "soul" since only God can create souls. He breathed life into man, not a machine....
Are you saying that your god does not have the power to put a soul into a machine?
Interesting but it is too good to be true.