The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
The Giraffe's Speech
An address to the American Chemical Society.
On the subject of ChatGPT, per Eugene's post on AI libel:
As an originalist delivering the keynote address to a Common Good Constitutionalism symposium, I mentioned that I felt "somewhat like a giraffe being asked to address a meeting of the American Chemical Society."
I'm aware that giraffes sometimes have trouble composing such addresses on their own, so a few months back I asked ChatGPT for its suggestions, and I wasn't at all disappointed by what it produced:
Ladies and Gentlemen of the American Chemical Society,
While it may seem out of the ordinary for a giraffe like myself to address a prestigious gathering of chemists, it serves as a testament to the boundless curiosity and innovative spirit that you all represent. I may not be a chemist myself, but I believe my presence here is symbolic of the unity that comes from exploring the richness and diversity of life on this planet, an exploration that your field greatly contributes to.
If there's one thing that life has taught me, it's that nature is the greatest chemist of all. From the swirling galaxies above to the intricate networks of life beneath our feet, chemistry is everywhere. As a giraffe, I am a beneficiary of chemistry's fascinating workings, whether it's the complex cocktail of compounds responsible for my spotted coat or the intricate dance of molecules that enables me to digest acacia leaves in my four-chambered stomach.
Through me, you can see the effects of your work. You dissect and understand the natural phenomena that allow life to thrive. You uncover the mechanisms that, for millions of years, have guided evolution's hand to craft creatures as diverse as the microscopic tardigrade to towering creatures like myself.
From my unique vantage point, I also offer a reminder. Every time I stretch my long neck to nibble the highest leaves of an acacia tree, I'm reminded of how delicate our ecosystems are. Each being plays a part, each chemical interaction matters. As chemists, you have the power to ensure that the actions we take today are sustainable and safe for the planet that sustains us all.
I encourage you to continue your brilliant and necessary work, to dig deeper into life's chemical code. I urge you to use your expertise, your curiosity, and your creativity to create solutions that will lead us towards a healthier, more sustainable future. I invite you to not only study the chemistry of life but also to understand and appreciate the life within chemistry.
With each molecule you manipulate, with each reaction you observe, and with every equation you solve, remember that you're not just exploring abstract concepts. You're delving into the heart of life itself. And from up here, it's a view worth appreciating.
Thank you for your time, and keep reaching for those metaphorical acacia leaves. The future of our world and its wondrous creatures depends on your continued efforts.
If you're not impressed by AI's abilities, or think it just copy-and-pastes from whatever it's seen before, I don't know what to say.
It is a wonderful address for a giraffe. Thanks for sharing.
He used ChatGPT to spit out a speech. You’re congratulating him for the 2024 equivalent of a google search. At the very least he could have expended a microscopic bit more effort and used a local LLM less connected to privacy destroying, data vacuuming, censoring, megalomaniacal big tech.
I dunno. Kind of a stretch.
zing!!!
Boooooooo! I mean, I snickered, yes. But still: booooooo!
(This was intended as a reply to alphabet man.)
One is reminded of the story plots produced by kaleidoscope machines, tended by greasy mechanics like Julia, in George Orwell’s 1984.
Is one? If so, why?
Yeah, I was sticking my neck out for this crowd.
If you’re not impressed by AI’s abilities, or think it just copy-and-pastes from whatever it’s seen before, I don’t know what to say.
AI is never tongue-tied: When AI doesn’t know what to say, it spews nonsense and defamation.
Sorry, flagged due to fumble thumb. They need to invent the un-flag button, or just pay the $73 million to the patent holder.
No problem. Someone would have flagged it if you had not.
That’s actually quite impressive. I suppose someone, somewhere wrote about the wonderfully complex chemicals behind the spots on a giraffe. Did it recall some praising template and insert the "wonderfully" where there was none in a terse chemistry discussion of patterns?
Very interesting.
ChatGPT represents the philosophical problem of the simulacrum.
Mechanically, it's nothing like what we call intelligence. It's not performing any cognitive function; it's not solving problems. It's just a jumped up predictive text layered neural network.
But it sure does *look* like it's reasoning, even if it is just an extremely high quality simulacrum of reasoning. At what point is it so close we should treat it as the thing? Is there ever such a point? Is the fact that the neural net has a black box aspect part of this calculation?
If you've not come across this: https://en.wikipedia.org/wiki/Chinese_room
Or, better yet:
https://studio.ribbonfarm.com/p/a-camera-not-an-engine
The Chinese Room is a ridiculously weak, question-begging thought experiment. Searle is a doofus.
The camera thing is much better and more interesting.
It’s not performing any cognitive function; it’s not solving problems. It’s just a jumped up predictive text layered neural network.
These things are not necessarily mutually exclusive. A huge portion of problem solving is pattern matching. How is this problem similar to previous problems I've encountered and what solutions worked for those problems? And that's also what neural networks do. How is this text similar to previous text and what responses worked for those texts? Already ChatGPT can solve novel problems based on the fact that it's encountered related problems and their solutions before... as long as they're simple enough.
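The "predictive text" core of that claim can be sketched in a few lines of toy Python: a bigram word counter standing in for the neural network. This is a drastic simplification for illustration only (the corpus and function names are invented here; a real LLM predicts over tokens with a learned network, not word counts), but it shows the pattern-matching idea — look at the current context, recall what tended to follow it before, and emit that:

```python
from collections import Counter, defaultdict

# Toy "predictive text": count which word follows each word in a tiny
# corpus, then extend a prompt by repeatedly picking the most common
# follower of the last word. Purely illustrative -- not how ChatGPT works.

def train_bigrams(corpus):
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def complete(counts, prompt, n_words=3):
    words = prompt.split()
    for _ in range(n_words):
        followers = counts.get(words[-1])
        if not followers:          # never seen this word: nothing to predict
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

corpus = ("the giraffe eats acacia leaves and the giraffe eats acacia "
          "leaves slowly while the chemist studies acacia leaves")
model = train_bigrams(corpus)
print(complete(model, "the giraffe", 3))  # -> the giraffe eats acacia leaves
```

Scale the counts up to billions of parameters over token contexts and you get something that can also match abstract *shapes* of problems, not just literal word sequences — which is why the line between "mere pattern matching" and "solving" gets blurry.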
At what point is it so close we should treat it as the thing?
The point of the Turing Test really is just to highlight that intelligence can't be faked. Or maybe it's better to say that intelligence can't be objectively faked. It's certainly possible to con someone into thinking you're more intelligent than you are. That's called bullshitting, and people are really good at that already. Turns out ChatGPT is really good at bullshit too, I wonder why.
But assume you can recognize and disregard the bullshit, that is, you can evaluate intelligence objectively. Then, something might be more intelligent than it lets on, but it can't be less intelligent. (This is of course an irrefutable argument, which makes it a bit of a semantic argument, but I think there is still some nugget of insight in there.)
Is the fact that the neural net has a black box aspect part of this calculation?
People also have a black-box aspect. There's nothing distinctive about that.
I always thought the Turing test had a sort of moral component to it: If you can't tell that something isn't intelligent without popping open its skull to see if there's grey jelly or transistors in there, by what right do you deny that it's intelligent? Isn't it just bigotry if you need to know what a possible intelligent being is made of, before deciding if it's really intelligent?
That's part of where it becomes semantic. It's certainly possible to define "intelligence" in such a way that AI can never possess it, in much the same way that you could define it such that Black people never possess it. Penrose -- another doofus -- gets close to that kind of definition. Although he's walked it back somewhat in his old age.
One of the reasons that it looks like intelligence, is that the dirty little secret of human intelligence is that most of what we do doesn't actually involve much of it.
Intelligence for humans is mostly just an error handling routine, that's invoked when habit and reflex don't cut it. But once we've learned to do something, we don't use intelligence to do it anymore, we use hyped up complicated reflexes.
ChatGPT doesn't have that error handling routine, it can't recognize when just winging it isn't getting it done. But it does have those reflexes, and they've been trained on more data than an actual human could consume in a hundred lifetimes.
It's like spending a billion years breeding an insect mimic of a human. It's very successful mimicry right up until it goes off a cliff, but it WILL go off a cliff, because it doesn't actually understand anything, or have concepts. It's pure mimicry, all the way down.
I think you have a narrow definition of intelligence. Creativity is part of intelligence. Intuition need not be about patterns.
Not all of intelligence shows up via our consciousness, either.
once we’ve learned to do something, we don’t use intelligence to do it anymore, we use hyped up complicated reflexes.
That is not right either, I don't think. Maybe it's because my problem solving tends to be different than yours. But having learned something does *not* mean the problem solving is done in my experience - each repeat of a task manages to have its little curveballs to keep you on your toes.
There is also the fact that ChatGPT is akin to humans only in this narrow function, and it materially deviates in a number of ways - the inability to put workable guardrails in place, and the trouble with citing real cases in briefs.
Brett and I agree on something!
Well said, Brett. I don't necessarily subscribe to everything in the comment, but certainly the overall gist. (For instance, intelligence isn't just an error handling routine, I don't think.)
complex cocktail of compounds
Nice alliteration.
Sic Google:
About 3,820 results (0.37 seconds)
Terrific.
Thanks.
Compared to some talks I've heard at ACS meetings this ain't half bad.
Mr. Sachs used the chatbot at its best ... manipulating words. That's where today's AIs are the strongest. Getting facts wrong or hallucinating doesn't take away from that.
Even more impressive, Mr. Sachs: you can find other AIs that can translate that speech into a video that delivers the speech for you. That too is manipulation of words and ideas into different forms.
Well that was a totally vacuous two minute introduction. Did ChatGPT offer anything substantive for the rest of your speech?
This says less about AI and more about how bland and stupid such addresses are.
I'm not sure who put the batteries back in Stephen Sachs of late, but let's give him props for coming up with one of the best titles of a Conspirator post in a while.
"The Giraffe's Speech" would not be out of place on a major publisher's hardcover in preferred placement near the door of an airport bookstore.
(Immortality!)
Large Language Models are data-driven mirrors of already-published words and phrases. Yes, it is “copy and paste” but at the level of a MODEL of human speech. It should not surprise you in the least that you recognize what it outputs. Don’t be the canary who is fascinated by its own reflection. Keep in mind that the canary’s owner put that mirror in its cage because it amuses him to watch it weave and dance.
What's wrong with the canary? It seems to enjoy the mirror, why shouldn't we?
Happy Yellow Pig's Day!
If someone asked the program to do a PowerPoint version of the Gettysburg Address, could it come up with something as good as Norvig's?
https://norvig.com/Gettysburg/
For more on PowerPoint: https://www.inf.ed.ac.uk/teaching/courses/pi/2016_2017/phil/tufte-powerpoint.pdf
My favorite bit (discussing a slide on potential damage to Columbia before it exploded):
The vaguely quantitative words “significant” and “significantly” are used 5 times on this slide, with de facto meanings ranging from “detectable in largely irrelevant calibration case study” to “an amount of damage so that everyone dies” to “a difference of 640-fold.” None of these 5 usages appears to refer to the technical meaning of “statistical significance.”
I liked this: "How is it that each elaborate architecture of thought always fits exactly on one slide?"
I have become something of an ace in PowerPoint, though I despise it. Unfortunately almost everyone now expects to receive a PPT presentation and, worse, seems incapable of any deeper thought than that presented to them on a slide.
But then, I never had presentation jitters.