Ethan Mollick: How Will AI Change Us?
Ethan Mollick, Wharton School professor and author of Co-Intelligence, discusses AI's likely effects on business, art, and truth seeking on the latest episode of Just Asking Questions.
"I discovered something remarkably similar to an alien co-intelligence," wrote Ethan Mollick in his new book Co-Intelligence: Living and Working with AI, describing the "sleepless nights" he experienced upon first encountering ChatGPT (then powered by GPT-3.5) in November 2022.
Mollick, a professor at the Wharton School of the University of Pennsylvania and author of the One Useful Thing Substack, has studied, taught, and written about the effects of artificial intelligence on work and education for years. He joined Reason's Zach Weissmueller and Liz Wolfe on the latest episode of Just Asking Questions to discuss the ways in which large language models like ChatGPT and Google Gemini are already transforming the workplace, the classroom, artistic production, and the truth-seeking process itself.
In this episode, they discuss why you should treat your chatbot like a person even though it's not, how AI is "decomposing" jobs, what tools like OpenAI's Sora mean for the future of filmmaking, how to protect one's identity in the age of deepfakes, The New York Times' copyright lawsuit against OpenAI, the prospects for AI "doomsday," and whether regulation of AI is necessary or even possible.
Watch the full conversation on Reason's YouTube channel or on the Just Asking Questions podcast feed on Apple, Spotify, or your preferred podcatcher.
Timecodes:
0:00 - Creating a digital clone of yourself
3:21 - What exactly is artificial intelligence?
5:40 - No one knows why ChatGPT is so good
10:37 - Why you should give your AI chatbot a personality
15:03 - Microsoft's AI said it was in love with a reporter
22:21 - Can AI replace business school?
23:47 - How AI has already transformed the workplace
30:02 - AI will "decompose" human jobs
35:50 - Will AI replace therapists?
40:59 - How will AI affect art?
45:05 - Do you have a right to your image?
50:02 - Why the New York Times is suing OpenAI
57:33 - Does AI content lack originality?
1:02:35 - Are deepfakes a threat?
1:11:47 - Four possible AI-infused futures
Sources referenced in this conversation:
2. New York Times lawsuit against OpenAI
3. New York Times reporter Kevin Roose's conversation with Microsoft's AI
4. "Air Head," a short film by shy kids created with OpenAI's Sora
I remember first encountering Emacs’ Eliza.
Just as with ChatGPT and the rest, it's more fun baiting it and confusing it than believing it.
LOL. IMO people project far more intelligence into AI than is actually there.
Eliza was a programming trick.
ChatGPT actually learns. And ChatGPT is already obsolete compared to what is in the works.
There’s always “something in the works “.
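For what it's worth, the "programming trick" in ELIZA was mostly keyword pattern matching plus pronoun reflection, with no model of meaning at all. A minimal sketch of that style in Python (the patterns and canned replies here are illustrative, not Weizenbaum's original script):

```python
import re

# ELIZA-style "trick": match a keyword pattern, flip the pronouns,
# and echo the user's own words back. No understanding involved.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"(.*)"), "Please tell me more."),  # catch-all fallback
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    """Return the first rule's reply whose pattern matches the input."""
    for pattern, template in RULES:
        m = pattern.match(text.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I am tired of AI hype"))
# → Why do you say you are tired of AI hype?
```

A few dozen rules like these were enough to convince some 1960s users they were talking to a therapist, which is the commenter's point: surface fluency is cheap.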
I wrote elsewhere:
On Artificial Intelligence
The first problem is the framing. Once you use the term "Artificial Intelligence", people assume that we're dealing with intelligence. We're not. What we're dealing with, at best, can be called "Artificial Idiot Savantry". Idiot savants are by definition stupid but have high specific functionality and processing power. They lack a high "g factor", that is, generalised intelligence.
Then there’s “instant expert” syndrome – people who have never considered AI until about five minutes ago are now supposed to be experts given their current position in their firms, or are purporting to be experts and represent themselves as such either internally or to potential clients. You want an expert on AI? Ask anyone who’s been reading science fiction for fifty years. We’ll have read more about it, and thought more about it than anyone else except a long-time AI researcher. There’s an entire subcategory of SF that is, in effect, AI thought experiments.
I think it was J.E. Gordon who, in “Structures, Or Why Things Don’t Fall Down”, identified a three-generation cycle in new civil engineering technology, like bridge design and materials. In my words, A. We don’t know much, so let’s really overengineer this. B. We think we get it, so we don’t need too much of a safety margin now. C. We really understand it now, and everyone before us was too conservative. Disaster follows thereafter. A similar risk is likely with respect to generative AI and its use in corporations.
The first generation will use it as a specific tool, have human firewalls, and be very cautious. The second generation will broaden its applications, reduce human involvement, and be prudent. The third generation will continue with broad applications, will let AIs “manage” other AIs, and will be overconfident and hence careless. The major difference between bridges and AIs, in context, will be the length of the generations. For bridges it might be 20 or more years, for AIs, it might be a matter of a year or two.
Waiting on 2G then.
We can’t discuss AI without having a philosophical discussion of intelligence. It doesn’t seem to me that scientific minds think they have much use for philosophy. You’re spot on when speaking of science fiction fans.
What they’re creating with LLMs is a very complex algorithm that mimics communication. There isn’t anything intelligent about them. Impressive and clever, sure.
I’ll admit I don’t understand the “AI” that creates pictures at any level. But I’m sure it’s just as worthless to the advancement of humanity.
I think “AI” will be useful for advertising and analysis of metadata for intelligence purposes. Unfortunately, that’s like oil and religion now.
I’ll admit I don’t understand the “AI” that creates pictures at any level.
AI image generators are both amazing and stupid.
I got Bing to generate this fabulous car, and there’s a chap on FB called Wray Schelin who posts amazing AI car creations.
And often they have peculiar features, like a third windscreen wiper in an incongruous place, or exhaust pipes that make no sense given where the engine would be.
I’m gonna go with “not at all.”
More AI hype is understandable.
But it is still hype. Yeah, I know NVDA is killing it with data center chips. But where are the useful apps?
I bought Schrödinger stock (SDGR). Computational drug discovery. Successful candidates discovered. Pure AI. Stock price near lows even in this bull market.