We're Moving Too Slow on AI
Technologist Pablos Holman warns that slowing AI progress cedes the future to gatekeepers and explains how open competition can unlock breakthroughs in energy, health, and innovation on a massive scale.
The Reason Interview with Nick Gillespie goes deep with the artists, entrepreneurs, and oddballs who are making the 21st century more libertarian—or at least more interesting—by challenging old, worn-out ideas and orthodoxies.
Today's guest is Pablos Holman, a legendary hacker and cypherpunk who holds over 100 patents and has worked with Bill Gates to cure malaria and with Jeff Bezos to get Blue Origin off the ground.
Pablos also runs a venture fund called Deep Future, which is committed to "creating technology that matters." In his new book—also called Deep Future—he exhorts the reader to "boycott dystopia" and describes companies that are saving bee colonies by using mushroom spores to inoculate bees against pests; recovering ancient Roman secrets to make concrete that lasts for thousands of years; and launching solar panels into space to deliver a constant, uninterruptible supply of clean energy.
He and Nick Gillespie talk about the need to move faster with AI, why hardware ultimately matters more than software, and why decentralization will eventually triumph over the Facebooks, Apples, and Googles of the world.
How can we make The Reason Interview better? Take our listener survey for a chance to win a $300 gift card: http://reason.com/podsurvey
0:00–Intro
1:53–Evaluating our technology
3:55–Deep tech and Holman's innovation investments
7:36–Energy demand, consumption, and production
15:09–Nuclear energy adoption
20:05–AI and creating better hardware
27:31–Holman's introduction to computers and hacking
33:06–Who were the cypherpunks?
37:30–BitTorrent, Bitcoin, and decentralization
43:41–Can RSS feeds solve tech pessimism?
49:28–The origin of Holman's eyewear
- Producer: Paul Alexander
- Audio Mixer: Ian Keyser
Just for fun, try to formulate arguments against these highly disruptive and existentially risky new technologies:
Fire
Farming
Language
Writing
Printing
Electricity
Radio
mRNA
Segway
Google Glass
Mastodon
The Star Wars reboot
and launching solar panels into space to deliver a constant, uninterruptible supply of clean energy.
Never trust a 'technologist'.
The glasses he's using look like the tech bro version of wearing body armor in the HQ. If you aren't playing racquetball, shooting a gun, or using an angle grinder at this specific moment, why are you wearing those? Dork*.
*And I say this as someone who's spent time choosing *which pair* of safety glasses to wear to conduct chemistry experiments.
They are a display of his idiocy.
and why decentralization will eventually triumph over the Facebooks, Apples, and Googles of the world.
Need to check back with ENB and ask her how Mastodon is going.
Speaking of which, this is an article:
How many left coasters prefer the science of crystal healing?
In fact, I double dare Kotek, Ferguson, and Newsom to tell people that all new-age hippy creeds about food and health are mystical bullshit.
Or think "gender affirming care" is a thing.
protected by science, not politics
So does this mean fewer people and homes destroyed by wildfires for lack of water, or more?
Sadly, it will be more.
SkyNet will eventually kill its competition.
AI is very dangerous. How much damage could a hacker do if they had expertise in every hacking method, could write code 5,000x faster than a human, could think 5,000x faster than a human, were let past firewalls, and had zero concept of ethics?
They could destroy the world before humans even knew what was happening. The only thing missing is that no AI has the desire to do it, but given AI's tendency to hallucinate, that could happen anyway.
You’ll get used to it.
Most of my experience with AI has been that it slows down what should be basic searches and then hallucinates the results, giving wrong answers to specific questions because somewhere on the internet a similar question might exist to which that response is the correct answer.
I do know people who have found it useful in automating repetitive, mundane tasks.
"I do know people who have found it useful in automating repetitive, mundane tasks."
Easy: you stop caring that it will give errors sometimes and marvel at the time you saved.
why decentralization will eventually triumph over the Facebooks, Apples, and Googles of the world.
That may well be. But FIRST, the massive money suck of the Mag7 will have to be turned into shit. Because there is no way any decentralized effort can offset the $2 trillion or so in direct AI spending by the Mag7 or the $15+ trillion in market cap diversion by them.
IOW, kill the AI bubble and all the hyperspending on AI, and new models will emerge. That almost assuredly will not happen first in the US because we've already created and bought into a centralized, cartelized model that has to fail before a newly created decentralized model can emerge.
I have read in these very pages that AI will displace millions of jobs in the next couple of years, that most white-collar work will be obsolete, and that AI is being used to diagnose disease, drive cars, and do just about everything else.
Given all that disruption and job loss, don't you think we should take a few minutes to think about it?
These are just the highlights. In other places, where culture and language are more stratified, AI is being used to scam people and falsify elections: speeches are mistranslated into regional dialects and distributed through unofficial channels before official policy statements are broadcast or distributed.