We Can't Imagine the Future of AI
Introducing Reason's artificial intelligence issue


In the June 2024 issue, we explore the ways that artificial intelligence is shaping our economy and culture. The stories and art are about AI, and occasionally by AI.
Vernor Vinge was the bard of artificial intelligence, a novelist and mathematician who devoted his career to imagining the nearly unimaginable aftermath of the moment when technology outpaces human capability. He died in March, as we were putting together Reason's first-ever AI issue, right on the cusp of finding out which of his fanciful guesses would turn out to be right.
In 2007, Reason interviewed Vinge about the Singularity, the now slightly out-of-favor term he popularized for that greater-than-human intelligence event horizon. By that time the author of A Fire Upon the Deep and A Deepness in the Sky had, for years, been pinning the date of the Singularity somewhere between 2005 and 2030. To Reason, he offered a softer prediction: If the rapid doubling of processing power known as Moore's law "continues for a decade or two," that "makes it plausible that very interesting A.I. developments might occur before 2030."
That prophecy, at least, has already come true.
Innovation in AI is happening so quickly that the landscape changed dramatically even from the time Reason conceived this issue to the time you are reading it. As a consequence, this particular first draft of history is likely to become rapidly, laughably outdated. (You can read some selections from our archives on the topic.) As we worked on this issue, new large language models (LLMs) and chatbots cropped up every month, image generation went from producing amusing curiosities with the wrong number of fingers to creating stunningly realistic video from text prompts, and the ability to outsource everything from coding tasks to travel bookings went from a hypothetical to a reality. And those were just the free or cheap tools available to amateurs and journalists.
Throughout the issue, we have rendered all text generated by AI-powered tools in blue. Why? Because when we asked ChatGPT to tell us the color of artificial intelligence, that's what it picked:
The color that best encapsulates the idea of artificial intelligence in general is a vibrant shade of blue. Blue is often associated with intelligence, trust, and reliability, making it an ideal color to represent the concept of AI. It also symbolizes the vast potential and endless possibilities that AI brings to the world of technology.
Yet the very notion that any kind of bright line can be drawn between human- and machine-generated content is almost certainly already obsolete.
Reason has a podcast read by a version of my voice that is generated entirely artificially. Our producers use dozens of AI tools to tweak, tidy, and improve our video. A few images generated using AI have appeared in previous issues, though they run rampant in this issue, with captions indicating how they were made. I suspect one of our web developers is just three AIs in a trenchcoat. In this regard, Reason is utterly typical in how fast we have incorporated AI into our daily business.
The best we can offer is a view from our spot, nestled in the crook of an exponential curve. Vinge and others like him long believed themselves to be at such an inflection point. In his 1993 lecture "The Coming Technological Singularity: How to Survive in the Post-Human Era," Vinge said: "When I began writing science fiction in the middle '60s, it seemed very easy to find ideas that took decades to percolate into the cultural consciousness; now the lead time seems more like 18 months." That lead time is now measured in minutes, so he may have been onto something. This issue is an attempt to capture this moment when the possibilities of AI are blooming all around us, and before regulators have had a chance to screw it up.
"Except for their power to blow up the world," Vinge mused in 2007, "I think governments would have a very hard time blocking the Singularity. The possibility of governments perverting the Singularity is somewhat more plausible to me."
They are certainly trying. As Greg Lukianoff of the Foundation for Individual Rights and Expression testified at a February congressional hearing about AI regulation: "Yes, we may have some fears about the proliferation of AI. But what those of us who care about civil liberties fear more is a government monopoly on advanced AI. Or, more likely, regulatory capture and a government-empowered oligopoly that privileges a handful of existing players…. Far from reining in the government's misuse of AI to censor, we will have created the framework not only to censor but also to dominate and distort the production of knowledge itself."
Those new pathways for knowledge production and other unexpected outcomes are the most exciting prospects for AI, and the ones Vinge toyed with for decades. What's most interesting is not what AI will do to us, or for us; it's what AI will do that we can barely imagine.
As the physicist and engineer Stephen Wolfram says, "One of the features [AI] has is you can't predict everything about what it will do. And sometimes it will do things that aren't things we thought we wanted. The alternative is to tie it down to the point where it will only do the things we want it to do and it will only do things we can predict it will do. And that will mean it can't do very much."
Even as we worry about the impact of AI on art, sex, education, health care, labor, science, movies, and war, it is Vinge's imaginative, nonjudgmental vision that should inspire us.
"I think that if the Singularity can happen, it will," Vinge toldย Reason in 2007. "There are lots of very bad things that could happen in this century. The Technological Singularity may be the most likely of the noncatastrophes."

An image generated using the prompt, "Illustration of AI as a doctor, teacher, poet, scientist, warlord, actor, journalist, artist, and coder." (Illustration: Joanna Andreasson/DALL-E4)
Key AI Terms
By Claude 3 Opus
AI (Artificial Intelligence): The simulation of human intelligence processes by machines, especially computer systems, including learning, reasoning, and self-correction.
Gen AI (Generative AI): A subset of AI that creates new content, such as text, images, audio, and video, based on patterns learned from training data.
Prompt: In the context of AI, a prompt is a piece of text, an image, or other input data provided to an AI system to guide its output or response.
LLM (Large Language Model): A type of AI model trained on vast amounts of text data, capable of understanding and generating human-like text based on the input it receives.
Neural Net (Neural Network): A computing system inspired by the biological neural networks in the human brain, consisting of interconnected nodes that process and transmit information, enabling the system to learn and make decisions.
GPT (Generative Pre-trained Transformer): A type of large language model developed by OpenAI, trained on a diverse range of internet text to generate human-like text, answer questions, and perform various language tasks.
Hallucination: In AI, hallucination refers to an AI system generating output that is not grounded in reality or its training data, often resulting in nonsensical or factually incorrect statements.
Compute: Short for computational resources, such as processing power and memory, required to run AI models and perform complex calculations.
Turing Test: A test proposed by Alan Turing to determine whether a machine can exhibit intelligent behavior indistinguishable from that of a human, where a human evaluator engages in a conversation with both a human and a machine and tries to distinguish between them based on their responses.
Machine Learning: A subset of AI that focuses on the development of algorithms and statistical models that enable computer systems to improve their performance on a specific task through experience and data, without being explicitly programmed.
CLAUDE 3 OPUS is a subscription-supported large language model developed by Anthropic, an AI startup.
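For readers who want to see a few of these terms in action, here is a minimal sketch in Python, using only the standard library. It is our own illustration, not code from any real AI system: a single artificial "neuron" that adjusts its weights from examples until it reproduces the logical AND function, the kernel of the "Neural Net" and "Machine Learning" entries above.

import math

def sigmoid(x):
    # Squash any number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Training data: the AND truth table as (inputs, target) pairs.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0  # the neuron's adjustable parameters
rate = 0.5                    # learning rate: how big each adjustment is

for epoch in range(5000):
    for (x1, x2), target in examples:
        out = sigmoid(w1 * x1 + w2 * x2 + bias)  # forward pass
        # Gradient of the squared error with respect to each parameter;
        # nudging the parameters against the gradient shrinks the error.
        grad = (out - target) * out * (1 - out)
        w1 -= rate * grad * x1
        w2 -= rate * grad * x2
        bias -= rate * grad

# After training, the neuron's answers land close to the targets.
for (x1, x2), target in examples:
    print((x1, x2), "->", round(sigmoid(w1 * x1 + w2 * x2 + bias), 2), "target:", target)

Real networks stack millions or billions of such units, but the loop above (show examples, measure the error, adjust the weights) is the whole trick.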