Artificial Intelligence

Marc Andreessen on A.I., Bitcoin, and Billionaires

Is the A.I. breakthrough for real this time?


Marc Andreessen has helped a lot of people get rich—including Marc Andreessen. He's made millions of people's lives more fun, more efficient, or just a little weirder while making himself into a billionaire.

He is the co-creator of the first widely used web browser and co-founder of the venture capital powerhouse Andreessen Horowitz. Though he hates the term unicorn—industry lingo for a private tech firm valued at more than a billion dollars—he's a famously successful unicorn wrangler: He was an early investor in Facebook, Pinterest, LinkedIn, Twitter, Lyft, and more.

Andreessen is also aggressively quotable, whether it's his classic 2011 pronouncement that "software is eating the world" or his more recent "There are no bad ideas, only early ones." And in 2014 he said, "In 20 years, we'll be talking about bitcoin the way we talk about the internet today." A born bull, Andreessen is an optimist who places his hope for the future squarely in the hands of "the 19-year-olds and the startups that no one's heard of."

As splashy artificial intelligence tools such as ChatGPT and DALL-E begin to permeate our daily lives and the predictable panic revs up, Reason Editor in Chief Katherine Mangu-Ward sat down with Andreessen in February for a video and podcast interview about what the future will look like, whether it will still emerge from Silicon Valley, Friedrich Nietzsche, and the role of government in fostering or destroying innovation.

Reason: I tend to be skeptical of people who claim that this time it's different, with any tech or cultural trend. But with artificial intelligence (A.I.), is this time different?

Andreessen: A.I. has been the fundamental dream of computer science going all the way back to the 1940s. There have been five or six A.I. booms in which people were really convinced that this was the time it was going to happen. Then there were A.I. winters in which it turned out: oops, not yet. For sure, we're in another one of those A.I. booms.

There are a couple of things that are different about what's happening right now. There are these very well-defined tests, ways of measuring intelligence-like capabilities, and computers have started to actually do better than people on them. These are tests that involve interactions with fuzzy reality. So these aren't tests like, "Can you do math faster?" These are tests like, "Can you process reality in a superior way?"

The first of those test breakthroughs was in 2012, when computers became better than human beings at recognizing objects in images. That's the breakthrough that has made the self-driving car a real possibility. Because what's a self-driving car? It's basically just processing large amounts of images and trying to understand, "Is that a kid running across the street or is that a plastic bag, and should I hit the brakes or should I just keep going?" Tesla's self-driving isn't perfect yet, but it's starting to work quite well. Waymo, one of our companies: They're up and running now.

We started to see these breakthroughs in what's called natural language processing about five years ago, where computers started getting really good at understanding written English. They started getting really good at speech synthesis, which is actually quite a challenging problem. And then most recently, there's this huge breakthrough in ChatGPT.

ChatGPT is an instance of a broader phenomenon in the field called large language models, or LLMs. A lot of people outside the tech industry are shocked by what that thing can do. And I'll just tell you, a lot of people inside the tech industry are shocked by what that thing can do.

ChatGPT does feel, to those of us who don't fundamentally understand what's going on, like a little bit of a magic trick. Like Arthur C. Clarke's third law: "Any sufficiently advanced technology is indistinguishable from magic." And sometimes it really is a trick. But you're saying this is something real?

Well, it's also a trick. It's both. There's a profound underlying question: What does it mean to be smart? What does it mean to be conscious? What does it mean to be human? Ultimately, all the big questions are not, "What does the machine do?" Ultimately, all the big questions are, "What do we do?"

LLMs are basically very fancy autocompletes. An autocomplete is a standard computer function. If you have an iPhone, you start typing a word and it will offer you an autocompletion of the rest of that word so you don't have to type that whole word. Gmail has autocomplete now for sentences, where you start typing a sentence—"I'm sorry I can't make it to your event"—and it will suggest the rest of the sentence. What LLMs are is basically autocomplete across a paragraph. Or maybe an autocomplete across 20 pages or, in the future, maybe an autocomplete across an entire book.
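To make that concrete, here is a minimal sketch of the same next-word prediction scaled up, using the small open-source GPT-2 model via the Hugging Face transformers library. (The model and library choices here are illustrative assumptions, not how ChatGPT itself is built; ChatGPT is a far larger, instruction-tuned model behind a proprietary API.)

```python
# A toy demonstration of "autocomplete at scale": a language model repeatedly
# predicts the most likely next token, extending a prompt one step at a time.
from transformers import pipeline

# GPT-2 is a small, freely available LLM; it stands in here for much larger models.
generator = pipeline("text-generation", model="gpt2")

prompt = "I'm sorry I can't make it to your event"
# Greedy decoding (do_sample=False) always takes the single most likely next
# token -- the purest form of the autocomplete behavior described above.
result = generator(prompt, max_new_tokens=40, do_sample=False)
print(result[0]["generated_text"])
```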

You'll sit down to write your next book. You'll type the first sentence, and it will suggest the rest of the book. Are you going to want what it suggested? Probably not. But it's going to give you a suggestion, and it's going to give you suggested chapters, suggested topics, suggested examples, suggested ways to word things. You can already do this with ChatGPT. You can type in, "Here's my draft. Here are five paragraphs I just wrote. How could this be worded better? How could this be worded more simply? How could this be worded in a way that younger people can understand?" So it's going to be able to autocomplete in all of these very interesting ways. And then it's up to the human being who's steering it to decide what to do with that.

Is that a trick or a breakthrough? It's both. Yann LeCun, who's a legend in the field of A.I. and who's at Meta, argues this is more trick than breakthrough. He says it's like a puppy: It autocompletes the text it thinks you want to see, but it doesn't actually understand any of the things it's saying. It doesn't actually know who people are. It doesn't know how physics works. It has this thing called hallucination: If it doesn't have an autocomplete that's factually correct, it still wants to make you happy, like a puppy, so it will autocomplete a hallucination. It will start making up names and dates and historical events that never happened.

I know the term is hallucination, but the other concept that comes to mind for me is imposter syndrome. I don't know whether the humans have the imposter syndrome or the A.I.s do, but sometimes we're all just saying the thing that we think someone wants to hear, right?

This goes to the underlying question: What do people do? And then—this is where things get incredibly uncomfortable for a lot of people—what is human consciousness? How do we form ideas? I don't know about you, but what I've found in my life is that a lot of people on a day-to-day basis are just telling you what they think you want to hear.

Life is full of these autocompletes as it is. How many people are making arguments that they actually have conceived of, that they actually believe, versus how many people are making arguments that are basically the arguments that they think people are expecting them to make? We see this thing in politics—that you guys are an exception to—where most people have the exact same sets of views as everybody else on their side on every conceivable issue. We know that those people have not sat down and talked through all of those issues from first principles. We know that what's happened, of course, is the social reinforcement mechanism. Is that actually any better than the machine essentially trying to do the same thing? I think it's kind of the same. I think we're going to learn that we're a lot more like ChatGPT than we thought.

Alan Turing created this thing called the Turing test. Basically he said, "Let's suppose we develop what we think is an A.I. Let's suppose we develop a program and we think it's smart in the same way that a person is smart. How will we know that it's actually smart?" So you have a human subject, and they're in a chatroom with a human being and with a computer. And both the human being and the computer are trying to convince them that they're actually the real person and the other one is the computer. If the computer can convince you that it's a human being, then it effectively is A.I.

The obvious problem with the Turing test is that people are super easy to con. Is a computer that's good at conning you A.I. or is that just revealing an underlying weakness in what we think of as profoundly human?

There's no single vector of smart versus not smart. There are certain sets of things humans can do better or worse, there are certain sets of things computers can do better or worse. The things computers can do better are getting really good.

If you try Midjourney or DALL-E, they're able to produce art more beautiful than what all but maybe a handful of human artists can make. Two years ago, did we expect a computer to be making beautiful art? No, we didn't. Can it do it now routinely? Yes. What does that mean in terms of what human artists do? If there are only a few human artists who can produce art that beautiful, maybe we're not that good at making art.

You've been using the language of humanity: "Humans are like this." But some of this is cultural. Should we care if A.I.s are coming out of Silicon Valley versus coming from another place?

I think we should. Among the things we're talking about here is the future of warfare. You can see it in the self-driving car. If you have a self-driving car, that means you can have a self-flying plane, a self-guided submarine, smart drones. You have this concept now, which we see in Ukraine, of so-called loitering munitions—basically suicide drones. One just stays in the sky until it sees its target, then zeroes in and drops a grenade, or is itself the bomb.

I just watched the new Top Gun movie, and they allude to this a little bit: To train an F-16 or F-18 fighter pilot costs, I don't know, $7 million, $10 million, $15 million—plus it's a very valuable human being. And we put these people in these tin cans and fly them through the air at Mach whatever. The plane is capable of maneuvering in ways that would actually kill the pilot, so what the plane can do is constrained by what the human body can put up with. And then, by the way, a plane that is capable of sustaining human life is very big and expensive and has all these systems to accommodate the human pilot.

A supersonic A.I. drone is not going to have any of those restraints. It's going to cost a fraction of the price. It doesn't need to have even the shape that we associate with it today. It can have any shape that's aerodynamic. It doesn't need to take into account a human pilot. It can fly faster, it can maneuver faster, it can do all kinds of turns, all kinds of things that the human pilot's body can't tolerate. It can make decisions much more quickly. It can generate much more information per second than any human being can. You're not just going to have one of those at a time, you're going to have 10 or 100 or 1,000 or 10,000 or 100,000 of those things flying at the same time. The nation-states with the best A.I. capabilities are going to have the best defense capabilities.

Will our A.I.s have American values? Is there a cultural component to the type of A.I. we're going to get?

Look at the fight that's happened over social media. There's been a massive fight over what values are encoded in social media and what censorship controls and what ideologies are allowed to perpetuate.

There's a constant running fight on that in China, with the "Great Firewall" and restrictions on what they'll allow you to see if you're a Chinese citizen. And then there are these cross-cultural questions: TikTok is a Chinese platform running in the U.S., with American users—especially American children—using it. A lot of people have theories that the TikTok algorithm is very deliberately steering U.S. kids toward destructive behaviors, and is that some sort of hostile foreign operation?

So anyway, to the extent that these were all big issues in the previous era of social media, I think all of these issues magnify by a million times in this A.I. era. All of them become far more dramatic and important. People only generate so many kinds of content, whereas A.I. is going to be applied to everything.

What you just described, is that a case for early and cautious regulation? Or is that a case for the impossibility of regulation?

What would Reason magazine say about well-intentioned government—

Ha! Well, there are people who are deeply skeptical of governments, who still say, "Maybe this is the moment for guardrails." Maybe they want to limit how states can use A.I., for instance.

I'll make your own argument back to you: The road to hell is paved with good intentions. It's like, "Boy, wouldn't it be great this time if we could have very carefully calibrated, well-thought-through, rational, reasonable, effective regulation?"

"Maybe this time we can make rent control work, if we're a little bit smarter about it." Your own argument obviously, is like, well, that's not actually what happens, for all the reasons you guys talk about all the time.

So yeah, there's a theoretical argument for such a thing. We don't get the abstract theoretical regulation, we get the practical, real-world regulation. And what do we get? Regulatory capture. Corruption. Early incumbent lock-in. Political capture. Skewed incentives.

You've talked a lot about the rapid process through which innovative tech startups become enmeshed incumbents, both just with the state and more generally in their business practices. That topic has come up a lot recently with the Twitter Files and revelations of the ways that companies collaborated willingly, but maybe with a looming threat as well, with government agencies.

It seems to me like we're going to be in for more of that. This blurring of the lines between public and private is our fate. Is that what it looks like to you? Does that threaten innovation, or are there ways in which it could potentially speed things along?

The textbook view of the American economy is that it's free market competition. Companies are fighting it out. Different toothpaste companies are trying to sell you different toothpaste and it's a largely competitive market. Every once in a while there's an externality that requires government intervention and then you get these weird things like the "too big to fail" banks, but those are exceptions.

I can tell you my experience, having been now in startups for 30 years, is that the opposite is true. James Burnham was right. We passed from the original model of capitalism, which he called bourgeois capitalism, into a different model, which he called managerial capitalism, some decades back. And the actual correct model of how the U.S. economy works is basically big companies forming oligopolies, cartels, and monopolies and doing all the things that you expect oligopolies, cartels, and monopolies to do. And then they jointly corrupt and capture the regulatory and government process. They end up controlling their regulators.

So most sectors of the economy are a conspiracy between the big incumbents and their putative regulators. The purpose of the conspiracy is to perpetuate the long-term existence of those monopolies and cartels and to block new competition. To me, that completely explains the education system, both K-12 and the university system. It completely explains the health care system. It completely explains the housing crisis. It completely explains the financial crisis and the bailouts. It completely explains the Twitter Files.

Are there sectors that are less subject to that dynamic you just described?

The question is always the same: Is there actual competition? The idea of capitalism is basically an economic form of the idea of evolution—natural selection, survival of the fittest. A superior product ought to win in the market. Markets ought to be open to competition, so a new company can come along with a better widget and take out the incumbents because its widget is superior and customers like it better.

Is there actual competition happening or not? Do consumers actually have the ability to fully select among the existing alternatives? Can you actually bring a new widget to market, or do you get blocked because the regulatory wall that's been established makes that prohibitive?

The great example of this is banking, where the big thing in 2008 was, "We need to bail out these banks because they're 'too big to fail.'" And so then there were screams of the need to reform the "too big to fail" banks. That led to Dodd-Frank. The result of Dodd-Frank—I call it the Big Bank Protection Act—is that the "too big to fail" banks are now much larger than before and the number of new banks being created in the U.S. has dropped to zero.

The cynical answer is that doesn't happen in the spaces that don't matter. Anybody can bring a new toy to market. Anybody can open a restaurant. These are fine and good consumer categories that people really enjoy and so forth, but as contrasted to the health care system or the education system or the housing system or the legal system—

If you want freedom, your business had better be frivolous.

That would be the cynical way of looking at it. If it doesn't matter in terms of determining the power structure of society, then do whatever you want. But if it actually matters to major issues of policy where the government is intertwined with them, then of course it doesn't happen there.

I think it's so self-evident. Why are all these universities identical? Why do they all have identical ideologies? Why isn't there a marketplace of ideas at the university level? Well, that becomes a question of why aren't there more universities? There aren't more universities because you have to get accredited. The accreditation bureau is run by the existing universities.

Why do health care prices do what they do? A major reason for that is because basically they're paid for by insurance. There's private insurance and public insurance. The private insurance prices just key off the public prices, because Medicare is the big buyer.

So how are Medicare prices set? A unit inside [the Department of Health and Human Services] runs literal Soviet-style price-fixing boards for medical goods and services. Once a year, doctors get together in a conference room at, like, a Hyatt in Chicago somewhere, sit down, and do the exact same thing the Soviets did. The Soviets had a central price-fixing bureau. It didn't work. We don't have that for the entire economy, but we have it for the entire health care system. And it doesn't work, for the same reason the Soviet system didn't work. We've exactly replicated the Soviet system, [but] we're expecting better results.

You said about 10 years ago that bitcoin is as important as the internet was. We've had a little time for that to play out. How is that prediction looking to you?

I wrote a New York Times column back when The New York Times would run things that I write—which, by the way, in case you're wondering, is no longer true.

Everything in there, I still agree with. The one modification I would make: At the time, it looked like bitcoin was going to evolve in a way where it would be used for many other things. We thought it was a general technology platform that was going to evolve to make a lot of other applications possible, in the same way the internet did. That didn't happen. Bitcoin itself basically stalled out—it basically stopped evolving—but a bunch of other projects emerged that took its place. The big one right now is ethereum. So if I wrote that piece today, either I would say ethereum instead of bitcoin or I would just say crypto.

But otherwise, all the same ideas apply. The argument I made in that piece is that crypto, Web3, blockchain—they're what I call the other half of the internet. They're all the functions we knew we wanted when we originally built the internet as people know it today: all of the aspects of being able to do business, do transactions, and have trust. We did not know how to use the internet to do that in the '90s. With the technological breakthrough of the blockchain, we now know how to do it.

We have the technological foundation to do that: a network of trust overlaid on top of the internet. The internet is an untrusted network—anybody can pretend to be anybody they want on the internet. Web3 creates layers of trust on top of that. Within those layers of trust, you can represent money, but you can also represent many other things. You can represent claims of ownership: house titles, car titles, insurance contracts, loans, claims to digital assets, unique digital art. You can have a general concept of an internet contract—you can strike contracts with people online that they're actually held to. And you can have internet escrow services: For e-commerce, with two people buying from and selling to each other, you can now have a trusted, internet-native intermediary running an escrow service.
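As a rough illustration of that last idea, here is a toy sketch of the confirm-then-release logic an escrow service encodes. (This is plain off-chain Python for exposition only, with hypothetical names throughout; a real Web3 escrow would live in an on-chain smart contract so that neither party has to trust the other, or the intermediary, not to cheat.)

```python
# Toy escrow: funds stay locked until both buyer and seller confirm the deal.
from dataclasses import dataclass, field

@dataclass
class Escrow:
    buyer: str
    seller: str
    amount: float
    confirmations: set = field(default_factory=set)
    released: bool = False

    def confirm(self, party: str) -> None:
        # Only the named buyer or seller may confirm.
        if party not in (self.buyer, self.seller):
            raise ValueError(f"{party} is not a party to this escrow")
        self.confirmations.add(party)

    def release(self) -> bool:
        # Funds move only once both parties have confirmed.
        if self.confirmations == {self.buyer, self.seller}:
            self.released = True
        return self.released

deal = Escrow(buyer="alice", seller="bob", amount=100.0)
deal.confirm("alice")
deal.confirm("bob")
print(deal.release())  # True: both sides confirmed, so the funds are released
```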

You can build on top of the untrusted internet all of the capabilities that you would need to have a full, global, internet-native economy. And that's a giant idea. The potential there is extraordinarily high. We're midway through that process. A lot of those things have worked. Some of those things haven't worked yet, but I think that they're going to.

Are there sectors where you think there's currently the right amount of investment? Insufficient investment? Too much investment because there's hype?

So there's the term research and development, but really those are two different things. Research is basically funding smart people to pursue deep questions around technology and science, where they may not have any idea yet what kind of product could be built on the answers, or even whether something can work.

And then there's the other side, which is what we do: the development side. By the time we fund a company to build a product, the basic research has to be finished already. There can't be open basic research questions, because otherwise you have a startup where you don't even know whether you'll be able to build the thing. It also needs to be close enough to commercialization that within five years or so, you can actually turn it into a product.

That formula worked really well in the computer industry. There were 50 years of basically government research into information science, computer science, during and after World War II. That translated to the computer industry, software industry, internet. And that worked. By the way, that also worked in biotech.

Those are the two main areas [where] I think actual productive research is happening. Should there be more funding of basic research? Almost certainly. Having said that, the basic research world has a very profound crisis underway right now, which they call the replication crisis. It turns out that a lot of what people thought was basic research has basically been fake—and arguably fraud. So among the many problems our modern universities have, there is a very big one: Most of the research they're doing does seem to be fake. Would you recommend more money be put into a system that's just generating fake results? No. Would you argue that you need basic research to continue to get new products out the other end? Yes.

On the development side, I'm probably more optimistic. I think generally we don't lack for money. I think basically all the good entrepreneurs get funded.

The main question on that side of things is not so much the money. [It's] about competition and how markets work. In what fields of economic activity can there actually be startups? For example, can you actually have education startups? Can you actually have health care startups? Can you actually have housing startups? Can you actually have financial services startups? Can you do a new online bank that works in a different way? And for those fields where you would want to see a lot of progress, the bottleneck is not whether we can fund them; the bottleneck is literally whether the companies will be allowed to exist.

And yet I think there are sometimes places where you might have said it's settled wisdom that you can't have a startup in this area, and then it turns out you can. I'm thinking of space. I'm thinking of, to some extent, some subsets of education. I would also put crypto in this category. How can you compete with money? And then here we are, in a quite robust competitive market that is trying to compete with money.

SpaceX is probably your best-case scenario. Talk about a market that's dominated by the government and has regulations literally to the moon. I don't even know the last time anybody tried to do a new launch platform. And the idea that you're going to put all these satellites up there—there are massive regulatory issues around that. Then there's the complexity on top of that. Elon [Musk] wanted the rockets to be reusable, so he wanted them to land on their rear ends, which is something people thought was impossible. All previous rockets were basically one shot and done, whereas his get reused over and over again, because they're able to land themselves. SpaceX climbed a wall of skepticism its entire way, and [Musk] basically just brute-forced his way through it. He and the team there made it work. The big thing we talk about in our business is just, look, that is a much, much harder entrepreneurial journey. That's what the entrepreneur has to sign up for, and the risks involved are much greater than starting a new software company. It's a much higher bar of competence that's required. It's much higher risk.

You're going to lose more of those companies because they're just going to not be able to make it. They're going to get blocked in some way. And then you need a certain kind of founder who's willing to take that on. That founder looks a lot like an Elon Musk or a Travis Kalanick [of Uber] or an Adam Neumann [of WeWork]. In the past, it looked like Henry Ford. This requires Attila the Hun, Alexander the Great, Genghis Khan. To make that kind of company work requires somebody who is so smart and so determined and so aggressive and so fearless and so resistant to injury of many different kinds, and willing to take on just absolutely cosmic levels of vitriol and hate and abuse and security threats. We need more of those people. I wish we could find a way to grow them in tanks.

Why do you think it is that there is this special category of obsessive anger that's directed at the entrepreneurial billionaire? I mean, U.S. senators tweeting that billionaires should not exist…

I think it's all in Nietzsche—what he called ressentiment, the toxic blend of resentment and bitterness. It's the cornerstone of modern culture, of Marxism, of progressivism. We resent people who are better than us.

Christianity too, right?

Yeah, Christianity. The last will be first and the first will be last. It's easier for a camel to pass through the eye of a needle than for a rich man to enter the kingdom of God. Christianity is sometimes described as the final religion, the last religion that can ever exist on planet Earth, because it's the one that appeals to victims. The nature of life is that there are always more victims than there are winners, so victims are always in the majority. Therefore, one religion is going to capture all the victims, or all the people who think of themselves as victims—and that, by definition, is the majority of society. In social science, they'll sometimes refer to a phenomenon called crabs in a bucket, where if one person starts to do better, the other people will drag them back down.

This is a big problem in education—one kid starts to do well, and the other kids bully him until he's no better than the rest. In Australian culture, there's a term for it: tall poppy syndrome. The tall poppy gets whacked. Resentment is like a drug. It's a very satisfying feeling, because it's the feeling that lets us off the hook: "If they're more successful than I am, it just proves that they're worse than I am. Because obviously, they must be immoral. They must have committed crimes. They must be making the world worse." It's very deeply wired in.

I guess I'll say this: The best entrepreneurs we deal with have no trace of it at all. [They] think the entire concept is just absolutely ridiculous. Why would I spend any minute thinking about whatever anybody else has done or whatever anybody else thinks of me?

This interview has been condensed and edited for style and clarity.