I first met Peter Thiel—co-founder of PayPal, angel investor in Facebook, founder of the hedge fund Clarium Capital Management, adviser to the Singularity Institute for Artificial Intelligence, and self-described libertarian—at a party in his San Francisco home last September. Perhaps 100 digerati wandered through Thiel's sleek Marina District townhouse, chatting amiably over wine and canapés in rooms filled with up-to-the-minute abstract art.
The party launched the second annual Singularity Summit, held at the nearby Palace of Fine Arts during the ensuing two days. The Singularity, a term coined by the science fiction writer Vernor Vinge in 1983, refers to the eventual technological creation of smarter-than-human intelligence. Just as our model of physics breaks down when it tries to describe the center of a black hole, Vinge observed, our attempts to model the future break down when we try to foresee a world that contains smarter-than-human intelligences. The Singularity Institute takes it for granted that exponentially accelerating information technology will produce such artificial intelligences; its chief goal is to make sure they will be friendly to humans.
In 1987, while studying philosophy at Stanford, Thiel helped found the libertarian/conservative student newspaper The Stanford Review. As a law student at Stanford he was president of the university's Federalist Society. After working briefly for the law firm Sullivan & Cromwell in New York, Thiel switched to trading derivatives for Credit Suisse Financial Products. In the mid-1990s, Thiel transformed himself into a venture capitalist and a serial entrepreneur. He returned to California, where he has backed a number of startups. In addition to PayPal and Facebook, Thiel has invested in the social networking site LinkedIn, the search engine company Powerset, and the Web security provider IronPort.
Thiel also joined the culture wars by co-authoring The Diversity Myth: Multiculturalism and the Politics of Intolerance at Stanford (1996), and was an executive producer for the 2005 feature film Thank You for Smoking, based on Christopher Buckley's politically incorrect novel of the same name. Besides backing the Singularity Institute, Thiel pledged a $3.5 million matching grant in 2006 to the Methuselah Foundation to support its anti-aging research agenda.
I interviewed Thiel between sessions at the Singularity Summit.
reason: I've been told you consider yourself a libertarian.
Peter Thiel: I would consider myself a rather staunch libertarian. If we went down a list of litmus-test questions, I would probably score pretty well.
reason: How did that come about?
Thiel: Personal history is always tricky, but it was partially shaped by thinking about the totalitarian disasters of the 20th century. When I was in high school, the Cold War was still very much going on, and the complete lack of freedom seemed like an absolutely terrible thing.
The kinds of authors I read in junior high school and high school weren't classic libertarian writers, but they definitely pushed one in that direction. So, for example, Solzhenitsyn in The Gulag Archipelago showed how terrible the totalitarian system was. Or in the fiction context, Tolkien and The Lord of the Rings expressed the ideas that absolute power corrupts absolutely, that the ideal world is one in which people enjoy more freedom, and that such a world is where you'll have the greatest amount of human prosperity and happiness and ultimately achievement.
reason: You were a Stanford undergrad and law student. After you graduated, your career seemed to be taking a policy wonk direction.
Thiel: As an undergraduate at Stanford, I started The Stanford Review, which ended up being very engaged in the hot debates of the time: campus speech codes, questions about diversity on campus, all sorts of debates like that. I ended up writing a book on it, The Diversity Myth, the thesis of which was basically that there was no real diversity when you had a group of people who looked different but thought alike, and what really was needed was a diversity of ideas.
In parallel I was obviously on the law track. I worked at a law firm in New York very briefly. I'd always been good at math—I was a nationally ranked chess player as an undergraduate—and I shifted over into trading financial derivatives at Credit Suisse Financial Products in '94.
reason: How did you make that transition?
Thiel: They gave me a math test, and I got all the questions right.
I moved back to California in '96. I started a small fund and started investing in tech companies. In the course of that, I invested in PayPal in late '98. I came on board as the interim CEO, and it evolved from four of us to a 900-person company. At this point, it's up to about 7,000 people working for the PayPal division of eBay. We were basically creating this new payment system from scratch, which was one of those Holy Grail things that a lot of people had been focused on. The basic thought was that if you could lessen the control of government over money and somehow shift the ability of people to control the money that was in their wallets, this would be a truly revolutionary shift.
In many ways, it succeeded better than we would've hoped. I wouldn't say it changed the whole world, but I think technologies like PayPal have been a major contributing factor toward the weakening of the nation-state over the last few decades. Countries are more competitive. They can't simply devalue currencies, because people can shift currencies from one country to another.
reason: Of course, Argentina did exactly that a couple of years ago.
Thiel: Well, there weren't that many people in Argentina using PayPal. I think it's a very different world from 1971, when John Connally, the U.S. treasury secretary, could go on TV and say it's our currency but it's your problem, because the only currency that existed was dollars. We now have competing jurisdictions of currencies. Technologies like PayPal foster competition because they enable people to shift their funds from one jurisdiction to another, and I think that will ultimately lead to a world in which there's less government power and therefore more individual control.
Technology is at the center of what will determine the course of the 21st century. There are aspects of it that are great and aspects that are terrible, and there are some real choices humans have to make about which technologies to foster and which ones we should be more careful about.
reason: Bill Joy, the former chief scientist at Sun Microsystems, declared in his famous article "Why the Future Doesn't Need Us" that we have to relinquish artificial intelligence, biotech, and nanotechnology because they're just too dangerous for human beings to handle. Is there some truth to that?
Thiel: I think it's not even wrong. It's one of those things that's so far off the mark that it is not even wrong.
There are obviously dangers in these technologies. But I think Joy's approach would actually lead to the future he fears. If the virtuous people relinquish these things, it means that they will be developed by the evil people, and that seems to me to be a recipe for these technologies going wrong.
The only way for something like Joy's approach to work would be basically a totalitarian world-state in which we control the technologies worldwide. It is incredibly arrogant to say that the only smart people in the world exist in the United States and that if you can stop it in the U.S., you'll stop it everywhere. Maybe it's going to be developed by the Chinese military. Maybe it'll be developed by people working for Islamic terrorist groups.
The anti-Joy view that I would articulate is that what we need to be doing is to be pushing the accelerator further and harder. What I fear is that people working in free countries, where I think these technologies are likely to be developed in a more benign way, are being blocked by bureaucratic regulation and by cultural ideas that we shouldn't be doing this. We are on this technological arc. We don't know where it's going to go, but I think the best trajectory is for us to just hit the accelerator really hard.
reason: You're a big supporter of the Singularity Institute for Artificial Intelligence. What is your definition of the Singularity?
Thiel: In physics the term refers to a break in the fabric of space-time where you can't extrapolate the standard laws of physics. This happens in a black hole, for example, where the gravitational field is so strong that it warps and breaks the fabric of space-time itself. The technological or historical analog to this is where changes start happening at such an accelerating rate that somehow they break the fabric of how we would extrapolate it to the future.
A variety of these technologies are accelerating very quickly, concentrated mostly in information technology and the biotech revolution. One of the things that's very misleading about acceleration and exponential growth is that it's slow at first and then it's fast, and so the future happens more slowly than people expect and then it happens more quickly.
And where you go with that abstract principle is not always easy to say. It's just as with physics: black holes are a rip in the fabric of space-time, so we can't exactly say what happens inside a black hole, or at least our standard physics theories break down there. But it is still much better to understand that we live in a universe in which black holes might exist and how they affect the structure and nature of the universe. In the same way, I think it is much better to understand that we're in a world in which there is potential for a Singularity-type event. I'm not saying it's going to happen, but just that this potential exists for a very different sort of world.
reason: The folks at the Singularity Institute seem to argue that the rip in the fabric of history would occur with the creation of a greater-than-human intelligence. Is that right?
Thiel: There are a whole slew of versions of this. It could happen with computers. It could happen with enhanced human intelligence, where you have things that modify humans. There are aspects of the biotech revolution that could represent this. There are nanotechnological versions that could be very, very strange. There are all sorts of very bizarrely different versions of this, and it's very hard to know which of these trends is a dominant one. Maybe they have natural limits to them. Maybe Moore's Law [the observation in 1965 by Intel co-founder Gordon Moore that the number of transistors incorporated into integrated circuits doubles roughly every two years] breaks down. If Moore's Law were to stop tomorrow, then I think the hopes for A.I. and computer science may be deferred by centuries. Then the biotech revolution seems to have a lot of promise, but again maybe there are some strange constraints that it runs into.
reason: But you're betting that in fact there is a Singularity in the human future?
Thiel: By definition, these are one-time, nonrepeatable events. Assigning probabilities to them is not an easy exercise. There's something about the idea of a Singularity that is kind of unscientific, because you can't really do an experiment. We can't say, "What's the probability this is going to happen?" We can't assign numbers to this, but falsifiability I think is a bit too strong a criterion. We couldn't have assigned numbers to the probability of a communist revolution in the early 20th century, yet people would have done well to worry about that.
reason: There's another popular narrative for the 21st century that says humanity is going to wreck the planet. Are the environmentalist doomsayers right?
Thiel: My sense is that they're not right. I'm not an expert on it, but what I think is different about climate-change catastrophe vs. the Singularity is that climate seems like such a pedestrian thing to talk about. You talk about it every day. There's a tendency to overdramatize the climate, and it's something everybody can have opinions about. So I don't think there's a cognitive bias where people are incapable of imagining the world's climate changing. That seems like a very easy thing for people to imagine, and maybe it's also an easy thing for people to get hysterical about. On the other hand, computers running the world or this radical progress of technology, that's something where I think there's just no imagination at all.
reason: Why are you supporting the Singularity Institute?
Thiel: I think it's a group of really smart people working on an important problem. I think that the basic rule on philanthropy that I have is that I want to donate money to causes that are worthwhile but where there are no market-based mechanisms for them. There is a category of things that would benefit all of humanity but where the benefits are very diffuse and the costs are concentrated. Maybe it's very long-term. So I focused my philanthropy on things with a 20-, 30-, 40-year horizon. The horizons are too long for a for-profit company to take advantage of, and the government and universities are not pushing things because maybe it's too unconventional or it doesn't easily fit into a particular political agenda or vision of the future. Those areas are probably systematically underfunded. It may be the only area of philanthropy that's underfunded.
reason: And you would think that the Singularity Institute is an example of that?
Thiel: I think it's a great example. I also have been doing some work on radical life extension, which I think is similarly underfunded.
reason: What are you hoping to get out of the research? Do you want to benefit from it personally?
Thiel: I certainly think living longer is not a generally bad thing. I think that making sure the technology arc is positive rather than negative is not generally a bad thing. I think it probably would be somewhat mistaken to frame it in too narrowly selfish a way. It may be the case that the work being done on life extension is going to benefit people 100 or 200 years from now, but I think it still is a good thing to do it. My own guess is that I will live to age 100 to 120, so I'm frustrated that the technologies aren't going as quickly as they should because of government interference.
reason: Leon Kass, the former chair of President Bush's Council on Bioethics, thinks 70 years is enough. Do you think that's a respectable position?
Thiel: Criticizing Kass always feels like shooting fish in a barrel. If people talk about living to 200, that's kind of weird. But if you ask people if it's a good or a bad thing to enable people to live healthily to 80, almost everybody will say that is a good thing.
The question's not an abstract question about "Is it desirable for people to live X years?" It's "Is it good to have a cure for this form of cancer? Is it good to do something where your body and mind stay younger and healthier for longer than they otherwise would?" The Kass approach encourages the rest of society not to reflect on this. In the United States life is getting longer and longer, but we're not thinking about it. If we're actually going to live to age 100, the effect of Kass will be to encourage people to have a very unhealthy last 30 years, because they will not have thought about it and will not have prepared for it.
That's the incredible disconnect we have between technology and the culture. In the culture, everything's forced toward a shorter time horizon. As a libertarian, I tend to blame the government for a lot of this, because socialized welfare, socialized health care, and socialized retirement programs all encourage people not to think about the future.
reason: Do you consider yourself a transhumanist?
Thiel: The problem with the label is that it suggests that we should run away from being human. Take the question of aging. If you define that as the essence of being human, then transhumanists are anti-aging and therefore you try to transcend this human limitation. I don't think that death and life are inextricably interconnected in some sort of Eastern mystical sense in which for everything white there's something black and there's always a yin/yang type of thing. Every myth on this planet tells us the purpose of life is death, and I don't think that's true. I think the purpose of life is life.
reason: What do you think are the most dangerous political trends in the United States?
Thiel: It seems like things are moving in a very anti-libertarian direction politically on every issue. There may be a few exceptions, but generally we're moving toward a country that's fiscally more liberal and socially more conservative, which is a very odd configuration. You can debate why that is. Maybe politics has become purely reactionary. It's a reaction against progress, against globalization, against technology.
My optimistic take is that even though politics is moving very anti-libertarian, that itself is a symptom of the fact that the world's becoming more libertarian. Maybe it's just a symptom of how good things are.
Ronald Bailey is Reason's science correspondent.