What We Owe the Future Is Liberalism
What does "longtermism" offer those of us who favor limited government and free markets?
What We Owe the Future, by William MacAskill, Basic Books, 352 pages, $32
"When we look to the future, there is a vast territory that civilization might expand into: space," the Oxford philosopher William MacAskill observes in What We Owe the Future. "There are tens of billions of other stars across our galaxy, and billions of galaxies are accessible to us."
MacAskill is a founder of the effective altruism movement, which encourages philanthropists to use evidence and reason to direct their time and money in ways that help others as much as possible. Critics claim this amounts to little more than urging people to "do good well." After all, no one wants to back ineffective altruism. That complaint seems too harsh. The world is littered with well-intentioned programs that squandered vast resources while doing little to improve the circumstances of the people they aimed to help.
As the leading proponent of longtermism, a term he coined in 2017, MacAskill asks readers what we can do now that will positively affect people's welfare over trillions of years. While effective altruists ask how our charitable impulses can do the most good for the most people, longtermists extend the idea by taking into account the well-being of future people.
In a thought experiment, MacAskill asks readers to take the perspective of humanity, imagined as a single person who experiences every life that has ever been or will ever be lived. "If you found out that the human race was certain to peter out within a few centuries, would you greet the knowledge with sadness because of all of the joys you would lose or with a sense of relief because of all of the horrors you would avoid?" he asks. Has the human story been, on balance, one of happiness or sorrow?
For me, a near-term end of humanity (and of our transhuman descendants) would induce a profound sense of sadness and loss. In other words, I am optimistic about the possibility of a future in which most human beings live flourishing lives. But as MacAskill notes, whether we get such a future depends partly on choices we make now.
"The primary question is how can we build a society such that, over time, our moral views improve, people act more often in accordance with them, and the world evolves to become a better, more just place," MacAskill argues. "The future could be very big. It could also be very good—or very bad."
The good futures involve the extension of progress already underway. MacAskill notes how much life has improved for most people during the last couple of centuries.
Average life expectancy at birth has risen from less than 30 to 73. The proportion of people living in absolute poverty has fallen from 80 percent to less than 10 percent. Adult literacy has increased from 10 percent to 85 percent.
During the last 50 years, global gross domestic product per person grew at a rate of about 2 percent a year. If that trend continues, the number will rise, in constant dollars, from just over $12,000 now to about $60,000 by 2100, meaning the average person will be five times richer.
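Readers who want to verify that projection can run the compound-growth arithmetic themselves. Here is a minimal sketch (the 2022 starting year and the exactly 2 percent rate are assumptions for illustration; the result lands near the rounded figures above):

```python
# Back-of-the-envelope check of the compound-growth claim.
# Assumptions (for illustration only): a 2022 baseline of $12,000
# per person and a steady 2 percent annual growth rate through 2100.
start_year, end_year = 2022, 2100
base = 12_000        # global GDP per person, constant dollars
rate = 0.02          # 2 percent a year

years = end_year - start_year               # 78 years
future = base * (1 + rate) ** years         # compound growth

print(f"GDP per person in {end_year}: ~${future:,.0f}")     # ~$56,200
print(f"Multiple of today's figure: {future / base:.1f}x")  # ~4.7x
```

At a steady 2 percent, output per person roughly quadruples every 70 years, which is why the 78-year horizon yields close to a fivefold increase.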
Considerable moral progress has been made too. MacAskill's primary example is the moral revolution, sparked by religious dissenters in the 18th century, that ended the age-old, once largely unquestioned practice of slavery.
MacAskill does not directly address the question of which social, economic, and political changes produced this vast increase in human well-being. But the recent progress in health, wealth, and morals occurred only after the rise and expansion of such liberal institutions as free speech, free markets, secure property rights, religious and cultural tolerance, and the rule of law.
MacAskill argues that longtermism requires a morally exploratory world. Specifically, humanity needs to keep its options open as much as possible to avoid what he calls "value lock-in"—"an event that causes a single value system, or set of value systems, to persist for an extremely long time." Examples would include the establishment of a world government or the rise of a dominating artificial general intelligence executing a fixed set of rules. MacAskill worries that near-term space colonization also could be a point of value lock-in. The norms, laws, and distribution of power set by the first settlers could determine who is allowed to access and use other celestial bodies, replicating current inequities far into the future.
Some of the futures MacAskill imagines are even worse than that. We could face civilizational collapse and even extinction, thanks to such possibilities as a global nuclear war, a pandemic of deadly bioengineered pathogens, or the advent of an artificial superintelligence that decides to sweep away humanity as an unnecessary irritant.
Even an apocalypse could be survivable, for the species if not for every member of it. MacAskill considers a scenario where a nuclear war kills off 99 percent of the world's population. As depressing as that prospect is, MacAskill believes the survivors could recover from such a catastrophe fairly quickly, because they would be able to take advantage of the technological and medical knowledge accumulated during the last few centuries.
A more subtle disaster is the risk of economic and technological stagnation. MacAskill firmly rejects the idea of "no-growthism" as unsustainable. He likens civilization's technological advance to a climber scaling a sheer cliff face. If the climber stops his ascent, he will grow tired of hanging on and inevitably fall to his death. But if he presses on, he will eventually reach the safety of the summit.
MacAskill is especially worried about what could happen if global population starts falling later in this century. With fewer people, there are fewer innovators working to solve problems, and progress therefore slows. "I think that the risk of technological stagnation alone suffices to make the net longterm effect of having more children positive," he says.
Stagnation, MacAskill argues, may be "one of the biggest sources of risk of extinction or permanent civilizational collapse that we face." He acknowledges that some technological breakthroughs could avert stagnation. The development of benign artificial general intelligence, for example, would enormously accelerate humanity's capacity to solve problems and create effective new technologies. MacAskill suggests that applying biotechnology to enhance people's mental capacities also could help forestall stagnation.
Longtermists also favor political experimentalism—that is, increasing cultural and intellectual diversity. MacAskill likens this prescription to John Stuart Mill's argument that individual liberty and free expression enable a beneficial intellectual and moral competition in which the best ideas win. MacAskill believes free speech is "crucial to enable better ideas to spread" and that "fairly free migration" will allow people to "vote with their feet" by fleeing unattractive societies for more attractive ones. He endorses charter cities as a possible mechanism for promoting diverse cultural, economic, and political experiments.
When MacAskill tries to imagine how a good future might unfold, one version involves people relying on reason and evidence to agree on the best possible future and then promoting that universal vision. But he suggests there may be no need for such a moral convergence. Instead, he says, we could get a future where people "have worked out their own visions of what a good life and a good society consists of and cooperated and traded in order to build a society that is sufficiently good for everyone." The result "would be a compromise among different worldviews in which everyone gets most of what they want," he writes. "Even if no one has a positive moral vision at all but just wants what's best for them, this could still result in a very good world."
Framed that way, longtermism should appeal to those of us who endorse limited government and free markets. And in fact, this vision of peaceful tolerance is not far from how emerging liberal social, political, and economic institutions have promoted economic, technological, and moral progress during the last two centuries.
A civilization spread out across millions of solar systems might last for trillions of years, MacAskill notes. Therefore, he concludes, the "future of civilization could be literally astronomical in scale, and if we can achieve a thriving, flourishing society, then it would be of enormous importance to make it so."
This article originally appeared in print under the headline "Does Longtermism Mean Liberalism?"