A seminar student once became upset with me for suggesting that instead of running get-out-the-vote campaigns, the government should keep the dates of elections and the locations of polling places secret so that only people with enough initiative and interest to ferret out the information could vote.
It was not a serious suggestion, but the college-aged young woman didn't catch that, and she thought my idea was horrible. My point, of course, was that making it easy and even socially obligatory to vote does a disservice to the general welfare.
As I've previously explained, a mass democratic system encourages voter irresponsibility. Because the consequence of any single vote is negligible, individuals have an incentive to vote on some basis other than an understanding of current issues — which would require, among other things, the costly acquisition of a grasp of basic economics. Voters, then, are free to vote their biases. This voter mentality is known as rational ignorance. If there are no benefits, but only costs, associated with acquiring information, why acquire it? (This could not be more different from decision-making in the marketplace, where people expect to bear the costs and reap the rewards of their decisions. This does not mean that such market decision-making is flawless; but it does imply that people tend to learn from their errors.)
Fans of democracy might respond that rational ignorance is no big problem, because the votes of the masses will cancel out, leaving it to the minority of informed voters to cast the margin of victory. Hence, good policies will ultimately prevail.
Except they won't. Bryan Caplan, in The Myth of the Rational Voter: Why Democracies Choose Bad Policies, shows that the informed minority would have such clout only if rational ignorance led to random error. But when it comes to economic policy, voter error is not random but systematic — that is, predominantly in one direction, namely, the antimarket direction. For example, ignorant voters are far more likely to back a protectionist candidate than a free-trade candidate. Why? Because, Caplan explains, a majority of the public harbors antimarket, antiforeign, and make-work biases. (The market's beneficial undesigned order is counterintuitive.) Again, given the negligible consequences of any one vote, no voter has an incentive to undertake the costly, unpleasant task of scrutinizing his irrational biases in search of error. It's cheaper and more comforting to vote one's biases, and there's no practical downside. Caplan calls this rational irrationality.
Under these circumstances, the informed minority won't be able to provide the margin of victory for the (relatively) promarket candidate (should there even be one). The result will be a systemic tilt toward intervention, mitigated only if officeholders betray their supporters and abstain from enacting the most egregious interventions. Caplan notes that if it weren't for economists, who overwhelmingly oppose protectionism, victorious politicians would be far worse than they are, giving the people exactly what they want.
This brings to mind H.L. Mencken's maxim that "Democracy is the theory that the common people know what they want, and deserve to get it good and hard." The problem is that the rest of us get it too.
The philosopher Michael Huemer writes that systemic political irrationality is an excellent reason to keep important matters out of the political system. A related reason is that even politicians who know better will be motivated by their passion for power to ignore their best judgment and cater to the voters' irrationality.
Another related problem, Huemer writes, is that "social theory is harder than you think." While lots of theories are available to explain any phenomenon, only one is right:
So given just the information that T is a theory, the probability that T is correct is approximately zero. However, naive thinkers have often failed to realize this, because the theories that a typical human being can think of to explain a given phenomenon (and that will seem plausible to that person) are typically very few in number. It is not that we consider the truth and reject it; in the overwhelming majority of cases, when we first start thinking about how to explain some phenomenon, the truth is not even among the options considered.
A further obstacle is confirmation bias: "when we think about a hypothesis, our natural tendency is to look for evidence supporting the hypothesis, not to look for ways of falsifying it.… When we add the fact that in most theoretical questions, people are motivated more by the desire to find some belief to cling to than by a desire for the truth, the chances of winding up with erroneous beliefs are all that much higher."
The natural sciences have managed to some extent to overcome similar problems, but the same cannot be said for the social "sciences." Huemer writes,
Some of the questions to which we need answers just seem in principle non-empirical. For instance, by what experiment can we test whether justice demands that society redistribute wealth from the rich to the poor? Other questions are difficult to investigate because of the unavailability of controlled experiments. If we want to test whether fiscal stimulus cures recessions, we cannot prepare two identical societies, with identical recessions, and then apply fiscal stimulus in one society but not the other. Nor can we take a large collection of societies with recessions and randomly assign half to receive fiscal stimulus and half to receive no fiscal stimulus. Social scientists do not have the power to experiment with societies as natural scientists can experiment with inanimate objects in their laboratories. Finally, social phenomena are vastly more complex than the phenomena studied by physicists and chemists. Societies contain thousands or millions of individual human beings interacting with each other in myriad complex ways. And each of these human beings is himself an extremely complex entity, much more complex than the typical inanimate object.
Wrong predictions thus can be explained in such a way as to leave the theory intact. This is how Paul Krugman gets away with explaining the failure of stimulus spending: the theory is fine, but not enough money was spent!
All things considered, Huemer writes, we should opt for political "passivity." This means, as they say in medicine, "do no harm."
So, no get-out-the-vote campaigns! Huemer adds,
These campaigns are a terrible idea. Most voters have no idea what is going on — they may not even know who their leaders are, and certainly do not know who is the best candidate. Imagine that someone asks you for directions to a local restaurant. If you have no idea where the restaurant is, you should not make it up. You should not tell the person some guess that seems sort of plausible to you. You should tell them you don't know and let them get directions from someone more knowledgeable. Ignorant voting is even worse than ignorant giving of directions, because voting is an exercise of political power (albeit a very small one) — to vote for a policy is not only to make a recommendation, but to request that the policy be imposed on others by force.
Urging voters to do their homework is a waste of time, Huemer writes, because most will find that task prohibitively costly and, anyway, the question "Who is the best candidate?" may have no answer. "In short, it is most plausible to say that individuals have no obligation to vote, and that if they are ill-informed (as nearly all citizens are), they are obligated not to vote."
Government, he says, should "neglect social problems" because anything it does is more likely to do harm than good. Society is complex, and "there is a kind of moral presumption against coercive interventions."
I'd translate this advice thusly: Politicians: Don't just stand there. Undo something!
This column originally appeared on the Future of Freedom Foundation.