And now, in the grand tradition of Frenchmen who explain us to ourselves, along comes Professor Piketty with a magisterial work whose painstaking empirical rigor is matched only by its vaulting theoretical ambition. Peering into centuries of income and wealth data for countries around the world, Piketty has found what he believes to be a fundamental law of capitalism: r > g.
There is an inherent tendency, he argues, for the return on capital (r) to exceed the rate of economic growth (g). As a result, the ratio of wealth to incomes rises over time with baleful consequences for the distribution of income and opportunity. The shape of things to come, Piketty warns, is a "patrimonial capitalism" in which the inherited wealth of an entrenched plutocracy dominates economic, social, and political life. An admiring Paul Krugman, writing in The New York Review of Books, proclaims that Piketty's bold thesis "amounts to a unified field theory of inequality."
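For the quantitatively inclined, the mechanism is easy to see in a stylized sketch (mine, not Piketty's model). If wealth compounds at r while national income grows at g, the wealth-to-income ratio grows at roughly r minus g per year; the sketch below assumes, unrealistically, that all capital returns are reinvested, so it illustrates the tendency rather than a forecast:

```python
# Stylized illustration of the r > g dynamic; all numbers are hypothetical.
r = 0.05    # annual return on capital
g = 0.015   # annual growth rate of national income

ratio = 4.0  # initial wealth, as a multiple of national income
for year in range(101):
    if year % 25 == 0:
        print(f"year {year:3d}: wealth/income = {ratio:5.1f}")
    ratio *= (1 + r) / (1 + g)  # the ratio compounds at roughly r - g
```

Even a modest gap between r and g compounds dramatically over a century. Piketty's actual claim is softer, since wealth holders consume part of their returns, but the direction is the same.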
Ah, but there's a catch—and Krugman, to his credit, spots it. In what Krugman calls "a sort of intellectual sleight of hand," Piketty offers an explanation for the rise of U.S. income inequality that is quite distinct from the purported relationship between r and g. "The main reason there has been a hankering for a book like this is the rise, not just of the one percent, but specifically of the American one percent," Krugman writes. "Yet that rise, it turns out, has happened for reasons that lie beyond the scope of Piketty's grand thesis."
So what has been driving the growing spread in U.S. incomes? According to Piketty, the main story has been, not the relentless accumulation of capital, but the vertiginous rise of labor incomes at the very top of the pay scale. Indeed, U.S. income trends over the past generation appear "fractal" in nature—that is, the same pattern repeats itself at progressively smaller scales. Thus, compare the top 10 percent of American earners to the other 90 percent and you'll see the high earners pulling away from the pack: Piketty's data show that their share of total income rose from below 35 percent in the 1970s to nearly 50 percent in the past decade.
But you'll see the same thing if you drill down an order of magnitude and focus just on the top decile, only this time it's the top 1 percent of earners pulling away from the rest of the top tenth. Drill down once more by comparing the top 0.1 percent of earners to the rest of the top centile and you'll see the same thing again. "Of the 15 additional points of national income going to the top decile," Piketty writes, "around 11 points, or nearly three-quarters of the total, went to 'the 1 percent' (those making more than $352,000 a year in 2010), of which roughly half went to 'the top 0.1 percent' (those making more than $1.5 million a year)."
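A quick back-of-the-envelope check, using only the figures just quoted, shows how concentrated the gains were:

```python
# Decomposition of the income shift, from the figures quoted above.
gain_top_decile = 15.0              # points of national income
gain_top_1pct = 11.0                # of which, to the top 1 percent
gain_top_01pct = gain_top_1pct / 2  # "roughly half" went to the top 0.1 percent

print(f"top 1% share of the gain:   {gain_top_1pct / gain_top_decile:.0%}")   # ~73%
print(f"top 0.1% share of the gain: {gain_top_01pct / gain_top_decile:.0%}")  # ~37%
```

In other words, something like a third of the entire shift went to one earner in a thousand.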
Although the brilliance of Piketty's empirical work is widely acknowledged, its ultimate accuracy is not beyond dispute. In particular, Cornell economist Richard Burkhauser has used a different but plausible methodology for measuring incomes and finds no rise in inequality since the 1980s. (Scott Winship of the Manhattan Institute offers a useful comparison of Piketty's and Burkhauser's methodologies and results.)
But for present purposes, let's assume Piketty's numbers are right. And if they are, then all the Sturm und Drang over rising U.S. income inequality boils down to a complaint about trends at the very tippy top of the income scale—those 150,000 or so top earners who, in any given year, make up the top 0.1 percent. (Of course there is considerable turnover in that group from year to year, and thus the specific members of the club change over time.)
Who are these people? Piketty relies on the analysis in a 2012 working paper by Jon Bakija of Williams College, Adam Cole of the U.S. Treasury Department, and Bradley Heim of Indiana University. According to their work, roughly 60 percent of the top 0.1 percent are executives, managers, and financial professionals (41 percent in non-finance, 19 percent in finance). Lawyers, doctors, and real estate developers make up another 15 percent or so, while media and sports stars constitute under 4 percent. Piketty surveys these data and concludes that "the new US inequality has much more to do with the advent of 'supermanagers' than with that of 'superstars.'"
Here then is the crux of the matter, according to Piketty: "This spectacular increase in inequality largely reflects an unprecedented explosion of very elevated incomes from labor, a veritable separation of the top managers of large firms from the rest of the population." And indeed, executive compensation has skyrocketed in recent decades: according to one measure, average compensation for CEOs has risen (in inflation-adjusted dollars) from about $1.1 million in 1970 to $10.9 million in 2011 (down from a peak of $18.2 million in 2000). Although the escalation in CEO pay is probably the most dramatic, other senior corporate executives have also experienced whopping increases in remuneration.
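For a sense of scale, those two endpoints imply a compound real growth rate in CEO pay of nearly 6 percent a year, sustained over four decades. A minimal check, using only the figures quoted above:

```python
# Implied average annual real growth of CEO pay, from the figures above.
start, end = 1.1e6, 10.9e6     # average real CEO compensation, 1970 and 2011
years = 2011 - 1970
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%} per year")  # about 5.8% per year, after inflation
```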
What accounts for this phenomenon? Piketty says there are two alternatives. One, which fairly reeks of moldy straw, is that "the skills and productivity of these top managers suddenly rose in relation to those of other workers." The other, favored by Piketty, is that "these top managers by and large have the power to set their own remuneration." "It may be excessive to accuse senior executives of having their 'hands in the till,'" Piketty writes, "but the metaphor is probably more apt than Adam Smith's metaphor of the market's 'invisible hand.'"
For Piketty, the only plausible explanation for skyrocketing executive pay is self-dealing: managers are taking advantage of weak corporate governance to benefit themselves at the expense of shareholders. This is certainly a popular view, and it has its scholarly defenders—most notably, Lucian Bebchuk and Jesse Fried at Harvard Law School. But there is one rather glaring problem with the theory: by all accounts corporate governance has improved considerably in recent decades, just as CEO pay has gone through the roof. Consequently, "none of the evidence that we have found suggests that the ability of executives to set their own pay can explain the dramatic increase in compensation over the century"—so conclude Carola Frydman of MIT and Raven Saks of the U.S. Federal Reserve in a 2010 paper that surveys trends in CEO compensation since the 1930s.
If abuse of managerial power isn't the answer, what is? The vexing fact of the matter is that nobody really knows: the long-term trends in executive compensation defy easy explanation. Increases in firm size; greater reliance on equity-based pay to better align managers' incentives with the interests of shareholders; the great bull market of 1983–2000; more intense competition for top talent as reliance on promotion from within the firm has lessened; the stimulus that lower income tax rates provided to bidding wars for top talent; adjustments for risk as executives' tenure has grown less secure; changing cultural norms about both loyalty to one's employer and the seemliness of huge pay packages; government interventions relating to compensation, with their often unintended consequences—all these factors, and others besides, have likely played a role in the story.
In the bigger picture, there doesn't seem to be anything especially distinctive about corporate executives compared to other members of the top 0.1 percent. Top lawyers and surgeons, hedge fund managers, venture capitalists, media and sports stars—all have seen comparable increases in pay.
This isn't to say that executive compensation isn't problematic. There is no obvious way to tell in advance what corporate managers are really worth, nor is there any perfect compensation structure that optimizes the incentives that executives face. Meanwhile, the stakes are large: Product market competition may provide the ultimate discipline for wayward managers; but well-designed compensation systems hold out the promise of avoiding waste on a colossal scale.
Although solving, or at least not botching, the riddle of executive compensation is important, it is nonetheless a fairly narrow and technical issue. We're talking about figuring out the appropriate level and structure of remuneration for about 100,000 positions in a 140 million-worker economy.
This is the real sleight of hand in Piketty's magnum opus: call it the incredible shrinking inequality problem. Krugman, while noting that Piketty departs from his profundities about r and g to explain U.S. inequality, treats that departure as simply inelegant—a bit of necessary ad hockery to make sense of a messy world. But the way I see it, the contrast between Piketty's main theoretical edifice and the little outbuilding he constructs to account for the United States is of much greater significance.
I call it a bait and switch—perpetrated not by the author on his audience, but by his most admiring readers on themselves. Piketty is being celebrated for supposedly demonstrating that the deep structures of capitalism tend toward ever-greater inequality. But in the United States—the most unequal of all the advanced economies—the main explanation offered for the growing gap between rich and poor is that 100,000 or so corporate managers are being overpaid. What's getting all the attention is Piketty's depiction of rising inequality as the tragic flaw at the heart of the entire capitalist economic system. But what's really going on, at least according to Piketty, is a comparatively narrow and shallow problem of corporate governance.
Getting CEO pay right is surely a challenge, but does anybody on earth think it is the defining challenge of our time?
Not So Stagnant

By the time you make it through the fashionably prolix subtitle of Tyler Cowen's provocative new e-book, you'll have the gist of his argument. In The Great Stagnation: How America Ate All the Low-Hanging Fruit of Modern History, Got Sick, and Will (Eventually) Feel Better, Cowen, a George Mason University economist, argues that since colonial times the American economy has benefited from "low-hanging fruit"—i.e., bountiful opportunities for growth. He singles out three in particular: free land, technological breakthroughs, and "smart but uneducated kids."
"Yet during the last forty years," Cowen writes, "that low-hanging fruit started disappearing, and we started pretending it was still there. We have failed to recognize that we are at a technological plateau and the trees are more bare than we would like to think. That's it. That is what has gone wrong." Cowen identifies the exhaustion of that low-hanging fruit as the main culprit behind the slowdown in growth during recent decades, rising inequality, the nastiness of present-day politics, and even the recent global financial meltdown. Looking forward, he admits the possibility that innovation and growth will pick up again, perhaps catalyzed by the rise of China and India. Cultural change would help, he argues. Specifically, his chief prescription is that we somehow raise the social status of scientists.
Cowen is hardly the first boy to cry wolf. In previous periods of deep economic distress, other prognosticators have grabbed attention by claiming that innovation and growth are at long last winding down. During the Great Depression of the 1930s, the "secular stagnationists," led by Keynesian economist Alvin Hansen, argued that falling population growth and dwindling prospects for technological progress meant that "mature" economies could combat chronic underinvestment and unemployment only through massive government spending. And during the stagflation of the 1970s, the Club of Rome and many others warned that ecological constraints were finally imposing "limits to growth." So here we are: another macroeconomic crisis, another gloomy prophet.
We should remember, however, that at the end of this story the wolf actually does come. So could Cowen be right this time?
He certainly is correct in identifying a poorly understood phenomenon of fundamental importance: Innovation and economic growth are getting harder. During the last few decades, rapid growth in the number of scientists, engineers, and researchers has not resulted in a corresponding acceleration of economic growth or new inventions. Indeed, the number of patents per researcher has been falling steadily. "In each industry the most obvious ideas are discovered first," explained Paul Segerstrom of the Stockholm School of Economics in a 1998 American Economic Review article, "making it harder to find new ideas subsequently."
Cowen is also right that a big source of relatively easy growth during the 20th century—investment in education—has been exhausted. In 1900 only 6 percent of American kids graduated high school, and only 0.25 percent went to college. The high school graduation rate peaked at roughly 80 percent in the late 1960s and has slipped a bit since, while some 40 percent of college-age kids are now in college. Moving these numbers upward without cutting standards further may be possible, but it certainly won't be easy, and in any event the biggest gains in improving educational attainment are likely behind us.
But there is another side of the coin. Yes, pursuing any particular avenue of scientific research or technological development yields diminishing returns. But it is often the case that advances in one area open up entirely new avenues of progress in others. To put it another way, we keep discovering previously hidden orchards of low-hanging fruit. The microelectronics revolution of the last half-century is a spectacular case in point: Continuing exponential growth in information processing capacity has made possible sweeping innovations in a host of industries unrelated to silicon chips. Looking ahead, exciting developments in biotechnology, nanotechnology, and artificial intelligence promise future waves of revolutionary innovation.
Furthermore, even as growth gets harder, our institutions of wealth creation have improved. The number of scientists and researchers has grown, and their tools keep getting better. American corporations have undergone wrenching restructuring in recent decades to make them more innovative and responsive to change. On the whole, government policies today are much more favorable to entrepreneurship and innovation than they were a half-century ago. And continued progress on all these fronts remains possible.
How do these countervailing forces balance out? My reading of the evidence doesn't support Cowen's sweeping historical narrative that centuries of easy progress are now behind us. Much of the force of his argument comes from contrasting America's glittering economic performance in the decades following World War II with the decidedly less impressive record in recent decades. But if you zoom out and look at the larger historical record, Cowen's "Great Stagnation" more or less disappears. And if you zoom in and examine recent trends in detail, the numbers likewise belie the claim that we have hit a "technological plateau."
Cowen correctly points out that median family income rose smartly after World War II, only to stagnate beginning in the 1970s. Per capita GDP figures reveal the same trend, albeit a little less dramatically (because of the rise in income inequality, most of the income gains have gone to higher earners). Between 1950 and 1973, the average annual growth rate of real GDP per capita was 2.5 percent; for the period between 1973 and 2007, the corresponding figure was only 1.9 percent.
But what happens when you put these figures in a larger historical context? Using calculations by the late British economic historian Angus Maddison in his 2001 book The World Economy: A Millennial Perspective, combined with U.S. Census figures for the years after World War II, we see these annual growth rates:
1820–1870: 1.3 percent
1870–1913: 1.8 percent
1913–1950: 1.6 percent
1950–1973: 2.5 percent
1973–2007: 1.9 percent
From this broader perspective, what Cowen calls the Great Stagnation looks like a return to normalcy after a Great Boom. Indeed, recent growth rates are better than those of every period before 1950. So yes, growth has cooled down since the postwar "Golden Age," and that fact poses real economic and political challenges. But the Golden Age, not our present era, is the outlier; it just doesn't make sense to talk about the present period as stagnant after centuries of easy growth.
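One way to make those rates tangible is to convert each into a doubling time for living standards. A small sketch, using the figures above:

```python
import math

# Years for real GDP per capita to double at each period's average growth rate.
rates = {"1820-1870": 0.013, "1870-1913": 0.018, "1913-1950": 0.016,
         "1950-1973": 0.025, "1973-2007": 0.019}
for period, g in rates.items():
    print(f"{period}: {math.log(2) / math.log(1 + g):4.1f} years to double")
```

At the Golden Age rate, living standards double in about 28 years; at the recent rate, in about 37. Slower, yes, but still faster than in any period before 1950, when doubling took anywhere from 39 to 54 years.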
Now let's focus on trends in recent decades—in particular, productivity growth. If we have reached a technological plateau, it should show up most clearly in a fall-off in labor productivity. But look at the growth of output per worker-hour, according to Bureau of Labor Statistics data for the nonfarm business sector:
1947–1973: 2.8 percent
1973–1995: 1.4 percent
1995–2007: 2.7 percent
Again, there was a big drop-off after the postwar boom. But then look what happened: Beginning in the mid-'90s, fueled by advances in information technology, productivity growth came roaring back, nearly equaling the record of the Golden Age. It's hard to look at these figures and conclude, with Cowen, that the trees in the orchard are becoming bare.
Granted, the productivity comeback offers no grounds for complacency. The numbers look better than the per capita GDP figures, in large part because the labor force participation rate peaked in the late '90s, fell during the dot-com bust, and only recovered to early '90s levels by 2007. (Superior growth in output per worker thus was partially canceled out by sluggish growth in the number of workers.) Meanwhile, the per capita GDP figures look better than the median income figures cited by Cowen because of the rise in inequality—that is, because income growth has been concentrated at the top of the socioeconomic ladder.
These facts point to real challenges for future growth. One reason the labor force participation rate has stalled is the aging of the population, a trend that will cause all kinds of economic and political headaches in coming years. And rising income inequality is due in significant part to slumping human capital formation. Educational attainment at both the secondary and postsecondary levels has stagnated since the '70s even as the demand for highly skilled workers has continued to climb.
I've focused on criticisms here, but I want to close by stressing what an interesting, intelligent, clearly written, and thought-provoking book this is. Growth has gotten harder, and there are mounting obstacles ahead. For pointing out these sobering facts, and doing so in such an engaging manner, The Great Stagnation deserves a wide readership.
Brink Lindsey (blindsey@kauffman.org) is a senior scholar in research and policy at the Ewing Marion Kauffman Foundation.
Where Do Libertarians Belong?

So where should libertarians drop anchor and forge alliances within the famous four-sided Nolan Chart spectrum of political beliefs and groupings? In this exchange, Contributing Editor Brink Lindsey argues that it's time, once and for all, to sever the libertarian-conservative alliance that dates back to the New Deal while remaining skeptical about the illiberal populism of Tea Party activism. In response, a conservative writer—National Review Online Editor-at-Large Jonah Goldberg—disputes Lindsey's portrayal of the right and contends that the only major party giving free market economics the time of day is the GOP. Meanwhile, FreedomWorks President Matt Kibbe tells Lindsey and his think tank fellow travelers to climb down off that high horse and celebrate the most promising limited-government popular uprising in generations.
Right Is Wrong
Libertarians need to disengage from Republicans and conservatives once and for all.
By Brink Lindsey
By the waning years of the Bush administration, the old "fusionist" alliance between libertarians and social conservatives seemed to be on its last legs. After the inglorious collapse of Social Security reform, the political agenda of the right was more or less free of any contamination by libertarian ideas. The GOP sank into ruling-party decadence marked by borrow-and-spend fiscal incontinence and K Street Project cronyism. The broader conservative movement, meanwhile, expended its energy on gay-bashing, anti-immigrant hysteria, fantasies of World War IV, meddling in the Schiavo family tragedy, and redefining patriotism as enthusiasm for mass surveillance and torture.
Now, however, opposition to Barack Obama and the Democratic Congress has sparked a resurgence of libertarian rhetoric on the right, most prominently in the "Tea Party" protests that have erupted over the past year. "Libertarian sentiment has finally gone mainstream," wrote Chris Stirewalt, political editor of the conservative Washington Examiner, in a column this April. "After two wars, a $12 trillion debt, a financial crisis and the most politically tone-deaf president in modern history, Americans may have finally given up on big government."
Such talk gets many libertarians excited. Could a revival of small-government conservatism really be at hand? After the long apostasy of Bush père et fils, could the right really be returning to the old-time religion of Goldwater and Reagan? Could the withered fusionist alliance of libertarians and conservatives channel today's popular disgust with statist excess into revitalized momentum for limited-government reform?
In a word, no. Without a doubt, libertarians should be happy that the Democrats' power grabs have met with such vociferous opposition. Anything that can stop this dash toward dirigisme, or at least slow it down, is a good thing. Seldom has there been a better time to stand athwart history and yell "Stop!" So we should rejoice that at least some conservatives haven't forgotten their signature move.
That, however, is about all the contemporary right is good for. It is capable of checking at least some of the left's excesses, and thank goodness for that. But a clear-eyed look at conservatism as a whole reveals a political movement with no realistic potential for advancing individual freedom. The contemporary right is so deeply under the sway of its most illiberal impulses that they now define what it means to be a conservative.
What are those impulses?
First and foremost, a raving, anti-intellectual populism, as expressed by (among many, many others) Sarah Palin and Glenn Beck. Next, a brutish nationalism, as expressed in anti-immigrant xenophobia (most recently on display in Arizona) and it's-always-1938-somewhere jingoism. And, less obvious now but always lurking in the background, a dogmatic religiosity, as expressed in homophobia, creationism, and extremism on beginning- and end-of-life issues. The combined result is a right-wing identity politics that feeds on the red meat of us versus them, "Real America" versus the liberal-dominated coasts, faith and gut instinct versus pointy-headed elitism.
This noxious stew of reaction and ressentiment is the antithesis of libertarianism. The spirit of freedom is cosmopolitan. It is committed to secularism in political discourse, whatever religious views people might hold privately. And it coolly upholds reason against the swirl of interests and passions. History is full of ironies and surprises, but there is no rational basis for expecting an outlook as benighted as the contemporary right's to produce policy results that libertarians can cheer about.
Groupthink and Fever Dreams
Modern conservatism has always had an illiberal dark side. Recall the first great populist spasms of the postwar right—McCarthyism and opposition to desegregation—and recall as well that National Review founder William F. Buckley stoutly defended both. Any ideology dedicated to defending traditional ways of doing things is of necessity going to appeal to the reactionary as well as the prudently conservative. And since, going all the way back to Buckley's God and Man at Yale, the right's adversary was the nation's liberal intellectual elite, conservatism has always been vulnerable to the populist temptation.
But prior to the rise of the conservative counter-establishment—think tanks, talk radio, websites, and Fox News—the right's dark side was subject to a critical constraint: To be visible at all in the nation's public debate, conservatism was forced to rely on intellectual champions whose sheer brilliance and sophistication caused the liberal gatekeepers in mass media to deem them suitable for polite company. People such as Buckley, George Will, and Milton Friedman thus became the public face of conservative ideology, while the rabble-rousers and conspiracy theorists were consigned to the shadow world of mimeographs, pamphlets, and paperbacks that nobody ever reviewed. The handicap of elite hostility thereby conferred an unintended benefit: It gave conservatism a high-quality intellectual leadership that, to some extent at least, was able to curb the movement's baser instincts.
Now, however, the discipline of having to fight intellectual battles on the opponent's turf is long gone. Conservatism has turned inward, like the dog in the joke, because it can. The result is what reason Contributing Editor Julian Sanchez has called the movement's "epistemic closure." The quality of the right's intellectual leadership—the people who set the agenda, who define what "true" conservatism means at any given time—has consequently suffered a precipitous decline. What counts today isn't engaging the other side with reasoned arguments; it's building a rabid fan base by demonizing the other side and stoking the audience's collective sense of outrage and victimization. And that's a job best performed not by serious thinkers but by hacks and hucksters. Rush Limbaugh, Glenn Beck, Sean Hannity, Mark Levin, Joseph Farah, Ann Coulter, Michelle Malkin: they adorn the cathedral of conservatism like so many gargoyles.
Yes, there are still many bright and inquisitive minds on the right, but they are not the movement's stars and they don't call the shots. On the contrary, if they stray too far in challenging the conservative id, they find themselves cast out and castigated as heretics and RINOs (Republicans In Name Only). Bruce Bartlett and David Frum (who are friends of mine) are only two of the more prominent victims of that intolerant groupthink; both were sacked by conservative think tanks shortly after loudly expressing heterodox opinions.
As the worst get on top, they bring out the worst in their loyal followers. Goaded by the conservative message machine's toxic mix of intolerance and self-pity, mass opinion on the right has veered off into feverish self-delusion. Witness the "birther" phenomenon. According to Public Policy Polling, 63 percent of Republicans either believe Obama was born in a foreign country or aren't sure one way or the other. A more recent poll by the same outfit shows that 52 percent of Republicans believe that ACORN stole the 2008 election for Obama with voter fraud, while another 21 percent are undecided. This polling outfit is closely tied to the Democrats, so take the exact numbers with some grains of salt if you wish. But it is beyond doubt that paranoia is rampant in right-wing circles these days.
The return of small-government rhetoric does not signal a break from the right's illiberal commitments. Rather, those same commitments are simply being expressed in a different way to suit the changing times. We're in the midst of a deep slump, and economic issues always come to the fore during tough times. Furthermore, Washington is now under Democratic control. When their own gang was in power, conservatives rallied "us" against a grab bag of "thems," most notably gays, Mexicans, and "Islamofascists" and their liberal "appeasers." Now the us-versus-them game has gotten much simpler. Barack Obama—Harvard-educated, left of center, the son of a foreigner, a suspected Muslim who (according to Palin) "pals around with terrorists"—pulls together all the hated "thems" in one convenient package. Opposing Obama and his agenda may sound libertarian, but it's also the perfect outlet for the same old distinctly anti-libertarian mix of populism, nationalism, and dogmatism.
Let's look in particular at the Tea Party movement, whose sudden rise is what has sparked all the talk of a fusionist revival. In April The New York Times published a detailed survey of Tea Party supporters, and the results are telling. First, this movement is definitely a right-wing phenomenon. Of those polled, 73 percent said they are somewhat or very conservative, 54 percent called themselves Republicans (compared to only 5 percent who confessed being Democrats), and 66 percent said they always or usually vote for the GOP candidate. When asked to give their opinions of various public figures, they gave favorable/unfavorable splits of 59/6 for Glenn Beck and 66/12 for Sarah Palin (though a plurality said the latter would not be an effective president). And in the single most depressing result of the whole poll, 57 percent of Tea Party supporters expressed a favorable opinion of the big-government president George W. Bush—as compared to Americans overall, 58 percent of whom gave Bush an unfavorable rating.
It should come as no surprise, then, that Tea Partiers hold distinctly unlibertarian views on a wide variety of issues. According to the Times poll, 82 percent think illegal immigration is a very serious problem, and supporters of decreasing legal immigration outnumber those who want to liberalize immigration by 42 to 14 percent. Only 16 percent favor gay marriage (compared to 39 percent of the country at large), and 40 percent call for no legal recognition of same-sex unions. Meanwhile, 77 percent support either banning abortions outright or making them more difficult to obtain.
But at least the Tea Partiers are dedicated to reining in government spending, right? After all, it's the movement's defining issue. Well, put me down as a skeptic. If you really care about restraining the growth of government, the number one priority has to be restructuring the budget-busting Medicare program. Yet during the health care debate the GOP sank to shameless demagoguery in defending Medicare's sanctity. The short-term goal was to score points against ObamaCare, but the most likely long-term effect was to make needed reforms even more difficult to achieve. And how did Tea Partiers, and movement conservatives generally, respond to this irresponsible pandering? They scarcely said boo.
Authoritarian and Unpopular
Notwithstanding the return of libertarian rhetoric, the right today is a fundamentally illiberal and authoritarian movement. It endorses the systematic use of torture. It defends unchecked presidential power over matters of national security. It excuses massive violations of Americans' civil liberties committed in the name of fighting terrorism. It supports bloated military budgets, preventive war, and open-ended, nation-building occupations. It calls for repressive immigration policies. Far from being anti-statist, it glorifies and romanticizes the agencies of government coercion: the police and the military. It opposes abortion rights. It opposes marriage equality. It panders to creationism. It routinely questions the patriotism of its opponents. It traffics in outlandish conspiracy theories. If you're serious about individual freedom and limited government, you cannot stand with this movement.
In any event, conservatism in its current incarnation looks like a political dead end. Its wildly overheated rhetoric, with cries of socialism and dark hints of impending dictatorship, alienates the moderate center of American public opinion even as it thrills the hardcore base. That base, meanwhile, is in long-term demographic decline. White, married, churchgoing, with kids—all those categories associated with a right-of-center orientation have been shrinking as a percent of the population, and all are expected to continue shrinking. In analyzing the impact of demographic change on the 2008 election, the journalist Ron Brownstein looked at six basic groups: whites with college degrees, whites without degrees, African-Americans, Hispanics, Asians, and other minorities. If each of those groups' shares of the electorate had remained unchanged since 1992, McCain would have beaten Obama by 2 percentage points instead of losing by 7.
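Brownstein's counterfactual is a straightforward reweighting exercise: hold each group's 2008 voting behavior fixed, but weight the groups by their 1992 shares of the electorate. Here is a minimal sketch of the method; the shares and margins below are illustrative placeholders, not Brownstein's actual figures:

```python
# Counterfactual election margin via demographic reweighting.
# All inputs are illustrative placeholders, not Brownstein's data.
groups = {
    #                  (1992 share, 2008 share, 2008 Dem margin in group)
    "white, degree":    (0.30, 0.35, -0.04),
    "white, no degree": (0.50, 0.39, -0.22),
    "black":            (0.08, 0.13, +0.91),
    "hispanic":         (0.04, 0.09, +0.36),
    "asian/other":      (0.08, 0.04, +0.30),
}

actual = sum(s08 * m for _, s08, m in groups.values())
counterfactual = sum(s92 * m for s92, _, m in groups.values())
print(f"margin with 2008 weights: {actual:+.1%}")          # roughly +6%
print(f"margin with 1992 weights: {counterfactual:+.1%}")  # roughly even
```

With plausible inputs, restoring the 1992 demographic weights erases most of the Democratic margin, which is precisely Brownstein's point.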
At the same time, younger Americans have decisively repudiated the contemporary right's illiberal social values. The Pew Research Center's 2007 survey of Americans aged 18–25, dubbed "Generation Next," is illustrative. Pew's polling reveals that young adults are dramatically less religious and less nationalist than their elders. Twenty percent say they are not religious, compared to only 11 percent of Americans 26 or older. They favor evolution over creationism by a 63 to 33 margin. Supporters of gay marriage in this age group narrowly outnumber opponents (47 to 46 percent), while among everyone older opponents carry the day by a 64–30 spread. Among young adults, 52 percent say immigrants strengthen our country, while 38 percent say they are a burden; by contrast, Americans 26 and up embrace the anti-immigrant view by a 42–39 margin. In the rising generation, only 29 percent agree that "using overwhelming force is the best way to defeat terrorism," while 67 percent think that "relying too much on military force leads to hatred and more terrorism." Among Americans 26 and older, though, hawks beat doves 49 to 41. God-and-country populism may still appeal to a large number of Americans (though certainly not a majority), but its future looks bleak.
Back in the Cold War, when socialism remained a living ideal and totalitarianism was a leading force in world affairs, an anti-socialist alliance between libertarians and social conservatives may have made sense. It doesn't anymore.
Does that mean I think that libertarians should ally with the left instead? No, that's equally unappealing. I do believe that libertarian ideas are better expressed in the language of liberalism rather than that of conservatism. But it's clear enough that for now and the foreseeable future, the left is no more viable a home for libertarians than is the right.
The blunt truth is that people with libertarian sympathies are politically homeless. The best thing we can do is face up to that fact and act accordingly. That means taking the libertarian movement in a new direction: attempting to claim the center of American politics. If that move were successful, ideas of a distinctly libertarian cast would define the views of a critical swing constituency that politicians on the left and right would have to compete for.
Make no mistake, though: relocating to the center would make for a very different movement than the one we've got now. The organized libertarian movement began with the goal of offering a radical alternative to conservatism and liberalism. But ever since the main vehicle of that aspiration, the Libertarian Party, fizzled into irrelevance in the 1980s, the movement has tilted heavily to the right. However much individual libertarians like to think they transcend the left-right divide, the actual operating strategy of organized libertarianism has been fusionism.
In particular, a great deal of libertarian talent and energy has gone into building a "free market" movement of organizations that focus more or less exclusively on economic issues. These organizations include fundraising groups such as the Club for Growth, activist outfits such as FreedomWorks and Americans for Prosperity, legal shops such as the Institute for Justice, and state-level think tanks such as the Mackinac Center and the Goldwater Institute. By steering clear of social issues and foreign policy, the free-market movement has shunted aside the questions that divide libertarians from conservatives and instead institutionalized the ground they seem to share.
Expressly libertarian writers have spent much more time engaging conservative audiences than reaching out to liberals. They have written more frequently for right-wing outlets such as National Review, The Washington Times, and The Wall Street Journal than for their counterparts on the left. They have regularly identified with the Goldwater-Reagan current of conservatism, notwithstanding the profound differences between that strain and libertarian thinking on a number of fronts. And they have often couched libertarian arguments in conservative terms, venerating the timeless wisdom of America's founding principles while conveniently ignoring the fact that another set of founding principles included the enslavement of blacks, subjugation of women, and expropriation of Indian lands.
Declaring independence from the right would require big changes. Cooperation with the right on free-market causes would need to be supplemented by an equivalent level of cooperation with the left on personal freedom, civil liberties, and foreign policy issues. Funding for political candidates should be reserved for politicians whose commitment to individual freedom goes beyond economic issues. In the resources they deploy, the causes they support, the language they use, and the politicians they back, libertarians should be making the point that their differences with the right are every bit as important as their differences with the left.
The first step, though, is recognizing the problem. Right now, like it or not, the libertarian movement is a part of the vast right-wing conspiracy—a distinctive and dissident part, to be sure, but a part all the same. As a result, our ideals are being tainted and undermined through guilt by association. It's time for libertarians to break ranks and stand on our own.
Contributing Editor Brink Lindsey (blindsey@cato.org) is vice president for research at the Cato Institute.
The Non-Existent Center
Disparaging conservatives is no substitute for recognizing that only the right takes economic libertarianism seriously.
By Jonah Goldberg
Brink Lindsey is both brilliant and sensible. That's part of why I admire his work so much. But I must say I find those qualities largely missing in his case for Liberaltarianism 2.0.
Under Liberaltarianism 1.0, Lindsey endeavored to forge a new fusionism between liberals and libertarians. The old alliance between conservatives and libertarians was either ill-conceived from the outset or had reached the point of diminishing returns. An "honest survey of the past half-century shows a much better match between libertarian means and progressive ends," he famously wrote in December 2006 in The New Republic (the magazine that should be blamed for the un-euphonious moniker "liberaltarian," which, alas, has stuck). Lindsey proposed "a refashioned liberalism that incorporate[s] key libertarian concerns and insights" and "make[s] possible a truly progressive politics once again."
As flawed as I thought that project was, I wished Lindsey luck in at least some of his endeavors. While I think severing the fusionist bond with conservatism would be bad for libertarians, conservatives, and the country, at the same time I would like nothing more than to see libertarians convince liberals to become less statist and less culturally bullying. Moreover, his core point had much merit: The wealth and freedom created by libertarian policies are the best means toward "progressive" (at least in his benign use of the term) ends.
But that's all moot now because under Liberaltarianism 2.0, Lindsey doesn't call for a new "lib-lib" fusionism so much as a libertarian breakaway movement whereby libertarianism fashions itself as the "new center." This new move is apparently necessary because Lindsey has realized how inhospitable progressive soil is to the flower of libertarianism. Suffused with deference to planners, reverence for the state, and a predilection for running other peoples' lives, contemporary liberalism is largely (though not entirely) liberalism in name only.
Lindsey concedes this fact in an awkward way when he writes: "I do believe that libertarian ideas are better expressed in the language of liberalism rather than that of conservatism." Which is another way of saying that liberals talk a good game about freedom, but their policies have nothing to do with it. Meanwhile, maybe Lindsey is right that the language of conservatism needs to be reinvigorated with libertarianism, but it seems to me that's exactly what the Tea Partiers he so disdains are busy doing.
Many of Lindsey's core assumptions about conservatism's relationship with libertarianism are just wrong. For starters, why should libertarianism be so hostile to culturally conservative values? Isn't libertarianism about freedom, including the freedom to live conservatively if that's what people choose? Secularism in politics is a perfectly admirable and libertarian value, but using the state to impose secularism on society is not. One gets the sense from Lindsey that the greater threat to freedom in this country comes from conservatives imposing their "benighted" religious outlook on the citizenry, rather than from the state scrubbing society of religion while imposing narrow conceptions of "diversity" on every institution and hamlet. Which worldview has more state and corporate power behind it in America today, Christianity or—for want of a better term—political correctness? Lindsey is supposed to be making the case for freedom, and yet so much of his uncharacteristically intemperate essay simply reads like he has chosen sides in the culture war and thinks that a host of political and policy questions should therefore be settled.
Not all of Lindsey's complaints about the right and the GOP are without merit, but there's so much ill-willed tendentiousness and ad hominem embedded in his description of political reality, it's hard not to conclude that his emotions have gotten the better of him. Again and again, Lindsey grabs the most convenient, negative, and often clichéd, interpretations of Tea Parties, "birthers," rightwing paranoia and the usual parade of horribles (sorry: "gargoyles") in order to make his case that libertarians need to divorce themselves from conservatives. Worse, he singles out sins of the right as if they are not also sins of the left—and libertarians as well. (I would submit that the distribution of "outlandish conspiracy theories" is fairly uniform across the ideological landscape.)
For instance, I was particularly sorry to see him buy into this "epistemic closure" nonsense. I'd strongly argue that he's simply wrong on the facts about David Frum's departure from the American Enterprise Institute. But even if he weren't, are we really to believe that the Cato Institute is more accommodating of heterodox ideas within the framework of libertarian thought? I would be curious to see how long a scholar at Cato would endure after coming out in favor of, say, socialized medicine. And pray tell, when said scholar was given the heave-ho, would Lindsey decry the "intolerant groupthink" that led to the decision? I wouldn't call that "epistemic closure," but I am at a loss as to why Lindsey wouldn't. As for Bruce Bartlett's wildly overplayed plight, it's at least worth noting that the think tank he was cut loose from could just as easily be described as libertarian as conservative. It's hardly as if the free-market National Center for Policy Analysis has ever been a bastion of social conservatism.
Lindsey's telling insinuation that the libertarian position is de facto pro–abortion rights would draw objections from those many people who describe themselves as pro-life libertarians. More practically, I think Lindsey misapprehends the "libertarianism" of actual American voters. Even if the majority of people who (accurately) describe themselves as libertarians favor legalized abortion, it is quite clearly not the case that most care about the issue very much. Meanwhile, a great many of the conservatives who are willing to vote for libertarians do care about it very much. I don't know what Brink Lindsey thinks of Ron and Rand Paul, but it is quite obvious that their political fortunes would be nil were they not pro-life. Either their popularity with conservative Republicans suggests that the right isn't nearly so hostile to libertarianism as Lindsey thinks or it means that the Pauls have sold their souls to the party of Comstockish illiberalism.
There's real merit to Lindsey's claim that the "spirit of freedom is cosmopolitan." But today's champions of cosmopolitanism are hardly champions of freedom and devotees of the quintessentially cosmopolitan libertarian Albert J. Nock. Rather, they are the transnational progressive technocrats of Davos and the U.N. who, with increasing frequency, express contempt for democratic sovereignty because the people can't be trusted to handle such problems as climate change.
Lindsey makes a perfectly fine and correct observation that libertarians—at least true-blue ones—are politically homeless. But it's worth stressing that this is not the case where it actually matters most: economics.
I am perfectly willing to concede that the GOP's free-market record has been fraught and festooned with disappointments and betrayals. But at the intellectual level, even among most of the people Lindsey describes as "gargoyles," economic libertarianism remains largely synonymous with economic conservatism. The Mount Rushmore of libertarian economics—Hayek, Friedman, Mises, Hazlitt, et al.—quite simply is the Mount Rushmore of conservative economics. Cato's economic prescriptions are respected by only one of the major political parties, and it's not the Democrats.
And yet, as a matter of practical politics, Lindsey would have libertarian spokesmen and advocates alienate conservatives in the hope that this would earn credibility with liberals. It seems far more likely that liberals would pocket libertarian attacks on the right—of the sort found in Lindsey's essay—while continuing to ignore libertarian arguments on economics and other key areas of public policy. Left-wing environmentalists will not suddenly embrace property rights because libertarians vilify the Christian Right. But the Christian Right may well stop listening to libertarians if they all started talking the way Lindsey does here.
Lastly, this talk of turning libertarianism into centrism is intriguing but no less ludicrous for it. Simply put, centrists aren't libertarians and libertarians aren't centrists. Ending the drug war is at the heart of contemporary libertarianism (and has long been the official position of the "benighted" National Review, by the way). But how does Lindsey plan on making that centrist? How will he make an open-borders immigration policy centrist? Social Security privatization? Free-market health care? I know Cato has invested heavily in arguing otherwise, but the reality is that centrists, just like almost everybody else, hold libertarian views on some issues and not on others. And many views held by libertarians simply are not centrist. Like it or not, in America, the more libertarian you are on most economic questions, the more "right wing" you are. Period. (But it is not always true that being libertarian on social issues makes you "left wing." Progressives embrace speech codes, racial quotas, state intrusions into the right of association, etc.)
If you take all of Lindsey's talk of being "centrist" and replace it with "popular," it clarifies his argument enormously. Basically, Lindsey wants full-blown libertarianism to be popular. I do too! But no amount of wordplay, poll-data-torturing, or bridge-burning will make this philosophy genuinely popular, never mind the new hinge for our two-party system. This is not an argument, it's a wish.
Wishful thinking also lurks under his claim that the right is dying away. This is not only untrue as a matter of public opinion (as of this writing, polls show women, independents, etc. moving back to the GOP), but it's untrue as a matter of policy as well. One of the main reasons conservatives have emphasized their "illiberal" policies on such issues as national security and abortion is that they are popular (even, dare I say it, centrist). Nowhere does Lindsey provide evidence that support for, say, military tribunals is unpopular, because he can't. The Obama administration has been learning this lesson the hard way. In fact, both parties have emphasized their more illiberal facades in recent years. Nonetheless, I would still dispute that the GOP is less libertarian today than it was, say, at the beginning of Bush's first term, when the libertarian-rebuking "compassionate conservatism" was all the rage.
I wish Lindsey had spent a lot less time disparaging conservatives and aping the punditry of The New York Times and more time concentrating on the philosophical argument behind Liberaltarianism 2.0. It's a fascinating topic with many avenues for agreement and disagreement. Personally, I think he has it wrong in his attitudes toward religion and social conservatism. From the founding, religion was a great engine for liberty. Our constitutional order rests on the conviction that we are endowed by our creator with certain rights. Both the abolitionist and civil rights movements were religious in nature.
As for social conservatism, I think the real way to deal with Lindsey's disdain for it is to pursue a more plausible and principled solution to the problems affecting both libertarianism and the country: federalism. As Thomas Jefferson knew, big cities will always be cosmopolitan. But there's no reason why one narrow definition of cosmopolitanism needs to be imposed across the land. Social conservatives and libertine libertarians—and some practical progressives—should be able to find common cause in a campaign that allows people to live the way they want to live in communities that reflect their values. But that is a subject for another day and, hopefully, Liberaltarianism 3.0.
Jonah Goldberg (JonahNRO@gmail.com) is editor-at-large of National Review Online and a visiting fellow at the American Enterprise Institute. He is the author of Liberal Fascism: The Secret History of the American Left from Mussolini to the Politics of Meaning (Doubleday).
Drink Your Tea
How could you not celebrate the spontaneous emergence of a decentralized movement aimed at rolling back big government?
By Matt Kibbe
I can't help but wonder what planet Brink Lindsey has been living on for the last 18 months. Lindsey's harangue against the good men and women who make up the Tea Party movement—utterly dismissive of their important work against an entrenched political establishment—seems disconnected from reality. This massive grassroots revolt against big government is the greatest opportunity that advocates of limited government have seen in generations, yet libertarian intellectuals like Lindsey seem content to sit on the sidelines and nitpick. While the Tea Party builds a whole new infrastructure to house a massive community organized in defense of individual liberty and constitutionally constrained government, Lindsey would rather quibble over the color palette of the wall tiles in the guest bathroom.
His attitude is too typical, I fear. Lindsey views the world from the rarified vantage point of someone perched in a perfectly calibrated, climate-controlled Ivory Tower. From that high up he can't possibly see what is actually happening on the ground.
Casually confusing the terms "conservative," "Republican," and "Tea Party," Lindsey borrows liberally from the left's caricature of knuckle-draggers to knock down one strawman at a time. He's made a hash of the whole thing, but I'll just make a few observations from the vantage point of someone who, as part of FreedomWorks, has been working with the Tea Party movement from its inception.
Lindsey grants some value in our opposition to government-run health care, allowing that "at least some conservatives haven't forgotten their signature move" as the Loyal Opposition to the Democrats' wild expansion of government. But where was he when this movement was being born out of principled disgust with Republican spending, with the corruption of earmarks as a source of campaign financing, and most notably in opposition to the TARP bailout? What is now called the Tea Party was forged during the first bailout, when angry citizens actually killed the first TARP proposal on the House floor by standing up and pushing back against a Republican president. We all could have used more help then, before the bill became law, opposing the most outrageous expansion of government power in my lifetime. That genie's not going back in the bottle. When it mattered most, many think tank intellectuals were scarcely seen or heard from.
Lindsey says that true libertarianism is far more "cosmopolitan" than the rabble-rousers he sees on the streets. That sounds more than a bit like a certain president I could name, a guy who wants America to be more like Europe. Lindsey even ridicules those of us who venerate "the timeless wisdom of America's founding principles." I for one hope we maintain our difference from Europe in continuing to live by the radical principles of individual rights and limits on collective government power. Is that trite? If so, I got my triteness from a guy named Howard Roark: "Our country, the noblest country in the history of men, was based on the principle of individualism, the principle of man's 'inalienable rights.' It was a country where a man was free to seek his own happiness, to gain and produce, not to give up and renounce; to prosper, not to starve; to achieve, not to plunder; to hold as his highest possession a sense of his personal value, and as his highest virtue his self-respect."
Call me provincial, but I always loved that speech. I suppose fictional characters are not serious intellectual leaders, though.
But who is, exactly? Practicing conservatism in the worst sense of the term, Lindsey pines for the days prior to the Internet and talk radio when network oligarchs and taxpayer-funded television forced the right to rely on a few "intellectual champions" of "sheer brilliance" who covered for the inelegance of the unwashed masses behind them.
Today, Lindsey worries, serious intellectuals "don't call the shots." The best of the bunch, like his friends Bruce Bartlett and David Frum, have been sacked by the enforcers of "intolerant groupthink." Bartlett, a former Reagan official, is quite popular these days in the White House and on the left because of his vocal support for a value added tax, which he defends on grounds that "the U.S. needs a money machine" to fund the spending requirements of big government. Frum, a former speechwriter for President George W. Bush, was particularly outraged by the recent vanquishing of the "perfectly good" conservative Sen. Robert Bennett (R-Utah) by the Tea Party hordes. Anticipating Bennett's defeat, state GOP delegates, mostly new to the political process, chanted "TARP, TARP, TARP!" from the convention floor. The now lame-duck senator had unapologetically voted for the Wall Street bailout, aggressively defended Senate appropriators' culture of earmarks, and introduced health care reform legislation requiring that all Americans buy government-approved health insurance.
It may be intolerant to say so, but these are all intolerable policy ideas, and the Tea Party movement isn't tolerating them.
Down here on terra firma, things look dramatically different from what Lindsey so dislikes. From my perspective, the Tea Party movement is a beautiful chaos, or as F.A. Hayek would put it, a spontaneous order. Ours is a leaderless, decentralized grassroots movement made up of people who believe in freedom, in the government not spending money it does not have, and in the specialness of our constitutional republic. They have arisen from their couches and kitchen tables and self-organized a potent countervailing force to the cozy collusion of political expediency, big government, and special interests.
One of the virtues of this decentralized world today is that citizens are no longer dependent on old-school institutions such as Congress, television networks, and even think tanks for information and good ideas. Like the Tea Party movement itself, access to information is completely decentralized by infinite sources online. Like the discovery process that determines prices in unfettered markets, these informal networks take advantage of what the philosopher Michael Polanyi called "personal knowledge." Bloggers and citizen activists on the Internet now gather these bits of knowledge and serve as the clearinghouse for the veracity of facts and the salience of good ideas.
Do Tea Partiers read? You bet they do, and with a focus and discipline fitting a people's paradigm shift away from big-government conservatism. One woman who marched in D.C. on September 12, 2009, had draped a big white banner, almost as big as she was, over the crowd control barricade. It stated, succinctly: "Read Thomas Sowell." They listen to Glenn Beck and study Saul Alinsky. They also read Rand, Friedman, and Mises. They even read the Constitution of the United States, as timeless as it is, risking the erudite wrath of their cosmopolitan betters.
The Tea Party movement, if sustained, has the potential to take America back from an entrenched establishment of big spenders, political careerists, and rent-seeking corporations. The values that animate us all—lower taxes, less government, and more freedom—form a big philosophical tent set at the very center of American politics.
Brink, you should come on down and join us. You might get your hands dirty, but the good people of the Tea Party could sure use the help.
Matt Kibbe (mkibbe@freedomworks.org) is president of FreedomWorks and co-author, with Dick Armey, of Give Us Liberty: A Tea Party Manifesto, to be published by HarperCollins in August.
The sentiment is nothing new. Political progressives such as Krugman have been decrying increases in income inequality for many years now. But Krugman has added a novel twist, one that has important implications for public policy and economic discourse in the age of Obama. In seeking explanations for the widening spread of incomes during the last four decades, researchers have focused overwhelmingly on broad structural changes in the economy, such as technological progress and demographic shifts. Krugman argues that these explanations are insufficient. "Since the 1970s," he writes, "norms and institutions in the United States have changed in ways that either encouraged or permitted sharply higher inequality. Where, however, did the change in norms and institutions come from? The answer appears to be politics."
To understand Krugman's argument, we can't start in the 1970s. We have to back up to the 1930s and '40s—when, he contends, the "norms and institutions" that shaped a more egalitarian society were created. "The middle-class America of my youth," Krugman writes, "is best thought of not as the normal state of our society, but as an interregnum between Gilded Ages. America before 1930 was a society in which a small number of very rich people controlled a large share of the nation's wealth." But then came the twin convulsions of the Great Depression and World War II, and the country that arose out of those trials was a very different place. "Middle-class America didn't emerge by accident. It was created by what has been called the Great Compression of incomes that took place during World War II, and sustained for a generation by social norms that favored equality, strong labor unions and progressive taxation."
The Great Compression is a term coined by the economists Claudia Goldin of Harvard and Robert Margo of Boston University to describe the dramatic narrowing of the nation's wage structure during the 1940s. The real wages of manufacturing workers jumped 67 percent between 1929 and 1947, while the top 1 percent of earners saw a 17 percent drop in real income. These egalitarian trends can be attributed to the exceptional circumstances of the period: precipitous declines at the top end of the income spectrum due to economic cataclysm; wartime wage controls that tended to compress wage rates; rapid growth in the demand for low-skilled labor, combined with the labor shortages of the war years; and rapid growth in the relative supply of skilled workers due to a near doubling of high school graduation rates.
Yet the return to peacetime and prosperity did not result in a shift back toward the status quo ante. The more egalitarian income structure persisted for decades. For an explanation, Krugman leans heavily on a 2007 paper by the Massachusetts Institute of Technology economists Frank Levy and Peter Temin, who argue that postwar American history has been a tale of two widely divergent systems of political economy. First came the "Treaty of Detroit," characterized by heavy unionization of industry, steeply progressive taxation, and a high minimum wage. Under that system, median wages kept pace with the economy's overall productivity growth, and incomes at the lower end of the scale grew faster than those at the top. Beginning around 1980, though, the Treaty of Detroit gave way to the free market "Washington Consensus." Tax rates on high earners fell sharply, the real value of the minimum wage declined, and private-sector unionism collapsed. As a result, most workers' incomes failed to share in overall productivity gains while the highest earners had a field day.
This revisionist account of the fall and rise of income inequality is being echoed daily in today's public policy debates. Under the conventional view, rising inequality is a side effect of economic progress—namely, continuing technological breakthroughs, especially in communications and information technology. Consequently, when economists have supported measures to remedy inequality, they have typically shied away from structural changes in market institutions. Rather, they have endorsed more income redistribution to reduce post-tax income differences, along with remedial education, job retraining, and other programs designed to raise the skill levels of lower-paid workers.
By contrast, Krugman sees the rise of inequality as a consequence of economic regress—in particular, the abandonment of well-designed economic institutions and healthy social norms that promoted widely shared prosperity. Such an assessment leads to the conclusion that we ought to revive the institutions and norms of Paul Krugman's boyhood, in broad spirit if not in every detail.
There is good evidence that changes in economic policies and social norms have indeed contributed to a widening of the income distribution since the 1970s. But Krugman and other practitioners of nostalgianomics are presenting a highly selective account of what the relevant policies and norms were and how they changed.
The Treaty of Detroit was built on extensive cartelization of markets, limiting competition to favor producers over consumers. The restrictions on competition were buttressed by racial prejudice, sexual discrimination, and postwar conformism, which combined to limit the choices available to workers and potential workers alike. Those illiberal social norms were finally swept aside in the cultural tumults of the 1960s and '70s. And then, in the 1970s and '80s, restraints on competition were substantially reduced as well, to the applause of economists across the ideological spectrum. At least until now.
Stifled Competition
The economic system that emerged from the New Deal and World War II was markedly different from the one that exists today. The contrast between past and present is sharpest when we focus on one critical dimension: the degree to which public policy either encourages or thwarts competition.
The transportation, energy, and communications sectors were subject to pervasive price and entry regulation in the postwar era. Railroad rates and service had been under federal control since the Interstate Commerce Act of 1887, but the Motor Carrier Act of 1935 extended the Interstate Commerce Commission's regulatory authority to cover trucking and bus lines as well. In 1938 airline routes and fares fell under the control of the Civil Aeronautics Authority, later known as the Civil Aeronautics Board. After the discovery of the East Texas oil field in 1930, the Texas Railroad Commission acquired the effective authority to regulate the nation's oil production. Starting in 1938, the Federal Power Commission regulated rates for the interstate transmission of natural gas. The Federal Communications Commission, created in 1934, allocated licenses to broadcasters and regulated phone rates.
Beginning with the Agricultural Adjustment Act of 1933, prices and production levels on a wide variety of farm products were regulated by a byzantine complex of controls and subsidies. High import tariffs shielded manufacturers from international competition. And in the retail sector, aggressive discounting was countered by state-level "fair trade laws," which allowed manufacturers to impose minimum resale prices on nonconsenting distributors.
Comprehensive regulation of the financial sector restricted competition in capital markets too. The McFadden Act of 1927 added a federal ban on interstate branch banking to widespread state-level restrictions on intrastate branching. The Glass-Steagall Act of 1933 erected a wall between commercial and investment banking, effectively brokering a market-sharing agreement protecting commercial and investment banks from each other. Regulation Q, instituted in 1933, prohibited interest payments on demand deposits and set interest rate ceilings for time deposits. Provisions of the Securities Act of 1933 limited competition in underwriting by outlawing pre-offering solicitations and undisclosed discounts. These and other restrictions artificially stunted the depth and development of capital markets, muting the intensity of competition throughout the larger "real" economy. New entrants are much more dependent on a well-developed financial system than are established firms, since incumbents can self-finance through retained earnings or use existing assets as collateral. A hobbled financial sector acts as a barrier to entry and thereby reduces established firms' vulnerability to competition from entrepreneurial upstarts.
The highly progressive tax structure of the early postwar decades further dampened competition. The top marginal income tax rate shot up from 25 percent to 63 percent under Herbert Hoover in 1932, climbed as high as 94 percent during World War II, and stayed at 91 percent during most of the 1950s and early '60s. Research by the economists William Gentry of Williams College and Glenn Hubbard of Columbia University has found that such rates act as a "success tax," discouraging employees from striking out as entrepreneurs.
Finally, competition in labor markets was subject to important restraints during the early postwar decades. The triumph of collective bargaining meant the active suppression of wage competition in a variety of industries. In the interest of boosting wages, unions sometimes worked to restrict competition in their industries' product markets as well. Garment unions connived with trade associations to set prices and allocate production among clothing makers. Coal miner unions attempted to regulate production by dictating how many days a week mines could be open.
MIT economists Levy and Temin don't mention it, but highly restrictive immigration policies were another significant brake on labor market competition. With the establishment of country-specific immigration quotas under the Immigration Act of 1924, the foreign-born share of the U.S. population plummeted from 13 percent in 1920 to 5 percent by 1970. As a result, competition at the less-skilled end of the U.S. labor market was substantially reduced.
Solidarity and Chauvinism
The anti-competitive effects of the Treaty of Detroit were reinforced by the prevailing social norms of the early postwar decades. Here Krugman and company focus on executive pay. Krugman quotes wistfully from John Kenneth Galbraith's characterization of the corporate elite in his 1967 book The New Industrial State: "Management does not go out ruthlessly to reward itself—a sound management is expected to exercise restraint." According to Krugman, "For a generation after World War II, fear of outrage kept executive salaries in check. Now the outrage is gone. That is, the explosion in executive pay represents a social change…like the sexual revolution of the 1960's—a relaxation of old strictures, a new permissiveness, but in this case the permissiveness is financial rather than sexual."
Krugman is on to something. But changing attitudes about lavish compensation packages are just one small part of a much bigger cultural transformation. During the early postwar decades, the combination of in-group solidarity and out-group hostility was much more pronounced than what we're comfortable with today.
Consider, first of all, the dramatic shift in attitudes about race. Open and unapologetic discrimination by white Anglo-Saxon Protestants against other ethnic groups was widespread and socially acceptable in the America of Paul Krugman's boyhood. How does racial progress affect income inequality? Not the way we might expect. The most relevant impact might have been that more enlightened attitudes about race encouraged a reversal in the nation's restrictive immigration policies. The effect was to increase the number of less-skilled workers and thereby intensify competition among them for employment.
Under the system that existed between 1924 and 1965, immigration quotas were set for each country based on the percentage of people with that national origin already living in the U.S. (with immigration from East and South Asia banned outright until 1952). The explicit purpose of the national-origin quotas was to freeze the ethnic composition of the United States—that is, to preserve white Protestant supremacy and protect the country from "undesirable" races. "Unquestionably, there are fine human beings in all parts of the world," Sen. Robert Byrd (D-W.V.) said in defense of the quota system in 1965, "but people do differ widely in their social habits, their levels of ambition, their mechanical aptitudes, their inherited ability and intelligence, their moral traditions, and their capacity for maintaining stable governments."
But the times had passed the former Klansman by. With the triumph of the civil rights movement, official discrimination based on national origin was no longer sustainable. Just two months after signing the Voting Rights Act, President Lyndon Johnson signed the Immigration and Nationality Act of 1965, ending the "un-American" system of national-origin quotas and its "twin barriers of prejudice and privilege." The act inaugurated a new era of mass immigration: Foreign-born residents of the United States have surged from 5 percent of the population in 1970 to 12.5 percent as of 2006.
This wave of immigration exerted downward pressure on the wages of native-born low-skilled workers, though most estimates suggest the effect was small. Immigration's more dramatic impact on measured inequality has come through swelling the ranks of less-skilled workers, which depresses average wages at the low end of the income distribution and thereby makes inequality appear greater. According to the American University economist Robert Lerman, excluding recent immigrants from the analysis would eliminate roughly 30 percent of the increase in adult male annual earnings inequality between 1979 and 1996.
Although the large influx of unskilled immigrants has made American inequality statistics look worse, it has actually reduced inequality for the people involved. After all, immigrants experience large wage gains as a result of relocating to the United States, thereby narrowing the gap between their earnings and those of top earners in this country. When Lerman recalculated trends in inequality to include recent immigrants, counted at the beginning of the period at their native-country wages, he found that inequality had actually decreased rather than increased. Immigration, in other words, has increased measured inequality at home but reduced it on a global scale.
Just as racism helped to keep foreign-born workers out of the U.S. labor market, another form of in-group solidarity, sexism, kept women out of the paid work force. As of 1950, the labor force participation rate for women 16 and older stood at only 34 percent. By 1970 it had climbed to 43 percent, and as of 2005 it had jumped to 59 percent. Meanwhile, the range of jobs open to women expanded enormously.
Paradoxically, these gains for gender equality widened rather than narrowed income inequality overall. Because of the prevalence of "assortative mating"—the tendency of people to choose spouses with similar educational and socioeconomic backgrounds—the rise in dual-income couples has exacerbated household income inequality: Now richer men are married to richer wives. Between 1979 and 1996, the proportion of working-age men with working wives rose by approximately 25 percent among those in the top fifth of the male earnings distribution, and their wives' total earnings rose by over 100 percent. According to a 1999 estimate by Gary Burtless of the Brookings Institution, this unanticipated consequence of feminism explains about 13 percent of the total rise in income inequality since 1979.
Racism and sexism are ancient forms of group identity. Another form, more in line with what Krugman has in mind, was a distinctive expression of U.S. economic and social development in the middle decades of the 20th century. The journalist William Whyte described this "social ethic" in his 1956 book The Organization Man, outlining a sensibility that defined itself in studied contrast to old-style "rugged individualism." When contemporary critics scorned the era for its conformism, they weren't just talking about the ranch houses and gray flannel suits. The era's mores placed an extraordinary emphasis on fitting into the group.
"In the Social Ethic I am describing," wrote Whyte, "man's obligation is…not so much to the community in a broad sense but to the actual, physical one about him, and the idea that in isolation from it'"or active rebellion against it'"he might eventually discharge the greater service is little considered." One corporate trainee told Whyte that he "would sacrifice brilliance for human understanding every time." A personnel director declared that "any progressive employer would look askance at the individualist and would be reluctant to instill such thinking in the minds of trainees." Whyte summed up the prevailing attitude: "All the great ideas, [trainees] explain, have already been discovered and not only in physics and chemistry but in practical fields like engineering. The basic creative work is done, so the man you need'"for every kind of job'"is a practical, team-player fellow who will do a good shirt-sleeves job."
It seems entirely reasonable to conclude that this social ethic helped to limit competition among business enterprises for top talent. When secure membership in a stable organization is more important than maximizing your individual potential, the most talented employees are less vulnerable to the temptation of a better offer elsewhere. Even if they are tempted, a strong sense of organizational loyalty makes them more likely to resist and stay put.
Increased Competition, Increased Inequality
Krugman blames the conservative movement for income inequality, arguing that right-wingers exploited white backlash in the wake of the civil rights movement to hijack first the Republican Party and then the country as a whole. Once in power, they duped the public with "weapons of mass distraction" (i.e., social issues and foreign policy) while "cut[ting] taxes on the rich," "try[ing] to shrink government benefits and undermine the welfare state," and "empower[ing] businesses to confront and, to a large extent, crush the union movement."
Obviously, conservatism has contributed in important ways to the political shifts of recent decades. But the real story of those changes is more complicated, and more interesting, than Krugman lets on. Influences across the political spectrum have helped shape the more competitive, more individualistic, and less equal society we now live in.
Indeed, the relevant changes in social norms were led by movements associated with the left. The women's movement led the assault on sex discrimination. The civil rights campaigns of the 1950s and '60s inspired more enlightened attitudes about race and ethnicity, with results such as the Immigration and Nationality Act of 1965, a law spearheaded by a young Sen. Edward Kennedy (D-Mass.). And then there was the counterculture of the 1960s, whose influence spread throughout American society in the Me Decade that followed. It upended the social ethic of group-minded solidarity and conformity with a stampede of unbridled individualism and self-assertion. With the general relaxation of inhibitions, talented and ambitious people felt less restrained from seeking top dollar in the marketplace. Yippies and yuppies were two sides of the same coin.
Contrary to Krugman's narrative, liberals joined conservatives in pushing for dramatic changes in economic policy. In addition to his role in liberalizing immigration, Kennedy was a leader in pushing through both the Airline Deregulation Act of 1978 and the Motor Carrier Act of 1980, which deregulated the trucking industry—and he was warmly supported in both efforts by the left-wing activist Ralph Nader. President Jimmy Carter signed these two pieces of legislation, as well as the Natural Gas Policy Act of 1978, which began the elimination of price controls on natural gas, and the Staggers Rail Act of 1980, which deregulated the railroad industry.
The three most recent rounds of multilateral trade talks were all concluded by Democratic presidents: the Kennedy Round in 1967 by Lyndon Johnson, the Tokyo Round in 1979 by Jimmy Carter, and the Uruguay Round in 1994 by Bill Clinton. And though it was Ronald Reagan who slashed the top income tax rate from 70 percent to 50 percent in 1981, it was two Democrats, Sen. Bill Bradley of New Jersey and Rep. Richard Gephardt of Missouri, who sponsored the Tax Reform Act of 1986, which pushed the top rate all the way down to 28 percent.
What about the unions? According to the Berkeley economist David Card, the shrinking of the unionized labor force accounted for 15 percent to 20 percent of the rise in overall male wage inequality between the early 1970s and the early 1990s. Krugman is right that labor's decline stems in part from policy changes, but his ideological blinkers lead him to identify the wrong ones.
The only significant change to the pro-union Wagner Act of 1935 came through the Taft-Hartley Act, which outlawed closed shops (contracts requiring employers to hire only union members) and authorized state right-to-work laws (which ban contracts requiring employees to join unions). But that piece of legislation was enacted in 1947—three years before the original Treaty of Detroit between General Motors and the United Auto Workers. It would be a stretch to argue that the Golden Age ended before it even began.
Scrounging for a policy explanation, economists Levy and Temin point to the failure of a 1978 labor law reform bill to survive a Senate filibuster. But maintaining the status quo is not a policy change. They also describe President Reagan's 1981 decision to fire striking air traffic controllers as a signal to employers that the government no longer supported labor unions.
While it is true that Reagan's handling of that strike, along with his appointments to the National Labor Relations Board, made the policy environment for unions less favorable, the effect of those moves on unionization was marginal.
The major reason for the fall in unionized employment, according to a 2007 paper by Georgia State University economist Barry Hirsch, "is that union strength developed through the 1950s was gradually eroded by increasingly competitive and dynamic markets." He elaborates: "When much of an industry is unionized, firms may prosper with higher union costs as long as their competitors face similar costs. When union companies face low-cost competitors, labor cost increases cannot be passed through to consumers. Factors that increase the competitiveness of product markets—increased international trade, product market deregulation, and the entry of low-cost competitors—make it more difficult for union companies to prosper."
So the decline of private-sector unionism was abetted by policy changes, but the changes were not in labor policy specifically. They were the general, bipartisan reduction of trade barriers and price and entry controls. Unionized firms found themselves at a critical disadvantage. They shrank accordingly, and union rolls shrank with them.
Postmodern Progress
The move toward a more individualistic culture is not unique to the United States. As the political scientist Ronald Inglehart has documented in dozens of countries around the world, the shift toward what he calls "postmodern" attitudes and values is a predictable cultural response to rising affluence and expanding choices. "In a major part of the world," he writes in his 1997 book Modernization and Postmodernization, "the disciplined, self-denying, and achievement-oriented norms of industrial society are giving way to an increasingly broad latitude for individual choice of lifestyles and individual self-expression."
The increasing focus on individual fulfillment means, inevitably, less deference to tradition and organizations. "A major component of the Postmodern shift," Inglehart argues, "is a shift away from both religious and bureaucratic authority, bringing declining emphasis on all kinds of authority. For deference to authority has high costs: the individual's personal goals must be subordinated to those of a broader entity."
Paul Krugman may long for the return of self-denying corporate workers who declined to seek better opportunities out of organizational loyalty, and thus kept wages artificially suppressed, but these are creatures of a bygone ethos—an ethos that also included uncritical acceptance of racist and sexist traditions and often brutish intolerance of deviations from mainstream lifestyles and sensibilities.
The rise in income inequality does raise issues of legitimate public concern. And reasonable people disagree hotly about what ought to be done to ensure that our prosperity is widely shared. But the caricature of postwar history put forward by Krugman and other purveyors of nostalgianomics won't lead us anywhere. Reactionary fantasies never do.
Brink Lindsey (blindsey@cato.org) is vice president for research at the Cato Institute, which published the policy paper from which this article was adapted.
Though perhaps surprising, given libertarians' historical Republican leanings, this development shouldn't be shocking, given what the last eight years of GOP rule have brought: an exploding federal budget, a hefty new entitlement, and an expansionist foreign policy. It doesn't help that McCain has been campaigning for more than a decade as a "national greatness" conservative, not as a small-government Republican in the tradition of his Senate predecessor, Barry Goldwater.
At the same time, there are plenty of reasons to worry about the prospect of an Obama presidency. The candidate who drew cheers from antiwar activists and civil libertarians by opposing the Iraq war and the PATRIOT Act from the beginning also supports an array of new economic regulations and some blurry but potentially significant tax increases. His not-exactly-pacific rhetoric about Iran and Darfur, combined with his vote for a bill granting telecommunications companies retroactive immunity for illegally assisting government surveillance, has some worried that his positions on foreign policy and civil liberties might be closer to his predecessor's than they'd like. And then there's the fact that an Obama presidency will almost certainly mean the same party controls both the White House and Congress, with eight years' worth of pent-up ambitions and long overdue favors to pay back.
reason gathered together a clutch of libertarians and fellow travelers in August and asked them to share their hopes and fears regarding an Obama presidency.
—Katherine Mangu-Ward
Virginia Postrel
Barack Obama has not run as the typical candidate, selling specific policies, a worldview, experience, or executive competence. He has instead sold himself, a glamorous icon onto whom supporters project their hopes and dreams and, in many cases, their own identities. If elected, he will have not a policy mandate but an emotional one: to make Americans feel proud of their country, optimistic about the future, and warmly included, regardless of background, in the American story.
A President Obama could deliver just the opposite. He might stumble badly abroad, projecting weakness that invites aggression (think Jimmy Carter) or involving America in a humanitarian-driven war at least as long and bloody as Iraq (think Sudan). As for inclusiveness, you can get it two ways: by respecting individual differences—however eccentric, offensive, or hard to control—or by jamming everyone into a conformist collective. Obama's New Frontier-style rhetoric has a decidedly collectivist cast. NASA is great, prizes for private space flight are stupid, and what can we make you do for your country? A guy who thinks like that will not worry about what his health care plan might do to pharmaceutical research or physicians' incentives.
Obama's campaign draws enormous power from his rhetoric of optimism—"hope," "change," and "Yes, we can." But the candidate's memoir betrays a tragic vision. In Dreams From My Father, almost everyone winds up disappointed: Obama's father, his stepfather, his grandparents, the people he meets in Chicago. Only his naive and distant mother keeps on pursuing happiness. Then she dies of cancer. He may preach hope, but Obama is not a sunny FDR or JFK. He's not a Ronald Reagan, expecting a pony in a room of manure. He assumes that any pony will have died of suffocation and worries that the horseless carriage has thrown stable hands permanently out of work. Hope is audacious because, at least in this world, it's futile and absurd. Faceless "power" is always waiting to crush your dreams.
The president's power has a face, and Obama's most fervent supporters believe he can repair the world with his face alone. Perhaps they're right, at least for the first month or two. We can only hope that he will respect the multiplicity of American dreams and the unpredictable ways in which their pursuit provides the basis for a better future.
Virginia Postrel, editor of reason from 1989 to 2000, is a contributing editor and columnist for The Atlantic Monthly. She is writing a book on glamour for The Free Press, which also published her book The Future and Its Enemies. She blogs at dynamist.com/weblog and at deepglamour.net.
Brink Lindsey
I believe that risking new mistakes is better than repeating old ones. On that basis, I feel obliged to look on the bright side of an Obama presidency. After all, I voted for George W. Bush twice, and I supported the Iraq war.
Iraq today is a complicated mess, and how best to extricate ourselves is a tough problem. I don't know how well Barack Obama would handle that problem, but at least he sees it clearly: His goal is to get us out of there. John McCain's goal, on the other hand, is to keep us there as long as possible. That fundamental difference is reason enough in my mind to root for Obama.
The Iraq fiasco was just one consequence of a deeper misjudgment: a panicky overreaction to 9/11 that inflated the real and serious threat of terrorism into an apocalyptic fantasy of World War IV. Delusions of "existential" danger lay behind the Bush administration's resort to torture and its mad claims of absolute executive power as well as its blundering botch job in Iraq. I myself suffered from such delusions in the first years after 9/11, but the accumulation of countervailing evidence eventually freed me from them. Bush, of course, has proved incurable. And McCain's case of 1938-itis is, if possible, even worse.
Obama, to his great credit, resisted the urge to panic all along. After eight years of George W. Bush and all the damage he has done to American interests and influence in the world, it is vitally important for the next occupant of the White House to be able to face a messy and dangerous world with a clear head. Only Barack Obama is equipped to do that.
Alas, when it comes to domestic policy, Obama's inclinations on spending and regulatory issues are almost uniformly wrongheaded. My hope is that circumstances will constrain him from following those inclinations very far. But in foreign affairs, where the president has a much freer hand, he is the clearly superior alternative.
Brink Lindsey is vice president for research at Cato and author of The Age of Abundance (Collins Business).
Richard A. Epstein
The Obama campaign is rich in contradictions for those who approach politics as defenders of strong property rights and limited government. On the positive side, I applaud Obama for showing a willingness to improve the procedural protections afforded to persons detained at Guantanamo Bay, and to cut back on the hostility toward immigration into the United States. And I hope that on key matters of race relations, he would be able to defuse many lingering historical resentments.
Unfortunately, on the full range of economic issues, both large and small, I fear that his policies, earnestly advanced, are a throwback to the worst of the Depression-era, big-government policies. Libertarians in general favor flat and low taxes, free trade, and unregulated labor markets. Obama is on the wrong side of all these issues. He adopts a warmed-over vision of the New Deal corporatist state with high taxation, major trade barriers, and massive interference in labor markets. He is also unrepentant in his support of farm subsidies and a vast expansion of the government role in health care. Each of these reforms, taken separately, expands the power of government over our lives. Their cumulative impact could be devastating.
My friends at the University of Chicago pooh-pooh my anxieties. They insist Obama will be a "pragmatic" president whose intelligent economic advisers will steer him far from the brink of this regulatory folly. His liberal Senate voting record leaves me no confidence in their cheery view. I wish he would back off publicly from these unwise policies. I would be thrilled if he supported dismantling even one government regulatory program. But he is, unfortunately, a prisoner of our times. The large back story of this campaign is that both parties have abandoned any consistent defense of limited government.
Richard A. Epstein is a professor of law at the University of Chicago. His books include Takings (Harvard University Press) and Simple Rules for a Complex World (Harvard University Press).
Bruce Bartlett
In researching a recent article for The New Republic about "Obamacons," I communicated with a number of libertarians and conservatives who are supporting Barack Obama for president. While the degree of their support varied, they shared some common rationales.
First, Obama is not a member of George W. Bush's party. For many, this is enough. Bush has so debased the Republican brand during the last eight years (with a lot of help from Republicans in Congress) that many people, including me, cannot conceive of voting for anyone running on the Republican line.
Second is the Iraq War. Obama's opposition to the war is sometimes equivocal, but there is no question he is more opposed to it than is his principal opponent, John McCain. While McCain does criticize Bush on the war, it is only on the grounds that the president didn't prosecute it vigorously enough.
Finally, there is the issue of change. Obama at least promises that, while McCain will simply give us four more years of what 70 percent or more of Americans are disgusted with. Some of those changes will be bad—Obama's tax and trade policies will probably be worse than McCain's—but Obama's approach to the war and civil liberties will undoubtedly be better.
Libertarians have to decide which is more important to them. But they must also consider that Congress will be overwhelmingly Democratic regardless of who wins the presidency. I think it is more likely that Obama will restrain Congress's worst instincts, as the Clinton administration often did on issues such as the North American Free Trade Agreement, than that McCain will be able to do so with nothing but a veto pen. On balance, I think there's a better chance that an Obama presidency will end up being preferable to a McCain presidency from a libertarian point of view. To put it another way, I prefer another Bill Clinton to another Gerald Ford.
Bruce Bartlett was deputy assistant secretary of the treasury for economic policy from 1988 to 1993. His books include Imposter: How George W. Bush Bankrupted America and Betrayed the Reagan Legacy (Doubleday).
Jonathan Rauch
"Barack Obama? Not a chance," I said last year, when he announced his candidacy. "Too inexperienced." The last time I was so wrong about a politician was in 1980, when I had the excuse of being 20 years old. "Ronald Reagan? No way. A simpleton."
What I misjudged about Reagan was that he was a deeply substantive man. His ideas were the most important aspect of him. With my record on Obama predictions, I hesitate to try again, but the editors of this fine publication have offered me the price of lunch chez Denny's, so here goes: Obama is the un-Reagan, inasmuch as his ideas are the least important aspect of him.
The structure of his thinking on policy appears to be entirely conventional: orthodox center-left Democrat. Back in 1992, Bill Clinton was more creative. What matters about Obama, I think, is not what he believes so much as who he is, on the upside, and which party he belongs to, on the downside.
The upside: his subtle mind, silver tongue, moderate temperament, cool deftness, and magnetic charisma. The last time we saw those traits combined was in John F. Kennedy, who I think was a good president. Kennedy gets dinged by liberals for not doing much, but that was a feature, not a bug: He was personally charismatic enough to make the country feel ably led but politically shrewd enough to avoid overreaching. If I read Obama right, he may offer a similar blend of charisma and caution. The election of a black president, opening a new chapter in America's tormented racial history, only sweetens the deal.
The downside: Obama belongs to the same party that controls Congress, and if the last 15 years have taught anything, it is to be wary of one-party government. Unified control nearly sank Bill Clinton in 1993 and 1994, and it pretty much did sink George W. Bush in 2003-2006. Obama might carry it off better in 2009-2010, but I'd be surprised. One-party rule would force the Democrats to govern from the center of their party instead of the center of the country. The natural upshot would be a leftward lurch, followed by public disgruntlement and political backlash, followed by sad talk of a second consecutive uniter who turned out to be a divider—followed, perhaps, by the realization that unifying the government divides the country.
Jonathan Rauch is a senior writer for the National Journal and a guest scholar at the Brookings Institution.
Deirdre N. McCloskey
Since I live in Chicago, and anyway am a rational economist, I'm going to vote Libertarian, as usual. After all, why throw away my vote?
But I admit to hoping very much that Obama wins, if only to punish the neocons for their presumption, and worse. One big positive of getting a fellow who taught constitutional law into the Oval Office is that he's likely (isn't he?) to restore constitutional government. The United States under Cheney/Bush stands one "terrorist" attack away from fascism. Think of a ramped-up FBI, torture of suspects, a compliant press. Come to think of it, we already have it, eh?
Obama's characteristic pose is listening. I've heard that when McCain works a room he finds out who is powerful and goes to them ("Excuse me, but there's someone over there who matters more than you"), but Obama listens in an egalitarian way. Good on him. Remember, though, that we libertarian populists had similar hopes for Jimmy Carter, and we even thought Bill Clinton was listening.
The big negative is that Obama is after all a Lake Front liberal (as Chicagoans say). Americans would look forward to higher minimum wages, higher taxes on capital gains, higher corporate taxes, and a lot of other standard-issue Democratic Party symbolic silliness in economics. I'm praying (like Barack Hussein Obama II, I'm a churchgoing Christian) that he gets a bunch of Chicago School economists to advise him. And listens. That way he won't "renegotiate" the North American Free Trade Agreement, and he might even (faint hope) try to get the Doha round restarted: You poor countries allow us to send you some stuff, and in exchange we'll drop the farm programs. As I said, faint hope.
I wish I would grow up and stop expecting presidents to do good.
Deirdre N. McCloskey teaches at the University of Illinois at Chicago. Her latest book, with Stephen Ziliak, is The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives (University of Michigan Press).
Three days earlier and 1,500 miles away, in Tulsa, Oklahoma, a very different counterculture was holding its own coming-out party. About 18,000 people—far more than the 4,000 anticipated—gathered for the formal dedication ceremonies at Oral Roberts University. Oklahoma's governor, a U.S. senator, two members of Congress, and Tulsa's mayor were on hand. Delivering the dedication address, "Why I Believe in Christian Education," was Billy Graham, the dean of American evangelists.
The events in San Francisco and Tulsa that spring revealed an America in the throes of cultural and spiritual upheaval. The postwar liberal consensus had shattered. Vying to take its place were two sides of an enormous false dichotomy, both animated by outbursts of spiritual energy. Those two eruptions of millenarian enthusiasm, the hippies and the evangelical revival, would inspire a left/right division that persists to this day.
That split pits one set of half-truths against another. On the left gathered those who were most alive to the new possibilities created by the unprecedented mass affluence of the postwar years but at the same time were hostile to the social institutions—namely, the market and the middle-class work ethic—that created those possibilities. On the right rallied those who staunchly supported the institutions that created prosperity but who shrank from the social dynamism they were unleashing. One side denounced capitalism but gobbled its fruits; the other cursed the fruits while defending the system that bore them. Both causes were quixotic, and consequently neither fully realized its ambitions. But out of their messy dialectic, the logic of abundance would eventually fashion, if not a reworked consensus, then at least a new modus vivendi.
The Summer of Love
By 1967 the San Francisco Bay Area hippie phenomenon had been incubating for several years. The Beat presence had been strong there from the days of Allen Ginsberg's debut reading of his famous poem "Howl" at the Six Gallery in 1955. And since October 1, 1964, when Jack Weinberg was arrested in Sproul Plaza on trespassing charges—he was soliciting contributions for the Congress of Racial Equality without permission—student unrest had roiled the University of California's Berkeley campus. Romantic rebelliousness was in the air, but now it took a new twist, following the mental corkscrew turns triggered by LSD.
This cultural revolution was a largely underground affair until January 14, 1967, when "A Gathering of the Tribes for the First Human Be-In" grabbed national attention. The event was conceived as a show of unity between hippies and Berkeley radicals, just a few weeks after a glimpse of that union had been seen on the Berkeley campus. At an anti-war mass meeting, a sing-along of "Solidarity Forever" had faltered because too few knew the words. Then someone broke in with the Beatles' "Yellow Submarine," and the whole room joined in.
Held on a brilliant blue-sky Saturday at the Polo Field in Golden Gate Park, the Be-In was kicked off by Ginsberg and fellow Beat poet Gary Snyder. As 20,000 people gradually filled the park, the Diggers, a radical community action group, distributed turkey sandwiches and White Lightning LSD (both donated by the acid magnate Augustus Owsley). All the big San Francisco bands played, while the Hells Angels guarded the P.A. system's generator. Yippie leader Jerry Rubin gave a speech, and drug gurus Timothy Leary and Richard Alpert both made the scene. Leary eventually made his way to the microphone and tried out his new mantra: "Turn on, tune in, drop out."
The Be-In served as a coming-out party for the Love Generation, a term coined by San Francisco Police Chief Thomas Cahill. The organizers of the Summer of Love were reacting to the Be-In's fallout, and in the process they transformed the publicity boomlet into a full-fledged sensation. By the end of the summer, some 50,000 to 75,000 kids had made the trek to San Francisco (with or without flowers in their hair). In the process, the Haight's anarchic innocence was destroyed, as the district was overrun by gawking tourists, crass opportunists, and criminal predators. Its special magic never returned; instead, it dispersed throughout the country, and a thousand sparks began to blaze.
Civil Rights and Psychedelics
The '60s counterculture had its roots in the '50s—specifically, in Beat bohemianism and the larger youth culture of adolescent rebellion. But the Beats never imagined they were the vanguard of a mass movement. "In the wildest hipster, making a mystique of bop, drugs, and the night life, there is no desire to shatter the 'square' society in which he lives, only to elude it," wrote the Beat author John Clellon Holmes.
What begat the transformation from apolitical fringe to passionately engaged mass movement? First, a mass movement requires mass—in this case, a critical mass of critically minded young people. Between 1960 and 1970, the number of Americans between the ages of 18 and 24 jumped from 16.2 million to 24.4 million. Meanwhile, as capitalism's ongoing development rendered economic life ever more technologically and organizationally complex, the demand for educated managers and professionals grew. Consequently, among the swelling ranks of college-age young people, the portion who attended college ballooned from 22.3 percent to 35.2 percent during the '60s.
With their wider exposure to history, literature, philosophy, and science, recipients of higher education were more likely to see beyond the confines of their upbringing—to question the values they were raised to accept, to appreciate the virtues of other cultures, to seek out the new and exotic. By triumphing over scarcity, capitalism launched the large-scale pursuit of self-realization. Now, by demanding that more and more people be trained to think for themselves, capitalism ensured that the pursuit would lead in unconventional directions—and that any obstacles on those uncharted paths would face clever and resourceful adversaries. In the culture as in the marketplace, the "creative destruction" of competitive commerce bred subversives to challenge the established order.
So the tinder was there. But what sparks would set it ablaze? The primary catalysts were an odd couple: the civil rights struggle and the psychedelic drug scene. Both inducted their participants into what can fairly be called religious experience.
By the middle of the 20th century, belief in racial equality was de rigueur for liberals in good standing. Yet notwithstanding liberalism's towering intellectual and political dominance, progress toward full civil rights for blacks was exasperatingly modest. Despite their frustration, most liberals saw no alternative but steady, gradual gains. But patient advocacy by white liberals wasn't what gave the cause of civil rights its irresistible momentum. What made the movement move was the decision by African Americans, beginning with the Montgomery bus boycott, to push past liberal nostrums and take matters into their own hands. Moral suasion was not enough; confrontation, nonviolent but deliberately provocative, was needed. And to steel themselves for the struggle, African Americans called on sources of strength more profound than Gunnar Myrdal–style social science empiricism.
Black churches were therefore indispensable to the movement's success, not just because they provided organization and fostered solidarity but because the simple, powerful faith they propounded gave ordinary people the heart to do extraordinary things. Even those who lacked the consolation of literalist faith still found some lifeline beyond reason to cling to.
The resulting defiance was sublime in its absolute audacity. Protesters took the truly radical step of acting as if segregation did not exist—ordering lunch, getting on the bus, signing up to vote as if Jim Crow were already gone. With a movement grounded in such extreme commitment, religiosity was always in the air. Marches, stately and solemn, were redolent of religious ritual; beatings, jailings, water-cannon dousings, tear-gassings, and killings sanctified the movement by providing it with martyrs.
For America's liberal-minded young, the prophetic grandeur of the civil rights movement was electrifying. Many joined the movement; many more were inspired to take up other causes and make their own stands. "Without the civil rights movement, the beat and Old Left and bohemian enclaves would not have opened into a revived politics," concluded Todd Gitlin, a leader of Students for a Democratic Society, the premier organization of the student New Left.
While the civil rights movement fired young minds with the possibilities of prophetic dissent, the emerging drug scene was blowing those minds with visions of mystical experience. Marijuana, which grew in popularity with the spread of the bohemian subculture during the '50s, served as the chemical gateway. Heightening sensory pleasures and lubricating free-associative thinking, it fit perfectly with the Beat cult of intense experience. Under its influence, consciousness seemed to expand; aggression melted away, and shared wonder and laughter took its place.
Psychedelic drugs, meanwhile, took consciousness expansion to an entirely new level. The phantasmagoric hallucinations they induced frequently led people into the realm of religious experience, and many of the leading lights of psychedelic culture, including Leary and Alpert, interpreted and sold the psychedelic experience that way. (Alpert eventually changed his name to the Hindu-derived Baba Ram Dass.)
Both the civil rights movement and the drug culture were outgrowths of mass affluence. In a society devoted to self-expression and personal fulfillment, African Americans found their second-class status intolerable and latched onto resistance as their path to self-realization. Their efforts succeeded in large part because one product of technological abundance—television—carried their struggle into America's living rooms. Meanwhile, the newly unrestrained pursuit of happiness led ineluctably to the pursuit of broadened experience, including the experience of altered states of consciousness. What made increasing numbers of young people eager to try drugs, and receptive to their pleasures, was the cultural shift wrought by the triumph over scarcity.
The struggle for civil rights showed that rapid social progress was possible, that entrenched evil could be uprooted, that social reality was more fluid than imagined, and that collective action could change the world. Likewise, pot and psychedelics revealed wildly different visions of reality from the "straight" one everybody took for granted. If our most basic categories of experience could be called into question, so could everything else.
Guided into those transcendent realms, many young and impressionable minds were set aflame with visions of radical change. One assault after another on conventional wisdom and authority gained momentum. Anti-war protesters, feminists, student rebels, environmentalists, and gays all took their turns marching to the solemn strains of "We Shall Overcome"; all portrayed themselves as inheritors of the legacy of Montgomery and Birmingham and Selma. And the scent of marijuana wafted around all their efforts.
The Counter-Counterculture
The quest for wider horizons and the fulfillment of higher needs, so exuberantly pursued during the '60s, relied on mass affluence, which was achieved and sustained only by a vast mobilization of social energies through an intricate division of labor. There could be no counterculture without capitalism. And capitalism requires discipline, deferred gratification, abstract loyalties, impersonal authority, and the stress of competition. With its hostility to the system that brought it into being, the counterculture created an opening for hostile worldviews that allied themselves with capitalism's titanic power. Conservative Protestantism took advantage of the opportunity and reclaimed a place on society's center stage.
The evangelical revival was the unlikeliest of comeback stories. In the middle years of the 19th century, the bourgeois Protestant worldview had enjoyed unquestioned cultural primacy and matchless self-confidence. The ensuing decades, however, hammered America's old-time religion with setback after setback. Darwin and German higher criticism shook belief in biblical inerrancy; mass immigration filled the country with rival faiths; urbanization bred cesspools of sin and temptation.
Yet the old-time religion did not die. In the South, in small towns and rural areas, among the less educated, the flame still burned. Shaking off their well-earned pessimism, a new generation of conservative religious leaders worked to rebuild dogmatic Protestantism as an active force in American life. Dissociating themselves from the now pejorative term fundamentalist, they called themselves evangelicals. On doctrine, the evangelicals toed the fundamentalist line. In their posture toward the outside world, however, they differed dramatically. Fundamentalists hunkered down in a defensive crouch, refusing any association with mainline denominations. The new evangelicals were intent on expansion and outreach. Thus, when the National Association of Evangelicals was founded in 1942, it adopted as its motto "cooperation without compromise."
Evangelicals built up an entire parallel cultural infrastructure—a counterculture by any other name. One landmark was Billy Graham's 1957 crusade in New York City's Madison Square Garden. Kicking off on May 15 and running through September 2, the campaign attracted more than 2 million attendees, with 55,000 recorded "decisions for Christ." In June, ABC began televising Graham's Saturday night services live. Millions tuned in.
Evangelicals retooled their message to appeal to the unconverted, and they constructed a robust network of churches and parachurch institutions where believers could coalesce into a thriving community. Yes, they remained outsiders, looked down upon when not ignored by the nation's metropolitan elites. Only Graham, with his immense charisma and political skills, was a fully mainstream figure. Nevertheless, evangelicals were now a mass movement on the move. Though scorned by the cultural elite, they had consolidated their position in the nation's most economically dynamic region, and therefore the fulcrum of political change in the ensuing decades: the Sunbelt.
Conservative proselytizing found a receptive audience as countercultural chaos erupted around the country. Among what became known as the "great silent majority," including many Americans who considered themselves good liberals during the '50s, Aquarius and its tumults seemed like an outbreak of mass insanity. How could the most privileged children in history reject everything their parents held dear? The mainline Protestant denominations had thrived as bulwarks of the postwar liberal ascendancy, but they faltered in the face of the Aquarian challenge. The 1964 slogan for the evangelicals' bête noire, the ecumenical and progressive World Council of Churches, summed up the situation: "The world must set the agenda for the church." People who believed the world was going to hell thought that slogan had things precisely backward.
For Americans anxious to defend their way of life against cultural upheaval, evangelicalism provided the resources with which to make a stand. It imbued believers with a fighting faith, granting them access to the same kind of energies that animated the romantic rebellion—energies found only in the realms beyond reason. Exuberant worship, regular prayer, and belief in prophecy and present-day miracles were the spiritual fortifications that could stymie the radical onslaught.
Evangelicals vs. Aquarians
The audacious idea of founding a university had come to Oral Roberts in the middle of dinner with a young Pat Robertson. Roberts began scribbling on a napkin—not his own words, he believed, but words straight from God. "Raise up your students to hear My voice, to go where My light is dim," his inner voice instructed, "where My voice is small and My healing power is not known. To go even to the uttermost bounds of the earth."
In 1947 Roberts, who believed he had been healed of youthful tuberculosis directly by God via a faith healer, was a minister with his own little Pentecostal Holiness church in Enid, Oklahoma. He felt frustrated and trapped as a dirt-poor, small-town preacher with a pleasant but complacent congregation. One harried morning he picked up his copy of the Good Book, and his eyes fell on III John 1:2: "I wish above all things that thou mayest prosper and be in health, even as thy soul prospereth." In an instant, it changed his whole understanding of God. God is good, Roberts now saw: God wants us to be healthy; God wants us to succeed; God wants us to be rich!
Roberts achieved great success as a revivalist and faith healer—which is to say, he became a central figure in a marginal movement. But his ministry transcended Pentecostalism's lowly origins. Not content with success as a traveling tent preacher, he built a far-flung empire of evangelical outreach, complete with television and radio programs, magazines, newspaper columns, even comic books. In 1967, as he was being sworn in as president of the university he built from scratch, Roberts knew he had brought his upstart faith into the American mainstream. There to pay their respects were not just government officials but representatives of 120 of the nation's colleges and universities.
Roberts' rapid ascent was only one spectacular example of the larger evangelical uprising. Between 1965 and 1975, while mainline denominations were shriveling, membership in the Church of the Nazarene increased by 8 percent. The Southern Baptists grew by 18 percent, and membership in the Seventh-Day Adventists and Assemblies of God leapt by 36 percent and 37 percent, respectively. Newsweek declared 1976 "the year of the evangelical" as Jimmy Carter, who identified himself as one, took the presidency. A Gallup poll that same year asked Americans, "Would you describe yourself as a 'born-again' or evangelical Christian?" More than a third said yes.
There is no point in mincing words: The stunning advance of evangelicalism marked a dismal intellectual regress in American religion. With its lapse into crude superstition and magical thinking, its credulous vulnerability to charlatans, its dangerous weakness for apocalyptic prophecy (see the massive popularity of the best-selling nonfiction book of the '70s, evangelical Hal Lindsey's The Late, Great Planet Earth), and its blatant denial of scientific reality, resurgent conservative Protestantism entailed a widespread surrender of believers' critical faculties. The celebration of unreason on the left had met its match on the right.
But having beaten their intellectual retreat, evangelicals summoned up the fortitude to defend a cultural position that was, to a considerable extent, worth defending. In particular, they upheld values that, after the Sturm und Drang of the '60s and '70s subsided, would garner renewed appreciation across the ideological divide: committed family life, personal probity and self-restraint, the work ethic, and unembarrassed American patriotism.
By no means were the evangelicals purely reactionary. Take race relations. Although many of them hailed from the South, the leaders of the evangelical revival dissented from the reigning regional orthodoxies of white supremacy and segregation. For years Billy Graham had waffled on race, but after the Supreme Court rejected school segregation in the 1954 case Brown v. Board of Education, he refused to tolerate segregated seating at his crusades. In his breakthrough 1957 crusade at Madison Square Garden, Graham invited Martin Luther King to join him on the podium, introducing him as one of the leaders of "a great social revolution" afoot. Graham was not alone. The Southern Baptist Convention strongly endorsed Brown and called for peaceful compliance. Pentecostalism, meanwhile, had begun as an integrated movement, led by the son of slaves.
Most important, evangelicalism aligned Christian faith with the Holy Grail of the affluent society: self-realization. Unlike the classic bourgeois Protestantism of the 19th century, whose moral teachings emphasized avoidance of worldly temptation, the revitalized version promised empowerment, joy, and personal fulfillment. A godly life was once understood as grim defiance of sinful urges; now it was the key to untold blessings. "Something good is going to happen to you!" was one of Oral Roberts' favorite catchphrases.
The New Synthesis
The evangelicals' therapeutic turn, like that of the counterculture, moved with currents of psychic need sprung loose by mass affluence. Indeed, the two opposing religious revivals overlapped. The Jesus Freaks, or Jesus People, emerged out of the hippie scene in the late '60s, mixing countercultural style and communalism with evangelical orthodoxy. As the hippie phenomenon faded in the '70s, many veterans of the Jesus Movement made their way into the larger, socially conservative evangelical revival.
The peculiar career of Arthur Blessitt illustrates evangelicalism's debt to the cultural left. In the late '60s, Blessitt hosted a psychedelic nightclub called His Place on Hollywood's Sunset Strip, an establishment whose logo combined a cross and a peace sign. "Like, if you want to get high, you don't have to drop Acid. Just pray and you go all the way to Heaven," Blessitt advised in his tract Life's Greatest Trip. "You don't have to pop pills to get loaded. Just drop a little Matthew, Mark, Luke, or John." In 1969 Blessitt began his distinctive ministry of carrying a 12-foot-tall cross around the country—and, later, around the world. On one of his countless stops along the way, at an April 1984 meeting in Midland, Texas, he received word that a local oilman, the son of a prominent politician, wanted to see him privately. The businessman told Blessitt that he was not comfortable attending a public meeting but wanted to know Jesus better and learn how to follow him. Blessitt gave his witness and prayed with him. The man, George W. Bush, subsequently converted to evangelical Christianity.
Evangelicals and Aquarians were more alike than they knew. Both sought firsthand spiritual experience; both believed that such experience could set them free and change their lives; both favored emotional intensity over intellectual rigor; both saw their spiritual lives as a refuge from a corrupt and corrupting world. That last point, of course, was subject to radically different interpretations. Aquarians rejected the establishment because of its supposedly suffocating restrictions, while the evangelicals condemned its licentious, decadent anarchy. Between them, they left the social peace of the '50s in ruins.
That peace deserved to be disturbed. Its cautious, complacent liberalism was ill-suited to coping with the emerging conflicts of mass prosperity. It frustrated the aspirations of blacks, of women, and of the affluent young. It suppressed and distorted economic energies by throttling competition. Its spiritual life tended to the bland and shallow.
But no new, improved social consensus emerged to replace the one that collapsed. Instead, with the culture wars and division between "red" and "blue" America, our ideological categories and allegiances continue to perpetuate the warring half-truths of the great spiritual upheavals of the '60s. Yet despite this confusion, a new modus vivendi has managed to emerge that contains within tolerable bounds the ideological dissatisfactions of both the countercultural left and the religious right.
As liberal dominance was shaken by successive blows of social and economic turmoil in the 1960s and '70s, a New Right energized by the evangelical counter-counterculture seized the opening and established conservatism as the country's most popular political creed by the '80s. Yet the conservative triumph was steeped in irony. Capitalism's vigor was restored, and the radical assault on middle-class values was repulsed. But contrary to the hopes of the New Right's traditionalist partisans, shoring up the institutions of mass affluence did not, and could not, bring back the old cultural certainties.
Instead, a reinvigorated capitalism brought with it a blooming, buzzing economic and cultural ferment that bore scant resemblance to any nostalgic vision of the good old days. This was conservatism's curious accomplishment: Marching under the banner of old-time religion, it made the world safe for the secular, hedonistic values of Aquarius.
The resulting cultural synthesis that prevails today, this accidental by-product of ideological stalemate, remains nameless. It could be called liberal, in the larger sense of the tradition of individualism and moral egalitarianism that America has always embodied. It could also be called conservative, if that same liberal tradition is understood to be the object of conservation. But the ideologies that pass for liberalism and conservatism today are too weighed down with authoritarian elements for either to lay claim to the real American center. Since American society today is committed to a much wider scope for both economic and cultural competition than was allowed before the '60s erupted, it makes most sense to call that center libertarian.
10 Truths About Trade
But the recent scare about "offshoring" is just the latest twist on an inaccurate, decades-old complaint that global trade is stealing jobs and causing a "race to the bottom" in which corporations relentlessly scour the world for the lowest wages and most squalid working conditions. China and India have replaced 1980s Japan and 1990s Mexico as the most feared foreign threats to U.S. employment, and the old fallacy of job scarcity has once again reared its distracting head.
The truth is cheerier. Trade is only one element in a much bigger picture of incessant turnover in the American labor market. Furthermore, the overall trend is toward more and better jobs for American workers. While job losses are real and sometimes very painful, it is important—indeed, for the formulation of sound public policy, it is vital—to distinguish between the painful aspects of progress and outright decline.
Toward that end, and to counter protectionist "analysis" masquerading as fact, here are 10 core truths about global trade and American jobs.
As Figure 1 shows vividly, the total number of jobs in the American economy is first and foremost a function of the size of the labor force. As the population grows, the number of people in the work force grows; then market forces absorb that supply and deploy labor to different sectors of the economy.
Consider all the major events that have increased the supply of labor during the last half-century: the baby boom, the surge in work force participation by women, and rising rates of immigration after decades of restrictionist policies. Consider as well the key developments that have slashed demand for certain kinds of labor: the growing competitiveness of foreign producers and falling U.S. barriers to imports; the shift by American companies toward globally integrated production and the consequent relocation of many operations overseas; the deregulation of the transportation, energy, and telecommunications industries and the wrenching restructuring that followed; and, most important, the many waves of labor-saving technological innovations, from the containerization that replaced longshoremen to the dial phones that replaced switchboard operators to the factory-floor robots that replaced assembly-line workers to the automatic teller machines that replaced bank tellers.
Yet in the face of all this flux, no chronic shortage of jobs has ever materialized. Over those tumultuous five decades, a growing economy and functioning labor markets were all that was needed to accommodate huge shifts in labor supply and demand. Now and in the future, sound macroeconomic policies and continued flexibility in labor markets will suffice to generate increasing employment, notwithstanding the rise of China and India and the march of digitization.
The steady increase in total employment masks the frenetic dynamism of the U.S. labor market. Gross changes—total new positions added, total existing positions eliminated—are much greater in magnitude. Large numbers of jobs are being shed constantly, even in good times. Total employment continues to increase only because even larger numbers of jobs are being created.
According to economist Brad DeLong, a weekly figure of 360,000 new unemployment insurance claims is actually consistent with a stable unemployment rate. In other words, when the unemployment rate holds steady—that is, total employment grows fast enough to absorb the ongoing increase in the labor force—some 18.7 million people will lose their jobs and file unemployment insurance claims during the course of a single year. Meanwhile, even more people will get new jobs.
More detailed and dramatic evidence of job turnover can be found in Table 1. According to data compiled by the Department of Labor's Bureau of Labor Statistics, total private-sector employment rose by 17.8 million between 1993 and 2002. To produce that healthy net increase, a breathtaking total of 327.7 million jobs were added, while 309.9 million jobs were lost. In other words, for every one net new private-sector job created during that period, 18.4 gross job additions had to offset 17.4 gross job losses.
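Both sets of figures are easy to sanity-check with back-of-the-envelope arithmetic. The following short Python sketch, offered here as an illustration rather than as part of the original analysis, reproduces them using only the numbers cited above:

# Sanity check of the labor-market churn figures cited above.
# All inputs come from the article; no outside data is used.
weekly_claims = 360_000                  # DeLong's steady-state weekly UI claims
annual_claims = weekly_claims * 52
print(round(annual_claims / 1e6, 1))     # 18.7 (million claims per year)

gross_added = 327.7                      # jobs created 1993-2002, millions (BLS)
gross_lost = 309.9                       # jobs eliminated over the same span, millions
net_gain = gross_added - gross_lost
print(round(net_gain, 1))                # 17.8 (million net new jobs)
print(round(gross_added / net_gain, 1))  # 18.4 gross additions per net new job
print(round(gross_lost / net_gain, 1))   # 17.4 gross losses per net new job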
In light of those facts, it is impossible to give credence to claims that job losses in this or that sector constitute a looming catastrophe for the enormous and dynamic U.S. economy as a whole. It is as inevitable that some companies and industries will shrink as it is that others will expand. Localized challenges and problems should not be confused with national crises.
The ongoing growth in total employment is frequently dismissed on the ground that most of the new positions being created are low-paying, dead-end "McJobs." The facts show otherwise.
Managerial and specialized professional jobs have grown rapidly, nearly doubling between 1983 and 2002, from 23.6 million to 42.5 million. These challenging, high-paying positions have jumped from 23.4 percent of total employment to 31.1 percent.
And these high-quality jobs will continue growing in the years to come. According to projections for 2002-12 prepared by the Bureau of Labor Statistics, management, business, financial, and professional positions will grow from 43.2 million to 52 million, increasing from 30 percent of total employment to 31.5 percent.
Opponents of open markets frequently claim that unshielded exposure to foreign competition is destroying the U.S. manufacturing base. That charge is flatly untrue. Figure 2 sets the record straight: Between 1980 and 2003, American manufacturing output climbed a dizzying 93 percent. Yes, production fell during the recent recession, but it is now recovering: the industrial production index for manufacturing rose 2.2 percent in 2003.
It is true that manufacturing's share of gross domestic product has been declining gradually over time—from 27 percent in 1960 to 13.9 percent in 2002. The percentage of workers employed in manufacturing likewise has been falling, from 28.4 percent to 11.7 percent during the same period. But the primary cause of these trends is the superior productivity of American manufacturers. As shown in Figure 3, output per hour in the overall nonfarm business sector rose 50 percent between 1980 and 2002; by contrast, manufacturing output per hour shot up 103 percent. In other words, goods are getting cheaper and cheaper relative to services. Since this faster productivity growth has not been matched by a corresponding increase in demand for manufactured goods, the result is that Americans are spending relatively less on manufactures. Accordingly, manufacturing's shrinking share of the overall economy is actually a sign of American manufacturing prowess.
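To put a rough number on that relative cheapening, here is an illustrative calculation. It rests on the simplifying assumption that unit prices fall in proportion to productivity gains, which is a deliberate compression of the article's argument, and it suggests roughly a one-quarter decline in the price of manufactures relative to overall output:

# Illustrative sketch only: assumes prices move inversely with productivity,
# a simplifying assumption not stated in the article itself.
nonfarm_growth = 0.50   # nonfarm business output per hour, 1980-2002 (article)
mfg_growth = 1.03       # manufacturing output per hour, same period (article)
relative_price = (1 + nonfarm_growth) / (1 + mfg_growth)
print(round(relative_price, 2))   # 0.74, i.e., roughly a 26 percent relative decline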
Exactly the same phenomenon has played out over a longer period in agriculture. In 1870, 47.6 percent of total employment was in farming. By 2002 the figure had fallen to 1.7 percent. In the future, manufacturing will in all likelihood continue down the trail blazed by agriculture. People who bemoan this prospect don't recognize economic progress when they see it.
International trade has had only a modest effect on manufacturing's declining share of the economy. It is true that imports displace some domestic production. On the other hand, exports boost sales for American manufacturers. The U.S. has been running a manufacturing trade deficit in recent years, but even if trade had been in balance between 1960 and 2002 the manufacturing share of GDP still would have fallen sharply, down to an estimated 16 percent (as opposed to the actual 13.9 percent). Innovation creates a steady, relentless drop in manufacturing's share of economic activity.
Employment in the manufacturing sector has taken a beating in recent years. Between 1965 and 1990, the total number of manufacturing jobs fluctuated in a stable band between 16 million and 20 million; during the 1990s, the upper limit dropped to around 18 million; but between July 2000 and October 2003 jobs plummeted 16 percent, from 17.32 million to 14.56 million.
Although the losses have been severe, the charge that those jobs were eliminated by foreign competition simply doesn't square with the facts. As shown in Table 2, manufacturing imports rose only 0.6 percent between 2000 and 2003. By contrast, manufacturing exports fell by 9.6 percent. In other words, during this period the drop in exports accounted for 91 percent of the growth in the manufacturing trade deficit.
Accordingly, imports played at best a trivial role in the recent sharp decline in manufacturing employment. The main culprit was the worsening domestic market for manufactures during the recent recession—in particular, a big drop in business investment. Between the fourth quarter of 2000 and the third quarter of 2002, total fixed nonresidential investment fell by 14 percent. Looking abroad, it was softening overseas markets, much more than stiffening import pressure, that added further downward pressure on domestic manufacturing jobs. Consequently, anti-trade activists who cite manufacturing job losses as a reason to turn away from trade liberalization couldn't be more wrong. Expanding overseas markets and commercial opportunities for American exporters would be a shot in the arm for manufacturing employment.
In recent months, historical fears about vanishing manufacturing jobs have been compounded by growing anxiety about trade-related job losses in the service sector. Advances in information and communications technologies now make it possible for many jobs—from customer service calls to software development—to be performed anywhere.
In particular, the offshoring of information technology (I.T.) jobs to India and other low-wage countries has received a flurry of attention. According to a survey of hiring managers conducted by the Information Technology Association of America, 12 percent of I.T. companies already have outsourced some operations abroad. As for future trends, Forrester Research predicted in a widely cited study that 3.3 million white-collar jobs—including 1.7 million back-office positions and 473,000 I.T. jobs—will move overseas between 2000 and 2015.
Adding to the fear, I.T. employment has experienced a significant recent decline. In 2002, according to the Department of Commerce, the total number of I.T.-related jobs stood at 5.95 million, down from a 2000 peak of 6.47 million. Although some of those jobs were lost because of offshoring, the major culprits were the slowdown in demand for I.T. services after the Y2K buildup, followed by the dot-com collapse and the broader recession. Moreover, it should be remembered that the recent drop in employment took place after a dramatic buildup. In 1994, 1.19 million people were employed as mathematical and computer scientists. By 2000 that figure had jumped to 2.07 million—a 74 percent increase. As of 2002, the figure had decreased only slightly to 2.03 million, still 71 percent higher than in 1994.
Despite the trend toward offshoring, I.T.-related employment is expected to see healthy increases in the years to come. According to Department of Labor projections, the total number of jobs in computer and mathematical occupations will jump from 3.02 million in 2002 to 4.07 million in 2012—a 35 percent increase. Of the 30 specific occupations projected to grow fastest during those 10 years, seven are computer-related. (See Figure 4 for the fastest-growing computer-related occupations.) Thus, the recent downturn in I.T. is likely only a temporary break in a larger trend of robust job growth.
The wild claims that offshoring will gut employment in the I.T. sector are totally at odds with reality. I.T. job losses projected by Forrester amount to fewer than 32,000 per year—relatively modest attrition in the context of 6 million I.T. jobs. These losses, meanwhile, will be offset by newly created jobs as computer and mathematical occupations continue to boom. The doomsayers are confusing a cyclical downturn with a permanent trend.
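The figure of fewer than 32,000 losses per year follows directly from the Forrester projection cited earlier, as a quick calculation (again, an illustration rather than the article's own work) confirms:

# Forrester projects 473,000 I.T. jobs moving offshore between 2000 and 2015.
projected_losses = 473_000
years = 15
annual_losses = projected_losses / years
print(round(annual_losses))                      # 31533, under the 32,000 cited
it_jobs = 6_000_000                              # approximate total I.T. employment (article)
print(round(100 * annual_losses / it_jobs, 2))   # 0.53, i.e., about half a percent a year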
Offshoring of I.T. services to India and elsewhere has been made possible by ongoing advances in computer and communications technologies. If those advances indeed pose a threat to domestic I.T. services industries, then it should be possible to trace the emergence of that threat in trade statistics, since offshoring registers as an increase in services imports.
Yet the fact is that the U.S. runs a trade surplus precisely in the I.T. services most directly affected by offshoring. In the categories of "computer and data processing services" and "database and other information services," American exports rose from $2.4 billion in 1995 to $5.4 billion in 2002, while imports increased from $0.3 billion to $1.2 billion. Thus, the U.S. trade surplus in these services has expanded from $2.1 billion to $4.2 billion.
Meanwhile, the same technological advances that have given rise to offshoring are facilitating the international provision of all kinds of services—banking, accounting, legal assistance, engineering, medicine, and so on. The United States is a major exporter of services generally and runs a sizable trade surplus in services. In 2002, for example, service exports accounted for 30 percent of all U.S. exports and exceeded service imports by $64.8 billion. Accordingly, the increasing ability to provide services remotely is a commercial boon to many U.S.-based service industries. Although some jobs are doubtless at risk, the same trends that make offshoring possible are creating new opportunities, and new jobs, throughout the domestic economy.
Although offshoring does eliminate jobs, it also yields important benefits. To the extent that companies can reduce costs by shifting certain operations overseas, they are increasing productivity. The process of competition ultimately passes the resulting cost savings on to consumers, which then spurs demand for other goods and services. Whether caused by the introduction of new technology or by new ways to organize work, productivity increases translate into economic growth and rising overall living standards.
In particular, offshoring encourages the diffusion of I.T. throughout the American economy. According to Catherine Mann at the Institute for International Economics, globalized production of I.T. hardware—that is, the offshoring of computer-related manufacturing—has accounted for 10 percent to 30 percent of the drop in hardware prices. The resulting increase in productivity encouraged the rapid spread of computer use and thereby added some $230 billion in cumulative additional GDP between 1995 and 2002.
Offshoring offers the potential to take a similar bite out of prices for I.T. software and services. Those price reductions will promote the further spread of I.T. and new business processes that take advantage of cheap technology. As Mann notes, health services and construction are two large and important sectors that today feature low I.T. intensity (as measured by I.T. equipment per worker) and below-average productivity growth. Diffusion of I.T. into these and other sectors could prompt a new round of productivity growth such as that provoked by the globalization of hardware production during the 1990s.
The attention now being paid to offshoring creates the impression that it is an utterly unprecedented phenomenon. But the very same technological advances that are making offshoring possible have been eliminating large numbers of white-collar jobs for many years now.
The diffusion of I.T. throughout the economy has caused major shakeups in the job market during the last decade. Voicemail has replaced receptionists; back-office record-keeping and other clerical jobs have been supplanted by computers; layers of middle management have been eliminated by better internal communications systems. In all these cases, jobs are not simply being transferred overseas; they are being consigned to oblivion by automation and the resulting reorganization of work processes.
The increased churn in white-collar jobs shows up in the Department of Labor's statistics on displaced long-tenured workers, defined as workers who have lost jobs they held for three years or more (Figure 5). During the 1981-82 recession, blue-collar workers bore the brunt of long-tenured displacement, but by 1991-92 more than half of the long-held jobs lost were white-collar. Even in the better years that followed, innovation and job churn continued to displace white-collar workers at a higher rate than during the 1981-82 recession.
Offshoring is merely the latest manifestation of a well-established process. The only difference is that, with offshoring, I.T. is facilitating the transfer of jobs overseas. In either case, domestic jobs are lost to technological progress and rising productivity. Why is this downside taken in stride when jobs are eliminated entirely yet considered unbearable when the jobs are taken as hand-me-downs by Indians and other foreigners?
Because of the recent recession, the U.S. economy has suffered from a shortage of jobs, as evidenced by the rise in the unemployment rate. There is a natural temptation under these conditions to fear that this temporary setback is the beginning of some permanent reversal of fortune, that the shortage of jobs is here to stay and will only grow worse.
To calm such fears, it is useful to recall that similar anxieties have surfaced before. Again and again, over many decades, cyclical downturns in the economy have prompted predictions of permanent job shortages. And each time, those predictions were belied by the ensuing economic expansion.
Back in the 1930s, the brutal and persistent unemployment caused by the Great Depression gave rise to theories of "secular stagnation." A number of leading economists—including, most prominently, Harvard's Alvin Hansen—argued that declining population growth and the increasing "maturity" of the industrial economy meant that we could no longer rely on private-sector job creation to provide full employment. The stagnationist thesis eventually fell out of fashion once the postwar economic boom gathered steam.
The return of higher unemployment in the late 1950s and early '60s led to a revival of the stagnationist fallacy, this time in the guise of an "automation crisis." The ongoing progress of factory automation, combined with the growing visibility of electronic computers, led many Americans to believe, once again, that the economy was running out of jobs. During the 1960 presidential campaign, John F. Kennedy, who ran on a pledge to "get the country moving again," warned that automation "carries the dark menace of industrial dislocation, increasing unemployment, and deepening poverty." The American Foundation on Automation and Unemployment, a joint industry-labor group created in 1962, claimed breathlessly that automation was "second only to the possibility of the hydrogen bomb" in its challenge to America's economic future. For the record, U.S. employment in 1962 stood at 66.7 million jobs—roughly half the current total.
In the early 1980s, the coincidence of a severe recession and a string of competitive successes by Japanese producers at the expense of high-profile American industries sparked predictions of the imminent "deindustrialization" of the American economy. As financier Felix Rohatyn complained, in a fashion typical of the time, "We cannot become a nation of short-order cooks and saleswomen, Xerox-machine operators and messenger boys….These jobs are a weak basis for the economy." Along similar lines, Sen. Lloyd Bentsen (D-Texas) fretted that "American workers will end up like the people in the biblical village who were condemned to be hewers of wood and drawers of water." It should be noted that U.S. manufacturing output has roughly doubled since 1982.
In the early 1990s, another recession resulted in yet another job shortage scare. Ross Perot won 19 percent of the presidential vote in 1992 with a campaign that, among other things, railed against the "giant sucking sound" of jobs lost to Mexico and other foreign countries. That same year, Pulitzer Prize-winning journalists Donald L. Barlett and James B. Steele published a widely discussed jeremiad, America: What Went Wrong?, about the decline and fall of the country's middle class. That hand-wringing was followed in short order by one of the most remarkable expansions in American economic history.
Again and again, serious and influential voices have raised the cry that the sky is falling. It never does. The root of their error is always the same: confusing a temporary, cyclical downturn with a permanent reduction in the economy's job-creating capacity.
In recent years, many Americans have lost their jobs and suffered hardship as a result. Many more have worried that their jobs would be next. There is no point in denying these hard realities, but just as surely there is no point in blowing them out of proportion. The U.S. economy is not running out of good jobs; it is merely coming out of a recession. And regardless of whether economic times are good or bad, some amount of job turnover is an inescapable fact of life in a dynamic market economy.
This fact cannot be wished away by blaming foreigners, and it cannot be undone by trade restrictions. The innovation and productivity increases that render some jobs obsolete are also the source of new wealth and rising living standards. Embracing change and its unavoidable disruptions is the only way to secure the continuing gains of economic advancement.
Is such a preventive war justified? In late October, we asked John Mueller and Brink Lindsey to argue the issue on reason online. Mueller, who makes the case against war, holds the Woody Hayes Chair of National Security Studies at Ohio State University. He is the author of Policy and Opinion in the Gulf War (1994), Quiet Cataclysm: Reflections on the Recent Transformation of World Politics (1997), and Retreat from Doomsday: The Obsolescence of Major War (1989). reason Contributing Editor Brink Lindsey makes the case for war. He's a senior fellow at the Cato Institute and author of Against the Dead Hand: The Uncertain Struggle for Global Capitalism (2002). He also publishes www.brinklindsey.com.
The debate unfolded over the week of October 28-November 1, with each participant responding within hours of the other's posting. Readers interested in more information can visit reason.com/debate/ai-debate1.shtml, which includes links to reader responses and reason's archive of 9/11-related coverage.
The devil du jour is a feeble tyrant.
John Mueller
In preparing for a war against Iraq, military planners seem to anticipate a walkover. The Iraqi military performed badly in the Gulf War of 1991: Saddam Hussein promised the mother of all battles, but his troops delivered instead the mother of all bugouts. And the planners note that Iraq is even weaker now.
Moreover, the regime appears to enjoy very little support. Saddam Hussein lives in such fear of his own military forces that he keeps them out of Baghdad. It is generally anticipated that most of the military will not fight for him—indeed, that there may be substantial defections to the invaders even among the comparatively coddled Republican Guard.
In addition, the regime really controls only a shard of the country. The Kurds have established a semi-independent entity in the north, and the hostility toward Saddam's rule is so great in the Shiite south that government officials often consider the region hostile territory.
Advocates of a war with Iraq insist such a venture is necessary because Iraq's feeble, wretched tyranny is somehow a dire and gathering threat to the entire area and even to the United States. Saddam's inept, ill-led, exhausted, and thoroughly demoralized military force, it is repeatedly argued, will inevitably be used by its leader for blackmail and regional dominance, particularly if it acquires an atomic bomb or two.
Exactly how this might come about is not spelled out. The notion that Israel, with a substantial nuclear arsenal and a superb and highly effective military force, could be intimidated out of existence by the actions or fulminations of this pathetic dictator can hardly be taken seriously. And the process by which Saddam could come to dominate the oil-producing states in the Middle East is equally mysterious and fanciful. Apparently, he would rattle a rocket or two, and everyone would dutifully jack up the oil price to $90 a barrel.
Saddam's capacity for making daffy decisions is, it is true, considerable. But he seems mostly concerned with self-preservation—indeed, that is about the only thing he is good at. And he is likely to realize that any aggressive military act in the region is almost certain to provoke a concerted, truly multilateral counterstrike that would topple his regime and remove him from existence. Even if he ordered some sort of patently suicidal adventure, his military might very well disobey, or simply neglect to carry out, the command. His initial orders in the Gulf War, after all, were to stand and fight the Americans to the last man. When push came to shove, his forces treated that absurd order with the contempt it so richly deserved.
During the last half-century American policy makers have become hysterical over a number of Third World dictators, among them Egypt's Nasser, Indonesia's Sukarno, Cuba's Castro, Libya's Qaddafi, and Iran's Khomeini. In all cases, the threat these devils du jour posed to American interests proved to be highly exaggerated. Nasser and Sukarno are footnotes, Castro a joke, and Qaddafi a mellowed irrelevance, while Khomeini's Iran has become just about the only place in the Middle East where Americans are treated with popular admiration and respect.
Significantly, Iran is also just about the only place in the area where the United States has been unable to meddle during the last 20 years. And it is possible there is a lesson here.
With characteristic self-infatuation, American leaders like to declare their country to be "the world's only remaining superpower" or "the indispensable nation." But this self-proclaimed status doesn't mean that it is obligatory or possible or wise for the United States to seek to run the world.
Or even the Middle East. American interests there are limited. There is a romantic and sentimental attachment to Israel, of course, but that country seems fully capable of taking care of itself. In time, perhaps, and probably after a change of leadership on both sides, mediation efforts between Israel and the Palestinians can become productive again. But for now at least the conflict is so deep that there is little any outsider (even an "indispensable" one) can do about it.
Quite a bit of oil comes from the Middle East, of course, but discussions of the American interest on that score tend to ignore simple economics. The area already is dominated by an entity, OPEC, which would dearly love to hike the price for the commodity. It is constrained from doing so not by warm and cuddly feelings toward its customers but by the grim economic realization that such a policy would reduce demand, intensify the search for new petroleum sources, and bring about a worldwide inflation that would raise the cost of the goods oil exporters import by more than any gains from a higher oil price. Whatever happens in the region, this fundamental market reality is likely to mellow and correct incidental distortions.
In the meantime, monarchs in a number of countries may gradually be coming to the realization that they are out of date, rather in the way Latin American militarists more or less voluntarily decided during the last quarter century to relinquish control to democratic forces. If this does happen, however, the process will be impelled, as in Latin America, primarily by domestic forces, not outside ones.
A humanitarian argument could be made for a war against Iraq—to liberate its people from a vicious tyranny and from the debilitating and destructive effects of the sanctions which the United States apparently is incapable of relaxing while Saddam Hussein remains in power. Such a war would have to be kept inexpensive in casualties, and the United States would have to be willing to hang on for quite some time to help rebuild the nation, something experience suggests is unlikely.
But calls for war do not stress this argument. Instead, they raise alarms about vague, imagined international threats that, however improbable, could conceivably emanate from a miserable and pathetic regime. In due course, nature (there have been persistent rumors about cancer) or some other force will remove our devil du jour. The situation calls for patient watchfulness, not hysteria.
The case for invading Iraq
Brink Lindsey
John Mueller tries to make light of Iraq. Feeble, inept, pathetic, and daffy are some of the adjectives he uses to describe the blood-soaked, predatory regime now in power there. The implication is that only the paranoid could find in Saddam Hussein's buffoonery any cause for serious concern.
Well, I beg to differ. Iraq is no joke: The crimes that the Ba'athist regime there has committed and may intend to commit in the future are deadly serious business. Under the reign of Saddam Hussein, Iraq has invaded two of its neighbors, lobbed missiles at two other countries in the region, systematically defied U.N. resolutions that demand its disarmament, fired on U.S. and coalition aircraft thousands of times over the past decade, and committed atrocious human rights abuses against its own citizens, including the waging of genocidal chemical warfare against Iraqi Kurds. In short, this is a regime that is responsible for hundreds of thousands, perhaps millions, of deaths.
Meanwhile, Iraq has a long record of active support for international terrorist groups. Indeed, it apparently has staged terrorist attacks of its own directly against the United States. I am speaking of Iraq's likely involvement in the attempted assassination of former President Bush in Kuwait in 1993.
Most ominously, Iraq has been engaged for many years in the monomaniacal pursuit of weapons of mass destruction (WMD). It reportedly has significant stockpiles of biological weapons, and its aggressive, large-scale nuclear program is thought to be at most a few years away from success. The fact that Iraq has been willing to endure ongoing sanctions, and thus the loss of hundreds of billions of dollars in oil revenue, rather than dismantle its WMD programs shows the ferocity of its commitment to maximizing its destructive capabilities.
In light of the above, I would support military action against Iraq even if 9/11 had never happened and there were no such thing as Al Qaeda. After all, I supported the Gulf War back in 1991 in the hope of toppling Saddam Hussein's regime before it fulfilled its nuclear ambitions. Unfortunately, quagmire was plucked from the jaws of victory in that conflict, and so today we are faced with concluding its unfinished business. In my view, standing by with "patient watchfulness" while predatory, anti-Western terror states become nuclear powers is irresponsible and dangerous folly.
As for the headline question, "What's the rush?," my reply is: North Korea. In 1994 President Clinton, with the help of former President Carter, swept the Korean threat under the rug and trusted that "nature," or something, would deal with that "devil du jour." Now North Korea's psychopathic regime informs us that it has nuclear weapons, a fact that vastly complicates any efforts to prevent the situation from getting even worse. We can look forward to similar complications with Iraq unless we act soon.
The case for action against Iraq is further strengthened by the unfortunate facts that 9/11 did happen and Al Qaeda does exist. Here is the grim reality: Radical Islamism is in arms against the West, and its fanatical followers have pledged their lives to killing as many of the infidel as they possibly can. American office workers in New York and Washington, French seamen in Yemen, Australian tourists in Bali, Russian theatergoers in Moscow—nobody is safe. However exactly this conflict arose, it is now in full flame. And let there be no mistake: This is a fight to the death. Either we crush radical Islamism's global jihad, or thousands, even millions, more Americans will die.
Iraq occupies a strategic position in the war against Islamist terror along several dimensions. First, Iraq's WMD programs threaten to stock the armory of Al Qaeda & Company. Saddam Hussein's regime has a long and inglorious history of reckless aggression and grievous miscalculation. The decision to use terrorist intermediaries to unleash, say, Iraqi bioweapons against the United States strikes me as an entirely plausible scenario, assuming that Iraq's leadership can convince itself that the attack could be carried out with "plausible deniability." Given that more than a year has gone by since last fall's anthrax letter scare and we still have no idea who was responsible, the threat posed by Iraq's WMD programs is far from idle. It is, in fact, intolerable.
Second, the resolution—one way or another—of our longstanding conflict with Iraq will have vitally important repercussions in the larger war against terror. If we proceeded to remove the Ba'athist regime from power, we would make it clear that the United States means business in dealing with terrorism and its sponsors. All those countries that continue, more than a year after 9/11, to demonstrate their incapacity or unwillingness to root out the terrorists in their midst (e.g., Iran, Pakistan, Saudi Arabia, Syria, Lebanon, Yemen) would have newly strengthened incentives to do the right thing. If, on the other hand, all the tough talk against Iraq turned out to have been hot air, U.S. credibility would sustain a major blow. Al Qaeda would be emboldened by perceived American weakness, and countries that have to balance fear of the United States against fear of Islamists at home would be inclined to take U.S. displeasure less seriously.
Finally, regime change in Iraq offers the opportunity to attack radical Islamism at its roots: the dismal prevalence of political repression and economic stagnation throughout the Muslim world. The establishment of a reasonably liberal and democratic Iraq could serve as a model for positive change throughout the region. Of course, the successful rebuilding of Iraq will not be easy, but we cannot shrink from necessary tasks simply because they are hard. And we cannot simply assume that "nature" will bring freedom to a region that has never known it on a time scale consistent with safeguarding American lives.
Mueller's "What, me worry?" attitude captures perfectly the prevailing opinion about Afghanistan circa September 10, 2001. The Taliban were more a punch line than a serious foreign policy issue; only the most fevered imagination could see any threat to us in that miserable, dilapidated country. The next day, 3,000 Americans were dead.
We can't let that happen again.
Betting on Saddam's recklessness
John Mueller
It may be useful to parse the argument for a preventive war against Iraq as developed by Brink Lindsey into two considerations: the military threat Iraq presents or is likely to present, and the regime's connection to international terrorism.
The notion that Iraq presents an international military threat seems to be based on three propositions:
1) Iraq will have a small supply of atomic weapons in a few years.
2) Once it gets these arms, Saddam Hussein won't be able to stop himself from engaging in extremely provocative acts such as ordering the military invasion of a neighbor or lobbing missiles at nuclear-armed Israel—acts that are likely to trigger a concerted multilateral military attack upon him and his regime.
3) If Saddam issues such a patently suicidal order, his military—which he himself distrusts—will dutifully carry it out, presumably with more efficiency, effectiveness, and élan than it demonstrated in the Persian Gulf War.
I will leave it to those more expert in the field to assess the first proposition. At worst we have a window of a few years before the regime is able to acquire atomic arms. Some experts seem to think it could be much longer, while others question whether Saddam's regime will ever be able to gather or make the required fissile material. Effective weapons inspections, of course, would reduce this concern.
The second proposition rests on an enormous respect for what I have called Saddam's "daffiness" in decision making. I share at least part of this respect. Saddam does sometimes act on caprice, and he often appears to be out of touch—messengers bringing him bad news rarely, it seems, get the opportunity to do so twice. At the same time, however, he has shown himself capable of pragmatism. When his invasion of Iran went awry, he called for retreat to the prewar status quo; it was the Iranian regime that kept the war going. After he invaded Kuwait in 1990, he quickly moved to settle residual issues left over from the Iran-Iraq War so that he had only one enemy to deal with.
Above all, Saddam seems to be entirely nonsuicidal and is primarily devoted to preserving his regime and his own personal existence. His brutal killing (and gassing) of Kurds was carried out because they were in open rebellion against him and in effective or actual complicity with invading Iranians during the Iran-Iraq War. Much of his obstruction of arms inspectors seems to arise from his fear that agents among them will be used to triangulate his whereabouts for a lethal strike—a suspicion that press reports suggest was not exaggerated. If Saddam does acquire nuclear arms, accordingly, it seems most likely that he will use them as all other leaders possessing such weapons have since 1945: to deter an invasion.
The third proposition is rarely considered in discussions of the war, but it is important. One can't simultaneously maintain that Iraq's military forces will readily defect and can easily be walked over—a common assumption among our war makers—and also that this same pathetic military presents a serious international threat.
The argument connecting Iraq to terrorism is mostly based on arm waving. As Lindsey notes, international terrorists are based all over the world—in fact, just about everywhere except Iraq. Their efforts are hardly likely to be deflated if Iraq's regime is defeated. Indeed, it seems likely that an attack will supply them with new recruits, inspire them to more effort, and provide them with inviting new targets in the foreign military and civilian forces that occupy a defeated, chaotic Iraq. Lindsey suggests that a war is required to make it "clear that the United States means business in dealing with terrorism." I would have thought this was already extremely clear.
Terrorism, like crime, has always existed and always will. It cannot be "crushed," but its incidence and impact can be reduced, and some of its perpetrators can be put out of business. But this is likely to come about through patient, diligent, and persistent international police work rather than costly wars based on tenuous reasoning.
Evading them won't make us safe.
Brink Lindsey
John Mueller sees correctly that the Iraq problem has two aspects: 1) regional security and 2) global terrorism. Unfortunately, he fails to grasp the nasty realities of either.
Mueller's assessment of the regional threat posed by a nuclear Iraq is nothing short of fantastic. He pooh-poohs the possibility that Iraq might invade one or more of its neighbors and argues that Saddam Hussein "is primarily devoted to preserving his regime and his own personal existence." Huh? Try telling that to Iran and Kuwait.
Mueller needs to read Mark Bowden's superb, chilling profile of Saddam in the May 2002 issue of The Atlantic. Bowden makes clear that Saddam sees himself as a world-historical figure, a man destined to lead pan-Arabia back to greatness. Perversely, every brush with disaster and death "has strengthened his conviction that his path is divinely inspired and that greatness is his destiny." Why on earth should we suppose that a nuclear arsenal—built in reckless defiance of the United States and the world—would temper rather than inflame Saddam's raging megalomania?
Mueller blithely assumes that any future Iraqi aggression would be "likely to trigger a concerted multilateral military attack upon him and his regime" and thus "patently suicidal." Excuse me, but there was no multilateral response to Iraq's attack on Iran, and the world would have been all too happy to acquiesce in Kuwait's disappearance had the first President Bush not stepped in and forced the issue. What makes Mueller think the world would rush in to confront a nuclear-armed Iraq? That task, inevitably, would fall to the United States. Mueller's counsel boils down to this: The United States should avoid war with a relatively weak Iraq today so that it can tangle with a nuclear adversary tomorrow.
What about the nexus between Iraq and terrorism, which Mueller dismisses as so much "arm waving"? Allow me to quote Bowden's article once more, this time from a scene in which Saddam is addressing Iraqi military leaders who run terrorist training camps: "He told [them] that they were the best men in the nation, the most trusted and able. That was why they had been selected to meet with him, and to work at the terrorist camps where warriors were being trained to strike back at America. The United States, he said, because of its reckless treatment of Arab nations and the Arab people, was a necessary target for revenge and destruction. American aggression must be stopped in order for Iraq to rebuild and to resume leadership of the Arab world."
This meeting occurred back in 1996—before the recent heating up of the conflict. So much for Saddam's live-and-let-live foreign policy.
Bellicose rhetoric is one thing; the ability to back it up is something altogether more serious. Here is the ultimate threat, the one that Mueller can't even bring himself to discuss: Iraqi biological or nuclear weapons might someday be put in the hands of terrorist groups. If that were to happen, America could experience horrors that would dwarf those unleashed on September 11.
Opponents of action against Iraq argue that we can rely on deterrence to protect us from such atrocities: No country, not even one as rash as Iraq, would dare to use weapons of mass destruction against the United States because of the threat of overwhelming retaliation. That argument has considerable force with respect to a direct attack by Iraq, but it fails completely to confront the possibility that Iraq could use terrorist intermediaries to do its dirty work while masking its own involvement. How is deterrence supposed to work when WMD lack a return address?
Recall, again, last year's anthrax attacks. We still don't know who was responsible, or whether there was any foreign state involvement. Just this week, a Washington Post article cast considerable doubt on the FBI's favored theory that the murders were the work of a disgruntled American scientist—and suggested that an Iraqi role remains a live possibility.
Go back a few more years, to the 1993 plot to assassinate former President Bush in Kuwait. It appears that the attack was an Iraqi operation, but as Seymour Hersh showed in a 1993 New Yorker article in which he reviewed the less-than-airtight case in depth, the fact is we're not really sure.
Welcome to the shadowy world in which we now live. A world in which deterrence no longer suffices. A world in which the judicious use of American power to pre-empt looming threats may be all that stands between us and catastrophe.
Here is what we know about the current Iraqi regime: It has weapons of mass destruction and is actively seeking to add to its arsenal. It is rabidly hostile to the United States. It has an established track record of predatory conduct and a demonstrated willingness to take extreme risks in pursuing its predatory ambitions. There is not another country on earth that matches Iraq's combination of destructive capacity, anti-American animus, and recklessness in projecting power. In a shadowy world, this much is clear: We are not safe while the present regime rules Iraq.
War is not necessary to keep a street thug in check.
John Mueller
Brink Lindsey wants to argue that Saddam Hussein is reckless, but even he concedes that "no country, not even one as rash as Iraq, would dare to use weapons of mass destruction against the United States because of the threat of overwhelming retaliation." That is, it is entirely possible to deter Iraq. This deterrent would surely hold for an attack on Israel, which has an enormous retaliatory capacity and an even greater incentive to respond than the U.S. I would suggest that it holds as well for just about any substantial military provocation that Saddam might consider.
It is true that much of the world managed to contain its outrage when Iraq invaded Iran in 1980. But that was because the attack was directed against Khomeini's seemingly expansionary theocracy, which was seen to be a bigger threat at the time. It is simply not true that "the world" was "all too happy to acquiesce in Kuwait's disappearance" when Iraq invaded it in 1990. There was almost universal condemnation of the attack, even from Iraq's erstwhile friend and ally, the Soviet Union, and the debate was over tactics: whether to use war immediately to push back the aggression or to wait to see if sanctions could do the trick.
Reaction to a third Saddam adventure would surely follow the Kuwait pattern, except that the troops would now go all the way to Baghdad. Moreover, as I've suggested, Saddam's army, which even he finds unreliable, would be unlikely to carry out patently suicidal orders even if they were issued—as it showed in the Gulf War of 1991.
Lindsey's appreciation for Saddam's egomania is fully justified. It's just that egomania is standard equipment for your average Third World tyrant. Indonesia's Sukarno haughtily withdrew from the United Nations and set up his own competing operation in Djakarta (only China joined); Egypt's Nasser (Saddam's sometime inspiration), who planned to unite and dominate the Arab world, died quietly in bed after being humiliated by Israeli arms; Khomeini's global revolution has essentially been voted out even in its Iranian homeland; and Cuba's Castro probably still hopes to become the new Simón Bolívar of Latin America. Self-important street thugs like Saddam Hussein love to flail and fume in the company of sycophants, but that doesn't make them any less pathetic.
We are left with the warning that Saddam will give weapons of mass destruction to shadowy terrorists to deliver for him. Lindsey is unusual in suggesting that Saddam might do this with nuclear weapons (which, of course, he doesn't have and perhaps never will have). Most observers assume he would selfishly keep them himself to help deter an attack on Iraq.
The case is more plausible for chemical or biological weapons—which, however, have proven to be so difficult to deploy effectively that it is questionable whether they should be considered weapons of "mass destruction" at all, as Gregg Easterbrook pointed out in the October 7 issue of The New Republic. But terrorists may be after these weapons anyway, and the question is whether it is worth a war to eliminate one of many potential sources. Moreover, as Daniel Benjamin noted in the October 31 Washington Post, the best CIA assessment is that Saddam and Al Qaeda are most likely to bed together if his regime is imminently threatened by the preventive war (it would be in no reasonable sense an act of pre-emption) that Lindsey so ardently advocates.
There's no invisible hand to protect us.
Brink Lindsey
I argue that Iraq is a serious threat to the surrounding region and to us. John Mueller disagrees. I contend that toppling the current Iraqi regime will aid in the broader campaign against Islamist terrorism. Mueller worries that an invasion of Iraq will backfire.
Risks of action, risks of inaction: Which are greater? Solid facts are few and far between; we're forced to make our way based on hypotheticals and maybes and historical analogies. How can we have any confidence that we are weighing the risks intelligently?
One point in my favor is that I am actually weighing the risks. That's why I support military action against Iraq: I believe the risks of inaction outweigh the risks of action.
I am not a reflexive hawk. I opposed our recent military adventures in Panama, Haiti, Somalia, and the Balkans. I would not support military action against, say, Burma, merely because its government is despicable. Odious as it is, the Burmese regime poses no significant threat to its neighbors or to us. I would not have supported making war on China in the 1960s, even though its rulers were wildly anti-American and seeking to develop a nuclear arsenal. Despite the threat China posed to us, the risks of acting were far too great (especially the possibility of an escalation with the Soviets) and the price of victory against such a formidable and fanatical adversary would have been far too high. In that situation, deterrence and diplomacy (in particular, playing the Chinese and Soviets against each other) were the better options.
So on the general question of preventive war—whether to make war now in order to avoid a worse war later—my position is: It depends on the circumstances. The decision whether to go to war should turn on a pragmatic assessment of relative risks. Sometimes the balance will tilt in favor of action, sometimes not. In the particular case of Iraq in 2002, I believe the balance tilts strongly toward action.
Many who oppose invading Iraq (I won't ascribe this view to Mueller, since he did not spell out his general position clearly) reject the kind of pragmatic assessment that I think is called for. They believe that preventive war is just a bad idea, period—that it's wrong, or at least reckless, to fire the first shot unless you're absolutely sure the other guy is about to squeeze the trigger.
When I'm debating the Iraq question with someone like that, we're talking past each other. I'm explaining the reasons that led me to my conclusion. He's marshaling evidence in support of a predetermined conclusion.
Not that there's anything wrong, in general, with predetermined conclusions—they're called principles. But not all principles are created equal. Some are sound, some are iffy, and some are downright worthless.
What about the principle of no preventive wars? Specifically, what is the basis for assuming that preventive wars always make matters worse? In economic policy, there are solid grounds for the principle of no government meddling with markets. Market competition has enormous advantages over government action in making use of and coordinating dispersed information, in encouraging innovation, in supplying appropriate incentive structures, and so on. Accordingly, anyone arguing that government intervention in the marketplace can improve economic performance has an extremely difficult case to make.
Many libertarians slide easily from noninterventionism in domestic affairs to noninterventionism abroad, believing they're on equally firm footing with both positions. But they're not, because the fact is that there's no invisible hand in foreign affairs. There are no equilibrating mechanisms or feedback loops in the Hobbesian jungle of predatory dictatorships and fanatical terrorist groups that give us any assurance that, if the United States were only to stand aside, things would go as well for us in the world as they possibly could.
Accordingly, it seems to me that a no-exceptions policy against preventive war rests ultimately on an untenable assumption: that unrousable passivity on the part of the greatest and most powerful country that ever existed will somehow yield the most favorable achievable conditions in the world—that, in an intricately interconnected world, leaving everything outside our physical borders to the wolves will ensure that everything turns out for the best.
I don't buy it. Hostile regimes bent on relentless expansion and pursuing weapons of mass destruction are a threat to global security. Hostile regimes that could put weapons of mass destruction into the hands of terrorists are a direct threat to the lives of Americans. If regimes fitting either of these descriptions don't change their ways, military action against them should be an option.
Iraq's current regime fits both descriptions. It is not going to change its ways. The risks of war are real but manageable. Let's act before it's too late.
Social Insecurity

There is nothing new about such attitudes. The belief that market competition alienates and atomizes was never expressed with more passionate ferocity than in Karl Marx and Friedrich Engels' The Communist Manifesto. Only two years after England repealed its protectionist Corn Laws and embraced full-fledged free trade, Marx was already proclaiming the socially corrosive effects of nascent globalization:
"The bourgeoisie…has left remaining no other nexus between man and man than naked self-interest, than callous 'cash payment.' It has drowned the most heavenly ecstasies of religious fervour, of chivalrous enthusiasm, of philistine sentimentalism, in the icy water of egoistical calculation. It has resolved personal worth into exchange value, and in place of the numberless indefeasible chartered freedoms, has set up that single, unconscionable freedom—Free Trade."
How does such thinking fit into today's historical context, now that Marx's dreamed-of future has come and gone? For a century, the collectivist, centralizing impulse worked to shape the goals and instruments of social policy. Now much of that work is coming into question. For partisans of social cohesion, the shoe is now on the other foot: Where once they fought in the name of alluring, untested possibilities, today they must defend existing and increasingly dilapidated structures from criticism and reform. They have transformed themselves from reformers and revolutionaries into conservatives and reactionaries.
The rearguard defense can be seen vividly in the fight over the centerpiece of the 20th-century welfare state's attempts to centrally manage in the name of social cohesion: traditional social insurance programs. It is increasingly apparent that such policies are doomed to collapse and need fundamental rethinking. Blind resistance to that rethinking will only further rend the social fabric.
Behind the appealing rhetoric of unity, the contemporary anti-liberal agenda is deeply divisive: It pits the privileged beneficiaries of current policies against their more numerous but less visible victims. It sets current pensioners against the young and middle-aged whose hopes for retirement security are imperiled by the defects of current pension systems.
Critics of globalization argue that the spread of markets is undermining social cohesion by compromising national governments' ability to tax (and thereby fund) the social safety net. "The increasing mobility of capital has rendered an important segment of the tax base footloose, leaving governments with the unappetizing option of increasing tax rates disproportionately on labor income," according to Harvard University economist Dani Rodrik in Has Globalization Gone Too Far? (1997). "Yet the need for social insurance for the vast majority of the population that remains internationally immobile has not diminished. If anything, this need has become greater as a consequence of increased integration."
It is difficult even to take seriously the proposition that, whether because of globalization or otherwise, the governments of industrialized countries are hurting for tax revenue. Between 1965 and 1998, while globalization was supposedly eroding rich countries' tax bases, average total tax revenues as a percentage of GDP rose for Organization for Economic Co-operation and Development (OECD) member countries from just over 25 percent to well over 35 percent. There is, in short, no evidence whatsoever that national governments lack the resources to fund appropriate social policies.
Meanwhile, the notion that globalization has increased the need for social insurance does not square with the facts. The theory behind the notion is that international integration increases the risk of dislocation (and thus the need for the safety net) in those sectors of the economy exposed to international competition. But the majority of social spending goes to senior citizens who are retired from the work force; their exposure to the slings and arrows of foreign competition is nil.
It is undeniably the case that the welfare states of the advanced countries are now under severe fiscal strain. But if market forces are not the culprit, what is? The social safety net has been badly frayed, not by any pressures of globalization, but by the collectivized, top-down nature of traditional social insurance. At the heart of the problem are enormous, monolithic public pension systems that violate the most basic precepts of actuarial soundness. Those systems are primarily responsible for the welfare state's mounting financial woes.
The founding father of collectivized social insurance, German Chancellor Otto von Bismarck, was brutally candid about the political benefits of centralization. As ambassador to Paris in 1861, he had seen how Napoleon III used state pensions to buy support for the regime. "I have lived in France long enough to know that the faithfulness of most of the French to their government…is largely connected with the fact that most of the French receive a state pension," he recalled later. For Bismarck, the appeal of social insurance was that it bred dependency on, and consequently allegiance to, the state.
Social insurance was thus born of contemptuous disregard for liberal principles: What mattered was not the well-being of the workers but the well-being of the state. With that animating principle, social insurance necessarily assumed a collectivist character. In particular, it would clearly not do simply to compel workers to provide for their own retirement; funded pensions that actually belonged to the workers would not inspire the proper feelings of dependency and subservience. Far better was the "pay as you go" system in which the government would transfer funds directly from current taxpayers to current retirees.
When such ventures are attempted in the private sector, they go by the name of pyramid or Ponzi schemes and constitute criminal fraud. The essence of a pyramid scheme is that investors' money is never put to productive use; it is simply diverted to pay off earlier investors. As long as new victims can be found, everything seems to work fine. Eventually, though, the promoters of the scheme run out of new investors, and the whole house of cards collapses.
Pay-as-you-go public pension systems operate in precisely the same way. As long as the contributions of active workers are sufficient to cover payments to current retirees, the system appears fiscally healthy.
Indeed, in the early decades of such programs, it appeared that the market had been outfoxed. Consider Nobel Prize-winning economist Paul Samuelson's smug optimism back in 1967: "The beauty of social insurance is that it is actuarially unsound. Everyone who reaches retirement age is given benefit privileges that far exceed anything he has paid in….How is this possible? It stems from the fact that the national product is growing at compound interest….Always there are more youths than old folks in a growing population….A growing nation is the greatest Ponzi game ever contrived."
Sooner or later, though, such hubris must receive its grim comeuppance. Shifting demographics impose the ultimate constraint. As populations age, the number of retirees begins to grow faster than the number of new workers, until at last the burden is unsustainable.
Meanwhile, the perverse incentive structure of collectivized social insurance works to accelerate the system's ultimate breakdown. In particular, workers have strong incentives to minimize or evade their contributions to the system, while retirees have an obvious stake in campaigning for higher benefits. Such dynamics steadily worsen the relationship between revenues and obligations and thereby hasten the eventual day of reckoning.
Today, with a global pension crisis that affects rich, developing, and postcommunist nations alike, the reckoning is at hand. Around the world, the ratio of active workers to retirees is shrinking. Promised benefits have spiraled out of control, while demographic changes and widespread evasion reduce the relative size of the contribution base. Consequently, the hopes for retirement security of hundreds of millions of workers are now in serious jeopardy.
The inevitable Ponzi endgame is now obvious in the rich countries of the industrialized world. In the United States, for example, average life expectancy at birth was only 61.7 years in 1935 when Social Security was established—lower than the original minimum retirement age. Today, U.S. life expectancy stands at 76.5 years, and is expected to climb to around 80 over the next 20 years. For most other industrialized countries, current and projected life expectancies are even higher. Meanwhile, fertility has dropped sharply. With the single exception of Ireland, birth rates in all the advanced countries are now below the replacement rate of 2.1 children per woman. In Japan, the fertility rate is only 1.68; in Austria, 1.45; in Italy, a mere 1.33. Continued declines in fertility are expected.
The upshot of these demographic trends is a steady erosion in the funding base for social insurance benefits. In 1950, there were 16 workers in the United States for every retiree; today the ratio is only 3 to 1, and in 20 years it will have fallen to 2 to 1. Elsewhere the outlook is even bleaker: By 2020, worker-to-retiree ratios are expected to fall to 1.8 in France and Germany, and 1.4 in Italy and Japan.
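The fiscal arithmetic behind those ratios is worth making explicit. In a pure pay-as-you-go system, the payroll tax needed to balance the books in any given year is simply the benefit level divided by the support ratio, that is, the number of workers per retiree. The sketch below is purely illustrative: the 40 percent replacement rate is an assumed round number, not a statutory figure; only the support ratios come from the U.S. numbers above.

```python
# Stylized pay-as-you-go arithmetic. In a pure PAYG system, this year's
# taxes fund this year's benefits, so the balanced-budget tax rate is:
#
#     required tax rate = replacement rate / (workers per retiree)
#
# The 40% replacement rate is an assumption for illustration; the
# support ratios are the U.S. figures cited in the text.

REPLACEMENT_RATE = 0.40  # assumed: average benefit = 40% of the average wage

for label, workers_per_retiree in [("1950", 16), ("today", 3), ("in 20 years", 2)]:
    required_tax = REPLACEMENT_RATE / workers_per_retiree
    print(f"{label}: {workers_per_retiree} workers per retiree -> "
          f"payroll tax of {required_tax:.1%} needed")
```

Holding benefits fixed, the required tax rate rises from 2.5 percent to 13.3 percent to 20 percent as the support ratio falls from 16 to 3 to 2. That is the squeeze described in the paragraphs that follow.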
Social insurance in the advanced countries is caught in a squeeze between rising life expectancy on one flank and falling fertility on the other. In that tightening vise, what once seemed so clever is now a catastrophe in the making. "When population growth slows down, so that we no longer have the comfortable Ponzi rate of growth or we even begin to register a decline in total numbers," a chastened Paul Samuelson wrote in 1985, "then the thorns along the primrose path reveal themselves with a vengeance."
Already today, public pension spending in the rich member countries of the OECD averages 24 percent of the total government budget, or 8 percent of GDP. To fund these enormous outlays, the tax burden imposed on current employees has reached punishing levels: In Italy, Germany, and Sweden, for example, the combination of employer and employee contributions and personal income taxes now averages around 50 percent of gross labor costs. And while workers put more and more into the system, they can expect to receive less and less. In Sweden, the average rate of return for the generation retiring 25 years after the establishment of the public pension system approached 10 percent per year; for the generation retiring 20 years later, the rate of return had dropped to 3 percent. In the United States, real rates of return for two-earner couples now range from -0.45 percent to 2.13 percent, depending on income.
Even with rising tax rates and declining returns, pay-as-you-go systems throughout the advanced nations are heading toward financial collapse. In the United States, Social Security revenues currently exceed expenses, but the system is expected to begin running deficits in 2016. The annual shortfall is projected to be $1.3 trillion by 2030, a figure that represents more than two-thirds of the entire federal budget for 2001. Over the next 75 years, Social Security's total unfunded liabilities have an estimated present value of $9 trillion—as compared to the current national debt of $5.7 trillion. In Germany and Japan, the current unfunded liabilities of the public pension system are well over 100 percent of GDP; in France and Italy, they exceed 200 percent.
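For readers unfamiliar with the accounting, the "present value" of unfunded liabilities is the sum of each future year's projected shortfall, discounted back to today. A minimal sketch, with invented round numbers rather than the actual actuarial projections:

```python
# What a present-value-of-unfunded-liabilities figure means. The shortfall
# stream and the 3% real discount rate below are invented for illustration;
# they are not the actual Social Security projections.

def present_value(shortfalls, discount_rate):
    """Discount a stream of future annual shortfalls back to today."""
    return sum(
        s / (1 + discount_rate) ** t
        for t, s in enumerate(shortfalls, start=1)
    )

# Toy 75-year stream: balanced for 14 years, then deficits grow steadily.
shortfalls = [0.0] * 14 + [10.0 * k for k in range(1, 62)]  # $billions/year
pv = present_value(shortfalls, 0.03)
print(f"present value of the 75-year shortfall: ${pv:,.0f} billion")
```

The point of discounting is comparability: it collapses a long stream of future deficits into a single sum that can be set against a stock like today's national debt.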
Since developing countries still have relatively young populations, one might expect that the problems with their pension systems remain in the distant future. One would be wrong. First of all, developing countries are making the transition from high birth and death rates to low fertility and mortality much faster than did the advanced nations. It took France 140 years to double the share of the population over 60 years of age (from 9 to 18 percent), while Belgium needed nearly 120 years; China, on the other hand, will repeat the feat in 34 years, and Venezuela will do it in 22. Between 1990 and 2030, the percentage of the world's population over 60 years of age is expected to increase from 9 percent to 16 percent, and most of that growth will occur in poorer countries.
In addition, administering public pension systems in poor countries is severely complicated by the large informal sectors endemic to those societies. A vicious circle is often triggered, as the sketch after this paragraph illustrates. Because many people work in the informal sector, payroll taxes (collected only in the formal sector) have to be higher than would otherwise be necessary. High payroll taxes, though, create incentives for even more people to retreat into the informal sector, thus necessitating even higher rates, which push more people into tax evasion, and so forth. Rising payroll tax rates in Uruguay, for example, caused the proportion of workers contributing to the system to fall from 81 percent in 1975 to 67 percent in 1989. In Brazil, evasion cut contribution revenues by more than a third during the 1980s.
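That feedback loop is easy to caricature in a few lines of arithmetic. Everything in the sketch below is invented for illustration: a fixed revenue requirement and an assumed behavioral rule under which each point of payroll tax pushes some workers into the informal sector.

```python
# A toy model of the informal-sector vicious circle. All numbers are
# invented for illustration: revenue needs are fixed, and the share of
# workers who stay formal is assumed to fall as the payroll tax rises.

TARGET_REVENUE = 10.0   # benefits to be funded each year (arbitrary units)
TOTAL_WAGES = 100.0     # the wage bill if every worker stayed formal

def formal_share(tax_rate):
    """Assumed response: higher payroll taxes drive workers informal."""
    return max(0.2, 1.0 - 4.0 * tax_rate)

tax_rate = TARGET_REVENUE / TOTAL_WAGES  # naive starting rate: 10%
for step in range(1, 5):
    taxable_wages = TOTAL_WAGES * formal_share(tax_rate)
    tax_rate = TARGET_REVENUE / taxable_wages  # raise the rate to cover the gap
    print(f"step {step}: formal share {formal_share(tax_rate):.0%}, "
          f"required tax rate {tax_rate:.1%}")
```

Each round of rate increases shrinks the base the tax is levied on, so the rate needed to hit the same revenue target ratchets upward: in this toy run, from 10 percent to 50 percent in a few steps. That is the spiral Uruguay and Brazil experienced.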
The transitional economies of the former Soviet empire have inherited no end of problems from the Communist era, including tottering public pension systems. During Soviet rule, dependence on state pensions was nearly total, since occupational pensions and private saving were virtually nonexistent. With communism's collapse, the folly of that dependence has become abundantly clear. To begin with, the countries in question have populations that are nearly as old as those in the advanced nations: As of 1990, over 15 percent of people in former Communist bloc countries were over 60, as compared to 18 percent in the OECD. Like developing nations, though, they also have large informal sectors that erode the contribution base.
By the mid-1990s the pension systems of the transitional economies were saddled with cripplingly high dependency ratios. In Poland, pensioners totaled 61 percent of active workers by 1996; in Ukraine, the figure was 68 percent; in Bulgaria, 79 percent. To cope with this crushing burden, contribution rates were forced to remain at the punitive levels that had been set during Communist rule: 26 percent in the Czech Republic, 30.5 percent in Hungary, and 42 percent in Bulgaria. With the demise of the command economy, though, such high rates only accelerated workers' flight into the informal sector, aggravating dependency ratios even further.
Government-provided social insurance is defended on the ground that it shields retirees from the market risks that attend private pension plans. Indeed it does, but only at the cost of subjecting current and future retirees to a far greater risk—the risk of living until the Ponzi scheme of pay-as-you-go pensions begins to break down. Over the past couple of decades retirees around the world have discovered, much to their chagrin, that substituting political risk for market risk has been a poor bargain indeed, as governments have been forced to renege on promises and slash benefits in order to stave off financial collapse.
The breach of faith has been especially severe in developing and transitional countries. Failure to adjust benefits for inflation was a favorite strategy in Latin America. The average real pension dropped 80 percent in Venezuela between 1974 and 1992 because of inflation; benefits fell 30 percent in Argentina between 1985 and 1992 for the same reason. In the transitional economies, a combination of inflation, explicit benefit cuts, and accumulation of arrears kept pension expenditures as a percentage of GDP more or less constant despite rapid growth in the number of pensioners. Consequently, in Romania, retirees' real per-capita income fell 23 percent between 1987 and 1994; in Hungary, the fall was 26 percent; in Latvia, 42 percent. In 1999, some four million elderly Russians were expected to survive on the minimum pension of 234 rubles (less than $10) a month. Millions more received nothing as the government simply failed to honor its obligations to its most vulnerable citizens.
On a less dramatic scale, chiseling has been occurring in rich countries as well. In the United States, a 1983 patch-job for Social Security included making benefits taxable for high-income recipients, skipping inflation indexation for one year, and gradually raising the retirement age from 65 to 67. Germany has scheduled an increase in the retirement age and reduced benefit levels by basing them on post-tax rather than pre-tax wages. Japan cut benefits back in 1986. Iceland shifted to a means-tested benefit in 1992, thereby eliminating payments altogether for thousands of retirees. While such moves and others like them may have been necessary under the circumstances, the fact remains that promises have been broken, repeatedly, and more infidelity is in store.
As the gap between promise and reality grows ever wider, countries around the world have begun to experiment with alternatives to the collectivized status quo. Leading the way was Chile, which in 1981 moved to phase out its pay-as-you-go system and replace it with privately owned individual retirement accounts. Instead of the old 26 percent payroll tax, workers are now required to deposit 10 percent of their wages into special savings accounts. Private companies, known as "administradoras de fondos de pensiones" (AFPs), manage the accounts. Workers are free to choose their AFP and switch their savings from one to another. Upon retirement, workers can either use their accumulated savings to purchase a lifetime annuity from an insurance company, or else leave the money in the account and make programmed withdrawals. Any money remaining in the account when the retiree dies can be passed on to heirs.
Workers who entered the labor force after the new system was in place were required to participate in the new system, while those who had already retired had their benefits under the old system guaranteed. Transitional workers were given the choice between sticking with the old system or switching to the new; if they switched, they were given a "recognition bond" to credit them for their prior contributions. The bond was placed in the worker's account and its amount was set so that, at retirement, it would be equal to the worker's accrued benefits under the old system.
Finally, the Chilean pension reform maintains a safety net in the form of a minimum pension guarantee. If for any reason a retiree's private benefits do not meet a minimum threshold, the government will supplement those benefits to bring them up to that threshold. Such supplemental payments are funded from general tax revenues, not a payroll tax.
Chile's pension reforms have been a spectacular success. Some 5.9 million workers owned private savings accounts by the end of 1998—up from 1.4 million at the end of 1981. More than 95 percent of the transitional workers who were given a choice have decided to join the new system. Assets in that system have grown to over 40 percent of GDP and are projected to reach 134 percent of GDP by 2020. The real rate of return on those assets averaged a gaudy 11.3 percent a year through 1999. A 1995 study found that pension benefits averaged 78 percent of a retiree's average salary over the last 10 years of his working life.
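It is worth checking that arithmetic. In the sketch below, only the 10 percent contribution rate is Chile's; the flat real wage, 40-year career, 5 percent real return (deliberately far below the 11.3 percent Chile actually averaged), and 20-year annuity at 2 percent real are all assumptions chosen to keep the example conservative.

```python
# A back-of-the-envelope check: how a 10% contribution compounds into a
# retirement income. Only the 10% contribution rate comes from the Chilean
# system; the wage path, career length, return, and annuity terms are
# conservative assumptions for illustration.

CONTRIB_RATE = 0.10
REAL_RETURN = 0.05      # assumed, well below Chile's 11.3% average
YEARS_WORKING = 40
wage = 1.0              # constant real wage, for simplicity

balance = 0.0
for _ in range(YEARS_WORKING):
    balance = balance * (1 + REAL_RETURN) + CONTRIB_RATE * wage

# Convert the final balance into a 20-year fixed annuity at 2% real.
r, n = 0.02, 20
annual_benefit = balance * r / (1 - (1 + r) ** -n)
print(f"final balance: {balance:.1f} years' wages")                # ~12.1
print(f"annuity replaces {annual_benefit / wage:.0%} of the wage")  # ~74%
```

Even on these cautious assumptions, the funded account replaces roughly three-quarters of the wage, in line with the 78 percent figure the 1995 study reports.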
Meanwhile, the reforms have generated an impressive array of ancillary benefits. In conjunction with other market-oriented reforms, pension privatization has helped to raise Chile's national savings rate from around 10 percent in the late 1970s to over 25 percent at the beginning of the 21st century. Capital markets have deepened dramatically thanks to the accumulation of large private pension funds. Financial markets have grown in sophistication as well as size: Stock market liquidity has increased; new financial instruments like indexed annuities and mortgage-backed bonds have been developed; and transparency has improved with better disclosure and the emergence of credit-rating institutions. One econometric analysis credits the development of financial markets promoted by pension reform and related factors with increasing total factor productivity in Chile by 1 percentage point per year, or half the overall rate of increase.
Perhaps most important, pension reform has helped to end the class conflict that so convulsed Chile during the 1970s. "We recognized that when workers do not have property, they are vulnerable to demagogues," recalls José Piñera, who as minister of labor was the architect of Chile's pension privatization. (Full disclosure: Piñera and I are colleagues at the Cato Institute.) "The key insight of our pension reform was that, by allowing workers to acquire property in the form of financial capital, we could strengthen their commitment to the free market by aligning their interests with the health of the economy."
Piñera and his fellow reformers turned the tables on Marx: Workers became owners of the means of production, but through the expansion of the market system rather than its overthrow. In the process, Marxist-style collectivism lost much of its appeal. "Since our reforms we have had three center-left governments," observes Piñera, "and none of them has touched the core of our major free-market policies. And one reason for this is that nobody dares to threaten the value of the workers' retirement accounts."
A host of other countries have followed Chile's example in recent years. Argentina, Australia, Bolivia, Colombia, El Salvador, Hungary, Kazakhstan, Mexico, Peru, Poland, Sweden, Switzerland, the United Kingdom, and Uruguay have all instituted mandatory private savings plans that, to a greater or lesser extent, supplant the old pay-as-you-go approach. In most of these countries the new private system only partially replaces the pay-as-you-go system. In Hungary, for example, workers contribute 6 percent to private accounts while a 24 percent payroll tax continues to support the old system. In Sweden, a 16 percent payroll tax goes to maintain the old system, while 2.5 percent of a worker's salary now goes into a private account. The Bush administration is now considering a similar partial privatization for the United States.
Partial reforms are still only a partial solution. Private accounts will help to generate higher returns for future generations of retirees, but those generations will still be saddled with a dysfunctional, if somewhat shrunken, pay-as-you-go Ponzi scheme. The longer thorough reform is delayed, the more unfavorable the demographic situation becomes and the more onerous the burden of maintaining the old system grows.
It must be acknowledged, though, that the path toward full-scale privatization—with government-provided benefits limited to ensuring some guaranteed minimum—is arduous and lined with hazards. The most obvious hurdle to overcome is financing the transition from the old to the new system. Phasing out the traditional system does not create any new costs; on the contrary, by preventing future unfunded liabilities from accruing, reform contains and ultimately cuts off the flow of red ink. But there is a temporary cash-flow problem: Benefits under the old system must be paid out to current retirees, but the contributions that formerly funded those benefits are now being directed into private accounts. Other sources of funds must be tapped to pay off the remaining liabilities—which can be staggeringly large.
The Chilean experience shows that this obstacle, though daunting, is not insuperable. The implicit debt of its pay-as-you-go system had grown to more than 100 percent of GDP. But shifting most current workers out of the old system quickly slashed that figure. To deal with what remained, Chile used a variety of methods. It continued a portion of the payroll tax for a number of years, sold off state-owned enterprises to raise revenue, cut other government expenditures, issued new government bonds, and painlessly reaped the benefits of the additional tax revenues that came from a faster-growing economy. Together, these measures have sufficed to cover the transition's financing requirements, which have ranged from 1.4 to 4.4 percent of GDP per year.
Other risks lurk in designing a new system. While some measure of prudential regulation may be necessary, especially in countries with underdeveloped financial markets, excessive government meddling in how private accounts are to be invested can reduce returns for savers—possibly catastrophically. Chile, for example, still requires AFPs to guarantee a minimum return relative to other AFPs. Consequently there is little difference among the portfolios of the various AFPs, which denies savers the opportunity to choose different mixes of risk and return. Chile also rigidly restricts the commissions AFPs may charge, preventing discounts for savers who maintain a specific balance or keep an account for a specified period. Thus prevented from competing effectively on product or price, the AFPs attempt to lure customers through marketing ploys—just as American banks in the days of interest rate controls offered toasters for new accounts. Such empty competition drives up administrative costs.
In Mexico, meanwhile, fund managers are required to invest a minimum of 65 percent of assets in government securities—a grievously wrongheaded mandate that risks turning the system into a dumping ground for government debt. A fiscal crisis, not a remote contingency in Mexico by any means, could wipe out the retirement savings of a generation. The Mexican system also prohibits investments in equities or any foreign assets. Such restrictions stifle the new sophistication in financial markets that is an enormous side benefit of privatization, as well as preventing prudent portfolio diversification. In poorer countries with underdeveloped financial markets, it is especially important that savers be allowed to invest in high-quality foreign assets.
Whether in the form of regulation or market participation, overweening government control over investments in a "privatized" system merely substitutes one form of hyper-centralization for another. Indeed, for decades many developing countries have pursued this variation on top-down control in a pure and explicit form. Rather than adopting pay-as-you-go systems, these countries, including India, Malaysia, Singapore, and a number of African nations, created retirement plans in which there is a single retirement fund or "provident fund" and the government manages all the investment assets.
These provident fund systems do avoid the perverse Ponzi-scheme dynamics of conventionally collectivized social insurance—but only to fall prey to other dysfunctions. Specifically, the government as investment-fund monopolist is immune from competitive pressure to earn a decent return; consequently, it is not constrained from investing in ways that are politically advantageous but economically dubious. Unsurprisingly, the performance of provident fund systems has ranged from lackluster to disastrous. In the latter category, Kenya's system averaged a negative 3.8 percent rate of return during the 1980s, while returns in Zambia averaged negative 23.4 percent.
Social insurance is not menaced by excessive reliance on markets. On the contrary, it is the systematic suppression of market principles that has put the retirement security of millions in jeopardy. Undoing past mistakes will require formidable resolve, as will resisting the continuing temptation to attempt control from above. But if the resolve can be found, the proper direction is clear: For the sake of retirement security, for the sake of true social cohesion, the growing movement for market-based reform in social insurance is the one best hope there is.
In 1913, merchandise trade as a percentage of gross output was about 12 percent for the industrialized countries. They did not match that level of export performance again until the 1970s. The volume of international capital flows relative to total output reached heights in the early 20th century that have not been approached since. In that earlier time, capital flows out of Great Britain rose as high as 9 percent of gross domestic product; by contrast, the seemingly staggering current account surpluses of Japan and Germany in the 1980s never surpassed 5 percent of GDP. It is fair to say that much of the growth of the international economy since World War II has simply recapitulated the achievements of the era prior to World War I.
The first world economy was made possible by the staggering technological breakthroughs of the Industrial Revolution. Most obviously, new forms of transportation toppled the age-old tyranny of distance. For inland transport, the significance of the railroad is difficult to overestimate. In 1830, a journey from New York to Chicago took three weeks; just one generation later, in 1857, that same trip took only two days. The second half of the 19th century witnessed an explosion of railroad construction around the world. Great Britain's railway mileage more than tripled, from 6,621 miles in 1850 to 23,387 miles in 1910; over the same period, mileage in Germany grew nearly tenfold, from 3,637 miles to 36,152 miles; the United States, astonishingly, experienced a nearly thirtyfold increase, from 9,021 miles in 1850 to 249,902 miles in 1910. The railroads knitted countries into truly integrated national markets and facilitated the penetration of foreign goods from port cities into the interior.
Meanwhile, another technology was uniting those national markets into a global whole. Although the steamship was first developed early in the 19th century, further innovations in subsequent decades—the screw propeller, steel hulls, the compound engine—transformed what had been primarily a river vessel into cheap and reliable ocean transport. The effect on freight costs was nothing short of spectacular: An index of freight rates along Atlantic export routes fell by 70 percent in real terms between 1840 and 1910.
The Industrial Revolution's burst of technological creativity thus demolished the natural barriers to trade posed by geography. At the same time, it created entirely new possibilities for beneficial international exchange. In the core of the new global economy, the factories of the North Atlantic industrializing countries pumped out an ever-widening stream of manufactured goods desired around the world. Those factories, in turn, relied on access to cheap natural resources and raw materials. And in the less advanced periphery of Asia, Africa, and Latin America, new technologies allowed those natural resources and raw materials to be grown or extracted more cheaply than ever before.
So arose the initial grand bargain on which the first global division of labor was based: The core specialized in manufacturing, while the periphery specialized in primary products. For Great Britain, the first industrial power, manufactured goods constituted roughly three-quarters of its exports. The sprawling United States, on the other hand, straddled both core and periphery. The urbanized East took industrialization to a new level and carried America past Great Britain in economic development. The West, meanwhile, followed the path of other temperate "regions of European settlement" (Canada, Australia, New Zealand, and Argentina) and specialized in the production of grains, meats, leather, wool, and other high-value agricultural products. Finally, the South roughly followed the tropical pattern of development, focusing on such products as rubber, coffee, cotton, sugar, vegetable oil, and other low-value goods.
While far-flung foreign trade is as old as human history, this was something new. No longer was such commerce a marginal matter, limited to a few high-value luxuries. Now, for the first time, specialization of production on a worldwide scale was a central element of economic life in all the countries that participated. Between 1870 and 1913, exports as a percentage of national income doubled in India and Indonesia, and more than tripled in Thailand and China. Japan's transformation was especially dramatic. After Commodore Perry's black ships arrived in 1853, Japan turned from almost total isolation to free trade. In a mere 15 years, its export share multiplied an astonishing 70 times, to 7 percent of gross domestic output.
But it was not to last. The global economic order that arose and flourished in the waning years of the 19th century was swept away by the great catastrophes of the 20th: world wars, the Great Depression, and totalitarian dictatorships. Only in the past couple of decades has a truly global division of labor been able to reemerge.
What happened? Why did the first episode of globalization end so badly? These questions are more than mere historical curiosities. They have a vital bearing on the controversies that swirl around globalization today. According to contemporary critics of global trade, the sad fate of that earlier epoch reveals the inherent dangers of unregulated markets. Then as now, they argue, economic forces had slipped all proper constraints; then as now, the ideology of laissez-faire ran roughshod over social needs. The consequences in the past were tragic: The excesses of unchecked markets, with their brutality and volatility, ultimately triggered the catastrophes of totalitarianism, depression, and war. Today, the resurgence of utopian faith in markets threatens a new cycle of disasters.
William Greider adopts this line in his book One World, Ready or Not (1997). In particular, Greider cites the historical analysis of Karl Polanyi, author of the 1944 book The Great Transformation. Polanyi argued that the catastrophes of his time could ultimately be traced back to the evils of laissez-faire: "The origins of the cataclysm lay in the utopian endeavor of economic liberalism to set up a self-regulating market system." Greider contends that we are once again on the road to ruin: "Today, there is the same widespread conviction that the marketplace can sort out large public problems for us far better than any mere mortals could. This faith has attained almost religious certitude, at least among some governing elites, but, as Polanyi explained, it is the ideology that led the early twentieth century into the massive suffering of global depression and the rise of violent fascism."
Greider is by no means alone in resurrecting Polanyi: He has emerged in recent years as a kind of patron saint of globalization's critics. George Soros notes his intellectual debt in the acknowledgments at the beginning of The Crisis of Global Capitalism. Dani Rodrik, the Harvard University economist who wrote Has Globalization Gone Too Far?, refers to him frequently. John Gray, a professor at the London School of Economics who wrote False Dawn: The Delusions of Global Capitalism, titled his first chapter "From the Great Transformation to the Global Free Market."
These arguments are an almost perfect inversion of the truth. The tragedies of the 20th century stemmed, not from an over-reliance on markets, but from a pervasive loss of faith in them. In the wake of the Industrial Revolution and the arrival of mechanized mass production, a powerful new idea began to take hold and remake the world in its image. That idea, reduced to its bare essence, was that the economic revolution of industrialization both enabled and required a revolution in social organization: the eclipse, partial or total, of markets and competition by centralized, top-down control. The intellectual and political movements spawned by this idea emerged in the last quarter of the 19th century and utterly dominated the first three-quarters of the 20th. This 100-year historical episode, though composed of diverse and widely varying elements, possesses enough coherence to merit a name, and the one I suggest is the Industrial Counterrevolution. (I first discussed the idea of an Industrial Counterrevolution in "Big Mistake" [February 1996], which focused exclusively on the American history of this global phenomenon.)
The Industrial Counterrevolution was protean, and in its many guises captured minds of almost every persuasion. It transcended the conventional left-right political spectrum: Both progressives who welcomed the social transformations wrought by industrialization and conservatives who feared them were united in their calls for a larger state with expanded powers. The Industrial Counterrevolution swept up reformers and revolutionaries, the religious and the anticlerical, social activists and big businessmen, workers and capitalists. The political forms that bore its imprint were many and varied: the welfare and regulatory state; the mixed economy of social democracy; the business-led associative state; Keynesian fine-tuning; the Galbraithean new industrial state; the developmental states of the Third World; and the totalitarian states, whether communist, fascist, or Nazi.
The name "Industrial Counterrevolution" is fitting on two levels. First, as a matter of historical development, the movements grouped together under this common heading were both inspired by and reacting against the economic and social transformations effected by industrialization. In the United States and Europe, the centralizing impulse first began to register during the 1870s, just as modern technological society was bursting onto the scene. In later-developing countries, the ideologies of centralization almost invariably supplied the matrix for modernization.
Second, in analytical terms, the common intellectual thread that runs through all of these movements—namely, the rejection or demotion of market competition in favor of top-down control—represents a direct assault on the principles of social order that gave rise to industrialization and are truest to its full promise. Of course, the partisans of the Counterrevolution thought quite the opposite: They believed that their political programs and industrialization rode together on the same great wave of history.
It is impossible to understand the collapse of the first world economy, or the rise of the present one, except in relation to the Counterrevolution's centralizing impulses. For the story of globalization and the story of the Industrial Counterrevolution are mirror images of one another: In the early decades of the 20th century, the rise of collectivism spelled the demise of the global economy; in the past couple of decades, the loss of faith in the collectivist dream has allowed globalization to resume its course.
Here I examine the first half of that cycle: the destruction of the global economy by the forces of runaway centralization. In particular, I want to focus on the critical decades leading up to World War I. For it is clear enough that the final breakdown of international economic integration during the calamitous 1930s was an extended consequence of the Great War. What is less well known is how the collectivist delusion helped to lead the world toward that awful conflict—and thus toward all the horrors that followed in its wake.
At the midpoint of the 19th century, a very different future appeared to be on the horizon. The liberal creed of cosmopolitanism, free trade, and peace promised to define the shape of things to come. As in so much else, Great Britain led the way. In the decades after Waterloo, it made gradual but significant progress in dismantling its protectionist policies. Seizing this political opening, a pair of textile manufacturers, Richard Cobden and John Bright, led their country to bolder action, organizing the Manchester-based Anti-Corn Law League into a national mass movement of middle-class urban interests against the landed elite. Their seven-year campaign achieved victory in 1846 with the repeal of the Corn Laws and the elimination of all duties on imported grains.
From its testing ground in Great Britain, free trade began to spread into continental Europe. The major breakthrough, again featuring Richard Cobden, was the Cobden-Chevalier treaty of 1860 between Great Britain and France. A flurry of European trade agreements followed. Building on its tradition of the Zollverein, a customs union of German states, the newly unified Germany steadily pursued a liberal trade policy. By the mid-1870s, average tariffs on manufactured goods had fallen to between 9 and 12 percent on the continent—compared to effective rates of 50 percent or more at the close of the Napoleonic Wars.
The liberal champions of free trade did not view their cause solely or even primarily as a commercial matter. In their view, free trade carried profound implications for the whole field of international relations. Free trade, they believed, could pave the way toward a new and modern form of international order—one that would replace the pointless and destructive dynastic struggles foisted upon the people by kings and aristocracies. Peaceful cooperation among nations, not mere economic efficiency, was the grand prize for which they strove.
Cobden outlined this larger vision in a speech in Manchester on the eve of the Corn Laws' repeal: "I believe that the physical gain will be the smallest gain to humanity from the success of this principle….I have speculated, and probably dreamt, in the dim future—ay, a thousand years hence—I have speculated on what the effect of the triumph of this principle may be….I believe that the desire and the motive for large and mighty empires; for gigantic armies and great navies—for those materials which are used for the destruction of life and the desolation of the rewards of labour—will die away; I believe that such things will cease to be necessary, or to be used when man becomes one family, and freely exchanges the fruits of his labour with his brother man."
Cobden and other Victorian free traders are often faulted for their naive faith in the healing powers of commerce. And indeed, some in that camp did fall prey to the facile assumption that major wars were no longer possible in the new global economy. But Cobden himself, as the above passage makes clear, was under no illusions as to the difficulty of subduing the powers of destruction. He saw the task as a monumental and centuries-long project.
However tempered by realism, though, the Cobdenite vision of the future was clearly optimistic. Though the challenges ahead were still daunting, the remaking of the world had begun. The sterile futility of conflict among nations was slowly but surely giving way to interdependence, peace, and prosperity—with commerce the steam-powered engine of that beneficent change.
The free traders' sunny cosmopolitanism all too quickly gave way to a very different vision of the international scene. As the Industrial Counterrevolution began to gather momentum, the prospect of a world at peace started to recede. A new prospect, dark and menacing, came in its stead to the fore—one of rival nations, rival races, pitted in fundamental and irresolvable conflict, engaged in a grim and merciless struggle for supremacy or submission. This radical and ruinous shift of perspective did not merely coincide with the spreading enthusiasm for centralization and top-down control; rather, the two developments were interconnected and mutually reinforcing.
First, the momentum of the Industrial Counterrevolution pushed inexorably toward expanding the power of the national state. This was true despite the fact that the most potent and influential of all the counterrevolutionary movements—Marxist socialism—was deeply internationalist in orientation. Marx himself was thoroughly cosmopolitan: He conceived of the coming socialist revolution and the workers' paradise it would establish as worldwide phenomena that would overwhelm dynastic, national, and racial distinctions as thoroughly as they did the historically fundamental distinctions of class. He had no interest in augmenting the strength of current states, which he condemned as tools of capitalist oppression.
Recall, however, that Marx's great contribution was a powerful theoretical and historical conception of why collectivism was inevitable. Marx had little to say as to how collectivism would actually work in practice, and he had even less control over the ultimate course of events. The worldwide proletarian uprising never came, and in the absence of that hoped-for event, the overwhelming drive toward centralization that Marx did so much to engender fastened itself upon the instrumentality at hand: the national state.
Consider, for example, the fate of the German Social Democrats, Europe's first socialist party of political significance. Their original leaders were orthodox Marxists who preached international revolution, not domestic statism. Over time, though, electoral success spoiled the Social Democrats' doctrinal purity. In the 1890s, after their stunning gains in the Reichstag precipitated Bismarck's fall and the repeal of the repressive Socialist Law, new leaders like Georg Vollmar and Eduard Bernstein pushed the party toward "revisionism," or support for gradual reform and cooperation with the existing state. The domestication of the Social Democrats culminated in August 1914, when every single party member in the Reichstag voted in favor of war credits for the Kaiser's army.
Many other emerging centralizing movements embraced an expanded national state from the outset. Edward Bellamy, American author of the utopian fantasy Looking Backward and a major influence on subsequent Progressive and New Deal intellectuals, called his philosophy "nationalism" to distinguish it from Marxist-style socialism. In Great Britain, the Fabians advocated incremental reform and a political strategy of "permeation," or working through established political parties. And in Germany, the conservative, Bismarckian "state socialists" were unabashed in their devotion to the national state. Characteristic in this regard was the economist Gustav Schmoller, who proclaimed the state to be "the most sublime ethical institution in history."
Furthermore, the growing enthusiasm for national economic planning was fundamentally at odds with the new international division of labor. After all, if centralized decision making is more efficient than markets, why allow international markets to persist? Inflows and outflows of goods and capital, if unregulated, will only disrupt the best-laid plans of the national authorities. What good is it to set minimum wages in a particular industry if the workers who are supposed to benefit then lose their jobs because of competition from cheaper foreign goods? Or what if the authorities seek to encourage downstream processing industries, but the domestic producers of the raw inputs prefer exporting them at a high price to selling them cheaply at home?
A new collectivist case for protectionism thus began to emerge. If a nation's economic life is to come under central control, that control must extend to the nation's connections with the outside world. In outlining his vision for a "nationalist" utopia, Edward Bellamy was quite clear on this point: "A nation simply does not import what its government does not think requisite for the general interest. Each nation has a bureau of foreign exchange, which manages its trading. For example, the American bureau, estimating such and such quantities of French goods necessary to America for a given year, sends the order to the French bureau, which in turn sends its order to our bureau. The same is done mutually by all the nations." George Bernard Shaw, a Fabian pamphleteer as well as a playwright, took a similar view. In Fabianism and the Fiscal Question, he wrote that if protectionism means "the deliberate interference of the State with trade" and "the subordination of commercial enterprise to national ends, Socialism has no quarrel with it." On the contrary, Shaw asserted, socialism must be considered "ultra-Protectionist." And in Germany, the state socialists waged a blistering attack on free trade as a part of their larger campaign against laissez-faire and "Manchesterism."
It is true that many partisans of centralization, especially on the Left, resisted the protectionist logic of their position. Free trade appealed to their internationalist sympathies; also, a low-tariff policy was generally associated with cheap bread and thus was widely considered favorable to the working class. (How times have changed!) The momentum of centralization, though, generally prevailed over tradition and class interests. In the end, the fortunes of collectivism and protectionism rose together. In the middle of the 19th century, enlightened opinion was almost uniformly in favor of free trade; by the end of the century, protectionism had once again become intellectually respectable.
With that renewed respectability came a significant retreat from free trade in actual practice. In Germany, the breakthrough came in 1879, with Bismarck's "iron and rye" tariff. In France, the Méline Tariff of 1892 raised duties to the equivalent of 10 to 15 percent for agricultural goods and over 25 percent for industrial products. Tariffs also climbed in Sweden, Italy, and Spain during the 1880s and 1890s. In the United States, tariff rates rose during the Civil War and stayed high for the rest of the century. They got a further boost with the McKinley Tariff of 1890. In Latin America, rates of protection ascended steadily during the final quarter of the 19th century. Tariffs in Russia were punishingly high and never came down.
The direct impact of resurgent protectionism on the new world economy should not be overestimated. Average tariff rates rose, but were still relatively modest on the eve of World War I: under 10 percent in France, Germany, and Great Britain; between 10 and 20 percent in Italy; between 20 and 30 percent in the United States; and between 20 and 40 percent in Russia and Latin America. Such non-tariff barriers as quotas or exchange controls were barely in evidence. Protectionist measures did slow the pace of globalization (and blocked it for certain regions and sectors), but did not stop it. Despite increasing obstacles, the internationalization of economic life flourished in the decades before World War I.
Nevertheless, the drift toward protectionism did contribute to a new international atmosphere of conflict and tension. In Bellamy's utopia, national planners could somehow control their imports and exports without so much as a cross word from abroad. But in reality, restrictions on trade inevitably set nations against each other. When governments interfere with their citizens' ability to do business with the citizens of other nations, they must expect such acts to be seen abroad as provocative. They are, after all, reducing the prosperity that other countries might otherwise enjoy. High tariffs in one country throttle export industries abroad; embargoes deprive other nations of needed raw materials, products, and capital. These restrictions can be matters of life and death if the dependence on foreign products or markets is great enough.
The implications of trade barriers for international relations are thus enormous. In a world of free trade, citizens of one country can exploit the benefits of a broader division of labor through peaceful commerce. But in a world where severe trade restrictions are endemic, such benefits can be attained only through warfare—through defeat of the foreign sovereignty that blocks access to the desired products or markets. Free trade makes war economically irrational; protectionism, carried far enough, makes it pay.
These grim implications were abundantly clear in the circumstances of the late 19th century. The enriching possibilities of international specialization had never been greater, and were increasing daily due to incessant technological breakthroughs. At the same time, however, countries were beginning to close their borders. While the level of protectionism was still within reasonable limits, it was widely believed that barriers would only increase with time. Making matters worse, the great powers of the core were rapidly consolidating political control over the periphery in a mad rush of imperial land-grabs. The world appeared to be fracturing into great imperial blocs, each one more or less closed off from the others. It seemed as though the countries that controlled these blocs would reign supreme; those without enough territory to combine self-sufficiency with prosperity would be doomed.
Under these conditions the Cobdenite cosmopolitan vision looked hopelessly outmoded. Expanding opportunities for a far-flung division of labor were not ushering in an age of peace; on the contrary, they were propelling nations toward inevitable and bloody conflict. What had wrought this dreadful turn of events? It was the expectation that countries would find it in their interest to close their economies to the outside world. And what created that expectation? It was the growing sense that national economic planning was the wave of the future. The drive toward centralization had thus transformed the legacy of the industrial revolution from that of world peace to one of a world at war. It is indeed fitting to call this transformation an Industrial Counterrevolution in international affairs.
The result was that collectivism and militarism became mutually reinforcing. Aggressive nationalism was needed to secure and safeguard the full blessings of collectivism; at the same time, collectivization was needed to render the nation fit for military conflict. From this basic feedback loop issued the great tragedies of dictatorship and total war.
The links that connected the dreams of central planning and the nightmares of the 20th century were forged, to a greater or lesser extent, by many of the disparate movements of the Industrial Counterrevolution. But those who pursued this fatal logic most explicitly and consistently, and to the most powerful historical effect, were the state socialists of Imperial Germany. The Bismarckian program integrated all the necessary elements: collectivism in domestic affairs, protectionism in commercial policy, and aggressive nationalism and militarism in matters of state. William Dawson, a sympathetic English observer of the German scene, distilled the essence of the new Reich into a single sentence: "As State Socialism is the protest of Collectivism against Individualism, so it is the protest of Nationality against Cosmopolitanism."
The leading theorists of state socialism, the so-called Kathedersozialisten, were fervent supporters of belligerent nationalism. Gustav Schmoller, perhaps their brightest light, was emphatic in his rejection of the Cobdenite vision. For him, the international sphere was inevitably and properly a zone of never-ending conflict: "All small and large civilized states have a natural tendency to extend their borders, to reach seas and large rivers, to acquire trading posts and colonies in other parts of the world. And there they constantly come into contact with foreign nations, with whom they must, quite frequently, fight. Economic development and national expansion, progress in trade and an enhancement of power are in most cases inextricably connected."
Adolf Wagner, another prominent voice, was even more truculent. Wagner asserted that the "decisive fact" in international relations was "the principle of power, of force, the right of power, the right of conquest." Weaker nations, he contended, would meet "the fate of all lower organisms in the Darwinian struggle for existence."
Schmoller and Wagner called upon Germany to steel itself for the coming struggle of nations. To that end, they were ardent supporters of a protectionist trade policy. Wagner, in particular, stressed the link between national security and protecting German agriculture. Dependence on foreign food supplies could be crippling in the event of war, he noted; furthermore, protectionism would preserve the large peasantry that supplied the backbone of a strong army.
The two scholars also urged an aggressive program of territorial expansion. Germany, they wrote, needed more space to ensure a high standard of living in an age of vast and autarkic empires—and to settle the country's rapidly increasing population. Schmoller called for creating a German country with 20 to 30 million inhabitants in southern Brazil. Wagner dismissed "idle pretensions like the American Monroe Doctrine" as an obstacle to German colonization. In addition to overseas adventures, Schmoller and Wagner foresaw a dominant German role in European affairs. Both expressed the view that German hegemony should extend throughout what came to be referred to in pan-German circles as Mitteleuropa.
To assume its rightful station, Germany would have to rely ultimately on its military prowess. Schmoller wrote that "the high standard of living of the English worker would be unthinkable without Great Britain's sea power," and that Germany should follow her example by building a strong navy. Wagner, for his part, called military power "the first and most important of all national, and may I add, of all economic necessities." The army, he claimed, was "a truly productive institution" because of "the connection between national might, security, honor and economic development and prosperity."
The writings of these renowned professors served as a blueprint for Germany's disastrous course toward war. They and others like them fostered the intellectual climate in which Germany's leaders made the fateful decisions that crushed liberalism domestically and heightened tensions internationally. They stoked the strident and reckless nationalism that intoxicated the German people and had them spoiling for war. They saw and made the connection between collectivism at home and belligerence abroad.
And they provoked imitators. The German example and threat served to promote collectivist domestic policies in Great Britain under the banner of "national efficiency." German influence extended in similar fashion to British attitudes about international affairs. The "national efficiency" push for social reform was inextricably connected with a newly assertive imperialism.
The connections ran in both directions. On one hand, social reforms were touted as strengthening the empire. Lord Rosebery, a Liberal imperialist and leading spokesman of the "national efficiency" cause, argued, "An Empire such as ours requires as its first condition an imperial race—a race vigorous and industrious and intrepid." But, he continued, "in the rookeries and slums which still survive, an imperial race cannot be reared." On the other hand, the empire was defended as an essential support for working-class living standards. Nobody put this case more bluntly than Joseph Chamberlain, champion of the protectionist "tariff reform" movement: "If tomorrow it were possible, as some people apparently desire, to reduce by a stroke of the pen the British Empire to the dimensions of the United Kingdom, half at least of our population would be starved."
And so in Britain, as in Germany, collectivism at home went hand-in-hand with an expansionist foreign policy. The Cobdenite vision of peaceful coexistence and non-intervention yielded to one of great empires locked in a "struggle for existence"—a phrase that Joseph Chamberlain employed repeatedly in his speeches. The British Empire—which had been acquired, in the famous phrase, in "a fit of absence of mind"—came to be seen as a prized asset, even a life-or-death necessity. And its health demanded the centralization of economic decision making. In other words, the conditions of external competition required the suppression of competition internally.
Britain, unlike Germany, did not succumb to economic nationalism. Chamberlain led a well-organized campaign to convert the empire into a vast, protectionist trading bloc, and for a time it appeared he would succeed. In the end, though, he lost the campaign for working-class support to the New Liberals, who combined imperialism and social reform with continued allegiance to free trade. The election of 1906, a sweeping victory for the Liberals, effectively squelched the tariff reformers.
Their near-success, though, was enough to stoke fears abroad that the British Empire would soon be closed to outsiders. This prospect contributed to Germany's spiraling economic nationalism and militarism, which in turn provoked an accelerating British military buildup. (The latter prompted Winston Churchill's famous quip, "The Admiralty had demanded six ships: the economists offered four and we finally compromised on eight.") As Britain and Germany armed to the teeth, it became increasingly likely that some chance event would touch off a major confrontation. On June 28, 1914, the assassination of Archduke Franz Ferdinand and his wife provided the spark.
It is customary to view World War I as a tragic accident—a senseless war about nothing in particular, or at least nothing that makes any sense to us now; a war that nobody wanted but into which all were dragged by a ruinous system of entangling alliances. It is true that the outbreak of war at that particular time did hinge on a maddening and heartbreaking sequence of contingencies. But at a deeper level, the war was no accident. It was a product of the ideas of the Industrial Counterrevolution: ideas of centralization that merged into statism, ideas of statism that merged into aggressive nationalism, ideas of nationalism that merged into plans for military conquest.
And its baleful consequences extended far beyond the awful slaughter in the trenches. The Russian Revolution and the rise of fascism were direct outgrowths of the Great War. Less obviously, so was the Great Depression. Postwar efforts to reconstitute the old international economic system, in particular the gold standard, were enacted under the badly distorted and volatile conditions of the time, leading ultimately to disastrously deflationary monetary policy in the United States and Europe. In the 1930s, the combination of economic catastrophe and predatory totalitarianism—both aftershocks of the Great War—spelled the end of the first global economy and precipitated a second global war. So ended the descent into fire and chaos that began with the guns of August.
When critics of global trade claim that the phenomenon's first, failed episode should be seen as a warning against reckless faith in markets, they are standing history on its head. In truth, the first global economy was destroyed by the antithesis of economic liberalism—namely, the misbegotten dream of central planning and social engineering that inspired the Industrial Counterrevolution in all its variants. The collectivist delusion was flatly incompatible with an international division of labor: When the former was ascendant, the latter could not survive. For collectivism invariably attached itself to the ready means of the nation-state, and once so ensconced, it helped to stoke aggressive, beggar-thy-neighbor nationalism. In a world of centralized and increasingly regimented states whose interests could not help but clash, conflict was inevitable. And when war came, its terrible fury hatched new monsters: ruinous economic disruption and barbaric totalitarianism. The gossamer bonds of trust and mutuality that sustain a global marketplace had no chance against such an onslaught.
In the years after World War II, there was a partial reconstruction of the international economy, led by the United States, Western Europe, and Japan. But the continued ascendancy of the Industrial Counterrevolution made true globalization impossible. After all, the bulk of the world's population, located in the Communist bloc and the so-called Third World, lived under rabidly collectivist regimes that rejected the very idea of an international market economy. Only in the past couple of decades has the counterrevolutionary momentum exhausted itself in disillusionment and failure. And as overweening state control has receded—with the opening of China, the fall of the Soviet empire, and many Third World countries' abandonment of state-dominated models of development—market connections have been reestablished. The death of the dream of centralized control has marked the rebirth of globalization.
But the collectivist past continues to cast long shadows. The move toward more liberal policies has occurred amidst the ruins of the old order, and so has had to contend with grossly deformed conditions. The transition, as a consequence, has been wrenching and often brutally painful. And that transition is far from complete. The world economy is littered still with the wreckage of discredited systems, constraining the present and obscuring the future. Life has left the old regime, but the dead hand of its accumulated institutions, mindsets, and vested interests continues to weigh heavily upon the world. Against that dead hand—which includes, among other things, the ideologically distorted misunderstanding of globalization's past—the cause of freedom must contend for many decades to come.
At last—a sensible book about globalization. International economic integration is a phenomenon drowned in hype: Cheerleaders talk breathlessly of a world without borders, while doomsayers rage against the supposed tyranny of uncontrolled market forces. John Micklethwait and Adrian Wooldridge, two reporters for The Economist, have succeeded in cutting through both sides' bombast. In their engaging and clearheaded guide to the shrinking of the planet, they show that different parts of the globe are shrinking at very different rates and that the whole process has a long, long way to go.
However fast they have grown, international markets remain the exception rather than the rule in contemporary economic life. Analysts at the renowned consulting firm McKinsey have estimated that only about 20 percent of world output is currently subject to global competitive pressures. Even when they are allowed to operate, the forces of global competition are much more attenuated than most of us think. In that connection, Micklethwait and Wooldridge cite an eye-opening study of Canadian trade patterns. Canada and the United States boast the world's largest bilateral trading relationship. A common language and strong cultural ties link the two countries. Tariffs have been eliminated by a free trade agreement. And still, trade between Canadian provinces is 12 times greater than trade between provinces and American states, after correcting for differences in the trading units' size and distance. The bottom line: National borders still matter a great deal.
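The "correcting" in that finding deserves a word of explanation. Border-effect studies of this kind typically rest on a gravity regression: trade between two regions is modeled as a function of the partners' economic size and the distance between them, plus a dummy variable marking pairs that lie within the same country. What follows is a minimal sketch of such a regression in Python, assuming numpy is available; the numbers are invented placeholders, not the study's data, constructed so the border effect comes out near the cited 12-to-1 ratio.

```python
# A minimal sketch of a "gravity model" regression of the kind used in
# border-effect studies. All numbers are hypothetical placeholders.
import numpy as np

# Six hypothetical region pairs: economic size of each partner (logged),
# distance between them (logged), and a dummy equal to 1 when both
# regions lie in the same country.
log_gdp_i = np.array([10.2, 11.5,  9.8, 10.9, 11.1, 10.4])
log_gdp_j = np.array([10.7, 10.1, 11.2,  9.9, 10.6, 11.0])
log_dist  = np.array([ 6.2,  7.1,  6.8,  7.4,  6.5,  7.0])
same_ctry = np.array([ 1.0,  0.0,  1.0,  0.0,  1.0,  0.0])
log_trade = np.array([ 7.75, 4.7,  7.2,  4.0,  7.85, 4.7])

# Ordinary least squares: regress log(trade) on sizes, distance, and the
# same-country dummy. The dummy's coefficient captures how much more
# regions trade with compatriots, holding size and distance constant.
X = np.column_stack([np.ones_like(log_trade),
                     log_gdp_i, log_gdp_j, log_dist, same_ctry])
beta, *_ = np.linalg.lstsq(X, log_trade, rcond=None)

# Exponentiating the dummy's coefficient gives the "border effect": with
# these made-up data it works out to roughly 12 times more trade within
# a country than across the border, other things equal.
print(f"estimated border effect: {np.exp(beta[-1]):.1f}x")
```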
Despite all the loose talk about their imminent demise, the nation-states that defend those borders are still very much alive and kicking. The authors point to the experience of their native Great Britain, where from 1979 to 1997 Conservative governments strove to limit the size of the bloated public sector. In the end, all they managed was to budge government spending from 43 percent of gross domestic product down to 42 percent.
In the United States, Microsoft's travails reveal the fatuousness of the claim that governments are powerless in the face of footloose capital. "Companies are much less mobile than people believe," Micklethwait and Wooldridge remind us. "One reason why the federal government can harass Microsoft is because it knows that the computer company will not move elsewhere."
The Internet, with its contempt for distance and its resistance to top-down control, makes an appealing metaphor for the rapidly integrating world economy. But it is a deceptive one if taken too literally. Those of us under constant e-mail bombardment may think our plight a universal one, but half the world's 6 billion people have yet to place their first phone call. Two-thirds of them still survive by tilling the soil. The world economy, considered as a whole, remains more preindustrial than postindustrial. Micklethwait and Wooldridge perform a valuable service by making clear that globalization, far from an accomplished fact, is a process only just getting under way.
Though they debunk globaloney in both its triumphalist and apocalyptic variants, Micklethwait and Wooldridge are by no means neutrals in the raging globalization debate. They mount a stout defense of international markets, a defense that is all the more effective because it is not Panglossian. Globalization has not benefited everybody, they frankly acknowledge. They divide the unfortunate into three broad categories: the "has-beens," who work in declining and no longer competitive industries; the "storm damage," who are rocked by the volatility of international finance; and the "nonstarters," the desperately poor who have yet to be touched by global wealth creation.
But it is wrong, they argue, to focus only on the losers when the winners so greatly outnumber them. "Deng Xiaoping's decision to open China's economy in 1978," they recall, "helped some eight hundred million peasants more than double their real incomes in just six years, arguably the single greatest leap out of acute poverty of all time." Meanwhile, the cure for what ails most of the world's suffering billions is not less globalization but more: "The only places where the losers massively outnumber the winners are in countries, such as Cuba, that have shut the door on globalization completely."
Micklethwait and Wooldridge strengthen their brief for globalization by putting the phenomenon in historical context. It is a task that doubtless comes naturally to writers for The Economist. Their magazine, after all, was launched in 1843 to campaign for the repeal of Great Britain's protectionist Corn Laws. The success of that campaign three years later secured British commitment to free trade and thereby laid the political foundations for the world's first great episode of globalization—the dramatic expansion of trade and investment flows worldwide during the decades prior to World War I. As Micklethwait and Wooldridge note, the extent of international economic integration a century ago, though on the whole less than now, was impressive nonetheless.
But the initial burst of globalization did not last. It was destroyed by the outbreak of World War I and the ensuing calamities of totalitarianism, the Great Depression, and World War II. In the postwar era international trade gradually resumed and expanded, but a truly global economy remained an impossibility: The communist nations sealed themselves off from international markets, as did much of the Third World. It is only in the past couple of decades—with the opening of China, the fall of the Soviet empire, and the abandonment by many developing countries of isolationist "import substitution" policies—that a global division of labor has reasserted itself.
Micklethwait and Wooldridge cleverly encapsulate what they call "the fall and rise of globalization" by reviewing the twists and turns of John Maynard Keynes' posture toward the international economy. Keynes burst onto the scene in 1919 with The Economic Consequences of the Peace, in which he rhapsodized about the prewar international order and warned (correctly) that the draconian provisions of the Versailles treaty were antithetical to the reestablishment of that order. By 1933, Keynes' faith in the possibility of a stable, peaceful international system was so badly shaken that he called for a turn toward "national self-sufficiency." "I sympathise," he wrote, "with those who would minimise rather than maximise economic entanglements between nations."
Yet by 1944 Keynes' faith was sufficiently restored that he played a leading role in creating the Bretton Woods institutions (the International Monetary Fund and the World Bank, soon supplemented by the General Agreement on Tariffs and Trade). Those bodies, for all their flaws, provided a framework for restoring "economic entanglements," at least among the nations of what came to be known as the Free World. What we call globalization today has resulted in large part from the collapse of the communist and Third World alternatives to that Free World international order.
This historical background puts the world economy in a very different light than the one that colors most people's understanding. Globalization is commonly portrayed, by friends and foes alike, as a process whereby market forces—turbocharged by the microchip and the Internet—inexorably bend weakened governments to their will. But until relatively recently, most people in the world lived under governments that flatly rejected the verdicts of the marketplace. Why do they now pay attention? Yes, new information and communications technologies allow markets to operate more effectively, but that hardly matters when governments ban markets from operating at all. Why did many of them stop doing so?
The fact is that alternatives to markets—namely, central planning in various guises—were tried and found wanting. The common contention that markets are causing the retreat of the state is thus less true than the reverse: The collapse of statist policies has allowed market relationships to be restored. The enemies of globalization who rail against supposedly unchecked markets need to be reminded why those checks are being slowly but steadily removed: They were an unmitigated disaster for the billions of people subject to them.
To their credit, Micklethwait and Wooldridge do not restrict their defense of globalization to narrow economic grounds. "Arguing that globalization is, on balance, not a bad thing and showing that it has generally enriched the world economically is certainly valuable," they write. "But the same could also be said for the lavatory or the lemon squeezer." They go beyond dollars and cents to make the case that globalization expands human liberty.
First, the authors trace the connections between economic openness and broader freedoms. In one evocative example, they recount how British capital controls during the 1960s imposed a travel allowance that severely restricted citizens' ability to travel abroad—an intrusion on personal liberty that would be considered beyond the pale today. "It is not coincidental that the pace of globalization has picked up with the spread of democratic rights," they note; "the two are symbiotic."
Micklethwait and Wooldridge then go further to argue that globalization, by overcoming the "tyranny of place," deepens and enriches the exercise of individual freedom. Here they turn the tables nicely on communitarians like John Gray who bemoan globalization's assault on "the blessings of a settled identity." "John Gray himself," they respond, "happily abandoned the Newcastle working class into which he was born for the metropolitan intelligentsia. One of the many benefits of globalization is that it increases the number of people who can exercise Gray's privilege of fashioning his own identity."
The topics covered in this fine book range far beyond those I have touched on in this review. Among other highlights are intelligent discussions of the "failure of global government," the spread of American popular culture, and the currently modish anti-globalization backlash. If the book has a weakness, it is that the authors fail to bring the wide variety of subjects they address within any overarching analytical framework. They sound some general themes but do not sustain them. As a consequence the book reads more like a series of set pieces than an integrated whole. But for someone interested in a smart and often witty overview of globalization in all its nebulous dimensions, A Future Perfect makes for excellent one-stop shopping.
Contributing Editor Brink Lindsey (blindsey@cato.org) is the director of the Center for Trade Policy Studies at the Cato Institute.
When the United States entered the WTO back in 1995 at the close of the Uruguay Round of trade talks, a time bomb was slipped into the implementing legislation. Section 125 of the Uruguay Round Agreements Act allows any member of Congress to propose a joint resolution this year to withdraw authorization for U.S. membership in the WTO. Such a resolution would then be considered under special, expedited procedures. While there's almost no chance that the measure could really be enacted into law, it's conceivable that the House of Representatives might pass it. Were that to happen, it would deal another serious blow to the WTO's already battered prestige.
The sad fact is that, in this country at least, the WTO has become a giant, flashing "Kick Me" sign affixed to the free trade cause. In light of that fact, it's worth remembering why creating the WTO ever made sense. It wasn't because of any mercantilist nonsense about "fair trade." As any Economics 101 textbook will tell you, we benefit from opening our own markets regardless of what other countries do. And it certainly wasn't because of any woolly-headed notion that a world economy needs a world government. That's the last thing it needs.
No, the only good reason for having a body like the WTO is to make it politically easier for our own government, and for governments abroad, to reduce restrictions on trade and investment flows. International trade agreements facilitate liberalization by adding the sugar of improved market access abroad to the political medicine of increased exposure to foreign competition at home. As a result, exporting interests eager to penetrate foreign markets are induced to lobby for the free trade cause. And once barriers have fallen, it's harder to backtrack toward renewed protectionism when doing so violates an international obligation. To cite just one recent example, Congress last year voted down import quotas on foreign steel in large part because of an unwillingness to flout the WTO's ban on such restrictions.
But if the WTO is supposed to reduce and deflect protectionist pressures, it clearly isn't working. On the contrary, it is galvanizing what would otherwise be vague and unfocused anxiety about globalization into an energetic and potent political movement. While the protesters in Seattle may not represent mainstream American public opinion, any cause that can get tens of thousands of people into the streets can't be dismissed lightly. The fact is that the WTO has boomeranged on free-traders.
The fault does not lie with the WTO. In the five years of its existence, it has served as a forum for important new agreements on information technology, telecommunications, and financial services. And its dispute settlement mechanism has performed remarkably well; it has lent a credibility to market-opening rules that never existed in the old days of the toothless GATT.
But the WTO needs help. And the supporters of trade liberalization, in politics and in business, have failed to give it. They have failed to create the domestic political conditions that are necessary if the WTO is to operate effectively. In fact, they have actively contributed to a political culture in which the WTO is a natural whipping boy. Free-traders, through a botched political strategy, have turned what should be an asset into a serious liability.
Here's the nub of the problem. The pro-trade camp in this country has tried to sell free trade generally, and the WTO in particular, on the grounds that free trade in other countries is a good idea. When other countries drop their trade barriers, American companies export more, and consequently create more export-related jobs. All true enough, but what free-traders fail to talk about, and their silence is deafening, is that free trade here at home is a good idea. It's widely believed among free-traders that talking about open U.S. markets amounts to leading with your chin, and so the less said on that subject, the better. (See "Fast-Track Impasse," February 1999.)
But by defending free trade entirely on the basis of what other countries do, supporters of open markets set themselves up for a fall. The fact is that the United States is much more open than most countries around the world, which is a strength, not a weakness. If, however, the WTO is to be justified solely on the benefits of opening markets abroad, then it always looks like we've gotten the short end of the stick. When the benefits of our own open markets are ignored, the argument that we are giving up our rights to faceless bureaucrats in Geneva begins to look plausible.
Meanwhile, by remaining silent in the face of anti-trade claims about job losses and a "race to the bottom," free-traders allow misplaced fears about globalization to spread by default. Trumpeting the prospect of increased exports by U.S. multinationals simply isn't responsive to those fears; indeed, it actually plays to suspicions that the WTO is a front for big corporations that seek to profit at the expense of ordinary Americans.
Over the past couple of decades, the U.S. economy has experienced a sharp surge in the intensity of competition. Much of this surge has had nothing to do with globalization. Deregulation, disinflation, and the revolution in information and communications technologies have all contributed. But our increasing economic ties with the rest of the world have been a significant factor. And while the increase in competition (from both domestic and foreign quarters) has been highly beneficial, it has also raised a lot of people's blood pressure. As a result, Americans are highly susceptible to demagoguery pinning the blame for all the recent tumult on foreigners, and in particular on an obscure little bureaucracy like the WTO.
If the WTO is to facilitate trade liberalization rather than complicate it, an overhaul in political strategy is needed. In addition to their existing arguments, free-traders will have to make abundantly clear that open markets here at home serve the American national economic interest, even when other countries pursue less enlightened policies. Along those lines, they will have to argue, forthrightly and unapologetically, that a major benefit of WTO membership is the assistance it lends to reducing and holding down U.S. trade barriers against foreign competition. And then U.S. trade policy makers will actually have to walk the walk: They'll have to put American protectionist policies on the negotiating table, not defend them as though antidumping laws and textile quotas and tariffs were the crown jewels (as the Clinton administration did in Seattle).
If this about-face is made, free-traders will be in a strong position to respond to the WTO bashers. The WTO is being lambasted for its inattention to human rights, labor conditions, and environmental standards in developing countries. But once it is clear that our primary reason for joining the WTO is to open our own markets and keep them open, why on earth would we want to invent new reasons for closing them? We shouldn't rob ourselves of the benefits of openness simply because other countries are poorer or more poorly governed than ours. And as to the argument that trade sanctions are needed to help people in poor countries, how exactly do we help them by crushing their livelihoods? Burning the village in order to save it went out of fashion in Vietnam.
If free-traders don't change course, though, the WTO's usefulness will be seriously undermined, and perhaps even extinguished. Trade negotiations, rather than a pragmatic tool for speeding up liberalization, will become a quagmire of proliferating excuses for retarding and reversing it. If that happens, free-traders will have only themselves to blame.
In False Dawn, John Gray attempts to attack global capitalism at its intellectual roots. In other words, he portrays the worldwide spread of markets as the manifestation of deeply flawed ideas about how the world works. The attack, a sloppy jumble of internal contradictions and factual distortions, fails spectacularly. Nevertheless, the book does achieve something: It articulates, quite boldly and with rhetorical verve, a relatively sophisticated version of reactionary globalphobia. It's not a pretty sight, but it merits our attention all the same.
Gray is a professor at the London School of Economics and a fairly prominent public intellectual in Britain. Like America's Pat Buchanan, Gray opposes globalization from the right; also like Buchanan, Gray is a repentant ex-free-trader. Gray's intellectual about-face, though, goes far beyond international economics. He is a former classical liberal whose earlier books include intelligent and admiring analyses of J.S. Mill and F.A. Hayek. Now he rejects not just free trade, not just liberalism, but the whole "Enlightenment project," or at least his caricature thereof. (In The Future and Its Enemies, Virginia Postrel identifies Gray as a leading voice of what she calls "reactionary stasis.")
Indeed, at the bottom of Gray's hostility to the world economy is its supposed Enlightenment pedigree. "A single global market," he writes, "is the Enlightenment's project of a universal civilization in what is likely to be its final form." In an invidious and oft-repeated comparison, he portrays global capitalism and the now-defunct ideal of collectivism as two sides of the same rationalist coin: "Even though a global free market cannot be reconciled with any kind of planned economy, what these Utopias have in common is more fundamental than their differences. In their cult of reason and efficiency, their ignorance of history and their contempt for the ways of life they consign to poverty or extinction, they embody the same rationalist hubris and cultural imperialism that have marked the central traditions of Enlightenment thinking throughout its history."
Gray does not dispute (at least not consistently) that, unlike socialism, free markets deliver the goods. "The argument against unrestricted global freedom in trade and capital movements," he concedes, "is not primarily an economic one. It is, rather, that the economy should serve the needs of society, not society the imperatives of the market." In particular, Gray argues that free markets undermine the "needs of society" by fomenting incessant and unsettling change. "The permanent revolution of the free market denies any authority to the past," he writes. "It nullifies precedent, it snaps the threads of memory and scatters local knowledge. By privileging individual choice over any common good it tends to make relationships revocable and provisional."
At this point Gray sounds like a full-fledged neo-Luddite, rejecting the rat race of economic and technological progress in favor of some lost bucolic wonderland of cheerful, ruddy peasants and a wise and kindly nobility. But Gray's views are more complicated, and less coherent, than they first appear. Gray distinguishes between the "global free market," a utopian fantasy he harshly condemns, and globalization more generally, whose inevitability he recognizes and accepts.
"A global single market is very much a late-twentieth-century political project," he argues. "It is good to remind ourselves of this, and to make an important distinction. This political project is far more transient than the globalization of economic and cultural life that began in Europe in the early modern period from the fifteenth century onwards, and is set to advance for centuries. For humankind at the close of the modern period globalization is an historical fate. Its basic mechanism is the swift and inexorable spawning of new technologies throughout the world. That technology-driven modernization of the world's economic life will go ahead regardless of the fate of a worldwide free market."
It appears, then, that John Gray is highly selective in his railings against the "Enlightenment project." The "universal civilization" of science and technology, after all, has its own "cult of reason and efficiency," heaps "contempt" on traditional superstitions and folkways, and spreads its own "cultural imperialism." Even more so than do free markets, the "permanent revolution" of scientific and technological advance "denies any authority to the past," "nullifies precedent," and "snaps the threads of memory." Yet while free markets are dismissed as a dangerous pipe dream, technological progress is a "historical fate."
The muddle gets deeper. Gray explicitly acknowledges that free markets and technological developments are pushing the world in the same direction. In fact, he actually argues that technology's push is the ultimately stronger and decisive one, writing, "The dislocations of social and economic life today are not caused solely by free markets. Ultimately they arise from the banalization of technology. Technological innovations made in advanced western countries are soon copied everywhere. Even without free-market policies the managed economies of the post-war period could not have survived–technological advance would have made them unsustainable."
Here, then, is Gray's argument so far: The current worldwide displacement of state control by markets is driven by the same kind of Enlightenment-inspired rationalist monomania that gave us the Soviet Union. This ideological campaign should be condemned. Yet this condemnation should not extend to the worldwide displacement of state control by markets that is driven by another bit of Enlightenment-inspired rationalism, namely the global triumph of Western science and technology. And by the way, this latter technology-driven phenomenon is much more potent than the former ideology-driven one, and indeed is so powerful that statism would be giving way to markets even if free market ideology did not exist.
Got that?
It's the incredible shrinking thesis. It poses as a radical, Enlightenment-bashing jeremiad, but it's just a pose. Gray shrinks from the full implications of his argument, and then proceeds to cut the legs out from under it with understated but devastating qualifications. In the end, there's not much left.
This kind of self-contradiction occurs again and again in Gray's book. For example, Gray blasts the Thatcherite deregulation of the British labor market, attributing to it the following baleful consequences (among others): "The bourgeois institution of the career or vocation ceased to be a viable option for an increasing number of workers. Many low-skill workers earned less than the minimum needed to support a family. The diseases of poverty–TB, rickets, and others–returned."
Yet then Gray turns around and admits: "Margaret Thatcher understood that British corporatism–the triangular coordination of economic policy by government, employers, and trade unions–had become an engine of industrial conflict and strife over the distribution of the national income rather than an instrument of wealth creation or a guarantor of social cohesion."
Likewise, Gray condemns New Zealand's neoliberal reforms for undermining "social cohesion," but then concedes, "By the early 1980s a major shift in policy may have been unavoidable. It was not unreasonable to fear that New Zealand might slip from its status as a First World economy." For both Britain and New Zealand, Gray simultaneously trashes liberalization and the mess that preceded it. What should have been done? On that crucial point Gray is silent.
Meanwhile, Gray argues, in familiar globalphobic fashion, that the world economy today follows a kind of "Gresham's Law," in which "bad" capitalism drives out "good." "Sovereign states are waging a war of competitive deregulation, forced on them by the global free market," he writes. "A mechanism of downwards harmonization is already in operation." With equal vigor, though, Gray argues the exact opposite: "A global free market presupposes that economic modernization means the same thing everywhere….The real history of our time is nearer the opposite. Economic modernization does not replicate the American free market system throughout the world. It works against the free market. It spawns indigenous types of capitalism that owe little to any western model."
To cite a final example, consider Gray's analysis of East Asian capitalism. It is clear that he regards it as far superior to the American alternative, asserting, "In the contest between the American free market and the guided capitalisms of East Asia it is the free market that belongs to the past." Oops: no doubt this passage was written before the full dimensions of the Asian economic collapse had become apparent (the book was originally published in Britain in the spring of 1998). But Gray doesn't wait for events to refute him; he blithely refutes himself.
Thus, when he considers Asia's leading economy, Japan, he admits that it has been a "no-growth economy" for the better part of a decade. Indeed, he even tries to make a virtue of the fact by saying, "Perhaps in Japan's uniquely mature industrial society the collapse of economic growth could be an opportunity to reconsider the desirability of restarting it." Gray can't make up his mind whether Asia is outgrowing or out-stagnating us.
Is there anything solid in all this murk? Well, yes–Gray is consistently unambiguous in his cartoonish, over-the-top anti-Americanism. Through all his zigs and zags, he is steadfast in his loathing of American-style capitalism as it has emerged during the past couple of decades.
Gray argues that in the United States during the 1980s and '90s, "market utopianism" has displaced "Rooseveltian liberalism" and "gone far towards establishing itself as the unofficial American civil religion." Hyperbole aside, Gray is correct that American capitalism has undergone a fundamental shift in recent years. What is today called the "American model"–i.e., the absence of price and entry regulation, flexible labor markets, return-driven (as opposed to relationship-based) allocation of capital–is indeed a relatively recent phenomenon. In Gray's eyes, its advent is a catastrophe of the first order.
Predictably, he repeats the usual canards about rising economic inequality. Here is my favorite line from the book on that score: "The middle classes [in the United States] are rediscovering the condition of assetless economic insecurity that afflicted the nineteenth-century proletariat." That Gray could make such a statement–when an all-time record 66 percent of American families own their own homes, and an all-time record 52 percent of Americans own stocks–is a telling indicator of his regard for the facts.
For Gray, though, the supposed economic failings of American-style capitalism are only the beginning. "In the United States," he warns, "free markets have contributed to social breakdown on a scale unknown in any other developed country. Families are weaker in America than in any other country. At the same time, social order has been propped up by a policy of mass incarceration….Free markets, the desolation of families and communities and the use of the sanctions of criminal law as a last recourse against social collapse go in tandem."
Let us stipulate that broken homes and bulging prisons are serious social pathologies. But to blame these ills on the economic deregulation of the past 20 years is nothing short of ludicrous. After all, divorce, illegitimacy, and crime rates began soaring in the 1960s, when "Rooseveltian liberalism" was still in full flower.
As an alternative to the "global free market," Gray upholds Isaiah Berlin's vision of "a world which is a reasonably peaceful coat of many colours, each portion of which develops its own distinct cultural identity and is tolerant of others." But it is the proponents of economic liberalization, not Gray and his reactionary confreres, who are the true partisans of Berlin's pluralist and tolerant vision. For at the core of Gray's snarling rejection of free markets is intolerance–intolerance of the millions upon millions of individual choices that make up the marketplace.
John Gray disapproves of free markets on the ground that they give short shrift to "social cohesion." Yes, stability and belonging and tradition are certainly "vital human needs." But so are freedom, experimentation, and creativity. Gray regards these as dangerous and supports coercive policies that would suppress them. Supporters of liberalization, on the other hand, celebrate the dynamic virtues, while recognizing that they entail tradeoffs. In the liberal system, these tradeoffs are made by individuals according to their own individual and subjective preferences. Which approach is more likely to produce a vibrant "coat of many colours," and which a dull gray jacket of stagnation and repression?
Contributing Editor Brink Lindsey (blindsey@cato.org) is director of the Cato Institute's Center for Trade Policy Studies.
Without fast track, U.S. trade policy is dead in the water. Current policy consists of negotiating agreements in which we swap reductions in trade barriers with other countries. Our trading partners, though, won't negotiate seriously as long as they fear that any deals could wind up rewritten on Capitol Hill. Fast track bridges that confidence gap by requiring Congress to vote up or down on trade agreements without amendments and within a specified time period.
But fast track has failed twice in Congress. In November 1997, the bill was yanked at the last minute because it faced certain defeat in the House; last September, supporters forced a House vote and lost ugly, 243 to 180. In the first go-round, President Clinton actively supported the bill but could convince only some 20 percent of Democrats to go along. Republican leaders cooked up the second fiasco to embarrass Democrats prior to last year's midterm elections, but a third of their own caucus broke ranks and voted no.
Such failures are especially depressing given their timing. During these years of paralysis, conditions could hardly have been more favorable for liberalizing initiatives: The economy has been booming, with unemployment and inflation at their combined lowest in decades; since the "competitiveness" scares of the 1980s, major U.S. industries have staged dramatic comebacks. If free traders couldn't prevail under these circumstances, when could they?
Not, in all likelihood, for the foreseeable future. As economic crises grip Asia, Russia, and Latin America, and as prospects for continued growth at home appear uncertain, a window of opportunity may have closed. Free traders must face the facts: They blew it.
Even worse than the current predicament is the likeliest "solution" to it. Since fast track authority expired at the conclusion of the Uruguay Round of trade talks in 1994, reauthorization has been snagged on the question of whether labor and environmental issues belong on the trade agenda. Most Democrats have refused to support new trade negotiations unless these issues are on the table; they believe that international rules on labor rights and environmental protection are necessary to prevent economic globalization from prompting a woeful "race to the bottom." Most Republicans, meanwhile, steadfastly oppose such international rules, and many in the GOP would regard any trade agreement that includes them as worse than no agreement at all.
The two failed fast track bills were relatively "clean"–that is, they excluded labor and environmental agreements from the scope of fast track procedures. Thus, the bills were designed to appeal to a center-right political coalition. Since that coalition has been unable to muster a majority, momentum is building for a move to the center-left. That means repackaging fast track to put labor and environmental issues squarely on the trade negotiating agenda.
It's unclear whether decorating fast track with "blue" and "green" trim would gain more Democratic supporters than it lost Republicans. What is clear, though, is that doing so would mark a radical departure from free trade principles. For the first time, the stated goal of trade negotiations would be to increase rather than decrease government intervention in trade and investment flows.
What a choice for free traders: futility or apostasy. On the bright side, reaching a dead end means you no longer have to wonder whether you're on the wrong road. For free traders, now is a time of clarity: The only viable option is to strike out in a new direction.
The way out of the impasse starts with a proposition that, once stated plainly, seems embarrassingly obvious: The prospects for opening markets here and abroad would brighten considerably if more Americans believed that open markets here are a good idea. Unfortunately, free traders have not been making the case for free trade at home. On the contrary, they have steadfastly avoided any head-on confrontations with protectionist forces. Instead, they have sought to hold and gain ground by alternately diverting and appeasing those forces. This strategy is no longer working, and must be abandoned in favor of a more principled approach.
Before free traders can figure out their next move, they need to understand how they wound up in the present mess. In a time of unrivaled prosperity, what has made trade liberalization so bitterly controversial?
The answer lies in an emerging nervous disorder known as "globalphobia." Many Americans are deeply skeptical of the much-hyped global economy and its effects on the U.S. economy. According to a Business Week/Harris poll conducted in September 1997, 56 percent of Americans believe that expanded trade decreases the number of U.S. jobs, and only 17 percent believe that it increases wages.
The main focus of public anxiety is the supposed threat to American prosperity posed by poor but industrializing countries. Over the past couple of decades, a succession of events–the opening of China, the collapse of the Soviet Union, the abandonment by many developing countries of autarkic import-substitution policies–has added hundreds of millions of new participants to the global division of labor. While new technologies that increase the world's productive capacity–personal computers, for instance–are hailed as economic breakthroughs, an increase that comes in the form of human capital strikes many people as a menace. The fear is that Americans cannot compete with the low wages of "emerging market" countries, and that a kind of living-standards arbitrage will drag us inexorably down to their level.
Globalphobia afflicts both the left and right. On the right, Pat Buchanan leads the charge. In The Great Betrayal, he decries the increasing economic ties between the First and Third Worlds: "The global hiring hall is the greatest buyer's market in history for human labor. It puts American wage earners into direct competition for production jobs with hundreds of millions of workers all over the world." And on the left, William Greider warns in One World: Ready or Not of a "global overabundance of cheaper labor." Greider shares with Buchanan a zero-sum vision of international commerce: "The history of industrial development has taught societies everywhere to think of the economic order as a ladder….The new dynamic of globalization plants a different metaphor in people's minds–a seesaw–in which some people must fall in order that others may rise."
An analysis of 1997 congressional voting patterns by Robert Baldwin and Christopher Magee for the Institute for International Economics shows a clear connection between rising globalphobia and fast track's failure. According to their study, the higher the employment in a member's district in industries for which imports exceed exports, the more likely that member was to oppose fast track. The likelihood of an anti-fast track stance was also highly correlated with the percentage of workers in a member's district with less than a high school education. By contrast, neither of these relationships was statistically significant in explaining votes for or against NAFTA in 1993. Baldwin and Magee conclude that the deterioration in support for trade liberalization over those four years may be attributed to increased concern about the employment effects of expanded trade, especially with respect to low-skill workers.
But while globalphobes of the left and right have united to block fast track and trade liberalization, their unity does not extend beyond obstruction. When it comes to positive agendas, the left and right wings split along nationalist and internationalist lines. So-called economic nationalists like Buchanan want to stop the world and get off–isolate the U.S. market behind protectionist barriers and let everybody else fend for themselves. Indeed, their hostility to trade liberalization is as much political as economic; they see the free trade cause as a cover for undermining U.S. sovereignty and expanding world government. Conservative activist Phyllis Schlafly uses typical rhetoric when she refers to the World Trade Organization as "a sort of United Nations of trade." "It is dishonest to call something 'free trade,'" she writes, "when it is managed by a huge international bureaucracy."
Economic nationalists are content simply to trash the world trading system, but their colleagues on the left have bigger plans. Their goal is to remake that system in their own image, creating a larger structure of global governance that imposes social-democratic labor and environmental policies around the world. By ensnaring emerging-market countries in a web of Western-style regulation, and depriving them of their "unfair" advantages of low wages and environmental squalor, they would stop the imagined race to the bottom. Hence most Democrats' refusal to support fast track without a new "blue and green" paint job.
The internationalist strategy was on display in a recent speech by Rep. Richard Gephardt (D-Mo.) before the Council on Foreign Relations. Describing himself as a "progressive internationalist," he cloaked the case for global labor rules in the rhetoric of free trade: "Free trade also requires free markets. And a component of free markets is free labor markets. That's why I have fought so hard, and will never give up the fight, to have workers' rights be an integral component of our trade policy."
Ironically, globalphobia's internal divisions actually strengthen its ability to challenge trade liberalization as traditionally practiced. The economic nationalists can cite the left's labor and environmental agenda as proof that free trade is just a "new world order" plot; lefties can advance their project of fusing trade policy with global governance by casting it as the alternative to Buchananite "isolationism." Each side thus draws strength and legitimacy from the other.
In reality, of course, globalphobia rests on economic illiteracy. We don't agonize that the country is racing to the bottom because secretaries are replaced by voice mail, or because bank employees are replaced by ATMs. What, then, is so special about the fact that some labor-intensive manufacturing operations are being replaced by Third World factories? All of these trends are part of a larger process: the process of raising productivity, of squeezing more value from less effort. Far from provoking a race to the bottom, this process is the essential precondition for continued increases in our overall standard of living.
Openness to foreign competition pushes productivity, and living standards, upward in two different ways. First, it gives us the opportunity to buy products from abroad that are better or cheaper than those we can make for ourselves. The result is that we are richer as a society. And over time, the work force and resources that would have been devoted to making what we now import can be shifted to sectors in which we are more productive. Second, by causing domestic firms to compete harder and raise their productivity, the spur of foreign competition makes us richer even when we don't end up buying imports. To take an obvious example, Americans drive much better cars today than they did a couple of decades ago, and not just because many drive imports. American cars have improved dramatically as U.S. automakers responded to the challenge of foreign competition.
The proposition that open markets make a country richer is one of the most thoroughly examined and repeatedly vindicated in all of economics. Adam Smith got to the heart of the matter over two centuries ago when he observed that "the division of labor is limited by the extent of the market." By expanding the scope for voluntary exchange beyond national boundaries, international trade fosters a broader division of labor and the resulting gains from increased specialization. The "global hiring hall," as Buchanan puts it, gives us the chance to profit from the ingenuity, creativity, and old-fashioned hard work of billions of participants in a global division of labor.
Clearly, though, many Americans just don't get it. And one reason why they don't get it is that free traders rarely talk about such things. To a striking extent, the case they make for free trade bears no relation to the one made by Adam Smith and his successors in the economics profession.
Instead, advocates of trade liberalization hammer away repeatedly at two main themes: increasing exports and showing international leadership. For example, the Web site for America Leads on Trade, a coalition of pro-fast track businesses, defended fast track and trade expansion by arguing that it is critical "for maintaining U.S. leadership in the global economy," because such agreements "tear down barriers to U.S. trade and investment. These agreements will boost the U.S. economy and create high wage jobs by expanding export opportunities for our companies and workers."
According to the Web site, "Fast track will allow the United States to keep its competitive edge against foreign competitors. If the United States does not have fast track, we risk being left behind."
In such arguments, and in speeches, press releases, and studies from business groups, politicians, and think tanks, from Republicans as well as Democrats, the supporters of trade liberalization make their case for the benefits of free trade abroad. Reducing trade barriers in other countries will increase American exports, and export-related jobs, and show U.S. international leadership. As to the benefits of free trade here, and as to reducing trade barriers in this country, free traders of all stripes remain conspicuously silent.
Why? Why let the globalphobes carry the day by default? Basically, free traders have been fighting the last war. Their political strategy dates back to a time when grassroots interest in trade issues was virtually nil, and the economic stakes of trade policy were of concern only to organized business interests. Furthermore, the current strategy is a product of the Cold War, when all international economic issues were viewed through the prism of superpower rivalries.
Under those conditions, free traders hatched their strategy of diversion and appeasement. By pursuing trade liberalization exclusively through international negotiations, they diverted attention away from the U.S. market and onto the benefits of opening markets abroad, and neutralized protectionist lobbying pressure from domestic import-competing industries. In this way they recruited exporting industries to lobby vigorously for trade expansion. At the same time, they diverted attention away from economics and onto diplomacy by stressing the geopolitical gains from trade agreements–notably, cementing the Western alliance and keeping Third World countries from defecting to the Soviet camp.
And when diversion didn't work, free traders settled for appeasement, making measured concessions to protectionist demands in order to forestall a larger backlash. The major concession was a system of "trade remedy" laws–including the anti-dumping law, the countervailing duty law, and the Section 201 "escape clause"–which allow special duties to be imposed on imports when legally established criteria are met. These laws were supposed to act as a safety valve that keeps protectionist pressures from building to dangerous levels.
However unattractive to the free trade purist, the diversion-and-appeasement strategy achieved results for many years. Tariff levels plunged during the postwar era as the United States, long a protectionist nation, maintained a commitment–honored at times in the breach–to a policy of gradual liberalization here and abroad.
But times have changed. The Cold War is over, and free trade's foreign policy trump card is gone. In fact, free trade's association with international negotiations and institutions is now costing it supporters among increasingly nationalist (or anti-internationalist) conservatives. Meanwhile, trade policy is no longer an insider's game, as the issues surrounding globalization have become high-profile, hot-button concerns. Globalphobia as an energized, grassroots phenomenon really took off during the rancorous NAFTA debate, and it has retained if not increased its potency.
As a result, the conventional diversion-and-appeasement strategy is no longer working. When large numbers of ordinary Americans are worried that their economic future is being threatened by imports from cheap-labor countries, happy talk about additional exports for Fortune 500 companies isn't an appropriate response. The failure to address genuine, if misplaced, public concerns allows those concerns to fester and spread. Worse, the diversionary focus on exports conveys the impression that free traders value corporate fat cats over regular folks, magnifying globalphobia's populist appeal.
Among supporters of trade liberalization habituated to old tricks, the reaction to diversion's failure is to redouble appeasement. And so today many stalwarts of the pro-trade camp–from such establishment bastions as the Brookings Institution, the Council on Foreign Relations, and the Institute for International Economics–are urging some kind of compromise on labor and environmental issues in order to woo moderate lefties back into the fold. Such a move is repugnant to believers in free markets: Creating new international regulatory bureaucracies isn't what we signed up for. At some point the process of negotiating trade agreements becomes sufficiently adulterated that it's just not worth doing.
In any event, the case for further appeasement is suspect on purely political grounds. While it may be possible to fudge differences sufficiently to get fast track passed, when it comes to actual trade agreements, it's hard to see how any compromise will work. Here at home, if any conservative support is to be maintained, the concessions on international standards will have to be very narrow and modest. Likewise, at the international level, developing countries will refuse to sign any agreements that condition their continued access to rich-country markets on so-called "upward harmonization" of labor and environmental policies.
So no matter how much the compromisers on the pro-trade side may want to deal, they will have very little to offer. And it's highly unlikely that the labor unions, environmental groups, and Naderites will buy what they're selling. The left is too smart to be placated by blue and green window dressing–especially after the experience of the NAFTA labor and environmental side agreements, which most activists regard as worthless. They will demand tough standards and real enforcement, which they're not going to get. As a result, appeasement won't appease.
What, then, will the compromisers accomplish? They will concede that globalphobic fears of a race to the bottom are justified, and that trade policy ought to do something about it. But then the response that they offer will be manifestly insufficient. And so the leftist contention that free traders care more about multinational corporations than about workers and Mother Nature will gain plausibility. Meanwhile, right-wing globalphobes will regard the mission creep of trade negotiations into labor and environmental policy as proof that free trade is a smokescreen for world government. In short, the odds are that appeasement will end up backfiring: Rather than finessing the opponents of open markets, it will only strengthen and embolden them.
Well, what's the alternative? How do free traders get out of their current jam? It's simple, really: Attack the problem at its root. The free trade cause has fallen on hard times because of growing public fears about the United States' place in the world economy. Rather than ignoring those fears, or giving in to them, free traders should make the case that the fears are groundless. Free traders need to take the misconceptions of globalphobia head-on, seize the intellectual initiative, and champion open markets forthrightly and unapologetically.
To begin with, free traders should commit themselves to a major effort of educating the public. They need to demonstrate the benefits of imports as well as exports, of foreign investment here and of U.S. investment abroad. In particular, they need to portray trade as part of the larger process of ongoing technological and organizational innovation that lies at the heart of wealth creation and rising living standards. In that regard, they need to dispel the notion that job losses due to trade are somehow more onerous than those that attend any other technological or organizational breakthroughs.
Abandoning the old strategy of diversion and appeasement, though, entails more than a shift in rhetoric. It requires programmatic change as well. First and foremost, free traders should identify a handful of the most egregious U.S. trade barriers and launch a campaign for eliminating them unilaterally–that is, regardless of whether other countries make similar reforms. There are plenty of targets to choose from: the anti-competitive anti-dumping law, which punishes perfectly normal business practices in the name of "fair trade"; high tariffs and quotas on textiles and clothing; import restrictions linked to price support programs for farm products; the Jones Act ban on foreign shipping between U.S. ports; similar restrictions on the so-called "cabotage rights" that would allow foreign air carriers to fly domestic routes; limits on foreign investment in broadcasting and air transport; and the list goes on.
Skeptics will respond that unilateral liberalization is a sure political loser. If Americans are scared of opening our markets when the deal is sweetened by market opening abroad, why on earth would anyone expect them to take the medicine straight?
But the purpose of campaigning for unilateral free trade isn't to win legislative victories–at least not in the short term. The point is to change the terms of the debate. On that score, the benefits of a unilateral approach are immediate. First of all, taking this tack forces free traders to go on the intellectual offensive. It's impossible to push for the unilateral elimination of trade barriers without making a frontal assault on the misconceptions of globalphobia. Free traders would have to explain why imports make us richer, not poorer; why trade deficits are meaningless; why the elimination of particular jobs is consistent with, and indeed necessary for, long-term economic health. Americans would finally begin to hear the other side of the story.
Furthermore, the unilateral approach frames issues in terms that give free traders the natural advantage. Rather than simply defending free trade, they would attack its alternative: protectionism in actual practice. Admittedly, the case for free trade is to some degree hypothetical and counterintuitive. On the other hand, the case against protectionism is much clearer: It raises prices, restricts choices, and benefits a favored few at the expense of everyone else. Protectionism is unfair, plain and simple. An attack on U.S. trade barriers would allow free traders to put their opponents on the defensive for a change. The beneficiaries of protection would be forced to explain why they deserve their special privileges, and why the welfare of other American businesses and their workers, not to mention consumers, should be sacrificed on their account.
Attacking particular U.S. trade barriers would allow free traders to reclaim their lost populist roots. In the old days, the trade debate typically pitted Democrats and the common man for free trade against Republicans and big business for protection. Free traders used explicitly populist rhetoric, condemning tariff walls as bastions of corruption and privilege. Today, free trade is all too often depicted as elitist–pumping up the profits of big multinationals at the expense of jobs for working men and women. Unilateralism would help to counteract that stereotype by focusing on those aspects of the free trade cause with the greatest populist appeal: cutting taxes, lowering prices, and eliminating corporate welfare.
Finally, a campaign for unilateral reform would liberate the free trade cause from the tangle of diversionary squabbles in which it is currently ensnared. Concerned about fast track's antidemocratic circumvention of normal congressional procedures? Worried about ceding sovereignty to faceless international bureaucrats? Offended by obnoxious practices (continuing trade barriers, subsidies, human rights abuses, drug trafficking, etc.) in the countries with which we strike trade deals? All of these issues become moot when the only question on the table is whether or not Congress, as a matter of purely domestic economic policy, ought to junk particular bad laws.
While they pursue unilateral reforms, free traders shouldn't give up on trade negotiations. International agreements can facilitate the liberalization process by enlisting export interests to support free trade at home; also, such agreements provide a useful institutional constraint against protectionist backsliding. But a new U.S. negotiating posture is needed, one that replaces demands for reciprocity with a commitment to free trade principles.
In traditional trade negotiations, countries offer to reduce import barriers in exchange for other countries' offers of equivalent reductions. In other words, freer markets at home are treated as the price we pay for freer markets abroad. Indeed, in the parlance of GATT negotiations, a commitment to reduce tariffs is known officially as a "concession." Thus, the rhetoric of trade talks is premised on the protectionist notion that imports are harmful and trade barriers are prized strategic assets.
This is not a mere quibble: Protectionist assumptions and attitudes color every aspect of how trade agreements are currently negotiated and evaluated. Trade negotiators, in the process of championing freer trade, demand "reciprocity" from our trade partners. They insist that a "bad deal" (i.e., one in which we liberalize more than other countries do) is worse than no deal at all. They oppose domestic reforms outside the context of negotiations on the ground that our own bad policies are "bargaining chips" that should be retained for their exchange value. More ominously, they refer to liberalization without reciprocity as "unilateral disarmament." And when an agreement has been reached, free traders focus on the benefits to exporters, not importers. They tout the benefits of reducing foreign trade barriers, but say little or nothing about the benefits of reducing our own.
These features of conventional trade negotiations are no accident: They are part and parcel of the old diversion-and-appeasement strategy. But if free traders are to break from that dead-end approach, if they are to mount a head-on challenge to globalphobia, they will have to reinvent trade negotiations so that they don't perpetuate protectionist fallacies.
In areas where the United States still retains protectionist policies, it could identify other countries with similar barriers–but which are committed to reform–and negotiate simultaneous liberalization in a kind of "coordinated unilateralism." Unlike in reciprocity-based negotiations, the goal wouldn't be to swap "concessions" or to "win" at the bargaining table by "getting" more than you "give." Rather, the express purpose of the negotiations would be for each country to gain by reforming its own policies, but to maximize that gain by linking reforms to liberalization abroad. Reforming one's own policies would be a central negotiating objective rather than the downside of the transaction, while coordination would strengthen the political case for free trade by adding the benefits of liberalization abroad to those of market opening at home. For coordinated unilateralism to work, though, all countries involved would have to be committed to real reform. Otherwise obstructionism and holding out by some parties would become excuses for other countries to cling to their misguided policies, and the whole enterprise would degenerate into reciprocity as usual.
The United States can continue to take a leading role in trade negotiations even when it has already eliminated its protectionist policies. In that regard, consider the 1997 World Trade Organization agreements on telecommunications and financial services. Both agreements represented important breakthroughs, and both were negotiated in the absence of U.S. fast track authority. How was this possible? Fast track was unnecessary because none of the U.S. commitments under either agreement required changes in legislation. The major U.S. "concession" was to agree to "lock in" current levels of openness. The United States would not commit to do so, however, until a critical mass of other countries agreed to exceed a minimum threshold of liberalization.
These agreements show how unilateral reform and trade negotiations can complement and reinforce each other. Even without trade barriers, the United States can still exert powerful leverage at the bargaining table–not only through offers to lock in existing liberalization, but also because U.S. participation lends legitimacy to any international agreement and increases other countries' confidence in each other's commitments. Using that leverage, the United States could define negotiating objectives–for example, rules on the treatment of foreign investment, market access for the cross-border provision of services, and so on–and offer to elevate its own unilaterally adopted free trade policies into binding international commitments. That offer would be contingent, though, on pledges by other countries of credible and significant liberalization.
The viability of such an approach is not hypothetical: It worked in the telecom and financial services talks. True, in those negotiations the United States couched its position in terms of demands for reciprocity. But it could easily drop such rhetoric and adopt instead the following line: We pursue free trade policies at the national level because we believe it is in the U.S. interest to do so, but we will not commit ourselves internationally to any agreement unless it reflects a sufficiently serious commitment to free trade principles.
It is thus possible for the United States to remain engaged in the process of negotiated liberalization without fostering misconceptions that undermine free trade in the long term. The United States can still wield significant bargaining power–most important, by refusing to participate in watered-down agreements–without clinging to wrongheaded policies simply because other countries have not yet gotten rid of theirs. In short, the United States can enjoy the best of both the unilateral and the multilateral worlds. Unilateral liberalization, far from undermining trade negotiations, can put them on a much sounder footing.
Many supporters of free trade will be reluctant to abandon a tried and true strategy, but a sober assessment demonstrates that the old diversion-and-appeasement approach has outlived its usefulness. It is contributing to popular anxieties about globalization. It is bending trade negotiations away from true liberalization and toward international bureaucratization. In short, it is creating more problems than it solves.
There is a better way. Free traders have it in their power to promote their cause, here and abroad, with much greater effectiveness than at present. They can seize the intellectual initiative. They can frame issues in ways that give them the natural political advantage. They can set an example for the rest of the world to follow. And best of all, they can achieve these things by standing up for what they know to be true.
Contributing Editor Brink Lindsey (blindsey@cato.org) is director of the Cato Institute's Center for Trade Policy Studies. This article is adapted in part from a Cato Trade Policy Analysis titled "A New Track for U.S. Trade Policy."
The ongoing globalization of economic life leaves many Americans nervous and suspicious. According to a Business Week poll taken last fall, 56 percent of Americans believe that expanded trade will destroy more jobs than it creates, and 40 percent think that more trade means lower wages, compared to only 17 percent who believe the opposite.
Pat Buchanan has played to this anxiety in two presidential campaigns and is now preparing to do so a third time. To that end he has written The Great Betrayal, a root-and-branch rejection of free trade in favor of a "new nationalism." Consider this book a preview of his 2000 campaign strategy.
Buchanan advances two distinct and contradictory lines of argument in The Great Betrayal. On the one hand, he defends protectionism as sound economic policy. According to Buchanan, tariff barriers promoted American prosperity throughout much of our history, while their progressive elimination in recent decades has begotten industrial decline and falling living standards. At the same time, he lambastes free trade "ideology" for exalting economic efficiency over concern with flesh-and-blood people–specifically, the people whose lives have been disrupted by foreign competition. Buchanan calls upon "conservatives of the heart" to embrace protectionism on the ground that there's more to life than economics.
So which is it: Is free trade inefficient or too efficient? Neither position is tenable, and by flipping back and forth between the two, Buchanan manages to get the worst of both worlds.
First, let's look at Buchanan's claim that import barriers promote economic vitality. His chief evidence is historical: Throughout much of its existence, the United States maintained high and consciously protectionist tariffs while experiencing rapid growth and development. True enough, but what of it? In all countries at all times, governments have hampered markets with ill-advised restrictions on freedom, and yet the creative power of markets has persevered to deliver the goods. That's no proof that the restrictions helped. The fact that I can carry a bag of cement up a hill doesn't mean the cement is making me go faster.
And Buchanan neglects to mention that despite the existence of protectionist tariffs, Americans continued to enjoy other kinds of unregulated international trade. Although foreign goods were often kept out, foreign money wasn't: British investment in particular played a major role in bankrolling American canals and railroads during the 19th century. And while we blocked the products of foreign labor, we didn't block the foreign labor itself: Mass immigration supplied the manpower–and much of the brain power–for the new mass production economy that propelled the country to affluence. These points hardly square with Buchanan's thesis that America's rise was based on independence from foreign influence and protecting "good jobs at good wages" from foreign competition.
In any event, it's no good to argue that the historical coexistence of protection and prosperity demonstrates a causal connection between the two. This is the post hoc, ergo propter hoc fallacy: The rooster crowed, the sun rose, so therefore the rooster caused the sun to rise. To get anywhere, you have to have some analytically convincing explanation for why the one leads to the other.
Unsurprisingly, Buchanan fails to come up with any such thing. Space constraints prevent me from cataloging and responding to all of the butcheries of economic reasoning contained in The Great Betrayal. Let me focus instead on the nub of the matter. "Here is another fallacy of free-trade theory: what's best for its consumer is best for a country," Buchanan declares. He continues: "Putting consumption first goes against the grain of common sense, as well as inherited wisdom. Before consumption comes production. Before production, investment. Before investment, savings. And before savings, income–the reward for work. Before a family consumes bread, a farmer must plow the ground, sow the seed, till the field, wait and watch….As Aesop's fable of the ant and the grasshopper teaches: he who puts consumption first has put his foot on the road to ruin."
Buchanan here trips on the root misconception of protectionist thinking: that production, not consumption, is the end of economic activity. The notion sounds superficially plausible, which is why two centuries of railing against it by economists have failed to put it to rest. Of course production is what it's all about; how silly of those ivory-tower economists to say otherwise.
But the economists aren't denying the centrality of production; they are defining what production is. Specifically, production is economically meaningful only if it is of value to someone–that is, only if there's a consumer out there who wants to buy it. You can show all kinds of determination and grit while digging holes and filling them back in, but that's not production; it's a waste of time.
Thus, the bedrock principle that consumption is the end of economic activity is not a call to hedonistic self-indulgence, as Buchanan charges. On the contrary, putting the customer first is a fierce discipline that the market imposes on producers. Work as hard as you want, but unless you're creating more value than you're expending, you're wasting resources and will eventually go out of business. It is this relentless discipline that drives producers to create more and more value for less and less effort–in other words, to make us richer.
The primary benefit of free trade is that it further tightens the screws of market discipline by expanding the realm of competition. Industries that face import pressure must become more productive or give way; industries that can take on the world's best are able to export and expand. International commerce thus shifts a country's resources away from less productive industries and toward more productive ones.
Protectionists like Buchanan get all of this backwards. They believe that wealth consists of particular domestic industries with high-paying jobs; they want to defend those industries and jobs from foreign competition. But high-paying jobs don't just fall from the sky; they emerge from the process of market discipline that encourages ever-increasing productivity. By shielding producers from market discipline, protectionists interfere with and undermine the wealth-creating process that ultimately produces high-paying jobs.
In the end, the only economically literate case for protectionism comes down to the claim that, under certain circumstances, government decisions about how resources should be allocated (i.e., which industries should be protected) will produce better outcomes than the market process. It's a theoretical possibility, of course, but so is hitting it big at the roulette table by playing your lucky number. Over the long term, neither is likely to be a winning proposition.
Turning from the past to the present, Buchanan contends that the U.S. "free trade era"–which he dates from the conclusion of the Kennedy Round of GATT talks in 1967–has been an economic catastrophe. Here he trots out familiar statistics about our ballooning trade deficits and eroding manufacturing base–all of which figured prominently in the "declinist" literature of the '80s and early '90s. Times have changed, though, and now this Cassandra routine comes across as stale. Put aside for a moment all the hype about our own economic performance, and just take a look around: With Europe stuck in the mud and Asia falling off a cliff, exactly whom are we in danger of falling behind?
Perhaps sensing that it's no longer the New Hampshire winter of '92, Buchanan does not stick too firmly to the old declinist rant. Or rather, his focus is elsewhere. The emphasis in The Great Betrayal is not on America's standing relative to the world but on some Americans' standing relative to others:
"We are now the `two nations' predicted by the Kerner Commission thirty years ago. Only the dividing line is no longer just race; it is class.
"On one side is the new class, Third Wave America–the bankers, lawyers, diplomats, investors, lobbyists, academics, journalists, executives, professionals, high-tech entrepreneurs–prospering beyond their dreams. Buoyant and optimistic, these Americans are full of anticipation about their prospects in the Global Economy….
"On the other side of the national divide is Second Wave America, the forgotten Americans left behind. White-collar and blue-collar, they work for someone else, many with hands, tools, and machines in factories soon to be hoisted onto the chopping block of some corporate downsizer in some distant city or foreign country."
Here at last, Buchanan strikes a nerve–even if he uses a meat axe to get to it. Free trade, while broadly beneficial, does have its human cost. Some Americans have lost their jobs and seen their prospects diminished because of foreign competition.
But there is nothing distinctive about international trade in this regard. What about mom-and-pop stores displaced by Wal-Mart? What about Eastern and Braniff and Pan Am (the first one, anyway)? What about IBM's eclipse? What about the market share Big Steel lost to Nucor and other minimills? What about merger waves and the resulting "downsizings"? What about new management gurus and their jihad against middle managers? What foreigners are to blame for any of this?
The fact is that we are living in a time of sweeping and convulsive economic change. This change is creating vast new wealth and breathtaking new opportunities; at the same time, it is claiming some victims and fraying more than a few nerves. Foreign trade is a part, but only a part, of that overall picture.
Accordingly, if you wish to side with "Second Wave America" against "Third Wave America," you can't stop at trade policy. A true "protectionist," one who would defend the economic status quo against all comers, must declare himself an enemy of change itself, and the open and dynamic market system that endlessly foments it.
Buchanan flirts with this kind of full-fledged reactionary stance: "What is wrong with the Global Economy is what is wrong with our politics; it is rooted in the myth of Economic Man. It elevates economics above all else. But man does not live by bread alone.
"To worship the market is a form of idolatry no less than worshipping the state. The market should be made to work for man, not the other way around."
But Buchanan can follow this line only so far, since ultimately it conflicts with his economic nationalism. After all, why does it matter that free trade is bad economics if economics itself is bad? If protectionism really would make our economy grow faster, that just means the pace of change would accelerate and the toll of disruption increase. Why would a true-blue reactionary support that?
Economics bashing also makes a hash of Buchanan's reading of history. What sense does it make to champion the Americans left behind by "Third Wave America," while at the same time totally ignoring all the Americans left behind by "Second Wave America"? Buchanan identifies the period between the Civil War and World War I as a golden age of economic nationalism, yet that was a time of economic dislocation similar to, if not more turbulent than, our own. Buchanan, however, does not castigate the Gilded Age for its market idolatry; he sides with the big-business elitists and their "cross of gold."
So Buchanan tries to have it both ways. His heart bleeds for the victims of change, but only when foreigners can be made to take the rap for it. Buchanan's book proclaims sympathy with two conflicting public sentiments–the desire for economic dynamism and the aversion to disruptive change–and reconciles the two by blaming their conflict on an unpopular scapegoat. Politically, it's an elegant straddle. Intellectually and morally, it's shameful demagoguery.
Contributing Editor Brink Lindsey (blindsey@cato.org) is director of the Center for Trade Policy Studies at the Cato Institute.
If you're depressed about the state of politics these days, read The Commanding Heights. Forget about the Republicans' fecklessness for a while. Take a break from Bill Clinton's Rasputin act. Step back and look at the big picture, a picture that spans the whole planet and comes into focus over decades. Look at the big picture and see that our side–the side of human freedom–is winning.
In The Commanding Heights, Daniel Yergin, author of The Prize: The Epic Quest for Oil, Money & Power, and Joseph Stanislaw, Yergin's colleague at Cambridge Energy Research Associates, chronicle the global rise and fall of government control over the economy. Their narrative covers the past half-century, beginning with the Labour Party's big victory in Britain at the close of World War II and concluding with the still-unfolding Asian currency crisis. The theme is a simple one: Through bitter and repeated experience, faith in "government knowledge" and anxiety about "market failure" have given way to trust in "market knowledge" and wariness of "government failure." As a result, market forces are retaking the "commanding heights" of the economy (Lenin's phrase) that governments had stormed and captured.
The Commanding Heights is a solid book, though not a great one. The prose is clear and taut, but lacking in style or wit. With subject matter of such epic sweep, it's a shame the storytelling is so flat. And though the authors pay attention to thinkers as well as doers, their presentation and analysis of ideas is uniformly shallow. All told, The Commanding Heights reads like a gigantic newspaper article.
But it's a well-reported newspaper article, and it gets the story straight. The book rests not only on a fat bibliography but also on extensive interviews with dozens of the key players in the drama: Pedro Aspe, Gary Becker, Stephen Breyer, Domingo Cavallo, Milton Friedman, Alberto Fujimori, Yegor Gaidar, Mahathir Mohamad, Jeffrey Sachs, Margaret Thatcher, and Paul Volcker headline the gaudy list. Yergin and Stanislaw synthesize their research into a well-paced story filled with interesting details and anecdotes, exposing socialism's broken promises and celebrating the market's creative power.
It's hard to believe how far we've come in 50 years. When Winston Churchill was unceremoniously dumped in favor of Clement Attlee in the summer of 1945, the Soviet Union served as the economic (if not political) model for Labour's plans to rebuild Britain. As historian E.H. Carr wrote around that time, "Certainly, if 'we are all planners now,' this is largely the result, conscious or unconscious, of the impact of Soviet practice and Soviet achievement." The free market was an anachronism; central planning was the wave of the future. For his part, Attlee referred to belief in the private profit system as "a pathetic faith resting on no foundation of experience."
Things were little different across the Channel. Charles de Gaulle declared in 1945 that "the state must hold the levers of command." Historian A.J.P. Taylor surveyed the intellectual scene of the time and concluded: "Nobody in Europe believes in the American way of life–that is, in private enterprise. Or rather those who believe in it are a defeated party and a party which seems to have no more future than the Jacobites in England after 1688."
Faith in planning seized the imagination of leaders around the world. In India, Jawaharlal Nehru proclaimed that "the Soviet Revolution had advanced human society by a great leap and had lit a bright flame which could not be smothered and that it laid the foundation for a new civilization toward which the world could advance." At least India's leaders were committed to democracy. Elsewhere in the developing world, China, Cuba, and others followed the Soviet model all the way to totalitarian communism.
The founders of the new mixed economies, like their communist cousins, believed that state-owned industries would lead the way toward modernization and growth. The chaos of the marketplace could not be relied upon to mobilize the necessary investments or to create enterprises of sufficient scale. Nationalized industries would be more efficient and more technologically advanced; their joint efforts could be coordinated by central planners to produce full employment and prosperity for all.
In much of the developing world, planning and state ownership were embellished with enforced economic isolation. Foreign investment by Western multinationals was shunned as exploitation or "neocolonialism." Even exports to the industrialized world were seen as a trap. According to the theory of dependencia, which held sway throughout Latin America, international trade locked developing countries into a subservient role as suppliers of cheap farm goods and raw materials. In this view, the benefits of trade flowed one way: from the commodity-producing "periphery" to the industrialized "center." True economic development would come to the Third World only through protectionist "import substitution" policies that encouraged industrialization at home.
Of course, the triumph of statism was not uniform. The United States avoided widespread nationalizations, preferring instead to address perceived market failures through antitrust law and economic regulation. Germany, under the influence of Ordoliberals like Wilhelm Röpke and Alexander Rüstow, rejected top-down planning in favor of corporatist consultation and a "social market economy." And Japan, followed by other countries in East Asia, embraced the world economy in pursuit of export-led growth.
Meanwhile, a tiny intellectual minority resisted the general socialist clamor and maintained allegiance to the free market. Yergin and Stanislaw tell the story, doubtless familiar to many REASON readers, of F.A. Hayek, the Mont Pelerin Society, and the rise of the Chicago school.
Hayek, a socialist in his youth, was converted to the free-market camp by Ludwig von Mises in the 1920s. In the '30s, he attained a kind of professionally suicidal prominence through his futile struggle against John Maynard Keynes's growing influence. Having lost that battle, he began to win the war–first, in 1944, with the popular The Road to Serfdom; and then, in 1960, with his masterwork, The Constitution of Liberty. As Yergin and Stanislaw write: "In the postwar years, Keynes's theories of government management of the economy appeared unassailable. But a half-century later, it is Keynes who has been toppled and Hayek, the fierce advocate of free markets, who is preeminent."
Hayek exerted his influence not only through his writings but also by reaching out to like-minded thinkers. In 1947 he helped organize a meeting of 36 free market intellectuals at a Swiss resort on Mont Pelerin. The gathering turned into an institution–the Mont Pelerin Society–whose biennial get-togethers helped to create an international network of classical liberal scholars.
One of the attendees at that first Mont Pelerin meeting was Milton Friedman of the University of Chicago, who was making his first trip to Europe. Friedman, of course, went on to become the most famous member of the so-called Chicago school of economics, which included George Stigler, Gary Becker, and a host of other free market luminaries (actually, Hayek spent a dozen years at Chicago, though interestingly not on the economics faculty). The intellectual ascendancy of the Chicago school may be measured in Nobel Prizes: Since 1974, eight Chicago professors and another 11 who had some association with the university have won the Nobel Prize in economics.
How did it happen that the ideas of a courageous few ultimately changed the world? Broadly speaking, the past half-century served as a vast and tragic social science experiment, in which the hypothesis of central planning's superiority to markets was tested and decisively repudiated. The fall of the Soviet Union on the one hand, and the rise of East Asia on the other, were the two most important data points that exploded the statist worldview. "Between the fall of the Berlin Wall and the collapse of the Soviet Union in 1991," a senior economic official in India admitted, "I felt as though I were awakening from a thirty-five-year dream. Everything I had believed about economic systems and had tried to implement was wrong."
Yergin and Stanislaw pile example upon example of socialism's shining ideals gone awry. My favorite is their account of the Hindustan Fertilizer Corporation in India: "In 1991, at the time of the economic crisis, its twelve hundred employees were clocking in every day, as they had since the plant had officially opened a dozen years earlier. The only problem was that the plant had yet to produce any fertilizer for sale. It had been built between 1971 and 1979, using considerable public funds, with machinery from Germany, Czechoslovakia, Poland, and a half-dozen other countries. The equipment had looked like a great bargain to the civil servants who made the basic decisions, because it could be financed with export credits. Alas, the machinery did not fit together and the plant could not operate. Everyone just pretended it was operating." After a while, even true believers lose their faith in the face of such evidence.
Of course, the specific events by which the ideas of a few dissident economists came to topple governments and change the course of history looked nothing like a neat and tidy scientific experiment. In the exceptional case, the path from ideas to practice was direct. Yergin and Stanislaw relate a mid-'70s visit by the new Conservative leader Margaret Thatcher to the party's research department. She got into an argument with a staffer who was preparing a paper advocating a middle way between left and right; reaching into her briefcase, she pulled out a copy of Hayek's The Constitution of Liberty and held it aloft. "This," she said, "is what we believe."
Much more often, though, freedom's resurgence received aid from unexpected quarters. Mikhail Gorbachev broke up the Soviet Empire while trying to revive it. Deng Xiaoping unleashed market forces that lifted 200 million people out of poverty in two decades, all in the name of the Communist Party. Carlos Salinas, a creature of Mexico's corrupt PRI who came to power in an apparently rigged election, swept away trade barriers and privatized industries. In Peru, Mario Vargas Llosa was a dedicated believer in economic freedom, but he lost the election; instead, Alberto Fujimori, an obscure agricultural engineer, won the presidency and led his country through dramatic market reforms. New Zealand's sweeping liberalization during the mid-'80s came under a Labour government. Airline deregulation in the United States was launched by Stephen Breyer, now a Supreme Court justice and then a staffer for Sen. Ted Kennedy. And so on.
Indeed, Yergin and Stanislaw's account highlights the crucial importance of sheer accident in the historical process. A flap over a reference to U.S. food aid as "chicken feed" led Gen. Lucius Clay to fire the German director of economic administration in the American and British occupied zones; Clay replaced him with Ludwig Erhard, whose snap elimination of price controls launched the German Wirtschaftswunder. The election of a Polish pope–the first non-Italian in centuries–proved vital in nurturing the Solidarity movement in Poland and sustaining it after the 1981 crackdown. The Falklands War, a third-rate conflict in military terms, catalyzed reform in both the combatant countries; victory gave Thatcher the popular support she needed to launch a full-scale privatization drive, while defeat spelled the end of the generals' rule in Argentina and the eventual embrace of market reforms–by, of all people, a Peronist president, Carlos Menem.
So when you put down The Commanding Heights and descend again into the drift and mediocrity of today's headlines, don't despair. Keep your focus on the larger view that Yergin and Stanislaw have sketched. Remember that over the past half-century, the ideas of liberty have survived near-extinction to gain worldwide acceptance. The struggle between state and market for the commanding heights continues, and doubtless the cause of freedom will suffer setbacks and reverses. Nevertheless, there is firm ground for optimism in the realization that, whether by exceptional leadership, or by assistance from unlikely champions, or simply by accident, good things happen to good ideas.
Contributing Editor Brink Lindsey (blindsey@cato.org) is director of the Center for Trade Policy Studies at the Cato Institute.
It's fashionable these days to dismiss the industrial era as a kind of Dark Ages from which, thanks to the integrated circuit, we have only just emerged. In this caricature of history, Frederick Winslow Taylor, father of "scientific management," figures as one of the chief villains. His hierarchical control systems and treatment of workers as brainless interchangeable parts stand in diametric opposition to the flattened organizations and "knowledge workers" that are touted by today's management gurus.
Of course, caricatures are based on actual (and usually unattractive) features. And Taylor makes an inviting target: Much of his influence has indeed been godawful. But the full story is much more complicated, and much more interesting. A balanced look at his life and times reveals not a villain but a tragic hero. His innovations ushered in enormous productivity gains, which brought unprecedented affluence to the United States and the nations that followed its lead; at the same time, though, Taylor's system employed methods that misunderstood, and thereby grievously undermined, the full promise of the new mass production economy. It is fair to say that Frederick Taylor's career exemplified the Industrial Revolution he helped to lead: a mixture of beneficent achievements and malign shortcomings.
Robert Kanigel's The One Best Way: Frederick Winslow Taylor and the Enigma of Efficiency tells Taylor's story comprehensively and fairly. The length of the book is somewhat forbidding, and in some sections excessive. Kanigel clearly immersed himself in his subject, and at times he is too eager to make sure we know it. Despite the page count, the book is highly readable and on the whole richly rewards the reader's investment of time. If you want to understand the history of American political economy during the 20th century, you really need to know about Frederick Taylor; he is, for good and ill, one of our founding fathers.
Frederick Winslow Taylor was born in 1856 to a wealthy Philadelphia Quaker family. Until the age of 18, he appeared destined to follow in his father's footsteps as a gentleman of leisure, dividing time between philanthropic projects and managing inherited wealth. Toward this end Taylor had prepped at the elite Phillips Exeter Academy, receiving a traditional classical education, and was poised to enter Harvard. At this point, though, his life took a sharp and unexpected turn: For reasons that remain obscure, he decided to forsake Harvard for a career in industry. The training he needed was not generally taught in universities in those days, so in 1874 Taylor signed on as an apprentice at a small Philadelphia pump works.
Four years later, his apprenticeship complete, he got a job as a laborer in the machine shop of Midvale Steel. He spent 12 years there, rising quickly through the ranks, first to foreman and ultimately to chief engineer. (In his spare time, it should be mentioned, he and his partner won a doubles title at the U.S. national tennis championships, the forerunner of today's U.S. Open.) At Midvale, he developed and put into place the basic elements of what later came to be known as "scientific management": the breakdown of work tasks into constituent elements; the timing of each element based on repeated stopwatch studies; the fixing of piece rate compensation based on those studies; standardization of work tasks on detailed instruction cards; and generally, the systematic consolidation of the shop floor's brain work in a "planning department."
From Midvale, Taylor went on to become one of the world's first management consultants, his business card proclaiming, "Systematizing Shop Management and Manufacturing Costs a Specialty." Around the turn of the century, Taylor did his last prolonged stint as a corporate employee, spending three years with Bethlehem Iron (later Steel). At Bethlehem, Taylor recorded two great achievements: first, the development with a colleague of a new "high speed" tool steel, a material that allowed machine tools to cut metal at three to four times the previous speeds; and second, the systematization of years of metal-cutting experiments into a special slide rule for calculating machine speed and feed. Both were landmark engineering breakthroughs; putting aside Taylor's management theories, they would have sufficed to make him an important figure in the history of American industrialization.
But Taylor was to become known as much more than an engineer. After leaving Bethlehem–being forced out, more accurately–he more or less retired from day-to-day work. He became instead an evangelist for his management ideas, offering private seminars to corporate leaders from his Philadelphia home. Worldwide fame came in 1910 as a result of a railroad rate increase dispute before the Interstate Commerce Commission. Crusading lawyer (and later Supreme Court Justice) Louis Brandeis, who represented interests opposed to the rate increase, based his argument on Taylor's management system, which he dubbed "scientific management." Brandeis claimed that if the railroads adopted Taylor's methods, they could save a million dollars a day: They didn't need a rate increase; they needed greater efficiency.
Taylor became an instant celebrity. Controversy, though, wasn't far behind. Labor leaders and others denounced "Taylorism" as oppressive and antidemocratic, and a strike at the Watertown Arsenal over the adoption of scientific management led to heated congressional hearings. Bruised by the conflict, Taylor soon withdrew from public life, letting his growing number of disciples carry the battle for "the one best way," a catch-phrase of the movement he launched. He died in 1915.
Taylor's shop management system was adopted in something close to full form in only a few companies; the general thrust of his views, though, permeated American and world industrial society. His writings were translated into dozens of foreign languages. Taylor Societies sprang up everywhere. Along with Henry Ford, he became a personification of American efficiency and industrial might. Even the communists became Taylorites: In 1918 Lenin gave a speech in which he declared, "We must introduce in Russia the study and teaching of the Taylor system and its systematic trial and adaptation."
Before discussing the pros and cons of Taylor's system, it is necessary to grasp the problems it was addressing. The factory floor that Taylor came to at Midvale was typical of its day but a completely alien place from our contemporary perspective. Factory work was done according to the craft system; jobs were "trades," and their secrets and rules of thumb were passed down, slowly and grudgingly, from master to apprentice. The owners and operators of the business really had no idea how their work should be done. They didn't know how tasks were best arranged, they didn't know how to optimize the output of the machines, and they didn't know what pace of output was sustainable. They supplied the workplace and the tools, and through the foremen and shop bosses they prodded their workers, often brutally, to do more faster. Ultimately, though, when the workers told them there wasn't a better or faster way, they lacked the knowledge to prove otherwise.
Workers jealously guarded their shop floor secrets from management because of a fundamental conflict of interest. Compensation generally took the form of piece rates rather than daily or hourly wages, and piece rates were set based on expected output. If managers discovered that work could be done faster or machines could be operated more efficiently, piece rates tended to be reduced (at least in the short run). So a worker's attempt to earn more money by increasing his own output was generally self-defeating: The piece rate would be reduced, and then he and everybody else would have to run harder just to stay in place. This state of affairs encouraged systematic "soldiering"–the deliberate slowdown of work output. (The expression originated as a nautical term, having to do with the laziness of ground troops when they were transported by ship.) Workers who didn't toe the line could expect ostracism if not physical abuse.
Thus, the early industrial factory mixed the dynamism of amazing new technologies with the backwardness of the medieval guild. Capital and labor were separated by a fault line of unresolvable, zero-sum conflict: Management knew it was being cheated but couldn't prove it, while workers knew that management was trying to cheat them. Frederick Taylor lived this conflict as an apprentice, banged his head against it as a foreman, and then resolved to do something about it.
Taylor recognized that knowledge is power. Management had to understand what was happening on the factory floor. Thus, the starting point of scientific management, according to Taylor, was "the deliberate gathering in on the part of those on the management's side of all of the great mass of traditional knowledge, which in the past has been in the heads of the workmen, and in the physical skill and knack of the workmen, which he has acquired through years of experience." Through his notorious time studies and his less well-known metal-cutting experiments, Taylor allowed those who ran the business to pierce the veil of shop practice secrecy.
Taylor wanted more than raw data, though. He wanted to systematize the knowledge that was gained, to replace habit and rules of thumb with precise and usually quantitative analysis. He was convinced that scientific study would reveal a better way–the one best way–of doing things. No task was too mundane for scrutiny. In one celebrated example, Taylor conducted extensive experiments to determine the optimal size of a shovelful of dirt to maximize the total amount shoveled in a day. "In the past the man was first," Taylor said in a famous line, "in the future the system must be first."
To implement the gathering and analysis of all this previously hidden knowledge, Taylor envisioned new kinds of managers who prepared the detailed instruction cards, planned the use of the machinery and set it up, and generally coordinated who did what, when, and in what order. Taylor bucked the traditional perception of white-collar workers as "nonproductive" by redefining their jobs. Under Taylor's system, shop bosses were no longer browbeaters; they were knowledge workers.
Once management gained control of the production process by consolidating and systematizing information about it, Taylor believed the stage was set for a solution to the "labor problem"–the zero-sum conflict between capital and labor over compensation. First of all, the question of how much work could be done would no longer be resolved by a test of wills; it would be resolved by "science," in the form of time studies. Next, differential piece rates would be established so that workers would receive a substantial pay increase if they met management's output targets; furthermore, there would be a strict ban on rolling back piece rates after they had been set "scientifically."
Through achieving substantial gains in productivity and sharing those gains with the workers, Taylor envisioned a way out of the traditional impasse: "The great revolution that takes place in the mental attitude of the two parties under scientific management is that both sides take their eyes off the division of the surplus as the all-important matter, and together turn their attention toward increasing the size of the surplus until this surplus becomes so large that it is unnecessary to quarrel over how it shall be divided." Under scientific management, the old zero-sum rancor would give way to positive-sum mutuality of interest.
So far, so good for Taylor's system. By breaking down the old craft system and subjecting shop practices to rigorous analytical scrutiny, scientific management allowed the incentive-creating signals of the marketplace to penetrate–for the first time–all the way to the factory floor. It is not saying too much to conclude that Taylor opened up a new world for the capitalist discovery process to explore.
And although Taylor's hope that labor-management conflict would disappear proved utopian, the truth is that he only exaggerated his case rather than misstating it. The productivity gains that scientific management and its offspring created did spur rising living standards, and this generalized affluence did in time, if not eliminate class conflict, at least quench its revolutionary combustibility. Once "working class" people owned their own homes with washer/dryers and color TVs and two cars in the driveway, the "labor problem" as a threat to social peace had indeed been solved. Taylor had the vision to prophesy that outcome, and the genius and tenacity to help it come true.
Notwithstanding its considerable virtues, Taylor's system was marred by terrible flaws. While grasping that systematic knowledge gathering was the key to industrial development, Taylor then insisted on imposing needless and perverse limits on that process. He tended to see knowledge work as a finite task, rather than a continuous and never-ending process; moreover, he treated it as the exclusive preserve of an elite few, rather than something that should be diffused as broadly as possible.
The whole notion of "the one best way" betrays a crabbed and static conception of management. In this view, optimal practices are deterministic and stable over time; once they are discovered, all that remains is routine implementation. Needless to say, this approach looks hopelessly retrograde from our Information Age perspective. For helping to build a corporate culture in which standard operating procedures were holy writ and the "not invented here" syndrome reigned supreme, Taylor deserves the opprobrium that is heaped upon him by today's "creative destruction" enthusiasts.
Even worse, Taylor's belief in a rigid separation between planning and doing was antithetical to the integration of the workplace and marketplace that scientific management sought to achieve. There is no mistaking Taylor's views on this subject; he had no use for workers from the neck up. "In our scheme, we do not ask for the initiative of our men," he said. "We do not want any initiative. All we want of them is to obey the orders we give them, do what we say, and do it quick."
This ruthless top-downism became the norm in American industry, and its effect has been nothing short of disastrous. Taylor's utopian solution became instead a Faustian bargain: We'll bribe you to check your brains at the factory gate. As a result, productivity has been hobbled by cutting off management from all of the potentially useful information that resides in the heads of workers; furthermore, generations of workers have been alienated by a system that treats them as inanimate objects. We are only now beginning to overcome the serious dysfunctions caused by this bargain. Much of the dislocation and pain caused by corporate restructurings over the past decade can be laid at the feet of Frederick Taylor.
And in the broader view, Taylor's misplaced confidence in know-it-all technocrats is of a piece with the great collectivist tragedies of this century. It's not surprising that Lenin was a Taylorite, or that Yevgeny Zamyatin's dystopian We identified Taylor as the chief prophet of its imagined future totalitarian state.
In both his virtues and his flaws, Frederick Taylor was a man of his era. The Industrial Revolution was a time of both prodigious creativity and profound misunderstanding. A new world based on competition and diffused brainpower was being constructed, but its leading builders saw instead a brave new world of all-controlling technocrats. For his part, Taylor did much to unleash creativity, but also to stifle and misdirect it. For better and for worse, in our liberating affluence and our strangling bureaucracy, we are inheritors of Taylor's hopeful, troubled legacy.
Contributing Editor Brink Lindsey (102134.2224@compuserve.com) practices trade law in Washington, D.C.
Unfortunately, William Greider's One World, Ready or Not is the best-written book on the global economy I have yet read. It is a panoramic work: Greider tells the stories of businessmen, financiers, government officials, workers, and activists from around the world, and then combines this flesh-and-blood detail with broad and sweeping historical generalizations. The prose is clear and sharp, and energized by Greider's passionate political sympathies.
Greider does a particularly good job of conveying the dizzying, wonderful strangeness of the phenomenon he analyzes. In one well-drawn anecdote, he describes a Motorola semiconductor plant in Malaysia. The workers—young Muslim women in ankle-length dresses, heads and shoulders swathed in scarves—file into the factory, walk down a hallway hung with Norman Rockwell prints and festooned with goofy slogans like "You'll Be Prepared for Anything with Enthusiasm," and then proceed into the changing room, from which they emerge in white jumpsuits and surgical masks, ready to operate some of the most complicated machinery human beings have ever devised. Here Greider summarizes the scene:
"The spectacle of cultural transformation was quite routine—three times a day, seven days a week—but it conveyed the high human drama of globalization: a fantastic leap across time and place, an exchange that was banal and revolutionary, vaguely imperial and exploitative, yet also profoundly liberating."
As someone who has witnessed his fair share of similar scenes, I can attest to how vividly jumbled the human condition is where the Third World meets the Third Wave. Leave it to a critic of the global economy to do a better job than its friends in bringing to life this sometimes breathtaking, sometimes comical complexity.
The virtues of this book are unfortunate, however, because its vices are so colossal. At virtually every turn, Greider's economic and political analysis could not be more drastically wrongheaded. He completely misunderstands how new wealth is generated and spread by market competition; as a result he sees ruin where in fact there is promise, and then proposes "solutions" that would be ruinous if adopted. The fact that the book is well-written will only ensure that it gains readership and influence it does not deserve.
Over the past 10 to 15 years, the web of marketplace relationships that connect people across political boundaries has expanded into vast new frontier areas. Most dramatically, the communist bloc collapsed; the Soviet empire dissolved and China, while remaining under Communist Party control, has tied its economy to capitalist countries. And throughout the developing world, fear of exploitation by rich nations has given way to a desire to sell to their markets and court their investment. The international capitalist economy has thus added billions of potential new producers and consumers.
International market relationships have grown not only in geographic scope but in complexity as well. Economic growth in developing countries has changed them from mere commodity suppliers into increasingly sophisticated manufacturing bases. This process has been accelerated by investment from multinational corporations: Improvements in transportation and communication have allowed these corporations to integrate their operations on a global basis, locating facilities wherever costs are lowest. Goods and capital are thus flowing across more borders, in greater volume, along increasingly complicated routes.
With all the hype about the global economy, it is important to maintain one's perspective. After all, billions of people still live in isolated agricultural villages, where economic activity remains decidedly local. And even in developed countries, most of us work in service industries that are not traded across borders.
Nevertheless, the changes that have occurred over the past couple of decades—and those that the next decades portend—are nothing short of revolutionary. The wealth-creating powers of free people have now been unleashed to an unprecedented extent. As a result, it is possible to picture seriously, for the first time in human history, a world in which affluence and opportunity replace poverty and ignorance as the normal lot of mankind. Putting it mildly, that's pretty great stuff.
Greider, though, doesn't see it that way. Rather, he believes the new global system is hurtling toward self-destruction. According to Greider, the global economy is beset by "inherent contradictions" that are "propelling the world toward some new version of breakdown, the prospect of an economic or political cataclysm of unknowable dimensions."
What is the source of instability in Greider's view? In a word, oversupply. First, the "global overabundance of cheaper labor" is dragging down wages and living standards in rich countries. Second, global overcapacity in major manufacturing industries—only to be exacerbated by continued rapid growth in developing countries—threatens "some source of decisive breakdown, a financial crisis or an implosion of global commerce" à la the Great Depression.
In other words, the world is suffering from an excess of wealth-creating power. The opening up of the old communist bloc and Third World has made too many productive workers available to commercial enterprise; the development of competitive industries in these countries has burdened the economy with too many productive assets. According to Greider, "Shipping high-wage jobs to low-wage economies has obvious, immediate economic benefits. But, roughly speaking, it also replaces high-wage consumers with low-wage ones. That exchange is debilitating for the entire system."
Greider thus sees the emerging global economy as a zero-sum game: "The history of industrial development has taught societies everywhere to think of the economic order as a ladder. The new dynamic of globalization plants a different metaphor in people's minds—a seesaw—in which some people must fall in order that others may rise."
Greider looks with particular fear at China, which if it continues to develop "could underbid almost everyone in the world on wages and prices." Although he does not welcome this outcome, Greider speculates that "the global system will be spared its nightmare" only if "some sort of disaster will befall the Chinese." For China, Greider states, "it is difficult to know not only what to expect, but also what to wish for."
Greider's analysis falls prey to the classic protectionist fallacy (it's no surprise that Clyde Prestowitz is warmly praised in the acknowledgments): the assumption that work is an end in itself. In this view, a nation is rich because it has profitable businesses that pay high wages. Anything that imperils those businesses, or those jobs, imperils the nation's standard of living.
In fact, however, the whole genius of capitalist wealth creation runs in the opposite direction. Adam Smith stated the principle in The Wealth of Nations: Consumption is the end of production, not vice versa. We don't drive cars to give people in Detroit something to do; people in Detroit build cars because they think we want to drive them. Effort for its own sake, or just to keep busy, is economically meaningless. A producer is a producer only if there are willing consumers; otherwise he is a hobbyist.
Capitalism—whether it operates on a national scale or internationally—creates abundance by encouraging people to maximize the value to other people of their efforts. We grow richer because we are constantly both adding new value and reducing effort. Wealth creation, then, is an ongoing process of doing more with less. A Chinese peasant works to exhaustion just to feed himself and his family; an American farmer feeds thousands. The productivity of the American farmer liberates those thousands from the necessity of growing their own food, and allows them to spend their time building computers, selling insurance, making movies, and so forth.
Accordingly, economic progress is made possible by eliminating work, thus freeing up resources to do other things. This can happen by building machines that save effort, or by trading with people who can make things more cheaply than we can ourselves.
In reality, then, Greider's "oversupply" is not a problem, but a magnificent windfall. The enormous pool of new labor now available for productive use, the new low-cost producers in developing countries, are the functional equivalent of some fantastic new piece of labor-saving machinery that will allow people in rich countries to spend their time on other things. Both rich and poor stand to get richer.
Of course, this process is painful for some. People who lose their jobs and savings when businesses fail or shrink suffer real hardship. Predictably, Greider focuses on the capacity and employment reductions experienced by U.S. smokestack industries, such as the integrated steel mills. But he ignores the rise of mini-mills like Nucor—not to mention the ferment of wealth creation in other industries, from microchips to software to discount retailing to entertainment. To look at creative destruction and see only destruction is to miss the main point of economic life.
Greider trots out the usual claims of declining wages to support his contention that wealth creation abroad is a threat to living standards at home. Among others, W. Michael Cox and Richard Alm have shown these claims to be bogus. (See "The Good Old Days Are Now," December 1995.) And recently, the Boskin Commission's findings that the inflation rate is being systematically exaggerated knocked the props out from under the Chicken Little crowd.
Greider argues that the proper response to the oversupply "problem" is to stimulate demand: "An aggressive effort aimed at rapidly bringing up the bottom of the global wage ladder would directly contribute to the greater purchasing power needed worldwide to consume the world's surpluses of goods and thus narrow the supply gap." In particular, he advocates unionizing the work force in the developing world, thus "freeing workers to demand a larger share of the returns from their burgeoning economies."
Here again, Greider betrays his complete lack of understanding of how capitalist wealth creation works. The fundamental reason for low wages in poor countries is not the absence of collective bargaining, but rather the low average productivity of labor in those countries. A quick look at gross domestic product per head shows that the developing world still generates relatively meager wealth. Only continued growth and investment can raise productivity, and consequently overall wages.
Meanwhile, the policies Greider advocates to encourage unionization abroad would throttle poorer nations' prospects for such continued growth. Specifically, Greider urges rich countries to impose a "social tariff" against countries that do not observe what he considers to be appropriate labor standards—this on top of an "emergency tariff" of 10 percent to 15 percent to reduce the U.S. trade deficit. It is hard to imagine a policy better designed to keep the developing world impoverished—not to mention start a trade war that could send the whole world into an economic meltdown.
According to Greider, the malignant effects of global capitalism are registered not only in declining living standards but also in a fraying of the social safety net. Once again he sees catastrophe where in fact there is cause for optimism.
Greider correctly observes that increased foreign competition, along with loosened restrictions on the mobility of capital, has put pressure on governments to reduce tax and spending burdens. From Greider's perspective, a real achievement is now under attack from short-sighted penny-pinchers. "Rich nations," he says, "are all confronted in different ways by the same assault, the same question: Must they now undo what the twentieth century created—the strong social presence of the state?"
In other words, Greider remains an unreconstructed welfare statist: "The welfare state was, in fact, an attempt to devise a fundamental compromise between society and free-market capitalism. The aid programs and labor laws were intended to compensate for the social consequences of unfettered enterprise—the poverty and unemployment and family dissolution—without destroying the energies of the capitalist process."
Greider is living in a time warp. There is absolutely no acknowledgment in his book that any government policies have failed, or that there are any better ways to pursue agreed-upon policy goals than through existing programs. In the United States, the war on poverty created or at least exacerbated horrible social dysfunctions, while Social Security is a demographic time bomb. In Europe, labor laws and unemployment/disability benefits conspire to produce double-digit unemployment year in and year out. Greider doesn't say a word about any of this: By his account, attempts to address these serious policy failures are just callous and mean-spirited Scroogism.
The inability of domestic firms to compete with less encumbered foreign rivals, and the decisions of domestic firms to pack up and move, are merely signals that present policies are in need of change. Greider blames the alarm bell for starting the fire and seems to think everything would be fine if it just stopped its infernal ringing: He proposes to lock capital in place with taxes and other controls.
Thus, he believes that a new burst of statism is needed to save global capitalism from itself: "The world's nations must eventually turn to political solutions of this nature: collective reform to ameliorate or slow down the destructive forces, to correct the economic imbalances of supply and demand, to reassert control on capital and restore the social understandings, to foster a more stable promise of prosperity." In addition to the trade and capital restrictions I've already mentioned, he advocates progressive taxation, a populist monetary policy, and subsidies and controls to promote "sustainable development."
Although Greider recognizes that his enthusiasm for interventionism is currently out of favor, he warns that the alternative is dire: "Respectable opinion is now enthralled by the secular faith that Austrian economist Karl Polanyi long ago described as 'the utopian endeavor to establish a system of self-regulating markets.' Today, there is the same widespread conviction that the marketplace can sort out large public problems for us far better than any mere mortals could. This faith has attained almost religious certitude, at least among some governing elites, but, as Polanyi explained, it is the ideology that led the early twentieth century into the massive suffering of global depression and the rise of violent fascism."
Well, well. Once again Greider has put his finger on the exact opposite of the truth. The totalitarian horrors of the 20th century may indeed be blamed on a secular faith—not laissez faire, but the belief that central planning and top-down control were the wave of the future. That same belief, stripped of the bloodthirstiness, helped cause the Great Depression and underlies the chronic ills of the welfare state. Now, at century's end, the world is finally beginning to unburden itself of this misconceived faith, making possible the emergence of the global economy and all its liberating potential.
If the beneficent process of globalization does suffer future reverses—and that is certainly a live possibility—they will be due to Greider's beloved "strong social presence of the state," not the lack thereof. In the developing world, warfare or theocratic regimes could isolate markets from the global system. In the advanced economies, a protectionist reaction or continued fiscal profligacy could trigger a major worldwide economic shock. Less drastically, misguided policies around the world that shackle private initiative will reduce the enriching benefits of global commerce.
Ironically and unwittingly, Greider has done all he can to ensure that his prophecies of a future crackup are self-fulfilling. He has advocated, with eloquence and conviction, policies whose adoption could well precipitate just such an economic collapse.
There is no mincing words: This is a truly awful book. It would be easy, but I think improper, to chalk up the book's failures to the author's leftist leanings. One can imagine a book, as yet unwritten, that offers a powerful and challenging critique of the new global economy from a leftist perspective. As Greider notes in his first paragraph, creative destruction on a world scale "throws off enormous mows of wealth and bounty while it leaves behind great furrows of wreckage." A book that engaged our compassion for those left behind and urged some amelioration of their condition would have made an important contribution to the political debate.
There are glimpses of such a book in Greider's opus, but they are swamped by all the pernicious nonsense I have outlined above. Greider's book deserves round condemnation—richly deserves it—not because of his perspective or priorities, but because of his woefully flawed understanding.
Contributing Editor Brink Lindsey (102134.2224@compuserve.com) practices trade law in Washington, D.C.
Walter Truett Anderson
In recent times we have seen the emergence of a new polarization–anti-technology vs. pro-technology, Luddite vs. techie. The neo-Luddites dream of people leaving technology behind and advancing into a future that looks–well, looks a lot like the past. The techies dream of artificial intelligence–computers so brilliant that they can advance and leave people behind. (See, for example, the cyberpunk classic Neuromancer.) Along with this goes a lot of argument–much of it useless hyperbole–about whether technology is good or bad, destroyer or savior.
What we really need to do, rather than take sides in any such simplistic fistfights, is to understand how inseparable technological change is from human evolution. Technology is us.
Two books that I will mention here address this issue straight on. The third, a work of science fiction, illuminates it in more indirect ways, as fiction should.
Origins of the Modern Mind (1991), by the Canadian psychologist Merlin Donald, argues that the human species has evolved by developing new "systems of representation," and that at each stage–as people invent new ways to communicate and manage information–we become in fact a different species. The first big jump, he says, was the invention of mimesis. Then came speech, then–much later–writing. We are now in the midst of another such transition, and it is literally changing the way we think: "The growth of the external memory system," he says, "has now so far outpaced biological memory that it is no exaggeration to say that we are permanently wedded to our great invention, in a cognitive symbiosis unique in nature." What this means is that we are now evolving into computer-connected beings with a computer culture and a computer civilization.
Bruce Mazlish of MIT makes a similar point in The Fourth Discontinuity (1993), although his framework is more historical than evolutionary. He takes his title from the proposition that the human species has in recent centuries gone through a number of "discontinuities," each of which involved learning new–and disturbing–lessons about the world and our place in it. We learned the Copernican lesson that our planet is not discontinuous from the heavenly bodies, we learned the Darwinian lesson that humans are not discontinuous from the animals, and we learned the Freudian lesson that the conscious mind is not discontinuous from its preconscious origins. Now, he says, "humans are on the threshold of decisively breaking past the discontinuity between themselves and machines," discovering "that tools and machines are inseparable from evolving human nature." Mazlish's book doesn't footnote Donald's, and I don't think that is an academic oversight. Rather, I suspect he merely moved along his own disciplinary path (he is a historian) and came to a quite similar conclusion.
Kim Stanley Robinson isn't trying to make any such point in Blue Mars (1996), but he makes quite a few anyway. This is the third volume in a trilogy (the previous installments were Red Mars and Green Mars) about an expedition to Mars that results in the deliberate transformation of the planetary ecology and the growth of a new human civilization. Robinson's books reflect most of the current scientific thinking about "terraforming," and also show how that issue might lead to a new kind of technophobe-technophile argument. The major political groupings in his story are the Greens who are eager to modify the planet, and the Reds (more or less similar to the Greens here on Earth), who prefer to leave it alone. In the books the Reds win many of the arguments, but the Greens proceed to change Mars–while human beings move on to terraform Venus, various asteroids, and the moons of the outer planets. Along the way they go through several technological revolutions and evolve some fancy artificial intelligence, but remain recognizably human. Technological change and human evolution proceed inseparably from one another, much as (according to Donald and Mazlish) they always have.
Walter Truett Anderson (waltt@well.com) is a political scientist, journalist, and author/editor of numerous books–most recently Evolution Isn't What It Used to Be: The Augmented Animal and the Whole Wired World (W.H. Freeman) and The Truth About the Truth: De-confusing and Re-constructing the Postmodern World (Tarcher/Putnam).
Stephen Cox
To grasp the significance of technology, it's helpful to look at a society that didn't have much of it to go around. Conquest (1993), Hugh Thomas's magisterial account of the destruction of the Aztec Empire, shows precisely how far a society could advance without wheels, nails, or candles. (The lack of firearms was a comparatively minor problem.) Thomas demonstrates what can and can't be done in such a society, and he dramatically illustrates its vulnerability to any competitor that has marginally less primitive tools.
But material technology is the child of intellectual technology, whose best conquests are peaceful ones. A remarkable example is the sudden triumph of agriculture on the North American plains–an effect of the advanced intellectual technology of free enterprise. Willa Cather's great novel O Pioneers! (1913) richly evokes the experience. Cather's protagonist is a young woman who is distinguished by her skillful use of capitalist methods. Hoping to do more than scratch out a modest living through sheer hard work, she takes the risk of thinking. She invests in real estate and farming methods that other people scorn, and her investments pay off. She transforms both her land and her life.
You can't understand technology without understanding how people think about technology, and, of course, you need to know the bad ideas as well as the good. That's why I want to insert a recommendation of just one influentially bad book, Thorstein Veblen's The Engineers and the Price System (1921). Veblen ably advocates a leading myth of the machine age: the idea that material technology "advances" because of people's collective efforts, only to be manipulated and hindered by capitalists for the sake of their private profits. This idea represents a profound misapprehension of the ways in which material technology is affected by investment, market prices, and property rights. Veblen recommended that capitalists be replaced by a "Soviet of technicians" that could "take care of the material welfare of the underlying population"–a proposal that is either chilling or comic, depending on the way you want to take it, but that is very much in the 20th-century spirit.
One of the finest books written in opposition to that spirit is Isabel Paterson's The God of the Machine (1943; republished 1993, with an introduction by me). Paterson offers a complex and compelling theory of history that explains the relationship between a dynamically developing material technology and the concepts of individual rights that are fundamental to capitalism. And she adds a warning to anyone who assumes that the industrial machine on which modern life depends will keep on humming even in a world dominated by social engineering. "For the very reason that the action of inanimate machinery is predetermined," she says, "the men who use it must be free. No other arrangement is feasible." After reading Paterson, one can hardly look at a toaster, much less a computer, without thinking of the Bill of Rights. But that's as it should be. It's not an accident that rights came first and toasters came second.
Stephen Cox (sdcox@ucsd.edu) is a professor of literature at the University of California, San Diego, and the author of Love and Logic: The Evolution of Blake's Thought (University of Michigan Press).
Penn Jillette
When you're in a Spielberg state of mind, try this: Take a baby from 150,000 years ago and raise him/her in modern Manhattan. What have you got? You've got a 21st-century kid, with in-line skates. Now, take the next kid born now and send him/her back 150,000 years and what have you got? Some grub-scrounging missing link.
Technology is all that matters. Technology is all that makes us human. You want books on technology? Every goddamned book is about technology. Every conversation is technology. Technology is all we got. If you don't like technology, you don't like humans.
If you want the above premise written by authors who aren't smartasses, try Making Silent Stones Speak: Human Evolution and the Dawn of Technology (1993), by Kathy D. Schick and Nicholas Toth. They're a nutty couple that went out, lived in the bush, made Stone Age tools, and used them for wacky stuff like butchering an elephant. Is that science or performance art? It's the best of both. Read it.
You want another book that'll rock your world? Try The Beak of the Finch (1994), by Jonathan Weiner. It's about another cool science-nut couple. The couple is perfect, but the author screws the pooch with biblical references. Jesus and Bible quotes in a science book are pure evil. He also misuses the phrase, "Possession is 9/10ths of the law," and yaps about Zen, global warming, and other hippie Luddite ideas. It's still a great book. This nutty couple goes and lives on one of those rocky, useless, Darwin islands and measures the length of the beaks of the finches (it's a well-named book). Again very close to performance art. They see evolution happening! Technology finding out exactly how we got here.
These books make you love goofy couples. The book I want now is the sex book about these wilderness-science couples. Never mind Pamela Anderson and Tommy Lee, I want to hear the nasty on the professionals that have nothing to do but discover our world like humans and screw like beasts. Those are the techno-videos I want to study.
The cheeses at REASON asked me to include one work of fiction. Why not include the best work of fiction in the world? The Mezzanine (1988), by Nicholson Baker, will kick your ass. I laughed, I cried. What the hell more you want? After science-sex couples use technology to learn about who we are and how we got here, let Nick think about it. His character takes a deep whiff of the madeleine of our modern world and records everything he thinks while riding the escalator. It's an importantly funny book. When your thoughts overlap his, you get that universal-one-world-one-people feel that's so great and when you don't overlap it's "oh-man-we're-all-alone," Camus-city. Both feelings are important. Both are true. The Mezzanine is true.
And it's technology that lets us think together.
Penn Jillette (mofo666@sincity.com) is more than half of Penn & Teller by weight. He has co-authored Penn & Teller's Cruel Tricks for Dear Friends (Villard) and Penn & Teller's How To Play with Your Food (Villard). He also does some sort of magic show.
Jonathan Kochmer and Jeff Bezos
Jonathan Swift's 1696 satire, "Battle of the Books," describes a passionate war between armies of living books–the Ancients and the Moderns. Exactly 300 years later, equally vigorous yet often simplistic battles rage among today's Ancients and Moderns: technophobes and technophiles.
The Pinball Effect, Out of Control, and Haroun and the Sea of Stories are three books that abandon simple explanation, instead acknowledging complex interdependencies between the makers and the made. They also explore the possibility that distinctions between society and technology are becoming less pronounced.
Everyone recognizes that technological history is complex, but most authors still clutch timelines and linear paths of cause and effect like drowning sailors. In contrast, James Burke's The Pinball Effect (1996) is a stunningly original and joyously otterine swim through the sea of history: Few other books we know so masterfully document the dizzyingly intricate symbioses of inventors and invention.
Kevin Kelly's Out of Control (1994) demonstrates that human artifacts such as machines, networks, and economies are becoming like organisms, ecologies, and societies–sometimes by design, but increasingly by emergence. Will centralized, engineering-oriented solutions be abandoned as the distinction between the made and the living dissolves? Some call Kelly a technophile, but he is in fact a syncretic biophile: His rallying cry is "life is the ultimate technology."
Finally, Salman Rushdie's Haroun and the Sea of Stories (1990) can be read as a lyrical parable of culture as a technology, with stories portrayed as the machinery of cultural replication. And few would suspect that Rushdie has written one of the best metaphorical descriptions of the Internet: "…the water…was made up of a thousand thousand thousand and one different currents, each one a different color weaving in and out of one another like a liquid tapestry of breathtaking complexity; and Iff explained that these were the Streams of Story, that each coloured strand represented and contained a single tale…as all the stories that had ever been told and many that were still in the process of being invented could be found here, the Ocean of the Streams of Story was in fact the biggest library in the universe….the stories were held here in fluid form, they retained the ability to change…unlike a library of books, the Ocean of the Streams of Story was much more than a storeroom of yarns. It was not dead, but alive….'And if you are very, very careful, or very, very highly skilled, you can dip a cup into the Ocean…and you can fill it with water from a single, pure Stream of Story.'"
Increasingly, the meaningful question may not be whether technology is good or bad, but instead, whether there are substantive differences between the makers and the made.
Jonathan Kochmer (jonathan@amazon.com) is a senior editor at Amazon.com Books, a Web-based bookseller, and author of four Internet books and scientific articles on evolutionary theory, animal behavior, and climate change. Jeff Bezos (jeff@amazon.com) is president of Amazon.
Bart Kosko
I first began to think how technology relates to society when I read and reread Daniel Defoe's Robinson Crusoe (1719) in grade school. The stranded Crusoe is a society of one person for much of the book. He defied the claim of John Donne and Ernest Hemingway that no man is an island. Crusoe was a social island on an ocean island. He fought his way through John Locke's state of nature only to later have to fight the island natives in Thomas Hobbes's war of all against all.
Crusoe used technology to fight these battles for personal and political survival. He used ideas to shape the structure of his world. Crusoe killed goats for their meat and hides. He made weapons from bones and stones. He built a shelter from tree parts and hides. Defoe helped him with the rifle and the shipwrecked barrels of gunpowder that had floated to shore. Crusoe gladly took those gifts from his maker and used them to make himself a better world.
A second book gave me a deeper insight into how technology drives survival and vice versa. I was 18 years old and stood in line at my high school graduation in the small Kansas farm town of Lansing. My physics teacher came up and shook my hand and gave me a book as a graduation gift. He was an Army colonel at nearby Fort Leavenworth. He came to our high school as a type of pro bono effort to teach just one physics class of seven students. The book was Carl Sagan's The Dragons of Eden. It went on to win the Pulitzer Prize in 1978 but was out of step with a Bible Belt farm town with a posted population of a little over 4,000 persons.
Sagan's book showed how the ancient battle between the reptiles and mammals has shaped our brains. Dreaming often involves our reptile-like midbrain. Most of us fear touching a harmless black garter snake more than we fear touching the more dangerous rabbit or raccoon. Brains house their evolutionary history in their present structure and function.
Sagan also made a conjecture that forced me to think about just how much of personal and social behavior stems from genes. You sometimes flinch yourself awake just as you fall asleep. Sagan suggested that this was a reflex that helped keep our hominid ancestors from falling out of their trees at night. The reptiles or "dragons" down below ate those great-grandparents who did not have the reflex or who had it and still fell. This book put me on a path that led me in time to write the text Neural Networks and Fuzzy Systems on the mathematical structure of the brain.
The book also led me in time to test one of its ideas on my newborn daughter just minutes after she emerged from a C-section birth. I held my swaddled daughter as all proud fathers do. Soon I could not resist touching the side of my forefinger to her small bare foot. Sagan was right. Her foot clutched at my finger just as a young monkey's paw would clutch at a tree branch.
Arthur C. Clarke showed where survival and technology could end in his 1956 novel The City and the Stars. A billion years have passed. Earth's mountains have crumbled and its seas have dried up. Humans have been to the stars and come back. They have lived now for eons in a huge city under the control of an omnipotent but benign central computer.
Humans have lost their teeth and nails in this dull steady state. They have lost much of the hominid within them–and with it their thirst for change and adventure. Hope comes only from the rare loner who challenges the system with no chance of changing it.
Bart Kosko is a professor of electrical engineering at the University of Southern California and directs USC's Signal and Image Processing Institute. He is the author of Fuzzy Thinking (Hyperion) and the new Prentice Hall text Fuzzy Engineering. Bantam/Broadway will publish his Heaven in a Chip and Avon will publish his novel Nanotime in the spring of 1997.
Brink Lindsey
The capacity to produce technology–to remake the world around us in the image of our thoughts–is a basic aspect of human nature. It is as old as our first ancestors, who chipped stone axes 2 million years ago: We have named them Homo habilis–"handy man"–because of their toolmaking.
In the past century or so, however, this capacity has achieved an entirely new potency. A sustained, focused, and intricately integrated creative outburst on the part of millions of people has redefined the pace and possibilities of human existence in ways previously only dreamed about. Life dominated by natural rhythms and limits has given way to life mediated and liberated by artifacts.
This transformation, still unfolding, is one of the greatest quantum leaps in human development. But of course technology cannot be disentangled from the rest of human nature, and all its frailties and contradictions. And so the process of building this new, man-made world has been an exceedingly messy one–fluky and unpredictable, often tragic and misunderstood.
In this regard, it is humbling to realize that this great release of creative energies was made possible by violence and predation: Throughout history, warfare has done as much as anything to drive technological development. William McNeill's fascinating The Pursuit of Power (1982) tells of the unique contributions made by the intense but inconclusive military competition that roiled medieval and modern Europe. Incessant conflict accelerated the pace of innovation directly, but the indirect effects were even more momentous. By combining domestic order with international anarchy, the fractured political landscape of Europe created spaces within which commerce could develop and flourish. The addition of profit motive to plunder motive gave technological growth an unstoppable momentum.
The resulting Industrial Revolution created unprecedented wealth and opened broad new avenues for creativity and achievement. But its promise was perverted by two grievous misconceptions widely shared among its implementers and champions: first, a belief that the logic of industrial development required ever more centralized control on ever more massive scales; and second, a withering reductionism that rejected as irrelevant or even valueless anything that is not measurable and concretely utilitarian. These two ways of thinking combined in powerfully destructive cultural and political movements that sought to bring mankind down to the level of the clanking, soulless machines it had constructed. These movements–which can be summed up with the terms "technocracy" and "social engineering," and which can be seen as a kind of Industrial Counterrevolution–inflicted deadening bureaucracy at their best, totalitarian horror at their worst.
The misbegotten ideal of technocracy was brilliantly satirized in Yevgeny Zamyatin's We, written in 1920-21, just as the Soviet Union was being established as its purest historical incarnation. The book is set in a dystopia of the far future, where in a city cordoned off from lush, unruly nature by the "Green Wall," humanity has been brought nearly to mechanistic perfection. "Numbers," as people are known in the "One State," live in glass apartments, so that all actions are visible to the ever present but unidentified "Guardians." A "Table of Hours" prescribes every action of every day, down to 50 chews per bite at mealtime, masticated in unison by all the numbers in the One State. (Interestingly, the "ancient" identified as the chief prophet of the One State is not Karl Marx, but Frederick Winslow Taylor, the father of "scientific management" and, incidentally, no small influence on Lenin.)
In Zamyatin's fiction, the parts of human nature suppressed by the One State–the spontaneous and unpredictable, the imaginative, the biological–are not eliminated: They persist outside the Green Wall, and are cherished by dissidents within who plot the tyranny's overthrow. Zamyatin's slender hope has been fulfilled in our generation, as the technocratic ideal has suffered stunning reverses–from the collapse of the Soviet Union to the restructuring of corporate bureaucracies.
Here again, however, things are messy: The abuses of the social engineers have helped to strengthen the case of Luddite reactionaries who despise and fear the man-made world. What has happened, then, is that much of the intellectual critique of technocracy has been of the baby-and-bathwater variety. Both sides have shared a common false premise: that technology is dehumanizing. The technocrats reject humanity; the Luddites reject technology.
Where to go from here? Fortunately, a body of thought that transcends this false dichotomy has been developing in recent years. The new sciences of chaos and complexity focus on the general capacity–in material objects, living creatures, and human institutions–for spontaneous order: complex behavior that is not the product of any central design, and cannot be reduced to the sum of the parts. While there are many excellent books that provide the general reader with an overview of this new thinking, let me recommend Louise Young's The Unfinished Universe (1986). Young offers a lyrical vision of the universe's unfolding complexification–from physical order to chemistry, then biology, and then mind and culture.
This new way of understanding the world–non-mechanistic, non-reductionist–reconnects the technological and biological by showing that both are manifestations of spontaneous order. In this way, it can support a new kind of idealism about technology's possibilities. The ongoing creation of a man-made world can be seen, in this view, as part of the larger aspiration of mankind's continuing complexification: the development, through free institutions, of the best of human nature in all its variety and richness.
Contributing Editor Brink Lindsey (102134.2224@compuserve.com) practices trade law in Washington, D.C.
David Link
I begin with "The Dynamo and the Virgin," a chapter from The Education of Henry Adams (1905). In it, Henry Adams visits the Great Exposition of 1900 and, viewing the incredible, towering power sources on display, begins to contemplate the awesomeness of the developing technologies. The only precedent he can think of is the inspiring and miraculous power the Virgin held in the history of art.
Adams was on to something about why technology has been so compelling in the 20th century. Men are every bit as much in thrall to the might of the machines they can build as to the power of sexual beauty. Every little boy knows what it's like to stare wide-eyed at a fire engine or a fighter jet or a bulldozer. This is a large part of what has moved men to make ever bigger and more majestic bridges, buildings, and then machines. That sheer glory of force, of size and power and muscularity, is also a large part of what made the film Top Gun such a hit. Sure, there was a perfunctory love story in the movie, but what it's about, what you can't forget, is the roar of those jets, the fire and the blaring sound of them, and the thrill and fun the pilots have in actually being in charge of all that force.
As our control over technology developed, though, a necessary and inevitable lesson accompanied our progress: humility. The second book on my list is Walter Lord's A Night to Remember (1956), on which the 1958 film was based. The story of the sinking of the Titanic is the stuff of pure myth–if it had not actually happened, someone would have had to make it up. The sinking of the Titanic is most useful as a story about hubris. Our technology is the manifestation of our dreams, and nothing in the story suggests we should stop trying to make our marvelous, outsized visions real. On the other hand, we ought to keep in mind that while we are godlike, we are not gods. Glorious luxury liners are well worth the effort and the cost, but that's no reason not to have enough lifeboats on board.
My third choice is The Dancing Wu Li Masters (1979), by Gary Zukav. The goal of the book was to make the obscure field of quantum mechanics comprehensible to non-scientists, and, surprisingly, it succeeds in illuminating what the attraction of all that heady stuff is. Like Adams, these men (and they're pretty much all men) experience real awe, this time around, not at size and force, but at the enigmatic elegance of the physical world. The Dancing Wu Li Masters helps explain why modern scientists are driven to explore particles so infinitesimally small that they are (sometimes quite literally) nothing but ideas.
This move from Adams to Zukav is summed up in Stanley Kubrick's 2001: A Space Odyssey. The film begins with man's fascination with the first tool; in one brilliant edit it moves dazzlingly to the end of our century, where the idea and execution of tools have been all but perfected. What is left is the realm of pure mystery. Which, in a way, brings everything back to where Adams found it in 1900–the point of wonder.
David Link (dflink@pacbell.net) is a Los Angeles writer whose essays appear in Beyond Queer: Challenging Gay Left Orthodoxy (The Free Press), edited by Bruce Bawer.
Paul Lukas
Most of us tend to think of technology in terms of the macro rather than the micro. I refer here not to sheer physical size–surely Silicon Valley has taught us that the mightiest technological achievements can come in the tiniest of processing chips. I refer instead to our notions of technological complexity and, especially, technological power, whether measured in horsepower, kilowatts, megatons, or gigabytes. For the most part, our cultural mindset goes, bigger is better and biggest is best.
But technological power, not to mention technological utility, comes in many sizes. For every supercomputer, digital camera, and electronics laboratory, there's a host of products that may seem far more mundane yet are no less remarkable: office supplies, kitchen gadgets, canned goods, children's toys. Don't mistake these items' ubiquity for technological simplicity. Our tendency to take them for granted is in fact a testament to an immensely sophisticated production system, a system so vast and efficient that it can provide these things without most of us even stopping to ponder how it all gets accomplished. The mechanical engineering and industrial design processes that produce these items may be less glamorous than, say, software design, but they're no less important or impressive. Think of them collectively as inconspicuous technology.
There are a number of books that focus on inconspicuous technology. One of the best is Henry Petroski's The Evolution of Useful Things (1992), a loving examination of such minor miracles as paper clips, tin cans, pins and needles, nuts and bolts, silverware, adhesive tape, pull tabs, and the like. You don't need to be a minutiae fetishist to appreciate Petroski's examination of the subtler aspects of our everyday world, and you don't need to be an engineer to understand his prose. (The book also devotes several pages to a discussion of the zipper, which is itself the subject of an excellent book: Robert Friedel's Zipper: An Exploration in Novelty [1994].)
Packaging is another key component of inconspicuous technology. Package-design elements like the Coca-Cola wave and the Wrigley's arrow have become subsumed into our collective cultural psyche. For a look at the history of packaging, including an examination of the formidable technological challenges packaging can present and the additional technological innovations it can facilitate, try Thomas Hine's The Total Package (1995), which examines everything from milk cartons to cereal boxes and will forever change your perceptions of your local supermarket.
These books are wonderful and instructive, but they're also a bit scholarly; the best way to appreciate inconspicuous technology is in the context of the real world. Curiously, the best example of this is a work of fiction: Nicholson Baker's The Mezzanine (1988), in which the narrator enthusiastically pursues a series of obsessive intellectual tangents on such unlikely subjects as soda straws, men's-room urinals, shoelaces, doorknobs, tear-off perforations, cigarette butts, and vending machines, all in the course of a short escalator ride. To fully appreciate the wonders of the inconspicuous world, in theory and practice, start here.
Paul Lukas (krazykat@pipeline.com) is a columnist for New York magazine, the editor of Beer Frame: The Journal of Inconspicuous Consumption, and the author of the forthcoming Inconspicuous Consumption: An Obsessive Look at the Stuff We Take for Granted, to be published by Crown in January.
Henry Petroski
To understand technology fully, it is necessary to understand the nature of engineering. The formulation and solution of technical engineering problems is, of course, at the heart of every technological endeavor, whether it be the design and production of an automobile or the generation and distribution of electricity, but dealing with technical problems within the constraints of the laws of nature is only one aspect of the total engineering enterprise. Real engineering in the real world is inextricably complicated by cultural, social, political, economic, and aesthetic goals that shape and in turn are shaped by the technical objectives.
The full story of just about any ostensibly technical project will illustrate the complex interrelationships among competing goals and constraints that engineers must deal with in the course of producing a technological artifact. Among the best of such stories is that of the building of the Brooklyn Bridge, told by David McCullough in his 1972 book, The Great Bridge. The true story of the conception and realization of a bridge that has become a cultural treasure is also the very human story of how John Roebling, his son Washington Roebling, and Washington's wife, Emily Warren Roebling, dealt with accidents and death, not to mention political corruption and greed, along with the physical and technical challenges of constructing the largest bridge in the world. It is as gripping as any novel.
A more recent book, The Innovators (1996), by David P. Billington, tells the stories of engineering pioneers who shaped modern technology and thereby made America modern. Rather than weaving an extended narrative about a single artifact, however, Billington uses famous technological achievements such as the steam engine, the distribution of electricity, the telegraph, and steel making to show how even the most technical aspects of engineering–its equations and formulas–are influenced by such factors as social, economic, and aesthetic considerations. By explaining in simple terms how technical decisions must incorporate a wide range of seemingly nontechnical considerations, Billington's book shows more explicitly than any other the true nature of engineering as a social and cultural, as well as a technical, endeavor.
Engineering is done by engineers, of course, and the 1976 book by Samuel C. Florman, The Existential Pleasures of Engineering, has become a classic for understanding the passion and enthusiasm individual engineers can feel for their work and the satisfaction they can experience as they make tangible contributions to society and culture. Florman's book has recently been reissued in a second edition (1994), which includes a new preface and chapters from some of his other books, and it is considered by many to be required reading for those wishing a full understanding of the engineer and engineering in modern society.
Henry Petroski (hp@egr.duke.edu) is A. S. Vesic Professor of Civil Engineering and professor of history at Duke University. His most recent books are Engineers of Dreams: Great Bridge Builders and the Spanning of America (Alfred A. Knopf) and Invention by Design: How Engineers Get from Thought to Thing (Harvard University Press).
John J. Pitney Jr.
Foundation (1951), by Isaac Asimov: In this classic of science fiction, a "psychohistorian" uses computers and advanced mathematics to predict the decline and fall of the Galactic Empire, and launches a plan to shorten the coming dark ages. In writing this novel, Asimov assumed that human nature would remain flawed but that technology would evolve to the point of producing accurate forecasts of events centuries in the future. The latter assumption is highly debatable, for as Hayek wrote, the "mind can never foresee its own advance." Asimov was on firmer ground when he included a parable about technological literacy. As the Empire declines, the planet Terminus gains power because its inhabitants are the only ones who still understand nuclear energy. In the surrounding kingdoms, people remember the operating instructions for their nuclear-powered machines, but have long forgotten how the things actually work. Terminus exploits this ignorance by creating a mystical cult around the technology, which allows it to manipulate the kingdoms with spells and sorcery. Remember this part of the novel the next time you read about the state of science education in the public schools.
The Ascent of Man (1973), by Jacob Bronowski: This companion volume to the excellent television series of the 1970s is not your typical coffee-table book. Though highly readable and handsomely illustrated, it also has a strong philosophical viewpoint. The title itself is revealing: The late Bronowski believed that the growth of scientific and technological knowledge did indeed constitute an "ascent." His book celebrates discovery and invention while bemoaning the "loss of nerve" represented by the all-too-frequent mysticism of pop culture. And it powerfully dismisses the widespread notion that technology will turn people into numbers. At one point, we see a photo of Bronowski squatting in a muddy field, and the accompanying text explains: "This is the concentration camp and crematorium at Auschwitz. This is where people were turned into numbers. Into this pond were flushed the ashes of four million people. And that was not done by gas. It was done by arrogance. It was done by dogma. It was done by ignorance."
See How They Ran: The Changing Role of the Presidential Candidate (1991), by Gil Troy: This volume may seem an odd choice in this context, but it is filled with wise observations about the ways in which technology has changed (and has failed to change) the relationship between voters and candidates. During the 1850s, for instance, rail travel and telegraphy let candidates reach more people than ever before, allowing a hitherto obscure Illinois lawyer named Abraham Lincoln to become a national figure. Since then, technology has continued to expand the potential for public conversation on campaign issues. But the realization of that potential hinges on the substantive content of the campaigns, which is something that technology itself cannot determine.
Contributing Editor John J. Pitney Jr. (jpitney@mckenna.edu) is associate professor of government at Claremont McKenna College.
Virginia I. Postrel
One of the pleasures of editing a magazine is bringing together fine, insightful writers whose work enriches and extends your own ideas. On the subject of our relation with technology, then, it's no surprise that my touchstones include Walt Anderson's Evolution Isn't What It Used to Be (1996), Henry Petroski's The Evolution of Useful Things (1992), and Fred Turner's Tempest, Flute, and Oz (1991)–very different books, all important and well written, all ably represented by the presence of their authors in these pages. So, invoking another of an editor's pleasures, I can pick three other books.
Few writers better explore the tensions among technological innovation, scientific evolution, and politics than physicist Freeman Dyson. And few better appreciate the ecologies of human societies, with their complex dynamics, odd feedback effects, and occasional paradoxes. In From Eros to Gaia (1992), Dyson collects some of his finest essays, including the priceless "Six Cautionary Tales for Scientists," originally written in 1988. It contrasts the small-scale, unglamorous, and effective Plan A approach to technical problems with the politically successful Plan B–and finds the same patterns in the First, Second, and Third Worlds. The essay should be required reading for all science policy makers.
"Technology" means not only gadgets or computers but any embodied knowledge, including business systems. Both sorts of technology combine in Joseph Nocera's A Piece of the Action: How the Middle Class Joined the Money Class (1994), which tells the story of how bank credit cards and money market funds made credit and investment mass, middle-class products. Since these instruments work only because they are, in Paul Lukas's phrase, "inconspicuous technologies," we never appreciate the absolute genius, pigheaded determination, computer power, and sometimes disastrous experiments that building such systems required. Nocera is a terrific storyteller, and he has a fascinating story to tell. Plus his chapter on the Age of Inflation is itself a remarkable piece of technology: It so captures the utter, spiraling financial panic that hit middle-class America in the late 1970s that it will give you flashbacks. Good reading, too, for GenXers who think economic insecurity began with them.
Following my own instructions, I end with a novel or, rather, a series of novels. It's been 25 years since I read Laura Ingalls Wilder's Little House books, but they left some indelible technological images: the joys of a father's fiddle playing, the advantages of blindness (you can sew when it gets dark), the preservation and uses of a slaughtered hog, the economic triumph represented by glass window panes. Ghostwritten by Wilder's daughter, the important libertarian intellectual Rose Wilder Lane, the Little House books remind us of both the value of old technologies in their time and the wonders of newer ones in ours.
Virginia I. Postrel (VPostrel@aol.com) is the editor of REASON and a columnist for the technology magazine Forbes ASAP. She is writing a book, called The Future and Its Enemies, on clashing ideas about social and economic evolution, which will be published by The Free Press.
Adam Clayton Powell III
The future is always with us, in some form or outline, so wherever technology is taking us, the clues are already here. Each of these books teaches us, in its own way, to look for hints of the future in what is already around us.
To a man with a hammer, goes the expression, the world appears a nail. And so to Andrew Grove–the president of Intel, the world's most successful producer of microprocessors–the world is a universe of hardware and software. Yet precisely because it remains so narrow, Grove's Only the Paranoid Survive (1996) opens exactly the right aperture for a glimpse of the future.
Computer makers are in territory so new and changing so rapidly that they (and we) must plan for a largely unknown and unknowable future, making a path across a trackless land: "It's about finding your way through uncharted territories." Grove insists this unknown territory is suffused with the thrill of the possible–and, too, with the fear that often accompanies the unknown. But that fear, he argues, can be a force for positive change: "Ideally, the fear of a new environment sneaking up on us should keep us on our toes." (Now we know what to make of the book's title.) Grove offers the analogy of a fire department: Firefighters do not know exactly where the next fire will be, but they can guess a rough total and the historical distribution pattern, so they can plan accordingly.
Grove also notes the fundamental generation gap now forming between most of us industrial-age, analog-era adults and the post-industrial, under-25 digerati who have spent most of their lives learning about the world, playing games, working on homework, and socializing with each other online. Just as their grandparents abandoned afternoon newspapers for television and their parents found generational refuge in FM radio, audience data now show that the computer generation born since 1970 is abandoning print and electronic mass media to embrace the more tailored, more fragmented, computer-based media. This gap, which Grove calls "a demographic time bomb ticking away," has profound implications for public discourse.
And so it is with Grove's vision of future media: science fiction, perhaps, to the industrial agers, but to the under-25s just another upgrade–and just a few years away. "Processing power is going to be practically free and practically infinite," said Grove in a September Newsweek interview. "This will allow us to turn automatic 3-D photorealistic animation into a ubiquitous reality within two or three years."
Kevin Kelly, executive editor of Wired magazine, takes Grove's vision of the future of computers and extrapolates that vision to describe the future of society, with a fundamentally different social contract. He embraces the chaos of ultimate decentralization and individual power, writing a book that in its very title celebrates the complete absence of authority imposed by any central place or person. "There is no central keeper of knowledge in [the Internet], only curators of particular views," writes Kelly in Out of Control (1994). So he argues we are moving to a "highly connected yet deeply fragmented society" where "distributed, headless, emergent wholeness becomes the social ideal."
And if that leaves us out on the edge, it may be a challenge for Tom Stoppard's puzzle of a play, Arcadia (1993), to take us even further. But take us further it does, in a work that has nothing to do with computers or microchips but everything to do with research, society, the past, and the future–and that also manages to encompass Lord Byron, quantum physics, and trends in British garden design in the late 18th century.
Stoppard presents alternating scenes of today's researchers and the subjects of their research in 1809, and we can see how painstaking care and rigorous procedure lead our present-day characters to exquisitely erroneous conclusions about the past. Amid this lesson in humility is one exclamation that serves as a signpost, an emblem for our sea change to the brave new digital world. The character Valentine, a twentysomething mathematician, is entranced by the possibilities of chaos theory and quantum mechanics, as excited by the fall of the old order as are his real-life twentysomething digerati counterparts.
"The future is disorder," says Valentine. "A door like this has been cracked open five or six times since we got up on our hind legs. It's the best possible time to be alive, when everything you thought you knew is wrong!"
Adam Clayton Powell III (apowell@freedomforum.org) is vice president, technology programs, at The Freedom Forum.
John Tierney
For critics of technology, The State of Humanity is a terribly depressing read. The 1995 book, edited by the economist Julian Simon, is a collection of essays and graphs analyzing human welfare over the past few millennia. The trends are just about all positive: Humans are enjoying longer, healthier, wealthier, and freer lives in a world with less pollution and more plentiful resources. The reason for all this good news, of course, is technology. Doomsayers claim to be taking the long view of technology and its problems, but how can they ignore evidence like this?
The answer to that question can be found in William Tucker's book, Progress and Privilege: America in the Age of Environmentalism. This 1982 work (Tucker had the misfortune of being too far ahead of his time to sell many books) anticipates the neo-Luddites and environmentalists of the 1990s. As Tucker shows, today's critics of technology–including the ones who think they're progressives fighting to help the poor–are part of an elitist, conservative tradition. Aristocrats, clerics, and intellectuals have always come up with creative reasons for opposing any technological change that threatens their comfortable status or interferes with their determination to write the rules for everyone else. Tucker offers lovely analyses of our current environmental "crises." Consider his explanation for the angst over population growth in the Third World: "There have been two lines of demagogic argument that have always gone down well in history. The first is to tell the poor that the rich have too much money. The second is to tell the rich that there are too many poor people."
If these books haven't persuaded you to ignore the Luddites and Malthusians, try a work of inadvertent fiction by an ornithologist named William Vogt: Road to Survival. It preaches against technological innovators–"the freebooting, rugged individualist" must be recognized as "the Enemy of the People"–and warns of pending famine and poverty due to pollution, overpopulation, vanishing topsoil, depleted stores of natural resources, and various environmental catastrophes. It reads exactly like a tract from Bill McKibben or the Worldwatch Institute. But this book was published in 1948–and the doomsday predictions were being made about the United States. Reading it will make you laugh, restore your faith in modern technology, and free you from any temptation to take the doomsters seriously.
John Tierney is a staff writer for The New York Times Magazine.
Frederick Turner
In some ways the most interesting book on the relationship between humans and our technology that I have read recently is William R. Jordan III's remarkable work The Sunflower Forest, but it is as yet unpublished (smart publishers, take note). Jordan is a curator at the University of Wisconsin-Madison Arboretum, and his book is a searching analysis of the way in which our Cartesian and Puritan heritage has led us to divide humans from the rest of nature and thus, finally, to the view espoused by the likes of Bill McKibben, in which nature is by definition what is untouched by human hand, and humanity is a blight and cancer upon nature. Jordan presents an alternative view, in which the human appetite for ritual, myth, play, and gardening can actually improve the richness of the earth's living ecosystem, with beneficial economic results and without governmental invasion. We shall trade with the rest of nature, he believes, to our mutual benefit, in an exchange best thought of as an extension of biological evolution.
But since the book is unpublished, I cannot make it one of my three. A science fiction work, Neal Stephenson's The Diamond Age (1995), set in industrial China a hundred years hence, is a marvelous fountain of ideas about the human technological future. It is, like some recent works of Greg Bear and Gregory Benford, a convincing vision of a nanotechnological age (nano-engineered diamond has become the most popular building material in Stephenson's future). What is especially provocative about it is that Stephenson, despite his countercultural roots, has perceived, correctly in my view, that the masters and mistresses of a future technological age will be people with solid family backgrounds, personal honor, and discipline–the new Victorians, or "Vickies," as he calls them. As the subtitle of his book, A Young Lady's Illustrated Primer, suggests, however, a sufficiently sophisticated and artistic piece of educational software, based on storytelling, might serve as a substitute family. But the point is that chemical technology is more potent than our present physics-based technology, biotechnology more potent than chemical technology, and nanotechnology the most potent of all, exerting the greatest leverage upon the physical world.
My second bona fide book is Homer's Odyssey. This poem is among other things a good introduction to the brilliant simplicity of ancient Greek technology, and an exemplary demonstration of what constitutes user-friendliness in both software and hardware. Homer shows us how a bow, a door, a garden, a ship, a narrative verse tradition, an olive grove, an adze, a polytheism, a viticulture can work together to the enrichment of the world.
My last book is Martyn Fogg's Terraforming (1995), the current bible of those who believe we must begin to become a spacefaring civilization. He makes a compelling argument that we can colonize Mars using contemporary technology, and that we should do so. Reading Fogg, who is the president of the British Interplanetary Society, one feels a blush of shame at the pusillanimity and malingering of our times, the most prosperous in history, and of our generation, which would rather bicker over nasty little social prizes and slights than take up once again the heroic mantle of our destiny.
Frederick Turner is a poet, a theorist of the links between the sciences and the humanities, and Founders Professor of the Arts and Humanities at the University of Texas at Dallas. His most recent books include April Wind (University Press of Virginia) and The Culture of Hope (The Free Press).
Meanwhile, as overall manufacturing employment has remained more or less flat for decades, employment in the much less unionized service sector ballooned. As a result, unionization of the private sector labor force has declined from 36 percent in 1953 to only 11 percent today.
Next came Big Business. In the 1960s it was argued, most eloquently by John Kenneth Galbraith, that large American corporations were so powerful that they were effectively immune from market forces. Then came the '80s and '90s. One blue-chip giant after another began hemorrhaging red ink. General Motors saw its market share drop from 45 percent to 35 percent during the '80s; IBM's stock price collapsed from $140 to $40 between 1991 and 1993; Sears Roebuck was forced out of the catalog business. Foreign competition walloped not just smokestack dinosaurs, but Silicon Valley as well. Restructuring and reengineering became codewords for middle-management layoffs; while middle managers make up only 8 percent of the work force, they accounted for 19 percent of the job losses between 1988 and 1993.
Now it's Big Government's turn. Disillusionment with government can be traced back to the grand betrayals and failures of the 1960s and '70s: Vietnam, Watergate, stagflation, the growth of the underclass. But the current anti-Washington fervor ignited in the late '80s and was fueled by a succession of scandals featuring a sleazy mix of public power and private gain: the Keating Five, the Jim Wright book deal, the HUD mess, honoraria, the flap over congressional pay raises, and check kiting at the House bank. Throw in the chronic irresponsibility of deficit spending, George Bush's broken "no new taxes" promise, and Bill Clinton's general fecklessness, and distrust of politicians finally appears to have reached a healthy level.
Thus far the new anti-Washington mood has produced two electoral spasms: the 19 million votes cast for the bizarre Ross Perot in 1992, and the sweep of Republicans into control of Congress (as well as numerous state houses and state legislatures) in 1994. The first was a dead end; the fate of the second remains unclear. What is clear, however, is that we are far, far removed from the heady days of the New Frontier and the Great Society. Omnicompetent government has lost its luster, and its legitimacy; it is only a matter of time before it loses significant amounts of power.
One by one, the three great institutions of modern American political economy have come under sustained and furious assault. Those events are interrelated, and their combined historical significance is profound: A whole way of life is coming to an end. The triumvirate of Big Government, Big Business, and Big Labor–whose rise and ascendancy have done so much to shape American society over the course of this century–is collapsing, and something new is emerging in its stead.
A lot of ink has already been spilled in describing these changes. Virtually every popular business book these days is filled with talk of flattening organization charts, replacing functional departments with ad hoc teams, downsizing, outsourcing, speeding up response times and product cycles–in short, breaking up creaky old corporate empires and replacing them with something more flexible, more dynamic, more market-like. Meanwhile, authors such as George Gilder and Alvin Toffler (and politicians, notably Newt Gingrich) have spied a larger social transformation–from Machine Age to Information Age–and identified its defining feature as, in Toffler's words, "demassification": the decline of mass production, mass media, and mass politics, and their replacement by social institutions less centralized and hierarchical, more individualized and interactive.
What has been missing, though, is a satisfying explanation of why those changes are necessary. Overwhelmingly, the analysis up to now has focused on technology: Our institutions must change because they are technologically obsolete. According to this view, the technology of the industrial era was inherently centralizing and homogenizing (the assembly line, the skyscraper, broadcast television), while that of the information age is centrifugal and variegating (the personal computer, the fax machine, desktop publishing). The fundamental character of technology has changed, and so economics, culture, and politics must adapt accordingly.
Even as a rough generalization, this view of historical change is incomplete at best. Yes, new information and communications technologies have changed the workplace, making it easier to push decision making away from the center and closer to the customer. And yes, the entrepreneurial rambunctiousness and extravagant productivity of the electronics industry have shown private enterprise at its best just as government's stock has been dropping.
Nevertheless, there is a lot more to the old regime's decline and fall than the invention of the microprocessor. Up to now at least, foreign competition has done more to reshape American business practices than have computers–particularly competition from Japan, a nation much less computerized than our own. And in the political realm, primary credit for the present disaffection from government must be given to two factors: a string of government-caused disasters that has sapped public faith in statist "solutions"; and set against that backdrop, an ongoing war of ideas against collectivism in all its forms.
As to the rise of the old regime, it is fair to say that the concentration of people and resources begotten by mass production made the case for top-down control more plausible, and thus helped its imposition. But the idea, or even the implication, that the governmental and economic institutions now under attack were appropriate to a certain level of technological development is utterly wrongheaded. Those institutions have been flawed from their inception.
The transformation currently in progress is needed not to update the obsolete, but to correct the mistaken. What we are witnessing around us now is the uprooting of error–false assumptions and confusion buried so deep at the foundations of economic and political life that their excavation and removal leave the structures built upon them in ruins. Specifically, the old order now passing from the scene was less the institutional incarnation of the industrial revolution than a tragic misinterpretation of it. Indeed, it is not going too far to say that this order was the result of an industrial counterrevolution.
The Brainpower Revolution
The American industrial revolution represented a blazing efflorescence of creativity, invention, and analytical genius–in short, of brainpower–in the economic realm. The result was a radical break in human affairs: New energy sources, new electro-mechanical technologies, and new forms of organization were combined to increase the capacity for creating wealth beyond any prior imagining.
Thomas Hughes, in American Genesis, has compared the burst of technical genius during this period to the accomplishments of Periclean Athens and Renaissance Florence. It is exemplified by the careers of Bell and Edison, and charted by the increase in U.S. patents issued annually from 683 in 1846 to 22,508 only 40 years later.
The organizational innovations of the time are less celebrated, but also transformed the world. To give just a few highlights: line-and-staff management (1850s), modern cost accounting (1850-60s), commodities exchanges (1850s), futures markets (1850s), department stores and chain stores (1860s), monitoring of inventory by stock turn (by 1870), continuous-process production (1870-80s), vertical integration (1880s), large-scale trading of industrial stocks (1880-90s), incorporation of industrial enterprises (1890s), R&D departments (1890s), consumer packaging and national advertising (1900s), earnings forecasting and capital budgeting (1900s), moving assembly lines (1910s), market research (1910s), and the multidivisional corporate form (1920s).
As the complexity and intellectual challenges of economic life escalated dramatically, the need for knowledge workers–business managers, engineers, accountants, lawyers, advertising and marketing specialists–rose correspondingly. According to James Beniger in The Control Revolution, knowledge workers as a percentage of the total U.S. labor force made a quantum jump with the advent of mass production: from 4.8 percent in 1870 to 12.4 percent 20 years later, rising to 24.5 percent by the end of the 1920s. Thus the industrial revolution occasioned an unprecedented application of brainpower to economic life, and an equally unprecedented diffusion of brainpower throughout it.
At the same time, however, other developments were pushing in the opposite direction. Political and economic institutions were being created that bottled up brainpower, frustrated its exercise, or ignored it altogether. Most dramatically, government's rapid growth encroached upon the blooming, buzzing variety of private action and substituted the inflexible sameness of bureaucratic edict. Meanwhile, the new giant corporations were the instruments of industrial revolution, but they were flawed instruments. In their handling of workers, and their organization of managers, they betrayed their promise and became instruments of industrial counterrevolution.
The Intellectual Counterrevolution
The cephalization of economic life brought about by the industrial revolution was not sui generis. It was, rather, part of a larger historical continuity: the development of capitalism. As opposed to the custom- and coercion-bound feudalism from which it emerged, capitalism is characterized by the systematic encouragement it gives to the development and use of brainpower. By dispersing control over investment decisions, and allowing unsuccessful investments to fail and successful ones to attract first profits and then imitators, capitalism creates a social environment that is powerfully conducive to experimenting with new ideas and new ways of doing things. Friedrich Hayek had this in mind when he referred to market competition as a discovery procedure. Industrialization represented an escalation of that discovery procedure to a new level of intensity.
To contemporaries, however, the marvels of the Machine Age were considered not a testament to capitalism, but a repudiation of it. The leading interpreters of the new economy were dazzled by the productive abundance of the new industrial techniques, but they failed to see that this abundance was inextricably connected to and sustained by the competitive market process. Competition they regarded as wasteful, an anachronism. In one of history's bitterest ironies, capitalism's great achievement–the creation of previously unimaginable wealth–served as the inspiration for its nemesis: the delusion of central planning.
The supposed conflict between competition and the industrial economy was central to the writings of Thorstein Veblen, the iconoclastic economist whose influence was strongly felt among Progressives and New Dealers. (In 1939 the editors of The New Republic conducted an informal poll of "books that changed our minds," and Veblen headed the list.)
Veblen distinguished between "industry," which is motivated by the "instinct of workmanship," and "business," which is motivated by the prospect of pecuniary gain. "[T]he modern industrial system," he wrote in The Theory of Business Enterprise (1904), "is a concatenation of processes which has much the character of a single, comprehensive, balanced mechanical process." However, he continued, "the pecuniary interests of the business men…are not necessarily best served by an unbroken maintenance of the industrial balance."
Veblen believed that the continuation of business rivalry in an industrial economy caused "chronic derangement, duplication, and misdirected growth." In that light, he praised the mergers and consolidations that had been effected by the largest business enterprises: "So long as related industrial units are under different business managements, they are, by the nature of the case, at cross-purposes, and business consolidation remedies this untoward feature of the industrial system by eliminating the pecuniary element from the interstices of the system as far as may be….The heroic role of the captain of industry is that of a deliverer from an excess of business management. It is a casting out of business men by the chief of business men."
Veblen offered no clear political program, but others who shared his dim view of competition certainly did. Prominent among those was Edward Bellamy, whose 1888 utopian novel, Looking Backward: 2000-1887, sold a million copies and inspired the formation of Bellamy clubs that continued around the country for decades. In Looking Backward, Bellamy outlined a future history of the coming socialist millennium, and he saw the giant enterprises of his day as a kind of transitional stage:
"The movement toward the conduct of business by larger and larger aggregations of capital, the tendency toward monopolies, which had been so desperately and vainly resisted, was recognized at last, in its true significance, as a process which only needed to complete its logical evolution to open a golden future to humanity.
"Early in the last century the evolution was completed by the final consolidation of the entire capital of the nation….The nation, that is to say, organized as the one great business corporation in which all other corporations were absorbed; it became the one capitalist in the place of all other capitalists, the sole employer, the final monopoly in which all previous and less monopolies were swallowed up, a monopoly in the profits and economies of which all citizens shared. The epoch of trusts had ended in The Great Trust."
Thus, according to the story, was market competition eliminated, and its fourfold wastefulness: "the waste by mistaken undertaking," "the waste from the competition and mutual hostility of those engaged in industry," "the waste by periodical gluts and crises," and "the waste from idle capital and labor at all times." The example of the large corporations helped to show the way:
"Fifty years before, the consolidation of the industries of the country under national control would have seemed a very daring experiment to the most sanguine. But by a series of object lessons, seen and studied by all men, the great corporations had taught the people an entirely new set of ideas on the subject….It had come to be recognized as an axiom that the larger the business the simpler the principles that can be applied to it; that, as the machine is truer than the hand, so the system, which in a great concern does the work of the master's eye in a small business, turns out more accurate results. Thus it came about, thanks to the corporations themselves, when it was proposed that the nation should assume their functions, the suggestion implied nothing which seemed impracticable even to the timid."
In Veblen's and Bellamy's analysis, the new industrial economy thrived on central control. They saw artisan production swept away by enormous economies of scale. They saw traditions and rules of thumb swept away by organization and system. They saw handicraft and common sense swept away by engineering and technical expertise. They saw these things and concluded that a new world was emerging in which a few experts would tell everyone else what to do.
The problem, in their view, was that the new world had not yet fully supplanted the old. The old traditions of private ownership and competition still refracted the logic of the machine; engineering remained subservient to profit. As a result, much of the productive power of the new industrial processes was wasted in either idleness or duplication; moreover, production was too often diverted from serving the needs of the many in order to satisfy the extravagances of a parasitic few.
This then was the goal of collectivism: to render industry less wasteful and more equitable by extending the principle of central control. Power would be stripped from various industrial fiefdoms and vested in the true center: the state. There it would be exercised, not for private gain by businessmen, but for the common good by public servants.
The logical extreme of such a program was the full-fledged socialism preached by Bellamy, but such radicalism never took firm hold in mainstream American public opinion. In the United States, the collectivist spirit was expressed more in proposals to reform private ownership through regulation and government spending than in plans to eliminate it altogether.
While the ambitions of radicals and reformers may have varied, their driving social vision was the same: to take the triumph of planning and organization at the factory level and apply it to society as a whole–in short, to engage in "social engineering."
It is commonly imagined today that the regulatory reforms of the Progressive Era and the New Deal were staunchly opposed by Big Business. All too often, however, leaders of the new corporate giants saw no room for competition in the industries they ran, and welcomed government intervention (short of expropriation). Judge Elbert Gary, the first chairman of the board of U.S. Steel, held weekly dinners with other steel executives to set prices. Gary defended this "cooperative plan," stating that "the law does not compel competition; it only prohibits an agreement not to compete." If such "friendly association" did run afoul of the antitrust law, Gary had another idea: "I would be very glad if we had some place we could go, to a responsible governmental authority, and say to them, 'Here are our facts and figures, here is our property, here our cost of production: now you tell us what we have the right to do and what prices we have the right to charge.'"
Precisely this approach was adopted in industry after industry–frequently with the support, and sometimes at the instigation, of the businesses involved. Thus Theodore Vail, president of AT&T, reacted to the company's falling market share by lobbying for regulated monopoly status. Such a move, he argued, was necessary to ensure universal access: "It is not believed that this can be accomplished by separately controlled or distinct systems nor that there can be competition in the accepted sense of competition."
In the midst of the Great Depression, confidence in market competition was at a low ebb in the business community as elsewhere. In 1931 Gerard Swope, president of General Electric, put forward a plan for the cartelization of industry, to be administered by trade associations; the U.S. Chamber of Commerce and the National Association of Manufacturers endorsed similar proposals. In 1933, in the famed first 100 days of the New Deal, the National Industrial Recovery Act put such cartelization into effect. Henry Harriman, president of the Chamber of Commerce, praised the new law as a "Magna Charta of industry and labor"; laissez faire, he contended, "must be replaced by a philosophy of planned national economy."
Ignoring Ignorance
The rejection of market competition, and consequent embrace of government-led social engineering, represented a misreading of industrialization at the most fundamental level. The social engineers simply assumed away the root problem of economics: the problem of ignorance, of figuring out what to make and how to make it. They assumed that these were purely technical issues whose solutions were already within the grasp of engineering. Accordingly, they believed that the most important economic problem was putting the people with that knowledge in charge and having them tell everyone else what to do. On those assumptions, private ownership and competition did indeed seem a hindrance.
What they failed to see was that the question of what to do is in fact enormously complicated, and cannot be answered without reference to what millions of consumers actually want. In particular, they did not understand that the despised pecuniary considerations of price and profit are indispensable in communicating those wants to producers, or that competition among producers–for both customers and investment capital–is the best way of ensuring that better answers to the question of what to do are occasionally concocted.
For all of their fondness for engineering and scientific metaphors, the devotees of technocratic central planning abandoned the essential humility of the scientific method. Instead, they claimed that a small group of people had all the answers. Just at the time that industrialization was delegating brain work throughout the economy to an utterly unprecedented extent, an immensely powerful intellectual movement sprang forth which sought (albeit unwittingly) to restrict sharply the amount of brainpower applied to economic life.
The movement changed the country in waves: the Progressive Era and the New Deal, the mobilizations of the two world wars, and finally, the calamitous reign of "the best and the brightest" in the 1960s. As a predictable result, the country has been saddled with a set of rigid, unresponsive, and dysfunctional government policies, from the original sin of the Interstate Commerce Commission to the current bloated, sclerotic, $1.6 trillion a year mess. And of course, America's suffering at the hands of would-be social engineers has been mild compared to many other places in the world–most notably the former Communist bloc.
Dumbing Down Work
As the misplaced faith in top-down control altered the larger American economy, so its effects were replayed in microcosm in the development of the internal structure of the new large corporations. Nowhere were those effects more destructive than in the area of management-labor relations.
In the early decades of industrialization, what happened on the factory floor remained largely outside the purview of owners and managers. How work was to be divided up, what procedures to follow, what tools should be used, who should do what, and what pace was appropriate–all of these were decided by the workers themselves (or, less idyllically, by their often brutal and domineering shop foremen).
That state of affairs was not conducive to high productivity. In an era of highly complex production operations and accelerating technological change, rules of thumb and received craft wisdom needed to give way to more systematic analysis of how work should be organized. Moreover, as long as workers controlled factory output, they could be relied upon–as normal, self-interested human beings who were typically working long hours under miserable conditions–to take it easy on themselves.
In the last decades of the 19th century and first decades of the 20th, owners and managers asserted and ultimately gained control over the production process. They did so under the banner of "scientific management," and their victory did indeed produce enormous gains. Management consultant Peter Drucker refers to this period as the "Productivity Revolution," and credits scientific management–or to use his terms, "the application of knowledge to work"–with the surging rise in living standards over the course of the 20th century. This cause and effect can be most readily seen in Henry Ford's development of the moving assembly line in 1913, and his inauguration of the $5.00 work day the following year.
Management's victory, though, was the result of a bitterly contested and often bloody struggle with labor. This conflict separated management and labor into opposing camps, and poisoned their relations with animosity and distrust that continue to this day. Consequently, the potential for even greater gains in productivity and living standards was wasted.
No doubt labor resistance to reorganization of the factory floor would have been considerable under the best of circumstances. Tension between management and labor was unavoidable given the harshness of much of the work; America in those days was a desperately poor country by current standards, and brutality in the workplace was one expression of that backwardness. Furthermore, the labor movement was imbued with collectivist anti-business sentiment, and was highly unlikely ever to cozy up with what it regarded as its class enemy.
Nevertheless, a great deal of the continuing acrimony between labor and management can be blamed on the top-down arrogance of the scientific management movement. This was particularly evident in the writings and career of Frederick Winslow Taylor, the founder and leading proponent of scientific management.
Taylor's contempt for the mental ability of the American factory worker was profound. He used the example of handling pig iron, "the simplest kind of human effort….A man simply stoops down and with his hands picks up a piece of iron, and then walks a short distance and drops it on the ground." And yet, Taylor continued: "I can say without the slightest hesitation that the science of handling pig-iron is so great that the man who is fit to handle pig-iron and is sufficiently phlegmatic and stupid to choose this for his occupation is rarely able to comprehend the science of handling pig-iron."
In line with such thinking, Taylor set forth the following goal for sound management: "All possible brain work should be removed from the shop and centered in the planning or lay-out department." Professionally trained managers, armed with Taylor's famous time and motion studies, should determine "the one best way" of doing every single task in the factory, and order the workers to do it that way and no other. The role of workers in this system was, according to Taylor, "to do what they are told to do promptly and without asking questions or making suggestions." When questioned by workers, Taylor would commonly reply, "You are not supposed to think. There are other people paid for thinking around here."
Labor's reaction to Taylorism was understandably indignant. Samuel Gompers's assessment was typical: "So there you are, wage-workers in general, mere machines–considered industrially, of course….Not only your length, breadth, and thickness as a machine, but your grade of hardness, malleability, tractability, and general serviceability, can be ascertained, registered, and then employed as desirable. Science would thus get the most of you before you are sent to the junkpile."
But in the end, the labor movement did cede control of the production process; it moved its focus to organizing the work force on an industry-wide basis and improving wages and working conditions through collective bargaining. The triumph of scientific management, though, had forced a sharp cleavage between white collar and blue. As a result, there was a near total abdication of responsibility by the latter for improving the work of the company.
As industries became unionized in the 1930s and '40s, labor relations settled into an uneasy adversarial standoff, in which uncompetitively high wages were used to bribe the work force into accepting its mindless role. Ironically, as labor unions sought standardized seniority-based wage scales, they ultimately came to out-Taylor Taylor, insisting on a byzantine structure of work rules that confined the responsibilities of workers within the narrowest possible limits.
Big Labor's surly accommodation with Big Business in the postwar period was a gilded prison. The pay was good, too good; it bought acquiescence in a work life that otherwise would have been intolerable. Consider these excerpts from The End of the Line, a compilation of interviews with workers at Ford's Michigan Truck Plant outside Detroit:
* "Intelligence didn't come into play unless you were on salary; you weren't really part of the decision-making process. The management made all the decisions; you had no responsibility."
* "It was like a war between management and the workers. For one side to get the other to do something, they had to bring out the guns and hold them to their heads. You would sometimes see sabotage….We used to have a breakdown once a week for a half hour because some guy would stick a tool in the line."
* "[I]f there was something I wanted from the supervisor and didn't get, I would let trucks go by without doing my job. I was no angel. Like everyone else, I would get away with whatever I could."
* "That first week I must have quit at least twenty times in my head. I wouldn't want to walk out in the middle of the day, so I would try to make it to quitting time. The next morning I always came back. It was the money."
Thus did scientific management and Big Labor squander the dispersed intelligence, skill, and experience on the front lines of production. The competitiveness of American industry certainly suffered; so did the souls of workers who were required every day to check their brains at the factory gate.
It was left to the Japanese, rebuilding from the wreckage of World War II, to find a better way. The Japanese junked the old top-down Taylor system for a bottom-up approach, one that uses workers' heads as well as their bodies.
In the Taylor system, managers determined the "one best way" once and for all time, incorporated it into product specifications and standard operating procedures, and then rammed it down workers' throats. In the Japanese kaizen (continuous improvement) system, workers are integrally involved, through "quality circles" and the like, in monitoring the work process statistically and adjusting it to make it run better–making incremental improvements as workers discover better ways of getting the job done. Thus, in the Taylor system brainpower was concentrated at the top, and used once (or at best episodically); in the Japanese system, brainpower is distributed, and used continuously. Ironically, the Japanese devised their new system under the tutelage of Americans W. Edwards Deming and Joseph Juran, prophets roundly ignored in their own country.
White-Collar Waste
In addition to wasting the potential of their workers, the new large corporations created hierarchical management bureaucracies that too often squandered their white-collar talent. Those bureaucracies became increasingly rigid and dysfunctional over time, choking off information flows so thoroughly that the people running the company often had no idea what they were doing, and skewing incentives so badly that rational action within the organization was frequently impossible. All the ills typically associated with Soviet commissariats could be found–in a much less malignant variety, to be sure–in America's great corporate headquarters.
In understanding what happened, it's important not to get carried away with bureaucracy-bashing. Bureaucracy, in its place and properly structured, is a wonderful thing. The fabulous burst of wealth creation brought about by industrialization was due not just to new energy sources and technologies, but new forms of organization. In the pre-industrial era, economic activity consisted of relatively simple tasks, and the business enterprises that conducted them were accordingly simple in structure: single proprietorships or partnerships, managed according to personal knowledge and judgment.
With the coming of high-energy, high-speed, mechanized production, economic activity vaulted to a superhuman scale. The complexity of production processes, the number of people involved, the geographical extent, and the speed of raw-material and finished-goods flows all exceeded the management capacity of traditional business enterprises. What was needed was a new complex form of business organization; what emerged, as chronicled by Alfred Chandler in his magisterial The Visible Hand, was the modern corporation, run by professional managers. Work became superhuman in structure as well as scale; in other words, it became bureaucratized.
The problem with the new large business organizations was that they failed to incorporate within their administrative structures the very features of the ambient market economy that gave rise to them in the first place: decentralized decision making, ceaseless experimentation, feedback loops that ensure good ideas from whatever source are copied and bad ones abandoned. Instead, they created management structures that were the opposite of the marketplace: rigid chains of command, narrow channeling of information flows, resistance to new ideas from unexpected sources. As a result, while American corporations were relatively good at implementing plans concocted at the top, they were much less good at improving or changing those plans based on new information that came from outside the top ranks of management.
The deficiencies in American management were not apparent, or at least not pressing, in the early days of industrialization. In the industries where the potential for mass production existed, the adoption of the new techniques generally meant a phenomenal increase in productivity. Accordingly, management systems that could implement and administer those techniques competently and reliably represented an enormous advance.
What evolved were management structures in which information flowed from the bottom up in prescribed channels, and directives then flowed back down. For their time, the new organizational forms were a considerable achievement: They coordinated economic activity at a scale and level of complexity previously unimagined. Despite their latent flaws, these organizations were thus still good enough to allow the new giant corporations to outperform anything that had come before.
And indeed, during the turbulent early decades of industrialization, American management was restlessly improving itself. Between the 1880s and the 1920s, a series of innovations did help to increase the brainpower of large corporations: R&D departments were established specifically to generate new useful knowledge; the intensification of advertising and market research increased the interaction between companies and their business environments; reorganization along multidivisional lines dispersed responsibility by giving full operational autonomy to product group managers.
Nothing fails like success, though, and the success of the new corporations bred a pervasive "if it ain't broke, don't fix it" mentality within the ranks of American management. By the 1920s, as the conversion to mass production was consolidated, further evolution more or less stopped. Efforts to make the corporation more open to change and new ideas–more like the marketplace–tailed off.
Instead, corruption set in. Corporations broke down into internal empires; information flows, and all too often trust, stopped at the departmental or divisional border. The "not invented here" syndrome rendered businesses perversely hostile to opportunities that arose from developments outside the corporation. Companies grew unimaginative about new ways to create consumer value as they lost touch with the consumer; marketing and salesmanship were too often treated as substitutes for paying attention to what consumers like and want. Management "by the numbers" treated financial manipulation, not creation of consumer value, as the key to corporate success.
This corruption plagued many of America's great industries, and none more so than the automotive industry. Resistance to innovation is well illustrated in an example from David Halberstam's The Reckoning. Ford Motor Company developed a new rust-proofing paint process called E-coat back in 1958; the process was expensive to install, however, and rusting often occurred after the company's warranties had expired. Ford's institutional obsession with cutting costs blinded it to an obvious opportunity to create value: "The men who had developed E-coat and the plant men who pushed for it considered it the key to a great increase in quality. Unfortunately, there was no way to quantify that improvement in terms of sales….How, after all, asked one of its proponents, did one put a price on a happy customer?" As a result, despite well-known problems with rusting cars, it was not until 1984 that all Ford plants were equipped with E-coat.
The auto industry was also bedeviled by internal empire-building and the lack of cooperation across departmental lines. In Rude Awakening, Maryann Keller describes how it was at General Motors: "General Motors did not operate as one cohesive organization but, rather, as seven separate and distinct operations, each with its own insulated empire. It took three separate organizations–a car division, Fisher Body, and GMAD [General Motors Assembly Division]–to build a single car. And at no time did they interface, except through the president. They were entirely vertical organizations." Thus a car division would design a new model, Fisher Body would then engineer it, and finally GMAD would assemble it–without anyone ever talking to each other. It was a system practically designed to generate delays and defects.
The effects of this kind of mismanagement were concealed for decades. Despite all its faults, this corrupted version of 1920s-style management persevered by default. Competition from the outside world was cut off, first by trade restrictions and depression in the interwar years, and then by the destruction of most of the rest of the world's industrial capacity by World War II. American industrial might dominated a ruined world; people mistakenly assumed that this was because and not in spite of American management.
Albeit from a dissident's perspective, John Kenneth Galbraith's writings typified this misperception. In 1967, he celebrated the unrivaled efficiency of the American corporate "planning system" in his bestselling The New Industrial State: "The mature corporation has readily at hand the means for controlling the prices at which it sells as well as those at which it buys. Similarly it has means for managing what the consumer buys at the prices which it controls. This control and management is required by its planning. The planning proceeds from the use of technology and capital, the commitment of time that these require and the diminished effectiveness of the market for specialized technical products and skills."
Galbraith wrote those words just as the Japanese wolf was approaching the door. In the aftermath of World War II, Japanese corporations developed new management systems that stressed continuous product improvement over financial manipulation, and cross-department cooperation over turf consciousness. Those systems were combined with, as described above, a new way of dealing with labor–one that did not ignore workers from the neck up. What followed, in the 1970s and '80s, was a competitive rout of American manufacturing.
The American corporation has been forced by this competitive challenge into a thoroughgoing restructuring along more market-like lines. This restructuring was much needed and will be highly beneficial over the long term; however, it should not be forgotten that the necessity for restructuring has exacted a heavy toll in wasted resources and dislocated lives. Those are the costs of arrogance and error.
The Open Economy
The new technologies and institutions of the industrial revolution opened up vistas of human experience that were previously all but unimagined. They created, for the first time in history, a society of widespread material abundance. They offered unprecedented opportunities for intellectual challenge in work. Brainpower and its material effects were transforming the world.
By current standards, however, conditions in the early days of industrialization were still primitive. Many modern comforts did not exist, and the existence or threat of real privation hung over large sections of the populace. Even with the new machines, production required great amounts of punishing manual labor. The factory floor was a rough place, occupied by rough, uneducated men. In the office, much of the work was routine and clerical. In the larger economy, cost structures often allowed profitable production only at a massive scale, thus favoring consolidation and concentration over vigorous competition. Those same cost structures frequently yielded standardized, least-common-denominator products.
The logic of market development, however, was hostile to all of those shortcomings; over time it has brought significant, sometimes sweeping, amelioration. Yet that progress has been seriously impeded by the imposition of top-down control in both the political and economic spheres. The repudiation of market forces and principles was once considered progressive; its true effect, however, was reactionary, retarding the diffusion of brainpower throughout society that industrialization initiated.
The embrace of top-down institutions can thus be seen as a kind of industrial counterrevolution. The legacy of this counterrevolution was to magnify and prolong the harshest and least attractive features of the industrial economy, and squelch its most benign and hopeful ones. We have moved away from the rough edges of the early industrial era in spite of, not because of, the grand designs of social engineers and technocratic elites.
Now, however, this reactionary order is passing from the scene, and the information revolution is upon us. The revolution is not, as some claim, that information has now become the source of all wealth. That has always been true; what is revolutionary is that we finally realize it. Seeing information at the center of things means seeing our own ignorance as the central challenge of social action. It means rejecting the notion that a few of us have all the answers. It means rejecting institutions that were founded on that notion, and embracing institutions that encourage experimentation and openness. In short, it means believing in freedom again.
Contributing Editor Brink Lindsey (102134.2224@compuserve.com) practices trade law in Washington, D.C.
Brink Lindsey
What has always been best and most distinctive about the American character is its sense of adventure. The immigrant knows this: That is what brought him here. Willingness (even eagerness) to take risks, to depart from old ways of doing things, to try the unknown–these represent the ideal of American daring.
This adventurous spirit achieved its best-known expression in the conquest of the Western frontier. An appreciation of this episode must transcend caricatures, whether of today's P.C. demonizers or yesteryear's whitewashers. A good place to begin is Larry McMurtry's Lonesome Dove (1985), the story of two former Texas Rangers who lead a cattle drive from Texas to Montana. It is a beautiful, funny, and immensely entertaining book, and it captures perfectly the reckless, rambunctious vitality that led the Western expansion. In particular, the richly realized character of Augustus McCrae is my idea of what a great American should be: lighthearted, good at his work, sociable but independent, practical but a dreamer.
The primary outlet for American adventurousness today is the workplace. Snobs of both the left and right deny that commerce allows for any largeness of spirit, but they could not be more wrong. Daring and competitive striving were traditionally aristocratic virtues; capitalism democratized them, and capitalism's development spreads the opportunities to practice them ever more widely.
An adventure does not require gunfire or death-defiance; it needs only a formidable challenge, and the boldness to take it on and meet it. Richard Preston's American Steel: Hot Metal Men and the Resurrection of the Rust Belt (1991) tells the adventure of a steel mill–specifically, Nucor's opening of the first flat-rolled minimill. The drama of the story grips like a novel. Read this book to experience capitalism at its best.
Americans are the great pioneers and defenders of a social order based on capitalist-style adventure. And the growth of this order–the integration of millions of dreams and risks taken through the coordinating forces of the market–may itself be seen in the larger view as a grand collective adventure. The prize of this quest is described in Max Singer's remarkable Passage to a Human World (1987): the transformation of the normal human life from one mired in ignorance and poverty to one broadened by the possibilities of affluence.
In creating this new world, we are exploring the unknown–human beings have never lived like this before. It is a world well suited to American adventurousness.
Contributing Editor Brink Lindsey practices trade law in Washington, D.C.
Andrew Ferguson
It's a sad fact that most great works of American literature are anti-bourgeois, anti–small town, hence, in some way, anti-American. A newly arrived immigrant unlucky enough to read, say, Sister Carrie or Main Street or Winesburg, Ohio, would take away an unmistakable message: "Go back!"
This doesn't make our great works of literature any less great, though, so choosing from them almost at random I would hand our new immigrant a copy, well-thumbed, of Spoon River Anthology (1915). This is Edgar Lee Masters's collection of poems about a small valley in Western Illinois, pre–World War I. Taking names from the headstones of a local cemetery, Masters wrote a poem for each townsman, and as you read along the tales interweave and overlap and fold back upon one another, exposing the inevitable small-town lies and hypocrisies but also–and this is crucial–instances of grace and nobility and redemption. If nothing else, the book shows why Americans were so in a rush to urbanize. If we'd all had to stay in a small valley in Western Illinois, we would have gone crazy.
I would also force upon our immigrant friend a load of Mencken (probably the Second Chrestomathy, edited by my friend Terry Teachout and published in 1995), so that he might begin to glimpse the exuberance and wit the American language is capable of expressing. Along with the singular quality of his prose, Mencken's habits of mind–the skepticism and hardheadedness and unfailing sense of appreciation and pleasure–are good habits for anyone caught up in the raucous carnival of American life.
And last I would hand him a copy of Wealth and Poverty by George Gilder (1981). I haven't yet decided whether I agree with Gilder about the altruism that he believes lies at capitalism's heart. But I probably should, for no one shows such an understanding of both the mechanics and the morality of the marketplace. And as our new immigrant would soon discover about the American marketplace, if you can make it there, you can make it anywhere.
Andrew Ferguson is a senior editor of The Weekly Standard.
Gary Alan Fine
Ever since Adolf Hitler and his cronies wrecked the legitimacy of assessing the traits of peoples, writers have been properly wary of embracing too tightly the belief that nations have "character." Yet, despite the mischief that some have made of it, a common-sense perception exists that different societies are fundamentally distinctive. National character feels right, even if definitive proof is difficult to come by.
We Americans treasure what has come to be called "American exceptionalism"–those features of who we are that we believe distinguish us from others: those nasty un-Americans. Dismiss any biological basis, any American gene; we have been melted in the same pot.
In recommending books that reveal this character one is tempted to name two distinctively American popular genres and leave it at that: science fiction and Westerns–literatures that look forward and back. These literatures enshrine the American reverence for technology and for the land, and both within the context of a rugged individualism.
Beyond those categories, three volumes stand out for me as guides to what it means to be an American: for good and for ill.
Perhaps we should junk our current citizenship tests, and merely insist that all prospective citizens read Mark Twain's The Adventures of Huckleberry Finn (1885). Each applicant could be required to explain how Huck Finn moved them. Any number of explanations would validate one's Americanism. Set within a crucial period of American history, capturing the American tragedies of slavery and racial bigotry, depicting the importance of both community and individual initiative, and set at the intersection of regional cultures of the Midwest, South, and West, Huck Finn confronts the reader with the questions of what American society is and what it should and could be. Further, if one believes that one cannot truly understand a people until one can laugh at their jokes and cry at their sorrows, Huck Finn, alternately raucously funny and mordantly sad, provides a test for becoming an American in one's emotional response.
My second selection is a bit of a cheat. Trying to decide whether to choose Henry David Thoreau's Walden (1854) or his lecture/essay "Civil Disobedience" (1848) was eased by the fact that I have an edition that includes both. As readers of REASON recognize, the latter is a grand, radical libertarian paean to freedom–an American political tract that stands up against Marx and Engels's contemporaneous Communist Manifesto. The former defines individualism in practice. If we do not choose to retreat to our own Walden, we experience the awareness vicariously through Thoreau's clean prose and wild life. Could such an essay be written anywhere but America? Our wilderness is our freedom.
As a practicing sociologist, I cannot resist including a volume by a colleague: Joseph Gusfield's classic and spirited study, Symbolic Crusade: Status Politics and the American Temperance Movement (1963). Gusfield takes as his case the battle over Prohibition laws: a lengthy struggle, unimaginable in many other industrial nations. For Gusfield, temperance is not really about alcohol, but about class, ethnicity, gender, and moral discipline. Lines are drawn between female, rural, Protestant residents of Anglo-English descent and more recent migrants to these shores: Catholics, urbanites, males, and "ethnics." The battle is not over the bottle, but over the ballot and the economy. Significantly, Prohibition was enacted at about the time that immigration was sharply curtailed: The first experiment lasted barely a decade, while the latter exercise in exclusion lasted 40 years. The battles over immigration are as American as the battle over slavery. The Statue of Liberty may reflect a cherished American ideal, but statues don't vote or march.
Gary Alan Fine (Gfine@uga.cc.uga.edu) is a professor of sociology at the University of Georgia and author of Kitchens: The Culture of Restaurant Work (University of California Press, 1995).
Joseph Epstein
Democracy in America, the first book I would have our new American read, is one that surprises me afresh whenever I return to it by its powers of penetrating beyond the surface of social and political life. It was published in 1835, when its author was 30, and is based on information and observations he acquired when sent to this country to study penal reform in 1831, when he was 26. Tocqueville, though not himself an immigrant, provides for anyone newly arrived in our country a matchless model of the possibilities of astute social observation. Henry James advised that one try to be a person on whom nothing is lost. The young Alexis de Tocqueville was such a person, and Democracy in America proves it beyond any question.
Chapter 19 of Part II of Tocqueville's book begins: "The first thing that strikes one in the United States is the innumerable crowd of those striving to escape from their original social condition; and the second is the rarity, in a land where all are actively ambitious, of any lofty ambition." Ambition, or perhaps following Tocqueville one does better to say "personal aspiration," which for so long has been at the heart of American life, dictates my choice of a second book for my new immigrant: The Great Gatsby by F. Scott Fitzgerald (1925). What Fitzgerald's novel ought to make plain to the new American is that Americans, at their best, have been a nation of dreamers. Yet he or she should also know that these dreams frequently carry a price. Poor Jay Gatsby's dream of recapturing and revising the past may not qualify as a "lofty ambition" in the Tocquevillian sense, but it has its own kind of grandeur. "Gatsby," this novel's penultimate paragraph reads, "believed in the green light, the orgiastic future that year by year recedes before us. It eluded us then, but that's no matter–tomorrow we will run faster, stretch out our arms farther….And one fine morning"
The third book I would recommend is Independence Day, a novel by Richard Ford that is less than a year old and that I myself have not even finished reading. But unless Ford blows it badly, his book seems to me to fit in handsomely with my other two suggestions, in being a work about American ambition, aspiration, and dreams. Its unlikely hero is a divorced father of two, of all unromantic things a real estate salesman, and the book is about what America does to dreams–not all of it, by any means, very nice, but much of it useful to know. It is a novel about life in this country at a time when the notion of progress that has for so long propelled so many American actions and beliefs has to be significantly qualified without being altogether jettisoned. To an attentive immigrant–or, for that matter, American-born–reader it has a vast amount of important information about the way Americans live now: about our hopes and fears and what it means to be an American at the end of the 20th century.
Joseph Epstein is editor of The American Scholar.
Charles Paul Freund
The landscape of the American character is rather broad for the three small structures this assignment allows me to build on it. Let's build then with three novels of this century: They throw big shadows.
If Americans are part cowboy, an important reason is Owen Wister's 1902 novel The Virginian. Wister's tale of cowboy life in Wyoming created the essential American myth–and hero–we have been revisiting ever since. Americans know this book whether or not they've read it or even heard of it.
Unlike his garrulous, socially humble dime-novel predecessors, the never-named hero of Wister's novel is important for his code, not his birth: His family is irrelevant to his character, as is his meager education. He is a man of deeds, not words, ideas, or culture, and he acts out of a powerful sense of duty. Never seeking violence, he must do what honor and justice demand. The trail runs true from The Virginian to Tom Mix, Gary Cooper, and John Wayne; even to Herb Jeffries, the Bronze Buckaroo of '30s all-black movies.
When The Virginian appeared, Frederick Jackson Turner had already declared the frontier closed; the age of cities and consumerism had begun. What Wister shaped from a fading past was a folk-epic West where a man could mold himself free of artificial restraints: our American dream. His book, set amid the infamous Johnson County Wars, is also unalloyed propaganda for cattlemen; the 1980 film Heaven's Gate told the story in class-struggle terms. Different audience.
By 1939, when Raymond Chandler's The Big Sleep appeared, open trails had become mean streets, and down them walked private eye Philip Marlowe, maintaining his honor in a corrupt world. Chandler's debt is to Hemingway and Hammett, but the spectacular world of American noir owes its greatest debt to Chandler.
Tough, mistrustful, knightly, Marlowe's is the most distinctive of American voices, the clean if smoke-coarsened music behind a world of garish neon, too much booze, and dreams gone sour. A man of deeds, as is the American style, Marlowe is also a man of words, which he wields like bullets. That voice lives: You still hear it in Blade Runner and in William Gibson's Neuromancer, the basic text of cyberpunk.
The Big Sleep isn't Chandler's best book (Farewell, My Lovely is), but it's a more revealing combination of American toughism and our cultural ambivalence toward cops, power, and wealth. It's hard to mold yourself in an American city: They're big, dirty, and full of phony restraints. You've got to know how to slip those restraints and still be able to look at yourself in the mirror when you snap your hat brim. Marlowe could. That's why we still hear him.
Truman Capote once sniffed famously that On The Road by Jack Kerouac wasn't writing at all; it was "just typing." True, Kerouac's 1957 book about his travels around the country is shapeless and undisciplined. But Kerouac wasn't offering American picaresque. On The Road is a work of sensibilities: wild, cool, and beat. Kerouac was typing spontaneously amid a rising storm of generational discontent and self-absorption, characteristics that came to dominate postwar American (and not only American) culture and character.
Kerouac invented neither '50s beat culture nor '60s counterculture, though On The Road heralded the prose arrival of the former, and was an essential text of the latter. Indeed, the work of the beats must stand in for the largely missing literature of their hippie offspring, who channeled their juices into music.
On The Road isn't a bad stand-in. Kerouac and his traveling buddies make their own highway frontier where they slip restraints Owen Wister never dreamed of. Melding with many Americas, black and Indian as well as white mainstream, they are the cool, slang-talking, bebop-thumping, messiah-dreaming products of what we now call cultural discourse. Cultural cousin to the young Brando and the thin Elvis, and a buzz in the ear of Bob Dylan and Jim Morrison, Kerouac's book implies the technologically possible placelessness that is the final American frontier: in his case, cars; in ours, satellites and computers. Beyond that, there is no national culture, and you're not an American anymore.
Charles Paul Freund is a Washington, D.C., writer.
Steven Hayward
In one of his many encomiums to the Declaration of Independence, Abraham Lincoln hit upon the chief reason why it is possible for anyone from anywhere to become an American, while it is nearly unthinkable for an émigré to become a Frenchman or a German: One becomes an American by adopting its principles, especially the principles of equal rights expounded in the Declaration. But the political principles alone are not the sum of the matter. The "American Dream," which connotes something more than merely political character, is similarly exceptional: The mere mention of the possibility of the Canadian Dream or the German Dream elicits a smile.
Hence, an immigrant to America should start with something like A.J. Langguth's Patriots: The Men Who Started the American Revolution (1988), which offers vivid portraits of the main figures of the revolutionary generation. In a more contemporary vein, Richard Rodriguez's Hunger of Memory: The Education of Richard Rodriguez (1981) offers a stirring account of the necessary but often brutal process of becoming an American.
The American Dream is in the end bound up with the nation's principles, in ways which can be hard to discern today. All regimes are vulnerable to a kind of corruption specific to their principles: in our case, the attenuation of the idea of rights, along with an apolitical liberalism that overemphasizes comfortable self-preservation, constitutes a corruption of the civic virtue at the heart of the American Dream as the Founding generation understood it. There are a variety of difficult nonfiction books one might punish an immigrant with, but for a better impressionistic look at several aspects of these problems, a new immigrant would do well to read Tom Wolfe's Bonfire of the Vanities (1987).
Contributing Editor Steven Hayward (Hayward487@aol.com) is research and editorial director for the Pacific Research Institute, a San Francisco–based think tank.
John Hood
New Americans deserve to know what they've gotten themselves into–not simply a country with defined borders and a common national culture, but a two-centuries-old experiment whose boundaries have yet to be determined and for which tumultuous change is itself a tradition. The American Experiment is unique in world history, but its goal is to satisfy a universal desire for human freedom and dignity. To a great and unprecedented extent, the experiment has proved a success. But the intervening struggle has often been a difficult one. New Americans who in the future may well be called upon to defend and expand the freedom that is their bequest today need to learn more about it.
The novels that make up James Fenimore Cooper's The Leatherstocking Tales (1823–41) are an excellent introduction to the important American heroic concepts of personal freedom, audacity, and individual responsibility. That America is a frontier society has long been (correctly) taken as a given, and used by the modern left to justify abandonment of the country's original political and economic principles–since, they say, the frontier no longer exists. That is absurd, of course, as any biotechnology executive or cybersurfing teenager can attest.
The Tales also help to chronicle the days of rebellion against oppressive government, an American Revolution that didn't just end with the surrender of Cornwallis at Yorktown. Contentious debates continued about how far government power should extend over money, trade, and the freedom of millions of human beings, culminating in 19th-century war and tragedy. At the same time, entrepreneurs such as Cornelius Vanderbilt, Andrew Carnegie, James J. Hill, and John Rockefeller faced enormous challenges and government-erected hurdles–in the form of subsidized and protected competitors–in their efforts to build a modern industrial economy. On these two subjects, I'd put a good history of the Civil War (say, by Shelby Foote) and the thin but indispensable volume Entrepreneurs vs. the State by Burton W. Folsom Jr. (1987) on any new American's reading list. (Folsom's book is also available in a 1991 expanded version titled The Myth of the Robber Barons.)
The 20th century has seen great tragedy as well as great accomplishment. For many Americans, the promise of freedom remains unfulfilled. Nevertheless, the amount of progress would be hard to overstate. Henry Grady Weaver, in his classic 1947 work The Mainspring of Human Progress, explains how the concept of freedom created the American society so many immigrants seek to join: "Why did men, women, and children eke out their meager existence for 6,000 years [of recorded history], toiling desperately from dawn to dark–barefoot, half-naked, unwashed, unshaved, uncombed, with lousy hair, mangy skins, and rotting teeth–then suddenly, in one place on earth there is an abundance of things such as rayon underwear, nylon hose, shower baths, safety razors, ice cream sodas, lipsticks, and permanent waves?" Immigrants, perhaps more so than natives, intuitively understand why Weaver's simple question is so provocative. When they can answer the question as easily, their journey to America will be truly complete.
Contributing Editor John Hood (74157.415@compuserve.com) is on leave from the John Locke Foundation, a state policy think tank in North Carolina, and is a Bradley Fellow at the Heritage Foundation.
Marcus Klein
America is the one nation in the world that is defined not for its immigrants but by them–and not simply as they might contribute one ingredient or another to the great American bouillabaisse, but by the record of the adventure itself of their finding a place in 20th-century America. It is an odd but demonstrable fact that in modern times the most subtle of definitions of American tradition and culture have come from the pens of those who have had that adventure or from their first-generation American children. Therefore for the new immigrant the most instructive books might well be accounts of his predecessors, and among such accounts it would likely be works of fiction that would be most instructive, because fiction allows for complicated and sometimes contradictory feeling, for tentativeness of discovery and judgment.
For a hundred years and more the immigrant to America has been confronted by a country that is at once beckoning and hostile, at once welcoming and demeaning, at once a guarantor of liberties and a restrictor of the same, and which at once promises material opportunity and denies the same. Add to such bafflement of day-to-day life the drag, moral and familial, of the culture that is being abandoned and the sheer necessity of surviving in the new–there is material here for a rich and enlightening literature.
The new immigrant might well consider Abraham Cahan's novel of 1917, The Rise of David Levinsky. The title character, a Russian-Jewish immigrant, works hard and rises to become a wonderfully successful businessman, and does not thereby lose his soul. David Levinsky is a very long novel that is instructive because it is true to its ambiguities. Levinsky becomes sly and occasionally brutal in his rise to riches–not an unlikely price of character for success in America–while at the end he is nevertheless faithful to his beginnings, balancing pride and guilt, with no clear end to his adventure in sight.
No end, in fact, to this literature that records the making of Americans, and therefore the making of America. But one might make special mention of Henry Roth's novel of 1934, Call It Sleep, which illuminates the adventure by presenting it through the eyes of a child.
For that matter the black experience in modern America is not essentially different from that of the immigrant, and an account of it might provide him with another kind of illumination. The novel he should look at, without doubt, is Ralph Ellison's Invisible Man, published in 1952. While it is an angry novel, it, too, struggles with the guilt of abandonment of a prior culture. "I yam what I yam," says the hero, to speak of more than his dietary traditions. But America nonetheless is this hero's fatality, and his adventure consists of his becoming the American. "Who knows," this narrator famously says to white America, "but that, on the lower frequencies, I speak for you." Which is what our new immigrant will be doing, too.
Marcus Klein is a professor of English at the State University of New York at Buffalo and author of, most recently, Easterns, Westerns, and Private Eyes: American Matters 1870–1900 (University of Wisconsin Press, 1994).
Linda Chavez
"Once I thought to write a history of the immigrants in America. Then I discovered that the immigrants were American history." Thus begins The Uprooted by Oscar Handlin. The Pulitzer Prize–winning book, first published in 1951, turns the romantic story of our immigrant nation on its head, telling the turn-of-the-century immigrant story as it was actually lived, full of alienation and despair. The catastrophic journey to America severed the immigrants' ties to a familiar world and dropped them in a place they could never fully understand, and which never fully understood them. But their pain was our gain. Their journey made us a far less parochial society and helped create the American Dream.
How The Other Half Lives by Jacob Riis is another classic of the immigrant experience in America. First published more than one hundred years ago, in 1890, the book remains a powerful indictment of the slum conditions in which most immigrants lived at the turn of the century. Riis wrote the book while he was a New York police reporter. Although the book is often credited with sparking the first "urban renewal" project that removed the worst tenements, Riis's main interest was in transforming the immigrants themselves into Americans. He was an early champion of teaching immigrants English, which he believed was the key to Americanization.
Next Year in Cuba by Gustavo Pérez Firmat (1995) chronicles the bittersweet Cuban-American experience. Pérez, like most of his compatriots, came to America as a refugee, not an immigrant. But because he was a child when he arrived, he could never fully identify either with his parents' generation, who dreamed of returning to Cuba, or, later, with his own American-born children, who can imagine no life outside the United States. Pérez is a man caught between two worlds, at home in neither. No matter how hard he tries to become an American–majoring in English in college and becoming an English professor in North Carolina, marrying an American woman, playing Bob Seger records and eating frozen yogurt–he still feels guilty when he plans to cast his first vote in a U.S. election. Next Year in Cuba doesn't fit our sentimental wish to recast the immigrant's story as one of unalloyed joy and quick assimilation, but it does provide insight into what Pérez calls the one-and-a-half generation: "Wedged between the first and second generations, the one-and-a-halfer shares the nostalgia of his parents and the forgetfulness of his children."
Linda Chavez (lchavez.usa@aol.com) is president of the Center for Equal Opportunity.
William B. Allen
The first American to address the question of American character, in a context in which the separate existence of the United States was assumed, laid it out as a project of formation in accord with standards of liberty. That was George Washington, and no one can do better than to begin a study of America with a study of his extremely important writings. They are available in many forms, but perhaps that which is both most accessible and best calculated to offer a comprehensive picture is my own volume, George Washington: A Collection (Liberty Press). In it one meets not only the first real American but the first America.
In the century after Washington, many works–many of them very worthwhile–labored at constructing the ideal picture of American character. None, however, contributes so meaningfully and constructively as Uncle Tom's Cabin (1852), which shows character in the crucible of struggle and moral uncertainty. Harriet Beecher Stowe stole a conceit from Alexis de Tocqueville (namely the contrasts on opposite banks of the Ohio River) and turned it into the quo warranto of the nation, to be redeemed in its great War of American Union. Let no one deny: The story of America is the story of the ouster of slavery. America became what she was prior to that time, but she was unable to trust what she was until that matter was resolved. And no one else but Stowe made equally clear and compelling how America needed to resolve that question.
Finally, in our time, many elegiads, many screeds, and many anathemas contend for the prize of authoritative interpreter of America. But Americans require not so much secondhand interpretations as genuine challenges to take the question in hand themselves. Of contemporary works, none has worked that charm so well for me as Peter Brimelow's Alien Nation (1994), which evoked from me the scream, "I was just joking (in my modern skepticism); please give us our old (American) man back!" Anyone who dreams that a mere philosophical predisposition ("open immigration") suffices to respond to the fundamental question–Is the American merely the human localized?–needs to suffer a little in thinking through how much America is worth to him. That is character building!
William B. Allen is dean of James Madison College at Michigan State University.
Paul A. Rahe
The United States of America is not a nation in the old-fashioned sense of the word. Nationhood traditionally implied a common natality–that the nation's citizens were somehow of common birth. But, as Americans, we cannot even pretend a common descent: We hail from every corner of the globe; we exhibit every human feature; we come in every shade; and the naturalized are no less fully our fellow citizens than those born within the fold. If the citizens of this country sometimes speak of the nation's Founding Fathers, they do so by analogy: They do not trace their genetic or biological lineage to Benjamin Franklin, George Washington, John Adams, Thomas Jefferson, Alexander Hamilton, Gouverneur Morris, John Jay, James Madison, and the like.
If our nation's progenitors fathered a people, they did so by fathering an idea. This is not a nation of blood and soil; it is a nation of principle. As a people, we stand or fall by our adherence to the understanding of justice enshrined within the Declaration of Independence and reiterated in Abraham Lincoln's Gettysburg Address. We are less an imagined community of blood than a genuine, if contentious, community of faith.
That fact poses a problem for immigrants. They have to cross a great cultural divide separating the world that understands nationality in terms of birth and the world that understands it in terms of adherence to common first principles. To help them make that crossing and to instruct them in our peculiar ways, I would suggest the following three books: my own Republics Ancient and Modern: Classical Republicanism and the American Revolution (1992); Alexis de Tocqueville's Democracy in America, tr. George Lawrence (1969); and Michael Shaara's The Killer Angels (1975).
My own great tome may not be the best recent work on the American Founding, but it is, weighing in at 1,200 pages, the most comprehensive account. It sets the Revolution, our Declaration of Independence, our Constitution, and the quarrels that they inspire in the context of the history of self-government in the West, emphasizing what we owe to the ancient Greeks and Romans, what in our polity is peculiar to modernity, and what was achieved for the first time on these shores.
Tocqueville's wondrous book was written by a foreign visitor to the United States for the edification and instruction of his own countrymen, and it has served for many generations to explain America to the Americans as well. It surveys virtually every aspect of American life–our Constitution, our laws, our customs, and our beliefs. It situates the myriad details within an understanding of the whole, and it analyzes dangers inherent within our regime that are far more ominous today than they were in the Jacksonian period. Where Tocqueville's description no longer fits, it is generally because we have undergone a decline explicable in terms of his analysis.
Finally, Michael Shaara's stirring novel, in relating the story of the battle of Gettysburg, brings home to its readers just what was at stake in our greatest and most important war. No one can understand America without paying attention to the racial tensions that bedevil us, and no one can understand these without reflecting on the legacy of slavery. Moreover, it is only with regard to our failure as a people to come to grips with the dilemmas imposed by the attempt to found and sustain a multiracial society that one can understand federalism's demise and the difficulties that we now face in our quest to restore a semblance of local self-government. There are finer American novels than The Killer Angels, but I know of none better suited to the purposes of teaching our immigrant what makes us many and what makes us one.
Paul Rahe (Paul-rahe@utulsa.edu) is Jay P. Walker Professor of American History at the University of Tulsa.
Virginia I. Postrel
The paradox of America is that we have built a history and tradition, a national culture, on the defiance of history and tradition. From William Penn, who would not take off his hat, to Rosa Parks, who would not give up her seat, we teach our children the stories of stiff-necked heroes. We make them read Romeo and Juliet, lest they overvalue ancient feuds.
Hollywood's greatest cliché is the cop who breaks rules in the interest of justice. Rhett Butler, not Ashley Wilkes, is the hero of Gone With the Wind. Nobody thinks Huck Finn should return Jim to slavery or stick around to be civilized. We're not a by-the-book country.
This culture has political consequences; you can read about them in the first few paragraphs of the Declaration of Independence. But for the immigrant, the personal will be more important than the political. Huck had no parents, no one who tied him to history and tradition, no one to question or grieve when he went his own way. Huck Finn is the great American novel, but it's not on my list (in part because I know it is on others).
Start, instead, with a less-great novel, but a more relevant one: Chaim Potok's The Chosen (1967), a tale of clashing cultures and a son's choice of truth over tradition. (That the truth in question is Freudian psychology dates, but does not undermine, the story.) The milieu is Jewish–the exotic world of the Hasidim and the more familiar one of the modern Orthodox–but the story is more generally American, limited to no particular religion or ethnic group.
In her novels of Chinese mothers and American daughters who love but do not understand each other, Amy Tan plays off the immigrant experience, while capturing universals. Every parent has a history, and every child a new life, that the other cannot truly grasp. America, with its defiance of history and tradition–its emphasis on individual life, liberty, and pursuit of happiness–makes the chasm between generations deeper and wider. The Kitchen God's Wife (1991) suggests what one gets for that price: a place of hope and second chances, in which even daughters are precious.
America beams itself to the world from Los Angeles and Atlanta, my actual and ancestral homes–complicated, rambunctious, racially mixed cities grown by sheer will and ambition. They, and the vast regions of which they're the capitals, are more characteristically American than the whitewashed, orderly New England popular with historically inclined pundits. The Puritans came from England; Pentecostalism was born in the U.S.A. The South and the West are the wellsprings of American culture.
So I exercise the editor's prerogative to cheat, suggesting two wise observers of America from California and the New South: Richard Rodriguez, in Days of Obligation (1992), and John Shelton Reed, in My Tears Spoiled My Aim (1993).
"Some migrants to the South," explains Reed, "make the South more Southern." By defying their own history and tradition, leaving their homelands behind, immigrants reaffirm America, make it more American.
Yet, Rodriguez warns, "Our parents came to America for the choices America offers. What the child of immigrant parents knows is that here is inevitability." Come to America, and you will have American children. They, too, will defy history and tradition–will defy your expectations–without thinking twice. It's the American way.
Virginia I. Postrel (VPostrel@reason.com) is editor of REASON.
Jonathan Rauch
If I try to be honest rather than cute, all of my books for immigrants–Tocqueville, Emerson, Huck Finn, Mencken or King or JFK's speeches–are too obvious to be interesting, and I have nothing new to say about them except that they are magnificent and essential. So the editor has given me an indulgence to say this: I would very much like to advise an immigrant to watch Star Trek.
Not–of course!–the emasculated Next Generation, but the simpler, less self-aware, much finer original. I can think of nothing that says more, more succinctly, about who the Americans believe themselves to be, or wish they were.
The gleaming ship is the Enterprise, though not (quite) the Free Enterprise. Its captain is authoritative but not authoritarian, grand but never above dirty work; he knows the rules as well as any lawyer, but he knows, too, how to run rings around Federation bureaucrats when a job needs to be done. On the Enterprise (what else would it be called?), there is no problem which ingenuity cannot crack. When other ships would be blown to dust as shields fail and engines strain, Captain Kirk and his crew bring off just a bit of the impossible by thinking fast and showing pluck. They have that most American of traits: the serene confidence that in the last extremity their luck will hold. God smiles on drunkards, America, and the Starship Enterprise.
The Enterprise is lucky because it is morally worthy, and morally worthy because it is innocent. Inside, the ship is the model of multiculturalism as multiculturalism was supposed to have been. People of every nationality and of several planets, united by the Federation's creed, form a community naturally, painlessly, with no hint that quotas might be required to bring enough Asians or Vulcans aboard. Outside, distant star systems are populated by diverse peoples most of whom, if you just scratch the surface, are American or wish they were.
The starship and its Federation have a foreign policy: tough but tender, engaged but not imperialist. Explore but do not conquer, says the Prime Directive; engage but do not interfere. Captain Kirk is as Captain Columbus ought to have been. Yet noninterference does not for a moment mean nonintervention; staying out does not mean staying away. Contradiction? What contradiction? Where aliens can be enlightened in the ways of equality and justice, so they should be: preferably by example, rather than by force.
True, the Enterprise is strong, bedecked with phasers and photon torpedoes. But its real strength is not its weaponry but its mercy. No matter how vicious the provocation, the captain chooses mercy for his enemy; faced with a seemingly murderous alien, he applies understanding and modern medical care. Thus does the Federation earn its moral hegemony. Although the Enterprise holds the steel of science (Mr. Spock), it beats other comers, in the end, because its hard logic is subservient to its good heart. And so the universe makes way before the Enterprise as the world should have made way before Christ. What is America, after all, if not the light unto nations?
I am not sarcastic, not for a moment. The universe of the Starship Enterprise is silly but also exalted. Ronald Reagan thought that if the Soviet rulers could only see America up close, they would come around to its superior virtue. That is naive, yes; but also rather grand, and utterly American. The barrel-chested culture of Victorian Britain, brilliant though it was, could never have produced a Star Trek; neither could the scintillating, cynical culture of ancient Greece, or the bluntly brutal culture of imperial Rome, or any other imperial culture before America's. I predict Star Trek will be watched 50 and 100 years from now. More than most books I can think of, it embodies the American aspiration: or, if you prefer, the American myth. It captures us, perhaps, embarrassingly well.
Jonathan Rauch is a visiting writer at The Economist.
Whatever does or doesn't happen in the 104th Congress, the American welfare state faces a decidedly bleak future. It suffers from a central and seemingly intractable problem: Fewer and fewer people believe in it anymore. Power without legitimacy can't be sustained indefinitely, and the welfare state's legitimacy appears to be in terminal decline. Unless that decline can be reversed, a substantial contraction in the size and scope of government is inevitable.
If you're a skeptic—if you think 1994's election may have been a fluke, if you doubt there has been a generation-long shift in the political culture away from big government—you ought to read John M. Jordan's Machine-Age Ideology: Social Engineering and American Liberalism, 1911–1939. This fresh and interesting examination of the origins and development of big government shows just how different things used to be.
Jordan, who teaches at Harvard, tells the three-decade story of the "rational reformers," otherwise known as technocrats or social engineers. Theirs was a particular strain of American liberalism, and a particularly influential one. They shaped the Progressive era in the 1900s and 1910s, the corporatist embrace of the "associative state" in the 1920s, and the New Deal in the 1930s. And while Jordan's narrative leaves off there, he acknowledges that the story continues: "In the indistinct but crucial realm of political culture, the engineering and managerial influence persisted well after World War II, finding its highest expression in the 1950s and 1960s…."
The rationalist visions of any era share certain key features—among them, according to Jordan, "fascination with scientific method, machine process, and large-scale managerial organizations as analogues for government." Equally characteristic is disdain for market competition and democratic persuasion alike, since both are too messy and too unpredictable to fit in the social engineers' blueprints. Sound familiar, Mr. Magaziner?
Jordan traces the roots of this worldview back to the efficiency craze that took hold in the first generation of industrialization: "A discussion of the 'best and the brightest' of the Great Society must begin with what Taylorites called 'the one best way' in the first years of the century."
The preoccupation with efficiency, epitomized by Frederick Winslow Taylor's scientific management and its time-and-motion studies, produced rhetoric that sounds distinctly odd to contemporary ears. For instance, Louis Brandeis, who helped to popularize the term "scientific management" in his celebrated challenges to railroad rate increases, declared that "efficiency is the hope of democracy." Even more strangely, a 1913 article in System magazine characterized Teddy Roosevelt as "the most efficient human machine of our time."
Mechanistic metaphors were inescapable in what was widely known as the Machine Age. Born into a technological, urban society, we take industrial culture for granted; for contemporaries, it was breathtakingly novel and disorienting. Unsurprisingly, many took their bearings from the most obvious characteristic of the times: the new mechanical marvels and the engineers who built them. "[The] engineering profession generally rises yearly in dignity and importance as the rest of the world learns more of where the real brains of industrial progress are," Herbert Hoover, known as the "Great Engineer," wrote in 1909. "The time will come when people will ask, not who paid for a thing, but who built it."
It was an easy next step to conclude that engineers and engineering principles were needed to transform not just business, but the whole of society. Charles P. Steinmetz, a well-known engineer with General Electric and an avowed socialist (he kept an autographed photo of Lenin in his G.E. laboratory), summed up the viewpoint in his 1916 book America and the New Epoch: "All that is necessary is to extend the methods of economic efficiency from the individual industrial corporation to the national organism as a whole."
This one idea—that the rationality of the factory, of the machine, could be extended to society generally—was the driving impulse of the rational reformers. Though technocrats are still with us, and their instincts are still the same, they are now on the intellectual defensive, and hence vague and evasive. Jordan provides the service of pulling together a wide variety of voices, famous and obscure, from a time when Hayek's fatal conceit was still conceited. Consider the following gems:
• Stuart Chase, journalist and author, writing in Harper's in 1931: "Plato once called for philosopher kings. To-day the greatest need in all the bewildered world is for philosopher engineers."
• George Soule, an editor of The New Republic, writing in that magazine in 1931: "As more and more people—both engineers and others—come to understand the inherent superiority of the engineering approach, the traditional business way of doing things is bound to lose its popularity."
• Howard Scott, head of Technocracy Inc., in his organization's Study Course: "There is only one science, and there is no essential difference between science and engineering. The stoking of a bunsen burner, the stoking of a boiler, the stoking of the people of a nation, are all one problem."
• Rexford "Red Rex" Tugwell, Columbia University economics professor and New Dealer at the Department of Agriculture, in his 1935 book The Battle for Democracy: "There is no invisible hand. There never was….[W]e must now supply a real and visible guiding hand to do the task which that mythical, non-existent invisible agency was supposed to perform, but never did."
• Arthur E. Morgan, head of the Tennessee Valley Authority, in his 1936 book The Long Road: "[C]onsensus of judgment would not mean taking formal votes on the 'one man, one vote' principle. Consensus of judgment may be arrived at by the deference of the many who do not know to the superior judgment of the few who do."
The idea of social engineering permeated the political culture in the early 20th century. It attracted prominent intellectuals, among them John Dewey, Charles Beard, Herbert Croly, and Walter Lippmann (some of whom later became disillusioned). It inspired the establishment of such institutions as The New Republic (1914), the Institute for Government Research (1916; later the Brookings Institution), the New School for Social Research (1919), and the National Bureau of Economic Research (1920).
The idea attracted doers as well as thinkers. It infused the corporate liberalism of organizations like the National Civic Foundation. And most fatefully, it lent powerful momentum to politicians—from the Progressive era, through wartime mobilization, to Hoover's dry-run New Deal and FDR's real thing—bent on a dramatic expansion of government's responsibilities and powers.
Jordan's book focuses much more on the thinkers than on the doers. Politicians, other than Hoover, are given scant attention, and business leaders who embraced social engineering are almost completely ignored. The latter omission is particularly unfortunate; more discussion of such people as George Perkins of New York Life, Elbert Gary of U.S. Steel, Gerard Swope of General Electric, and Henry Harriman of the U.S. Chamber of Commerce would have underscored the extent to which belief in central planning (and rejection of market competition) spanned the conventional political spectrum.
Nevertheless, Jordan's book does an excellent job of recreating a lost world of ideas. Moreover, he points out that there were dissidents, however lonely: He includes discussions of Frank Knight's views on the limitations of social science, Walter Lippmann's renunciation of central planning in his The Good Society, and Friedrich Hayek's unflinching defense of competition.
Admittedly, there is more to the history of American statism than technocratic folly. There have been populists who clashed with the social engineers' anti-democratic and centralizing tendencies. And of course the contemporary left has been much more concerned with social and cultural issues than with economic ones. That said, the misplaced faith in centralized expert control was surely central to big government's rise, just as disenchantment with that control is central to the current political environment.
At bottom, the belief in social engineering grew out of a profound misunderstanding of industrialization. Confronted by the technological breakthroughs of the new age, contemporaries saw not the creative power of the free market, but rather the benefits of top-down bureaucracies and central planning. Dazzled by the role of the engineer in industry, they ignored the less obvious, but still fundamental, contribution of the entrepreneur. They did not grasp that without entrepreneurship, without decentralized and competitive investment, engineering brilliance leads not to prosperity, but to pyramid building.
The misunderstanding was total. Not only did the social engineers fail to grasp the unplanned and unpredictable process by which all the new machines were being created, they saw society itself as a giant mechanism, running according to deterministic laws which they would be able to discern and manipulate. The law they didn't count on, the one that has haunted and mocked their every attempt to control society from above, is the law of unintended consequences—which is just another way of saying that people with minds of their own make very poor machine cogs. It is a law that, at long last, we may be learning to live with.
Contributing Editor Brink Lindsey practices international trade law in Washington, D.C.
The post Machine Politics appeared first on Reason.com.
Actually, there's plenty in NAFTA to make a free-trader uneasy. To begin with, the whole idea of negotiated trade agreements is mercantilist in conception. In trade negotiations we will only "give up" our trade barriers (no matter how misguided or harmful to our economy they may be) in exchange for similar "concessions" by other countries. The implication is that opening our markets is the price we pay to gain better access to markets abroad. That "exports good, imports bad" premise is at the heart of the mercantilist worldview; any good free-trader knows that open markets are their own reward, regardless of what is going on in other countries.
I have argued, in these pages (see "Reciprocity for Disaster," August/September 1991) and elsewhere, that for both theoretical and practical reasons a strategy of unilateral liberalization is generally preferable to trade negotiations. Pursuing open markets strictly as a matter of national economic policy is clearly sound in theory. Practically, it has the advantage of putting the focus of policy debate where it belongs: not on whether policies and conditions abroad are to our liking, but on whether the particular U.S. industries receiving or requesting import protection deserve special treatment at the expense of the rest of us. Furthermore, the example of the United States actually taking its own rhetoric seriously and opening its markets would do more to encourage freer trade abroad than any negotiations ever could.
Whatever my druthers, though, unilateral free trade is not exactly a happening political movement in this country. The cause of free trade, for the foreseeable future at least, rests entirely on the fate of trade negotiations: the Uruguay Round of the General Agreement on Tariffs and Trade talks and, of course, NAFTA.
If those initiatives fail, there's quite simply nothing else on the horizon. In evaluating NAFTA, then, the realistic question is not whether the agreement is perfect or even whether it represents the best approach to reform—I think it fails on both counts, but neither count is relevant. The decisive question is whether NAFTA marks a step in the right direction. The answer is a resounding yes, and therefore free-traders ought to be lending the embattled agreement their full support.
Let's look at the original agreement first, and then we'll move on to the more controversial side deals. The original NAFTA certainly has its share of warts and wrinkles: The first clue is that the document is more than 1,000 pages long, though a truly "clean" free-trade agreement could probably be squeezed into a sentence or two. Here are the agreement's main shortcomings:
So NAFTA isn't perfect—big surprise. But does that make it a "managed trade" accord, as some critics have alleged? Managed trade means quantitative controls on trade: import quotas, reserved market shares. Our textile quotas, the semiconductor agreement with Japan, the various deals allocating international airline routes—those are managed trade.
NAFTA is nothing of the sort. The agreement eliminates tariffs and quotas across the board. It substantially liberalizes investment within the region. It opens up the Mexican financial services sector to foreign competition. It allows cross-border access in trucking. Managed trade is about asserting political control over trade and investment flows; NAFTA is about relaxing and eliminating such control. The fact that it doesn't go all the way in that direction doesn't mean it's going in the opposite direction.
The labor and environmental side agreements, on the other hand, do represent an attempt to maintain political control in the face of falling trade barriers. They are repugnant to the liberalizing thrust of the underlying agreement, and they set a very bad precedent for future trade negotiations. They ought to be roundly condemned by free-traders. But as to whether they undo all the good the original NAFTA offers—well, it's not even close.
The side agreements create special supranational bureaucracies to police the enforcement of each nation's labor and environmental laws. If a "persistent pattern of failure to effectively enforce" such laws is found, those bureaucracies can impose fines on governments that refuse to take remedial action and can ultimately impose trade sanctions if the fines aren't paid (or, in the case of Canada, obtain a court order in the Canadian judicial system mandating compliance).
That's the bad news. The good news is that the side deals are concerned only with the enforcement of laws. They do not create any new laws, and they explicitly recognize the authority of the three countries to set their own substantive labor and environmental policies. NAFTA establishes no monolithic regional standards according to which substantive national policies could be judged and found wanting. And while the side deals impose a duty to enforce laws, they contain nothing that would prevent a country from changing or repealing laws if it wanted to.
The more you look at these side agreements, the less there is to them. Only failures to enforce child-labor, workplace-health-and-safety, and minimum-wage laws are sanctionable under the labor side deal; failures to protect the rights to strike and bargain collectively are not covered. And the environmental side deal does not apply to laws governing the use of natural resources.
Moreover, it is not a violation of the side agreements for a government to fail to pursue legal violations if the inaction "reflects a reasonable exercise of the agency's or the official's discretion" or if it "results from bona fide decisions to allocate enforcement resources to violations determined to have higher priorities." That's a fairly gaping loophole. And the maximum fine that can be imposed is limited to 0.007 percent of the value of North American trade, or a whopping $20 million at present.
In sum, the side agreements have absolutely no impact on the substance of labor and environmental regulations and add only slight pressure for more vigorous enforcement. Meanwhile, the economic pressures generated by increased competition under NAFTA will be acting to constrain policy makers from imposing excessive burdens on business. In labor and environmental policy, then, NAFTA—even with the side deals—looks like a wash. In trade policy, on the other hand, NAFTA represents a substantial gain.
And that gain goes far beyond the specific reductions in trade barriers contained in the agreement. NAFTA represents a bold and unprecedented experiment: It marks the first time ever that a rich, industrialized country and a poor, developing country have agreed to open their economies to each other. NAFTA can help to show the world that free trade, even between dramatically different nations, truly does benefit both sides.
In this country, it can help to acclimate people to living in an international economy (which they already do but don't fully realize). Once Americans see that open trade with Mexico doesn't cause the world to end or all the jobs to disappear, they may be less resistant to tearing down other protectionist barriers in the future.
On the flip side, NAFTA's failure would be an enormous setback for the cause of free trade here and abroad. In Mexico and throughout Latin America, the perceived betrayal by the United States could seriously undermine support for free trade and free-market reforms generally. And if NAFTA is voted down in Congress, you can bet that the fallout in this country won't be a sudden turn toward unilateral free trade. On the contrary, politicians are going to run away from free trade as though it's radioactive. Meanwhile, NAFTA's defeat will be seen as a big win for Perot and populist know-nothingism.
Unfortunately, the free-trade opponents of NAFTA are lending credence and rhetorical cover to those reactionary political forces. That is especially true of the ones who have been flirting with the Neanderthal right on the issue. It's bad enough to be wrong, far worse to enthusiastically play the useful idiot.
Contributing Editor Brink Lindsey is director of regulatory studies at the Cato Institute.
The post Washington: Protectionist Racket appeared first on Reason.com.
This book reminds me of Francis Fukuyama's The End of History and the Last Man: an absurd but engaging thesis, thoughtfully and earnestly argued. Though the central argument ultimately runs aground, the journey toward the reefs is an interesting and thought-provoking one.
The author is C. Owen Paepke, a Phoenix, Arizona, attorney and "unabashed generalist," according to the jacket flap. His modest proposition is that the era of material progress, and its radical transformation of living standards over the last two centuries, is coming to an end. "The children of the 1980s and 1990s," he writes, "will exert about the same effort during their lifetimes, and will have their needs and wants satisfied to about the same degree, as their parents….Man's material state has arrived at its practical limits."
As economic growth peters out, a different form of progress, what Paepke calls "human progress," is now emerging: "The new kind of progress, which makes human traits and abilities the subject rather than just the source of change, will dominate the agenda of the next century, perhaps even the next decade." Thus, as the conditions under which human beings live enter a long stagnation, qualitative improvement of human beings themselves is now on the horizon. Paepke has in mind such things as the extension of the human life span and the augmentation of intelligence through chemical, genetic, and electronic means.
Unlike Club of Rome types, Paepke rejects the notion that finite resources pose limits to economic growth. "In practical terms," he says, "the planet's resources have become not scarcer, but more plentiful." Instead, he argues, material progress is grinding to a halt because of a combination of technological limitations and consumer satiety. In other words, for most of the things we do, there just isn't any appreciable room for improving how we do them—or if there is, people aren't willing to pay for it with the necessary savings and investments.
Paepke sees the past two centuries of economic growth as the product of four converging historical forces, all of which are nearly spent: technological innovation; the emergence of free-market institutions; market expansion through cross-border trade; and the accumulation of capital.
As to technology, here's the nub of his case in one paragraph:
"But technology, however ingenious, must obey physical laws, and those laws limit the tangible benefits to be realized from further advances. Consider transportation. Crossing the United States in a Conestoga required considerable grit, some luck, and six months. It now occupies a few hours spent sitting in an easy chair. The next century could shave another hour or two from the transit time, but that improvement would be negligible compared to what technology has already accomplished. Satellites and fiber optics communicate information in all forms worldwide at the speed of light, an absolute physical limit. Energy flows freely from fossil, hydroelectric, and nuclear sources. Contrary to the common perception, it has become not scarce and expensive but abundant and cheap. Vaccines and antibiotics have nearly eliminated the threat from contagious disease in the developed world, allowing all but a small minority to live full life spans. Labor-saving mechanization and rising yields have reduced the number of farmers, once a majority of the population, to negligible levels. Manufacturing productivity increases—more than tenfold in this century alone—are doing the same for blue-collar labor. In these and other fields, future progress will be confined to ever smaller increments between the state of the art and the limits of the possible."
Paepke grossly overstates his case. New technology continues to hold the promise of not just incremental but sweeping changes in material welfare. While information may travel over wire or glass or through the air at the speed of light, it rarely makes the trip from sender to receiver without major bottleneck slowdowns at both ends—if you don't believe me, try faxing a 100-page document sometime.
The not-far-off arrival of integrated broadband communications—the much ballyhooed "data superhighway"—will break through those bottlenecks and allow the real-time communication of voice, text, and video. That capability will radically transform the way we do business, the way we entertain ourselves, and the way we educate our children.
In other areas, biotechnology can offer healthier and better-tasting food. More exotically and over the longer term, space travel can literally open up whole new worlds for human development. In addition to creating dramatic new products and services, technological innovation will continue to batter down the costs of existing goods.
Paepke notwithstanding, there remains plenty of room for improvement. For example, about 16 percent of the U.S. work force is still tied down in manufacturing; about 20 percent is in wholesale and retail distribution. Those percentages contain large chunks of blue-collar and middleman inefficiency that will eventually be rooted out by developments in automation and information technology. In turn, those cost savings will free up resources for, among other things, increased leisure, improved environmental quality, new forms of entertainment, new cultural outlets—not to mention investing in and buying all the new goodies of "human progress" that Paepke trumpets in the second half of his book.
Of course there will be industries that remain relatively untouched by technological dynamism—in particular, many face-to-face services (for instance, hotels and restaurants). Still, I have little doubt that material conditions in the 1990s will seem unbearably primitive from the perspective of the 2090s—I can barely remember how we got along 10 years ago without ATMs and fax machines and PCs.
Paepke is similarly off base in assessing the vitality of the other three forces driving material progress. He properly credits the development and spread of market institutions with launching the spectacular growth of the last 200 years, but then he argues that "the very completeness of capitalism's triumph shows that the freeing of markets has already made its major contribution."
Would that it were so. The bulk of the world's people and resources remain either trapped in subsistence agriculture or controlled by statist planners. In our own country, government consumes or controls over half the national income, and we're the freest nation on the planet.
Gains in wealth from the unification of local and national markets through trade are approaching their limits, asserts Paepke, because "all the advanced nations are integrated into a single worldwide economy." Of course, the vast majority of the world's population does not live in the "advanced nations" and is only marginally involved in global trade and investment flows. Even in the developed nations, the international economy is still disproportionately dominated by commodities and manufactured goods; the potential of information technology to globalize services is just beginning to be realized. And over the much longer term, space travel could take us beyond a merely global economy.
Capital accumulation fares no better in Paepke's view. He argues that increasing affluence diminishes people's incentives to save and thereby fund further economic growth. He has a point, but unfortunately it cuts against not only material progress but "human progress" as well. After all, genetic engineering and artificial intelligence are going to cost a lot of money, and it has to come from somewhere.
Not to worry, though: The connection between rising incomes and falling savings rates, though probably real, is less than meets the eye. First, the savings-as-percentage-of-GDP statistics commonly cited have their fair share of problems, not the least of which is their failure to include unrealized capital gains as savings. More importantly, the governments of the United States and other industrialized nations, through their policies of punitive taxation and continual inflation, are waging a war on savings. The main factor depressing savings today isn't affluence; it's bad policy. In short, though the forces of material progress may be losing a little steam, they remain immensely powerful and unlikely to fade away in anything resembling the foreseeable future.
Paepke's argument completely falls apart when he tries to exclude the "human progress" now emerging from the larger phenomenon of economic growth. These new human-ability-enhancing technologies will open up new markets, create new industries, generate new fortunes, and promote a general uplift in living standards. That's economic growth, folks. It is ludicrous to talk about "the end of economic growth" while in the same breath extolling the enormous promise of these technologies—you can't have stagnating living standards and rising life spans.
If Paepke had simply argued that human progress will soon replace material progress as the most dynamic area of economic growth and social change, he might have sold me. Instead, he goes overboard into nonsense, arguing that material progress is coming to an end, and that human progress will apparently emerge through some kind of immaculate conception unmediated by economic transactions.
So why read a book with a nonsensical argument? Well, Paepke's discussion of the historical forces behind material progress offers a solid, useful summary of "how the West grew rich," in Nathan Rosenberg and L. E. Birdzell's phrase. And his treatment of such brave new fields as genetic engineering, intelligence enhancement, and life extension provides a fascinating and accessible overview of the cutting edge of scientific research, as well as ample source references for the more technically minded.
Finally, Paepke's willingness to look at the big picture and speculate on future social development challenges the reader to widen his own perspective, to get past the buzz and clutter of daily headlines and take a longer view. Provocative if not convincing, The Evolution of Progress is worth a read.
Contributing Editor Brink Lindsey is director of regulatory studies at the Cato Institute.
The post Immaterial World appeared first on Reason.com.
Say it ain't so. After all those record-breaking, gold medal–winning performances in the late 1980s, it turns out the Japanese economy tested positive for steroids. Take away the medal for perpetual dirt-cheap capital, take away the medal for a gravity-free stock market, take away the medal for zaiteku financial wizardry, and take away the medal for buying up America. Our favorite bogeyman shrivels before our eyes as the drugs wear off.
I'm referring, of course, to the bursting of Japan's "bubble economy." The rollicking growth that Japan enjoyed during the late '80s—over 5 percent a year—was fueled in large part by an upward spiral of land and stock values that some thought would never end. It ended. The Nikkei fell more than 60 percent from its 1989 high, and real-estate values in the Tokyo and Osaka markets dropped 30 percent or more. As boom turned to bust, it was hoped that the bubble's extravagances would not affect the underlying "real economy"—most prominently, the formidable manufacturing sector. Hopes were dashed.
Hence all the dolorous economic news coming out of Japan over the past year. For the first time since the 1973 oil embargo, Japanese GNP declined for two straight quarters; total growth for 1992 was an anemic 0.5 percent. The money supply has been shrinking. Corporate profits are down for the third straight year. Capital spending dropped an estimated 4 percent last fiscal year. Monthly statistics for business failures have been rising for more than two years. Unemployment, still low by Western standards at 2.4 percent, has been edging upward, and overtime payments, a big part of total employee compensation, are down sharply. Consumer confidence is at its lowest mark in a decade.
Even Japan's best-known and strongest corporate giants are being squeezed. Fujitsu, NEC, and Sony have all reported losses. Nissan, also in the red, is shutting down a major plant in Zama—the first such plant closing in Japanese auto-industry history. Nippon Telegraph & Telephone has announced plans to shrink its payroll by more than 10 percent—that's 30,000-plus people—over the next three years.
With rough times at home, the supposed takeover of the U.S. economy has been put on hold. Direct investment (i.e., establishing new companies in the United States or buying up existing ones) plummeted from $19.9 billion in 1990 to $800 million in 1992; purchases of U.S. real estate dropped from a high of $16 billion in 1988 to $5 billion in 1991. The Japanese were net purchasers of $2 billion worth of U.S. Treasury securities in 1989; in 1990 and 1991, they were net sellers of $15 billion and $8 billion worth, respectively.
In key industries where Japanese domination was thought inevitable, things are now looking rather different. The Japanese share of the U.S. auto market has slipped from 30 percent to 27 percent. For the first time since 1986, American semiconductor manufacturers in 1992 edged out their Japanese competitors for the largest chunk of aggregate world market share. Japanese computer makers remain far behind the U.S. industry; even in the laptop segment, where they have been competitive, Japanese companies' U.S. market share has dropped from almost 40 percent in 1988 to under 25 percent in 1992.
No one is suggesting that Japan is about to fall apart. Its fundamentals are still very strong—an excellent, innovative manufacturing sector; a well-trained, hard-working labor force; and continuing high levels of saving and investment. Still, the Japanese are looking distinctly mortal right now; where once only their strengths were noticed, now their weaknesses are getting some attention. Which gives us in this country a good opportunity to reassess Japan, its place in the world, and its relationship to the United States, with—finally—a dose of realism.
A good place to begin is by reading Christopher Wood's The Bubble Economy, a vigorous and intelligent account of the spectacular excesses and rampant corruption of the late-'80s boom years, and the painful retrenchments of the current bust. Wood, an editor at The Economist, describes a Japan Inc. that has virtually nothing in common with the country depicted in such Japanophobic screeds as Rising Sun and Zaibatsu America. (See "Samurai and Sexual Deviants," December.) In fact, it's hard to believe that Wood's book and those books were copyrighted in the same year.
According to Wood, Japan in the late '80s was far from the conspiracy-theory image of a disciplined, masterfully orchestrated industrial/financial army, moving inexorably toward world economic domination. It was, rather, an economy gone off the deep end. In Wood's words: "It was the twentieth century's best example of the dictum of Charles Mackay, the celebrated nineteenth-century historian of speculative manias, who observed that men think in herds, go mad in herds, but recover their senses one by one."
The craziness started with the Plaza Accord of 1985, which drove down the dollar versus the yen in a vain effort to "cure" the U.S. trade deficit. The yen doubled in value by early 1988, and consequently Japanese wealth doubled in value in international markets. The rise of the strong yen, known in Japan as endaka, pinched the profits of Japanese exporters and led to an economic slump in 1986. The Bank of Japan reacted by cutting the discount rate from 5 percent to 2.5 percent and allowing money-supply growth to exceed 10 percent a year. Japan was now both rich and flush with cash. What resulted was a speculative boom in stock and real-estate values—in Wood's estimation, "the biggest financial mania of this century."
The stock market tripled in value between 1986 and 1989. At its height, it accounted for 42 percent of the total capitalization of world stock markets (compared to 15 percent in 1980). Stock prices became unhinged from reality; average price-earnings ratios exceeded 60 (in the United States P:E ratios of 20 are considered high). Nippon Telegraph & Telephone, privatized in 1987, was trading at an astronomical P:E ratio of 300.
Real estate was more absurd yet. Land values in Japan's six largest cities doubled in less than two years. Prices were rising at 50 percent a year in Tokyo, even faster in certain areas. By 1990 the total stock of property in Japan—a country the size of California—was estimated to be worth four times the total stock of property in the United States. The Imperial Palace grounds in Tokyo had a higher value than all of Canada.
Expanding with the stock and real-estate bubble was the Japanese financial sector. Banks poured money into the stock market and real estate, thereby driving up values and, in effect, creating new capital reserves and collateral to lend against. The main "city banks" increased their assets by 80 percent between 1985 and 1989. The 10 largest banks in the world were now all Japanese. Meanwhile, life-insurance companies, the biggest investors in Japan, with 13 percent of the Tokyo stock market, rode that market's rise to new heights of size and power. And the "Big Four" securities companies—Nomura, Daiwa, Nikko, and Yamaichi—made a killing on fixed brokering commissions. Nomura, the biggest (indeed, the biggest stockbroking firm in the world), saw a fourfold rise in profits between 1983 and 1987, earning nearly $4 billion in 1987.
Swimming in money, the Japanese flooded the world with it; capital exports, after all, are the flip side of trade surpluses, and we all know about those. In 1981 Japan had less than $11 billion in overseas assets; by 1988 the figure exceeded $200 billion. In the United States, Japanese life-insurance companies and other investors bought an estimated 10 percent of U.S. Treasury bonds. Industrial corporations established new factories and bought up existing companies, including blockbuster deals like Matsushita's purchase of MCA and Sony's acquisitions of CBS Records and Columbia Pictures. Japanese real-estate firms poured billions into the U.S. market, buying up huge chunks of Hawaii, much of the Los Angeles skyline, and such high-profile "trophy" properties as Rockefeller Center, Tiffany's, and Pebble Beach.
The inflow of Japanese money gave rise to predictable paranoia mongering about American economic decline, typified by books like Martin and Susan Tolchin's Buying Up America and Robert Kearns's aforementioned Zaibatsu America (with its charmingly subtle subtitle: "How Japanese Firms Are Colonizing Vital U.S. Industries"). As it turns out, the people who should have been paranoid were the Japanese, many of whom were flushing money down the toilet so fast they were clogging the sewers.
In Wood's telling of the story, Japanese financial institutions were simply out of their league when they ventured overseas. "Japan may have a first-rate economy that is the envy of the world, but it has a second-rate financial system," he writes. "Leading Japanese financial institutions border on the feudal compared with their Western counterparts." According to Wood, in throwing their money at the U.S. market the Japanese "were entering an investment world about which they had scant knowledge and in which they had almost zero experience." The results bear Wood out.
Japanese life-insurance companies, for example, began their heavy investments in U.S. Treasury bonds just as the dollar was free-falling against the yen. From 1985 to 1988, the life-insurance industry's combined losses on those investments totaled some $30 billion. In real estate, Japanese developers bought at the height of markets that have since collapsed, most spectacularly in Hawaii and California. Among the more impressive wastes of money was the construction of the Grand Hyatt Wailea on Maui, with a price tag of $600 million. It's been estimated that the hotel will have to charge $700 per room per night at 75-percent occupancy rates just to break even. Nothing, though, can top the fiasco of Minoru Isutani's purchase of Pebble Beach. He bought it in September 1990 for $841 million and sold it in February 1992 for $500 million—a loss of $341 million in just 17 months.
Cushioning even the clumsiest business moves, though, was the unceasing rise of stock and land prices back home. Until, of course, it ceased. As inflation approached 4 percent—low enough by our standards, but abnormally high in Japan—the Bank of Japan finally decided enough was enough, pulled out its needle, and pricked. Late in 1989, the central bank started raising the discount rate, which eventually reached 6 percent. Money-supply growth descended rapidly. The bubble burst.
The predictable chain reaction ensued. Bankruptcies have shot up: $63 billion in 1991, an estimated $100 billion in 1992. Banks have been staggered by an ever-increasing portfolio of nonperforming loans. On top of that, banks must scramble to meet the so-called Basle Accord international capital-adequacy standards. The 1988 Basle Accord required all international banks to meet an 8-percent capital-to-assets ratio by March 1993; Japanese banks arranged for 45 percent of their (at the time massive) unrealized stock gains to be recognized as capital. Many of the unrealized gains they were counting on have subsequently disappeared, forcing banks either to add capital (hard to do in a bear market) or shrink assets. Can you say credit crunch?
For Japanese companies generally, the era of free money is over. During the bubble, companies were able to go to London's Euromarket and issue warrant bonds (giving investors the chance to buy rapidly appreciating stock at a set price) at rates as low as 1 percent. Those bonds are beginning to come due, and companies must now refinance at dramatically higher rates. For example, Toyota recently sold a $1.5-billion bond issue with a 5.2-percent interest rate. The Japanese advantage over Americans in the cost of capital, so frequently cited in the competitiveness debates of the '80s, has been eliminated.
Meanwhile, the end of the bubble is having a direct effect on many companies' bottom lines. Japanese firms actively played the stock market during the boom; the game was called zaiteku, or financial engineering. In 1989, zaiteku profits made up an astounding 15 percent of reported earnings of companies listed on the Tokyo Stock Exchange. For many companies, these bubble gains made the difference between good times and bad. In the 1988 fiscal year, for example, securities profits accounted for over 58 percent of pretax profits at Matsushita Electric, 65 percent at Nissan, 73 percent at Sharp, 134 percent at Sanyo, and 1,962 percent at Isuzu.
Ending a speculative boom is like kicking over a rotten log—all kinds of creepy crawlers are brought out into the unwelcome light of day. So it was in Japan, with falling stock and real-estate values exposing a nest of scandals. Wood describes these various affairs in engaging detail: securities firms reimbursing favored clients' losses, banks issuing phony certificates of deposit as collateral to lend against, shady dealings with yakuza gangsters, and so forth.
Especially amusing is Wood's account of the mini-bubble in golf-club memberships. Memberships are traded like securities in Japan and run into the millions of dollars; there is even a Nikkei Golf Club Membership Index. At the height of the bubble Japan's 1,700 golf clubs boasted a total membership market value of $200 billion. New courses were developed by preselling memberships; the temptation to oversell was in many cases irresistible. The ill-fated Ibaraki Country Club sold 49,000 memberships instead of the 2,800 figure promised to investors. And the Gatsby Golf Club promised a membership of 1,800—to 30,000 buyers.
The mood in Japan today is one of self-flagellation. A book called The Philosophy of Honest Poverty is currently a best seller. And it is possible that the worst is yet to come. The Bank of Japan has once again slashed the discount rate; the Diet last fall passed an $86-billion "stimulus" spending package and is considering another $120 billion or so of Keynesian sugar. More ominously, the Ministry of Finance has been actively propping up the stock market, pouring public-pension funds into the market and discouraging big private investors from selling. These short-term fixes may only be postponing necessary corrections and restructuring, thereby prolonging and deepening the pain. As Wood argues, taking the pessimistic viewpoint, "Japan's managed economy may delay the impact of market forces, but it can never repeal them."
Whether or not the economy has yet hit bottom, it will eventually bounce back. Binges may end in hangovers, but hangovers end too. Beneath the cyclical ups and downs, there remain the well-known fundamental strengths of the Japanese economy—and also some less well-known fundamental weaknesses. Over the longer term, the strengths will provide the motive power for continued economic growth, but the weaknesses will contort that growth and undermine its benefits for the Japanese people.
If there is one basic flaw in the Japanese economic miracle, it is the combination of government policies whose concerted effect is to stifle personal consumption. In the real-estate market, tax policies (very low landholding taxes, combined with high taxes upon sale), zoning restrictions (about one-seventh of the Tokyo metropolitan land base is zoned for agriculture), and rent controls make housing artificially scarce and expensive, even without a speculative boom. Agricultural protectionism inflates the cost of food. Restrictions on large retailers (now being relaxed) have helped to perpetuate an archaic and inefficient distribution system, increasing the cost of consumer goods generally. Many of the major consumer-service industries—air travel, telecommunications, financial services—have traditionally been shielded from competition by government protection and are only now seeing that protection decline. And high personal income-tax rates—with no deductions for consumer or mortgage interest expenses—channel income out of consumption.
With consumer demand stunted, investment to meet that demand is likewise stunted. The result is a country with the highest per-capita economic output in the world but decidedly less impressive living standards; a country with a world-beating manufacturing sector but a backward, uncompetitive service sector; a country with chronic trade surpluses (or, put another way, chronic domestic investment deficits).
In short, the result is a country that accumulates more savings than it knows what to do with. Some are poured into ruinous manufacturing overcapacity, some are poured into speculation (à la the bubble), and some are poured into overseas investments, not all of which are well-advised. The foreigner's perception of Japan, usually viewed through a mercantilist squint, is of a "predatory" competitor conquering the world with its goods and money. The reality, though, is a nation that is structurally hindered from investing enough in its own future and its own well-being, and so cannot hold on to its capital.
The United States, on the other hand, has precisely the opposite problem. Its tax policies actively discourage savings while promoting consumption. And its regulatory environment, while dreadful enough, is by world standards relatively open to competition (and thus to investments in promoting consumer welfare). In the mirror image of Japan, then, the American economy has an attractive domestic market but not enough capital to feed it. Fortunately for us, the capital can be imported—hence our trade deficit.
The United States and Japan thus compensate for each other's primary economic weaknesses. To Japan, the United States provides an outlet for abundant capital. To the United States, Japan provides a source of scarce capital. This is the reality of the U.S.-Japan economic relationship: not one swallowing the other but two economies leaning on each other. And there are only two ways out of this mutual imbalance: righting ourselves, or falling on our faces.
Contributing Editor Brink Lindsey is director of regulatory studies at the Cato Institute.
The post Pop! appeared first on Reason.com.
Zaibatsu America, by Robert Kearns, New York: The Free Press, 256 pages, $22.95
"The philosophy of protectionism," Ludwig von Mises once wrote, "is a philosophy of war." Protectionism—or, more broadly, economic nationalism—rests on the belief that there are irreconcilable conflicts between nations, that one country's prosperity comes necessarily at the expense of others. It's a zero-sum vision of the world, in which every winner implies a corresponding loser. Mises was right: This is the kind of thinking that gets armies shooting at each other.
If you think this sounds a little overblown, you ought to read Rising Sun and Zaibatsu America. The first is a best-selling novel by Michael Crichton, a well-known writer, with an afterword asking for the book to be taken seriously and even a bibliography of nonfiction sources. The second is by Robert Kearns of the Economic Strategy Institute, the think tank headed by Clyde Prestowitz that has become a major voice for economic nationalism. Both books take as their subject the rise of Japanese investment in America, which both regard with alarm.
What is striking about these books, though, is neither their subject nor their point of view, but rather their rhetoric and tone. These books read like war propaganda.
For starters, just look at the titles. The "Rising Sun," of course, was the name of the Japanese battle flag during World War II—the implication being that there is some connection between Japanese business today and Japanese aggression of a half century ago. The zaibatsu reference is more obscure: These were the huge, family-dominated, vertically integrated industrial groups that supplied the imperial Japanese war machine. Broken up during the MacArthur occupation, they were replaced by keiretsu, large conglomerates linked through a dense network of cross-shareholding.
Keiretsu has now become a popular buzzword in trade-policy debates, with critics of Japan commonly deriding these groupings as exclusionary and cartelistic. One wonders why Robert Kearns didn't title his book Keiretsu America; that would have been inflammatory enough, suggesting Japanese control of our economy, and many more people would have understood the reference. Apparently Kearns, like Crichton, couldn't resist linking current Japanese business practices with the crimes of World War II.
Throughout Zaibatsu America, Kearns uses military imagery and metaphors to suggest that the Japanese are a hostile threat. Here are some representative turns of phrase: "sudden onslaught of Japanese capital"; "at stake is nothing less than America's economic sovereignty"; "arsenal of almost unlimited cheap financing"; "this historic invasion of the U.S. economy"; "Japanese companies wage business in the old samurai tradition"; "Japan's massive sortie into high-tech America." Get the picture?
Kearns even stoops to crude racial stereotyping. He gets in digs such as "the competitive pack instinct, a key ingredient in the zaibatsu way"; "the Japanese herd mentality"; "groups of Japanese flowing amoeba-like in and out of Tiffany's"; and "the enigmatic, zaibatsu monolith." Why not just come out with it and call them the inscrutable yellow horde? After all, Kearns is not above implying that the Japanese are perverts: In one passage he describes typical Japanese commuters, "read[ing] their pornographic comic books, filled with bound women." (Comic books, called manga, are very popular in Japan; only a tiny fraction of them, though, are sexually explicit.)
As bad as this stuff is, it pales in comparison with the rhetoric in Rising Sun. I imagine that Crichton could use the Sister Souljah/Ice-T defense and claim that these are just fictional characters saying these things. Let me assure you, though, that the following statements are meant to be read sympathetically:
"This country is in a war and some people understand it, and some other people are siding with the enemy. Just like in World War II, some people were paid by Germany to promote Nazi propaganda. New York newspapers published editorials right out of the mouth of Adolf Hitler. Sometimes the people didn't even know it. But they did it. That's how it is in a war, man. And you are a [expletive] collaborator."
"As our economic power fades, we are vulnerable to a new kind of invasion. Many Americans fear that we may become an economic colony of Japan, or Europe. But especially Japan. Many Americans fear that the Japanese are taking over our industries, our recreation lands, and even our cities….And in doing so, some fear that Japan now has the power to shape and determine the future of America."
"The American press reports the prevailing opinion. The prevailing opinion is the opinion of the group in power. The Japanese are now in power. The press reports the prevailing opinion as usual."
"You realize that Japan is deeply into the structure of American universities, particularly in technical departments."
"The government. They own the government. You know what they spend in Washington every year? Four hundred million [expletive] dollars a year….Now you tell me. Would they spend all that money, year after year, if it wasn't paying off for them?"
"You know, I have colleagues who say sooner or later we're going to have to drop another bomb. They think it'll come to that." He smiled. "But I don't feel that way. Usually."
Sexual deviance, which Kearns alludes to in passing, actually provides the central plot device in Rising Sun. It is the investigation of an L.A. call girl's murder—she is a white woman, catering to Japanese clients, who is killed during kinky sex—that ultimately uncovers a foul Japanese conspiracy to buy up some of America's high-tech crown jewels. Along the way, a friend of the victim—another prostitute who services a Japanese clientele—describes in lurid detail the various bizarre acts that her customers request. She concludes this way: "A lot of them, they are so polite, so correct, but when they get turned on, they have this…this way….They're strange people."
Here, then, is the image of the Japanese that these books convey: hostile and warlike; lacking individuality (and thus not fully human?); conspiratorial and devious; sexually deviant. Kearns and Crichton press all the right subconscious buttons; they appeal to our lowest and most dangerous emotions—fear and hatred—just like classic wartime propagandists. They even sink to cashing in on sexual taboos, in particular discomfort with interracial sex. The clear purpose of their rhetoric is to demonize Japan, to make it the Enemy.
Let's look now at the factual assertions made in these books. Basically, the Japan-bashing mantra goes like this: The Japanese industrial economy is more or less one giant conspiracy, united through the keiretsu and guided and subsidized by MITI, the Ministry of International Trade and Industry. The purpose of this "Japan Inc." conspiracy is to accomplish through commerce what imperial Japan failed to do by force of arms: dominate the world.
The first step is to ensure a secure home base by making the Japanese domestic market all but impenetrable to foreign goods. The next step is to launch an export "invasion," carefully targeted at key basic industries like textiles, steel, consumer electronics, and automobiles. In the United States, this invasion destroys entire industries, primarily through predatory "dumping"—prolonged sales below cost financed by monopoly profits back in Japan. Next the conspiracy moves into high-tech, wiping out the U.S. memory-chip industry and challenging American supremacy in computers.
The ground now softened up, Japan Inc. begins exporting its capital: setting up "transplant" factories, buying up major companies and real estate, financing the federal deficit, and generally transforming America into its economic colony. To facilitate this takeover, Japanese money insinuates itself into government, the media, the universities, and think tanks, buying "agents of influence" to lull the American people into complacency while economic sovereignty is lost.
It's a good script; just think what Oliver Stone could do with it. It would take a book to unravel all the distortions and inaccuracies on which this conspiracy theory rests (a book that, unfortunately, has not yet been written). Let me take a few shots, though, at some of the more glaring problems:
1. Japan's market is by no means hermetically sealed. Japan is the No. 2 customer in the world for American exports, trailing only Canada; Japan imports more American goods than Britain, France, and Italy combined. Yes, Japan runs a trade surplus with the United States; however, on a per-capita basis, the average Japanese buys more from the United States than the average American buys from Japan ($372 versus $357 in 1990). With regard to trade barriers, Japan's tariffs and quotas are comparable to those in the United States—a little worse, maybe, in agriculture, but probably a little better in manufactured goods. And everyone acknowledges that Japan has become less protectionist in the last decade than it was in the 1960s and '70s. This is inconvenient for the conspiracy theorists, since the huge trade surpluses of the '80s came at a time of admitted liberalization.
2. MITI is not omniscient. MITI tried to stop Sony from making transistor radios; it urged the consolidation of Japan's decentralized auto industry into a Big Three–style oligopoly; it discouraged Honda from getting into the car business. More recently, the ballyhooed Fifth Generation Computer Project just completed its 10-year run, a fizzling flop. Meanwhile, Japan has wasted its fair share of money subsidizing "sunset" sectors of the economy such as shipbuilding, mining, and agriculture. Japanese business success is a function primarily of entrepreneurial vision and managerial excellence, as well as a relatively favorable tax and regulatory environment—not conspiratorial string-pulling by infallible bureaucrats.
3. There's nothing unfair or predatory about Japanese import penetration of the U.S. market. In general, and it is embarrassing even to have to say this, low prices (including low import prices) are good for consumers and good for competition: Getting the best at the lowest price is what economic activity is all about. The only way that low prices can be harmful is when they are part of a successful predatory pricing strategy—that is, when one producer drives all his competitors out of the market so that he can then charge inflated, monopoly prices. The harm in that case is the long-term high prices that outweigh the short-term low prices. Economists generally agree that predatory pricing is seldom attempted and almost never succeeds; the costs are too high, the gains are too speculative, and reentry into the market by displaced or new competitors is too easy.
None of the major Japanese import successes—color televisions, automobiles, computer chips—can be characterized as examples of successful predatory pricing. Yes, these successes came at the expense of American companies. And yes, price competition played a part in these successes (though quality and reliability were important, too). But the simple fact is that in none of these cases have Japanese companies attained monopoly positions; collectively, Japanese companies have about 30 percent of the American color-TV market, about 30 percent of the American car market, and about 25 percent of the American computer-chip market. No monopoly, no predation—without monopoly there is no opportunity to recoup earlier losses through jacked-up prices. All the Japanese companies have done is engage in hard-hitting, beneficial competition.
4. Japanese competition has strengthened, not weakened, American industry. Detroit and Big Steel were bloated, bureaucratized, and inefficient before the Japanese came to play. The shakeout has been traumatic, and isn't finished yet, but gains in productivity and quality have been impressive. Meanwhile, the Japanese challenge has forced U.S. semiconductor manufacturers to improve factory efficiency and to focus on their strengths in design innovation. American computer makers, too, have been kept on their toes by dogged Japanese pressure. Throughout the U.S. economy, companies are adopting Japanese business methods—just-in-time inventory, greater cooperation with suppliers, greater involvement of workers in decision making—to improve their performance.
5. Japanese influence isn't all it's cracked up to be. During the 1980s, when the Japanese supposedly started pulling our strings, American policy actually veered sharply against Japan. The U.S. government erected all sorts of new barriers against Japanese imports; it imposed quantitative limits on automobiles, steel, and machine tools; it instituted price controls on computer chips; and it socked dozens of different products with punitive "antidumping" duties. At the same time, the United States became increasingly bellicose about real and imagined trade barriers in Japan. In particular, Japan was designated an "unfair trader" under "Super 301" and threatened with sanctions unless it opened its markets. On the whole, Japanese interests have been taking a beating in the American political process.
6. The Japanese juggernaut has hit a snag. The real-estate and stock-market bubble, which helped supply Japanese industry with cheap capital during the '80s, has finally burst—the Nikkei has lost over half its value since 1990. Banks are staggering under the weight of bad loans; corporate giants, with profits pinched, are cutting back on R&D and capital spending and slowing down product cycles. Economic growth, which had been rollicking along at 6 percent to 8 percent a year during the '80s boom, is not expected to exceed 2.5 percent this year. With trouble at home and the United States in recession, the supposed Japanese takeover of America has been waylaid: New direct investment was down nearly 75 percent, to $5.1 billion, in 1991. It appears the Japanese aren't 10 feet tall and bulletproof after all.
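Since the per-capita comparison in point 1 can look paradoxical alongside Japan's bilateral surplus, here is a minimal sketch of the arithmetic in Python. The population and trade figures below are rough 1990 approximations chosen only to reproduce the article's $372 and $357; they are assumptions, not numbers drawn from the article's sources.

```python
# Per-capita trade arithmetic from point 1 above. All figures are rough
# 1990 approximations (assumed for illustration, not from the article).
japan_population = 123.5e6      # approx. 1990
us_population = 248.7e6         # approx. 1990
us_exports_to_japan = 46.0e9    # assumed, in dollars
us_imports_from_japan = 88.8e9  # assumed, in dollars

per_japanese = us_exports_to_japan / japan_population
per_american = us_imports_from_japan / us_population

print(f"Average Japanese buys ~${per_japanese:.0f} of American goods")
print(f"Average American buys ~${per_american:.0f} of Japanese goods")

# Japan runs a large bilateral surplus in absolute dollars, yet because
# Japan's population is roughly half the United States', per-capita
# purchases can still favor the Japanese buyer of American goods.
```

The point of the sketch is simply that a bilateral surplus in absolute dollars says nothing about how much the average citizen of each country buys from the other; population size does most of the work.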
All of the misconceptions of the Japan bashers, and all of their hate-filled emotional baggage, can be traced back to one central fallacy: namely, that trade benefits one party by beggaring the other. This fallacy leads its dupes to see the wealth gains and competitive stimulus of imports as an "invasion"; the new blood of foreign investment as a "takeover"; the blessings of a global division of labor as "dependency." If unchecked, as it was in the 1930s, this fallacy can lead to war.
Brink Lindsey is director of regulatory studies at the Cato Institute.
There is no doubt that these convulsive times have thrown forth giants and heroes: Walesa, Wojtyla, Havel, Sakharov, Yeltsin. As the lives of these men epitomize, there is greatness and inspiration to be found in the overthrow of a monstrous tyranny. What is less clear, however, is whether the new order now being established will itself be capable of greatness, whether it can provide new sources of inspiration.
We are currently witnessing, not only in Eastern Europe but around the world, the triumph of capitalism, the system of economic freedom. Communism, the great revolt against spontaneous market order, has finally been quashed. Elsewhere, in what has been called the developing world, socialistic autarky is being abandoned in favor of linkage to the international capitalist economy. The free-market system has become, to an extent never before matched, an integrated global phenomenon. Moreover, its fundamental institutions have at present no serious rivals. It is timely, therefore, to step back from the buzz of recent events and think about the larger significance of the capitalist ascendancy. Communism may be terrible, Third World poverty may be terrible, but how good is bourgeois commercial society?
Capitalism since its inception has been derided as a spiritually stunted system. The bill of indictment is familiar: Commercial society is driven by the base motivation of greed; it replaces the vital and organic human connections of family, community, nation, and faith with the attenuated and flimsy bond of the cash nexus; it debases life by reducing everything in its sphere to dollars and cents; it panders to the lowest common denominator of mass tastes, elevating the tawdry and vulgar over the lofty and original; and, in the end, it serves no higher end than the mindless accumulation of things.
These attacks on commercial society have come from both left and right. According to Marx, the bourgeoisie "has left no other bond between man and man but crude self-interest and callous 'cash payment.' It has drowned pious zeal, chivalrous enthusiasm and popular sentimentalism in the chill waters of selfish calculation."
Nietzsche, operating from diametrically opposed premises, arrived at an equally vociferous denunciation of capitalist society. He heaped contempt on the conformist banality of the "last man": "No shepherd and one herd! Everybody wants the same, everybody is the same: whoever feels different goes voluntarily into a madhouse."
To be sure, capitalism has its characteristic vices; even the most fervent defender of the market economy would be hard-pressed to argue that the litany of complaints cited above is wholly without merit. The pettiness of greed is visible at every socioeconomic level in the obsession with status symbols and the foolishness of brand-name snobbery; the destructiveness of greed can be seen in the lives of all those who feel trapped in well-paying but unrewarding jobs. The hollowness of the cash nexus is well-known to anyone who was raised in the rootedness and familiarity of a small town and who now runs the rat race of anonymous and impersonal urban existence.
To take just one example of metastatic commercialism: College football was long ago corrupted by money, but the corruption has now attained an almost sublime absurdity with the renaming of bowl games after corporate sponsors, e.g., the "USF&G Sugar Bowl" and—unbelievably—the "Poulan Weedeater Independence Bowl." As to cultural vulgarity, take your pick: Geraldo, professional wrestling, the National Enquirer, the Elvis cult, Teenage Mutant Ninja Turtles, and so on ad nauseam.
Although obsessive materialism and crass commercialism are undeniably a part of modern capitalism, they do not constitute its whole. To condemn commercial society as nothing but an empty rush for things is to engage in caricature and distortion. There is much more to capitalism than things: Capitalism is also about creativity, ingenuity, dedication, and perseverance; it is about teamwork and competition; it is about the fulfillment gained from working hard to do a job well; it is about pursuing your dreams, however humble or grand. Commercial life, at its best, generates spiritual as well as material abundance.
This spiritual element of capitalism has been obscured by pervasive misunderstanding of how the wealth-creation process works. The enemies and disparagers of capitalism have generally made the mistake of regarding the creation of wealth as a mechanistic and automatic process. In the Marxist view, productivity and growth result from the operation of unalterable historical laws. The capitalist phase of development has "solved" the "problem" of production once and for all; all that remains is to ensure that the fruits of this production are enjoyed by the right people. Never was this determinist conception of wealth-creation more glaringly evident than in Lenin's hopelessly naive picture of socialist production: He thought that the requirements for planning and running an economy had already been "simplified by capitalism to the utmost, till they have become the extraordinarily simple operations of watching, recording and issuing receipts, within the reach of anybody who can read and write and knows the first four arithmetical rules."
In today's dominant conventional wisdom, economic vitality is seen as a function of macroeconomic variables such as interest rates, trade balances, budget deficits, and exchange rates. In what Tom Bethell calls "hydraulic economics," bureaucrats keep the big GNP machine humming along by moving these macroeconomic levers about as changing conditions dictate.
Such mechanistic understandings of economic life miss entirely the continuing and ever-expanding dependence of capitalism on human creativity. In particular, they fail to grasp the central role of the entrepreneur in driving capitalist production. The source of all capitalist wealth creation is the new idea: the invention of a new product, the development of a new production technique, the exploitation of a new market. It is the entrepreneur who takes the idea, often his own, and transforms it into reality, staking his time and money, as well as others', all on the belief that the idea has value.
In the words of Joseph Schumpeter: "To undertake such new things is difficult and constitutes a distinct economic function, first, because they lie outside of the routine tasks which everybody understands and, secondly, because the environment resists in many ways that vary, according to social conditions, from simple refusal either to finance or to buy a new thing, to physical attack on the man who tries to produce it. To act with confidence beyond the range of familiar beacons and to overcome that resistance requires aptitudes that are present in only a small fraction of the population and that define the entrepreneurial type as well as the entrepreneurial function."
The motive force of capitalism, then, is not some historical autopilot, or the fine-tuning of technocrats, but rather the power of entrepreneurial imagination: first, the power to conceive some new vision of untapped possibilities, and then the will to remold reality in conformity with that vision. Apposite in this regard is Michael Novak's observation that the root of the word capitalism is caput, or head: The market process, contrary to what its detractors say, is fundamentally a spiritual phenomenon.
The great innovators of capitalism possess a species of genius no less real than the genius that animates great works of art, or great discoveries of science, or great acts of statesmanship. We see this genius, historically, in such people as Rockefeller, Carnegie, Edison, and Ford, men whose vision carried them "beyond the range of familiar beacons" and into new worlds of their own making.
Interestingly, Schumpeter himself, the great champion of the entrepreneur, mistakenly believed that entrepreneurship was becoming obsolete, that innovation was being routinized within the R&D departments of giant corporations. Inspiration and intellectual daring, he feared, were giving way to tepid bureaucratic rationality. What Schumpeter failed to see was that bureaucratic inertia would frequently render large corporations resistant to necessary change. As a result, the creative genius, the outsider with a vision and no stake in the status quo, remains an essential element of capitalist vitality: Witness, in our own day, such examples as Steven Jobs of Apple, Bill Gates of Microsoft, Ken Iverson of Nucor, and Ted Turner of CNN.
The spiritual energy that produces material wealth is by no means limited to a handful of tycoons, or even to the entrepreneurial economic function. Those who implement the entrepreneurial vision must also contribute their creativity and dedication if the vision is to succeed. The amount of knowledge and skill required to run a modern capitalist economy, with its amazing complexity and diversity of production, is enormous and must by necessity be widely distributed. While manual, unskilled labor is still needed, it has been consigned to the margins of economic life. To an ever-increasing extent, the continuing vitality of commercial society hinges on the mental effort of vast numbers of people.
Commercial society thus requires a large spiritual investment from its participants. Workers must do more than use their muscles or follow explicit instructions by rote; they must hone their skills, think things out for themselves, take initiative, assume responsibility. This spiritual investment brings spiritual benefits: namely, the fulfillment that arises from developing and exercising one's capabilities to surmount a challenge.
Charles Murray, in his book In Pursuit, builds his discussion of the preconditions of happiness around Abraham Maslow's hierarchy of human needs. At the summit of this hierarchy, above simple subsistence and physical security, above emotional intimacy and self-respect, is something Maslow called "self-actualization." In essence, this concept refers to the basic need of human beings to "realize their potential"—to develop talents and abilities and then use them, to be good at something that is hard to do. Murray quotes the philosopher John Rawls, who in turn was paraphrasing Aristotle: "Other things equal, human beings enjoy the exercise of their realized capacities (their innate or trained abilities), and this enjoyment increases the more the capacity is realized, or the greater its complexity."
In other words, human beings need challenges; they need to take on tasks that stretch and expand their abilities. As Murray says, "Challenge is a resource for meeting the human need called enjoyment, just as food is a resource for meeting the human need called nourishment. If one measure of a good society is its production and distribution of food, another measure of a good society is its production and distribution of challenges."
Capitalism, then, by creating opportunities for demanding and challenging work, creates opportunities for "self-actualization." By asking that people apply themselves to develop special skills or expertise, the free market gives them the chance to savor the mastery of something difficult.
The spiritual richness that is possible in commercial life—the interplay between effort and reward—is nowhere more grippingly portrayed than in Tracy Kidder's Pulitzer Prize–winning (and aptly titled) The Soul of a New Machine. Kidder tells the story of engineers at Data General Corp. and their efforts during the late '70s to build a new generation of minicomputers. The book reads like a thriller, yet there are no chase scenes or shootouts or mysterious women or dark conspiracies; most of the action takes place in a windowless basement computer lab. What drives the plot is the competitive threat of archrival Digital Equipment Corp., corporate intrigue within Data General, and, above all, the intellectual drama of designing and debugging a complicated new machine under crushing time constraints.
What comes across so compellingly in Kidder's account is that this year-and-a-half-long project—with its long hours and no overtime pay and all its stress and frustrations—was an ennobling experience for those who participated. As Kidder says near the end of the book:
"Presumably the stonemasons who raised the cathedrals worked only partly for their pay. They were building temples to God. It was the sort of work that gave meaning to life. That's what West and his team of engineers were looking for, I think. They themselves liked to say they didn't work on their machine for money. In the aftermath, some of them felt that they were receiving neither the loot nor the recognition they had earned. But when they talked about the project itself, their enthusiasm returned. It lit up their faces….
"Many looked around for words to describe their true reward. They used such phrases as 'self-fulfillment,' 'a feeling of accomplishment,' 'self-satisfaction.' Jim Guyer struggled with those terms awhile with growing impatience. Then he said: 'Look, I don't have to get official recognition for anything I do. Ninety-eight percent of the thrill comes from knowing that the thing you designed works, and works almost the way you expected it would. If that happens, part of you is in that machine.'"
This kind of feeling about one's job is familiar to anyone who enjoys challenging work. It's not necessary to be building a new computer to devote your talent and energy to achieving some goal, and to experience the fulfillment that comes from this devotion. Moreover, as Murray points out, it's not necessary that your work involve abstract analysis or require "book knowledge." You can enjoy being a truck driver or steelworker or carpenter with the skills and know-how necessary to do your job well. What matters is that you find the work interesting and engaging, and that you have some control over and responsibility for what you do. Without these things, there's no you in your job; with these things, working for a living can involve your soul as well as your body and mind.
Of course, fulfillment on the job is by no means universal, and may even be the exception rather than the rule. There are jobs so menial or routine that almost nobody could enjoy them. There are organizations and bosses that, whether through malevolence or incompetence, can make working life miserable. There are workaholics, whose compulsive devotion to their jobs leaves the rest of their lives to atrophy. There are people stuck in the wrong line of work. And there are all too many people who don't commit enough of themselves to make their jobs enjoyable: the lazy, the incompetent, the time servers, and the buck passers.
No social system, though, can guarantee happiness for everybody. What capitalism does accomplish is to create wide and varied opportunities for rewarding and satisfying work; moreover, it does a reasonably good job of meshing external, material inducements with the conduct that generates internal, spiritual rewards. First of all, from the individual's perspective, the incentives of commercial society (material benefits and the status that comes from being successful) encourage people to take precisely those actions—working hard, taking initiative, assuming responsibility—that make work fulfilling. Moreover, commercial enterprises that address the spiritual needs of their workers—by giving them some degree of control over what they do—tend to be more productive, and hence more successful, than businesses that treat their workers like machines.
The spiritual richness of the market economy is most apparent when comparing it, not against some imagined utopia, but against other real-life social systems. Communism, the attempt to subsume all economic life within the centralized state, was not only a failure at material production; more fundamentally, it was a spiritually impoverished system. In the first place, the pervasive, leaden bureaucracy necessitated by central control stifled initiative and the assumption of responsibility, thereby robbing work of the pleasures that come from committing oneself to the job. Bureaucracy is a serious problem even in the capitalist workplace; it is ubiquitous and fatally hypertrophic under communism.
More basically, the suppression of economic incentives radically transformed the nature of work: Neither the quality of one's own work, nor the overall productiveness of one's organization, had much if any connection with one's job security or advancement. Ideology and terror were occasional substitutes for economic motivation, but the former took hold only with a small minority, and the latter was inflicted systematically only for limited periods of time. For most people and most of the time, the communist modus vivendi was "they pretend to pay us and we pretend to work." Nothing was at stake in one's working life; job success was no longer a value to be earned or lost. There was no social context within which it made sense to do a job well, or take pride in a job well done; work was drained of all its meaning and reduced to absurd, Sisyphean labor. Communism, which purported to redeem working life from alienation, consigned it instead to an alienation virtually universal and complete.
Now that communism, the self-proclaimed system of the future, has been thoroughly discredited, those who reject commercial society are increasingly turning to the precapitalist past in search of an alternative. In particular, the growing radical environmental movement derives much of its power from nostalgia for the simplicity and certainty of the traditional village economy. All of which raises the question: Capitalism has clearly brought material riches, but does it represent a spiritual improvement over rural communal life?
Admittedly, the village economy offered some spiritual advantages that are currently in rather short supply: most notably, rootedness in kin and community and organic connection to one's work. There was no conflict between pursuing one's career and living close to family and friends: People were born, lived, and died within a few miles' radius. There was none of the dehumanizing impersonality that afflicts urban life today: Within the village, everybody knew everybody else. And there were no nagging doubts about one's career choices: Most people did exactly the same thing—grow their own food—and the "meaning" of this work was as obvious as a stomach pang. While traditional society may have been afflicted with oppression, misery, and ignorance, alienation wasn't a problem.
But if life used to be less unsettling and fretful than today, it was also less interesting and spiritually challenging. God knows that life was challenging enough in the physical sense: Most people had to engage in unremitting, backbreaking toil just to keep fed, clothed, and housed. The demands on people's mental abilities, though, were modest. There were skilled artisans and craftsmen, yes, but they were a demographically marginal lot; the vast majority of the population was absorbed in basic subsistence agriculture. This work, by and large, was a matter of exhausting physical labor, requiring minimal skills or knowledge.
To some extent peasant life did require craft and folk wisdom, but such demands were limited and essentially static. The average peasant lived just as his parents and grandparents before him; the crops he planted, the tools he used, the farming methods he followed were all part of a received tradition passed down from time immemorial. Moreover, economic life was as simple as it was unchanging. Since the elemental task of growing food consumed so much energy, the division of labor was necessarily rudimentary; only a few basic goods and services were produced for exchange. Accordingly, there was no need or place in the village economy for specialized technical knowledge, or complex analysis, or original thinking, or independent judgment, or personal initiative, or responding to swiftly changing circumstances—in short, for any of those forms of spiritual exertion that are ubiquitous in the dynamic complexity of modern capitalism.
Thus far, we have focused only on the spiritual qualities inherent in the actual process of capitalist wealth creation. Now it is time to look at the motivations that underlie this process. The caricature of market society identifies simple avarice as the low and vulgar foundation on which all rests. Is that really all there is?
Greed, no doubt, is an all too familiar presence in modern commercial life, and this is what makes the caricature seem plausible. But there is something else driving capitalist society—something much more vital and inspiring, and arguably much more potent, than mere acquisitiveness. That something is ambition, or competitive spirit: the desire to better oneself and be better than others.
Clearly, people do work in order to acquire things: most basically, to put food on the table and to pay the rent, but also to get that CD player, or a new car, or a family vacation, or a bigger house. But just as clearly, people work in order to compete: to be a success, to measure up, to move up in the world, to make one's mark. When someone gets a coveted job or wins a big promotion, he is likely to feel competitive exhilaration at having won, as well as excitement about things he can now buy (indeed, the coveted job may entail a pay cut). Likewise, a person who is fired or laid off feels not only the threat of economic hardship but also the spiritual emptiness of failure. Sometimes, the act of acquisition itself satisfies a competitive urge. For example, buying a home or an expensive car can give you the feeling of having "arrived."
Ambition in commercial life is most obvious in the lives of corporate moguls, particularly those who started their own businesses. Such individuals put the lie to the notion that commercial life is all pettiness and crabbed, narrow calculation. Simple avarice cannot explain why billionaires continue to strive to expand their enterprises, and indeed it is commonplace for business giants to say that they aren't in it for the money, that money is simply a means of keeping score. What drives such people is ambition: the desire to build an empire, or even to remake the world.
Over 200 years ago, Adam Smith identified the role played by ambition in driving commercial life when he noted that "the rich man glories in his riches" while "the poor man, on the contrary, is ashamed of his poverty." According to Smith, "It is the vanity, not the ease or the pleasure, which interests us."
Commercial ambition is by no means an unalloyed virtue. It has its darker and dangerous side. Obsession with status, with what other people think of you, is unhealthy and repellent. And winning at all costs—forsaking family, friends, and outside interests for the sake of career success—is a spiritually Pyrrhic victory. Nevertheless, competitive spirit, in its proper place, is a powerful force for good in commercial society; it unleashes human energies and imparts to life a bracing and vital dynamism.
The salutary effects of ambition can be seen in the struggles of immigrants who have come to this country to make new lives for themselves; in the overtime and scrimping that allow a couple to put their children through college; in the man who builds a family business that he can pass on to another generation; in the woman who goes to night school while holding down a full-time job; in the conscientiousness and dedication it takes to make the sale, land the contract, meet the deadline, or turn out the defect-free product; and in the titanic productivity of the entrepreneurial innovator. The common theme here is the pursuit of a dream, whether modest or grandiose, of bettering oneself or one's lot in life.
Again, comparison with other social systems is instructive. In traditional rural society, there was no place for ambition among the great preponderance of the population. With a more or less static economy, the large peasant class had no prospect for upward mobility. People were enmeshed in obligations that kept them securely in their proper station: ties to the land, to one's lord, to family and community. The whole idea of breaking from your past and "reinventing" yourself was utterly foreign; personal identity was based on knowing your place, not making your own way.
Competitive spirit was confined to the ranks of the nobility and found its outlet predominantly in the quest for military glory. The resulting chivalric code enshrined ambition in its most extreme form: namely, the willingness to kill or be killed to prove one's superiority to others.
It may be argued that the aristocratic ethos represented human ambition at its most sublime: There is perhaps no act more inspirational than the willingness to risk one's life. Accordingly, it is tempting to romanticize the social system that produced this ethos and, by comparison, to scorn our own commercial order, in which people risk only money. Edmund Burke, who in his cooler moments was an admirer of Adam Smith, expressed this sentiment in his famous lines: "But the age of chivalry is gone. That of sophisters, economists, and calculators, has succeeded; and the glory of Europe is extinguished for ever."
Thomas Paine, though, in his reply to Burke, got the better of the exchange: "He pities the plumage, but forgets the dying bird." While the highs may have been higher in the ancien régime, the lows were abysmally lower, and the lows were the general rule. Chivalry may have been exquisite, but its beauty fades in the larger view when one sees the killing and waste it produced and the stagnation, passivity, and resignation on which it rested. Commercial society, by allowing general participation in a competitive and dynamic social order, offers the ambitious pursuit of self-improvement to high and low alike. The stakes may be lower, and thus the winning less glorious, but many more people get to play, and the contest now produces affluence and comfort rather than death and destruction.
In his provocative new book, The End of History and the Last Man, Francis Fukuyama identifies ambition, which he calls thymos or the desire for "recognition," as the crux of historical conflict. Influenced by Hegel, he bases his philosophy of history on a version of the state of nature, in which men battle not for simple self-preservation, but for recognition—to have their dignity as human beings recognized by others. In other words, men fight for ambition, to prove their superiority. Instead of leading to a consensual social contract, this battle leads to the relationship of lordship and bondage: The masters are those who were willing to risk death for honor; the slaves, those who succumbed to fear of death.
Fukuyama treats this battle for prestige between "first men" as both theoretical construct and quasi-historical: "Many traditional aristocratic societies initially arose out of the 'warrior ethos' of nomadic tribes who conquered more sedentary peoples through superior ruthlessness, cruelty, and bravery. After the initial conquest, the masters in subsequent generations settled down on estates and assumed an economic relationship as landlords….But the warrior ethos—the sense of innate superiority based on the willingness to risk death—remained the essential core of the culture of aristocratic societies the world over, long after years of peace and leisure allowed these same aristocrats to degenerate into pampered and effeminate courtiers."
Thus, the initial resolution of the struggle for recognition resulted in the traditional social order: the large class of peasant "slaves" underneath and the small band of aristocratic "masters" on top. This dispensation was inherently unstable, though, for it was riven with internal "contradictions": The vast majority of the population was denied recognition altogether, and the masters had won recognition only from their inferiors, whom they did not regard as fully human.
The historical solution to this dilemma emerged in the form of liberal commercial society. This new order abolished the distinction between master and slave by making the former slaves their own masters and by establishing the principles of popular sovereignty and the rule of law. The inherently unequal recognition of masters and slaves is replaced by universal and reciprocal recognition, where every citizen recognizes the dignity and humanity of every other citizen and where that dignity is recognized in turn by the state through the granting of rights.
Liberalism, through its system of rights, represents the optimal solution to the conflict over recognition: No one is recognized as superior, but everyone is recognized as equal (under the law, at least). And by opening up unlimited prospects for economic growth, liberalism tames the unruly force of ambition by exalting instead the force of desire (i.e., greed). With ambition thus sated and defanged, historical conflict comes to an end, according to Fukuyama, in the modern commercial republic: "The historical process that begins with the master's bloody battle ends in some sense with the modern bourgeois inhabitant of contemporary liberal democracies, who pursues material gain rather than glory."
Fukuyama wonders, though, whether this is such a good thing. He questions whether history's end point is Nietzsche's contemptible "last man," who seeks nothing but comfortable self-preservation and forsakes everything noble about humanity: daring, risk, inspiration, and struggle. To use George Will's turn of phrase, he questions whether liberalism has escaped from barbarism only to fall into banality. In the end, Fukuyama defends liberalism against the Nietzschean critique, arguing that modern commercial society contains sufficient outlets for ambition—namely, entrepreneurship, democratic politics, and such purely "formal" activities as athletic competition—for it to retain at least a moderate vitality.
It is certainly appropriate to worry about the banality of modern life, and Fukuyama's view of liberalism's triumph as problematic is refreshingly bracing. Nevertheless, I believe he misconstrues the role of ambition in commercial society, and accordingly takes a bit too dim a view of the capitalist ascendancy. (My doubts about whether history has in fact ended are beyond the scope of this article.)
Fukuyama properly regards commercial society as having domesticated ambition: Where ambition once sought martial glory, it now serves the pursuit of gain. But here is the crucial point: Ambition has not been replaced by desire (as Fukuyama contends); it has been married to it. As discussed above, commercial life is motivated as much by competitive spirit—the desire to win, to improve oneself, to exceed others—as it is by mundane acquisitiveness. Ambition that risks money may be less lofty than that which risks life, but it is no less real.
Accordingly, liberalism has in fact ushered in a fantastic expansion of ambition's role in social affairs: Where once it was the preserve of a tiny aristocratic minority, it is now ubiquitous. Daring, risk, inspiration, and struggle have not been extinguished; they can be found in every nook and cranny of the capitalist economy. The apparent decline of thymos is an illusion caused by focusing only on the fate of the old nobility.
Liberalism, far from enervating ambition, awoke and roused it in quarters where before it had never stirred. It is woefully incomplete to portray the liberal project, as Fukuyama does, as an effort "to convince the aristocratic warrior of the vanity of his ambitions, and to transform him into a peaceful businessman." The far greater part of the project was to liberate the mass of mankind from torpor and stagnation by incorporating it into the market.
This wider view of liberalism, and the greater good it serves, was movingly described by Alexis de Tocqueville in the concluding chapter of his Democracy in America: "When the world was full of men of great importance and extreme insignificance, very wealthy and very poor, very learned and very ignorant, I turned my attention from the latter to concentrate on the pleasure of contemplating the former. But I see that this pleasure arose from my weakness….
"It is natural to suppose that not the particular prosperity of the few, but the greater well-being of all, is most pleasing in the sight of the Creator and Preserver of men. What seems to me decay is thus in His eyes progress; what pains me is acceptable to Him. Equality may be less elevated, but it is more just, and in its justice lies its greatness and beauty."
Let's descend now from these metaphysical heights, and, in closing, consider the issue again in a more concrete way. The skeptical reader may still harbor the suspicion that all this rhetoric about creative genius and self-actualization and vaulting ambition is just whistling in the dark, that at bottom there is still the very mundane reality of fast-food restaurants, accountants, and toaster salesmen.
Fair enough, to a degree. I never wanted to argue that commerce could replace art or religion or philosophy in the quest for transcendent meaning. Commerce does concern the worldly, and thus will always have a practical and prosaic quality to it. But if you think this means that capitalism is spiritually empty, you're wrong—as any sports fan should understand.
If you are a sports fan—say, a devoted follower of college basketball—then you understand that what makes the game enjoyable and worthwhile has very little to do with its immediate object. The standard complaint of the nonfan—"Why would anyone want to watch a bunch of overgrown men running around and bouncing a ball and trying to stick it through a hoop?"—will convince you of nothing except that the speaker doesn't understand the game. If you were feeling analytical, you might explain to him that his grasp of the game is stuck in a reductionist rut, that what makes the game so fun is the amazing skill, the fluidity of teamwork, the excitement of competition. Or you might just turn up the volume on the remote control and hope he goes away.
So it is with commerce. Yes, it's about buying and selling things, just as basketball is about overgrown men and bouncing balls. But it's also about much more: It, too, is about amazing skill, the fluidity of teamwork, the excitement of competition. If you can't see this, you just don't understand the game.
Brink Lindsey is director of regulatory studies at the Cato Institute in Washington, D.C.
On the industry side, Sematech's 14 member companies account for 80 percent of combined U.S. semiconductor sales. These companies contribute half of Sematech's roughly $240-million annual budget. Government participation consists of $100 million a year from the Defense Department's Defense Advanced Research Projects Agency (DARPA), as well as various state and local subsidies. The declared goal of this government-industry "partnership" is to develop new technologies that will help U.S. industry to regain (from Japan, of course) world "leadership" in this economically and militarily "strategic" industry.
From the standpoint of industrial-policy supporters, Sematech would appear to have all the makings of an ideal test case. "Sematech was an experiment, but also a good model," says Daniel F. Burton Jr., executive vice president of the Council on Competitiveness, an umbrella group sympathetic to a more active federal role in industrial policy.
In the first place, Sematech's beneficiary is not some declining smokestack industry; rather, the focus here is on the cutting edge of high technology. Thus, the usual knock against industrial policy—that it favors dying "sunset" industries at the expense of emerging "sunrise" ones—is seemingly inapplicable. Furthermore, the fact that Sematech is dedicated to "precompetitive" R&D apparently dispenses with the other stock argument against industrial policy—namely, that government should not be in the business of picking winners and losers.
And indeed, the basic idea behind Sematech—that government should extend its R&D spending into overtly commercial areas and should do so by working directly with private companies—has been gaining popularity. The Reagan and Bush administrations have generally opposed anything that looks like industrial policy. Due largely to the influence of presidential science adviser D. Allan Bromley, however, the White House has now signaled that it supports government funding for development of commercial technologies, so long as they are sufficiently "generic" to be considered "precompetitive."
One notable example of this kind of policy is the Advanced Technology Program (ATP), administered by the Commerce Department's Technology Administration. ATP, which gives grants to private companies conducting high-tech R&D, started with a 1990 budget of only $10 million; the budget rose to $36 million in 1991 and will increase to $46 million in 1992. Meanwhile, the push is now underway to extend federal support for Sematech, which expires at the end of 1992, for another five years.
Sematech's actual track record, though, should serve as a warning rather than a blueprint. Cutting through the high-tech jargon and reassurances about "precompetitive" assistance, a close look at Sematech confirms all the darkest suspicions of industrial-policy critics. For as it turns out, even microelectronics has its sunset industries, and even precompetitive R&D has its winners and losers.
Semiconductors, or "chips," are electronic devices that store, retrieve, and process information. They provide the hardware "smarts" not only for computers but also for televisions, fax machines, telephones, microwave ovens, cameras, car ignitions, and antilock brakes—not to mention Patriot and cruise missiles and all the other dazzling high-tech weaponry on display in the Gulf War. Usually made from silicon, with microscopic aluminum wiring deposited on the surface in multiple layers, chips are miracles of miniaturization: A single chip the size of a postage stamp can contain millions of electronic components, with features on its surface measured in fractions of a micron (a human hair is about 75 microns thick).
Semiconductors may be divided into two basic categories: memory chips and logic chips. Memory chips, as their name implies, store and retrieve data. The biggest-selling of these is the DRAM (dynamic random access memory, pronounced dee-ram), which provides short-term data storage for computers. DRAMs are relatively simple to design but excruciatingly difficult to manufacture—at least at production yields high enough to make selling them commercially viable. Accordingly, the key to competitive success in DRAMs is high-volume, low-defect, low-cost production. DRAMs and other high-volume memory chips have been dubbed "commodity" chips to reflect their fungibility and fluctuating prices.
Logic chips, on the other hand, juggle and manipulate data. They make decisions, route information to different destinations, perform calculations, and relay instructions. The best-known logic chips are microprocessors, which act as central control centers in personal computers. Since logic chips perform complex functions and frequently have highly specialized or even customized applications, the premium is on design rather than raw manufacturing efficiency. Unlike memory chips, logic chips compete in the marketplace based on what they can do, not how much they cost.
The origin of Sematech goes back to the mid-1980s, when U.S. companies staged a wholesale evacuation from the DRAM business. Although DRAM technology was pioneered in the United States (Intel brought the first commercial DRAM to market in 1970), by the early 1980s American chipmakers were seeing their profits and market share slip away in the face of fierce Japanese competition.
A glut in worldwide DRAM capacity, combined with a sharp drop in demand, caused prices to plummet in 1985. Faced with huge and mounting losses (though not as large as the losses being suffered by Japanese producers), one U.S. chipmaker after another bailed out of DRAM production. By 1986, only two American-owned companies were left making DRAMs for sale, Micron and Texas Instruments. The American industry, which had virtually monopolized the world market only a decade before, now claimed less than a 10-percent share. The Japanese, on the other hand, had increased their market share to more than 80 percent.
(By the way, these statistics do not take into account the U.S. "captive" producers—namely IBM and AT&T—that produce chips for their own use rather than for sale on the open market. Both companies have continued to manufacture DRAMs, and IBM remains the world's largest producer.)
The loss of DRAMs sent the industry into a panic. Many regarded commodity memory chips as the key "technology driver" in semiconductor production. In this view, the unceasing race to cram more and more memory onto less and less silicon at lower and lower cost spurs the innovations that are needed to stay competitive in all areas of chip manufacturing. Thus, the state-of-the-art DRAM in the late 1970s contained 16,000 bits of memory; today companies are beginning to sell DRAMs with 16 million bits of memory—a thousandfold increase. Drop out of this race, it was thought, and competitiveness in other more specialized semiconductors would soon falter as well.
Having failed in the marketplace, the big U.S. chipmakers turned to Washington for help. They prevailed upon the U.S. government to bring antidumping cases against Japanese producers, accusing the Japanese of selling below cost. (Since both American and Japanese chipmakers were losing money in the mid-1980s, they were all selling below cost in a sense. This is all that's required to trigger antidumping tariffs.) The antidumping cases threatened Japanese chip imports with punitive duties as high as 108 percent. To avoid this outcome, the government of Japan struck a deal with the U.S. trade representative in July 1986.
The agreement imposed worldwide controls on Japanese semiconductors. It established price floors for sales to the United States and third countries and targeted 20 percent of the Japanese market for U.S. and other foreign suppliers. To implement the agreement, Japan's Ministry of International Trade and Industry leaned on Japanese chipmakers to reduce output and shipments of DRAMs. The result was an acute worldwide shortage of DRAMs during 1988 that raised prices, bestowing windfall profits on Japanese chip companies and inflicting serious harm on U.S. computer makers, computer buyers, and anyone else who needed DRAMs. (Last summer this agreement, in somewhat altered form, was extended for another five years.)
Sematech represented the next step in the Washington strategy. After using political means to restrain foreign competitors, industry leaders now campaigned for outright government assistance. In March 1987, 14 U.S. chipmakers announced the formation of a consortium to take on the Japanese in developing advanced manufacturing techniques. They also announced that they wanted government funding for the project.
Initial planning for the consortium, called Sematech for "semiconductor manufacturing technology," had envisioned actually manufacturing DRAMs for sale. This idea was scuttled, partly because the remaining American DRAM producers didn't want to create a new competitor, but also because IBM was afraid it would get stuck buying Sematech chips if other purchasers could not be found (not exactly a ringing endorsement of the consortium's prospects). Accordingly, project planners settled on the more limited goal of cooperative R&D, the results of which members could then use in their own manufacturing operations. Even this degree of collaboration marked a dramatic shift from the rugged entrepreneurship that had always characterized the American microelectronics industry.
(The idea of a DRAM-making consortium was later resurrected in the form of U.S. Memories. Plans for this consortium, which was not to receive any direct federal funding, collapsed in 1990 due to the unwillingness of key computer companies to participate.)
Sematech's formation coincided neatly with the release the previous month of a Pentagon-sponsored study on "defense semiconductor dependency." The report concluded that "it is simply no longer possible for individual U.S. semiconductor firms to compete independently against world-class combinations of foreign industrial, governmental and academic institutions." As a result, "a direct threat to the technological superiority deemed essential to U.S. defense systems exists." The report's top recommendation: DOD funding of $200 million a year for five years to support the establishment of a "Semiconductor Manufacturing Technology Institute." This report was prepared by the Defense Science Board, whose advisory panel just happened to include a number of representatives from Sematech member companies.
The combination of competitiveness and national security concerns carried the day for Sematech, though the consortium got only half the money it hoped for: Congress authorized $100 million a year for five years. The money would come from DARPA, a small agency within DOD devoted to high-tech weapons research. Notwithstanding the national security justifications and the defense budget funding, the focus of Sematech's R&D would be explicitly commercial. Industrial policy had sneaked in through the Pentagon back door.
As sold to Congress, Sematech's mission was to create a "world-class" manufacturing facility that would serve as a model for the industry. To this end, the consortium constructed a large chip factory (known in industry jargon as a "fab") in Austin, Texas, and hired more than 700 engineers and staff. The idea was that Sematech's demonstration fab could develop new manufacturing processes that would then be implemented by member companies. The idea didn't work.
Sematech had planned to pursue its mission in three phases. Phase one would involve experimental chipmaking at linewidths of 0.8 micron, then the state of the art. In phase two and phase three, Sematech would move on to 0.5 and 0.35 micron, respectively.
To help get Sematech started in phase one, IBM donated its designs and proprietary processes for making four-megabit DRAMs; in addition, AT&T contributed the technology for its 64-kilobit SRAM (static random access memory, a type of high-speed memory chip). With this assistance, Sematech achieved its phase-one goal in 1989. Meanwhile, private companies, including U.S. firms that didn't belong to Sematech, had been selling chips with 0.8-micron linewidths since 1986. In other words, Sematech was able to borrow technology from private companies and reproduce manufacturing results that other private companies had achieved years before—and do it with taxpayers' money.
After phase one, Sematech shifted attention away from process R&D. It retained the 0.5- and 0.35-micron goals, but now the focus was on the manufacturing equipment needed to make chips with those linewidths. Sematech had originally intended to spend 80 percent of its money on in-house research; after 1989, it began allocating more than half its budget to outside R&D contracts with equipment manufacturers. Within the Sematech fab, efforts now concentrate on evaluating the performance of new tools rather than the actual how-to of making chips.
The problem with the demonstration fab concept was simple: Sematech wasn't making real products for sale. To produce commercial chips would be to admit that Sematech's work wasn't really "precompetitive," and thus wasn't a public good akin to government-funded basic R&D. But not to produce salable chips was to undermine the whole enterprise.
"What really matters with a [semiconductor] technology is shipping it in production—that means you have it down," explains T.J. Rodgers, president of Cypress Semiconductor and probably Sematech's most vocal critic. Indeed, the relation between production efficiency and selling is so well established in high-tech industries that it has a name: the "learning curve." The theory of the learning curve is that production costs fall at a fixed rate (usually thought to be around 30 percent) with every doubling of cumulative production volume. Sematech, though, can at best simulate this process.
"There's no one in this business who believes you can go down the learning curve without manufacturing," says Rodgers. "But Sematech's kickoff charter, approved by Congress, was to learn without manufacturing. It was a preposterous charter, and I said so at the time."
Sematech's new mission, then, is to help American chipmakers indirectly—namely, by helping the American-owned companies that supply them with chip manufacturing equipment. And indeed, the U.S. equipment industry is besieged.
In 1983, U.S. companies supplied 69 percent of the world market in semiconductor manufacturing and test equipment; by 1990, the U.S. share had dropped to 45 percent. Meanwhile, the Japanese share increased from 25 percent to 44 percent over the same period. In 1985, seven of the 10 largest equipment companies were American; now five of the six largest are Japanese.
Sematech has set out to arrest this decline. Its goal is to preserve at least one viable American-owned supplier in each of several key equipment areas. To this end, Sematech is now spending over $100 million a year in outside R&D contracts with equipment suppliers, either to improve existing equipment or to develop equipment for the next generation of semiconductor manufacturing.
Sematech justifies its new mission by trumpeting the dangers of depending on Japanese suppliers. In a controversial move last May, Sematech and Sen. Lloyd Bentsen (D–Tex.) charged Japanese equipment companies with intentionally withholding state-of-the-art technology from American chipmakers. Even some Sematech members felt obliged to distance themselves from these allegations: Intel, Texas Instruments, and Motorola all declared that they had never experienced difficulties getting top equipment from Japanese companies.
Furthermore, many of the specific examples of technology withholding dissolved under scrutiny. For example, Sematech had accused Nikon of withholding its G-5-D stepper (a machine that imprints the circuit pattern on the silicon wafer) from American buyers. As it turned out, only one of these machines was ever sold in the United States because the product had been so defective that it was soon replaced by another model. In other instances, Japanese suppliers had never received any U.S. orders for the machines they were accused of withholding. Nevertheless, Bentsen commissioned a General Accounting Office study to look into the issue.
The GAO report, issued last September, is a model of mushy equivocation. While the GAO did find that a number of U.S. chipmakers had experienced delays in getting advanced equipment from Japan, it was unable to cite any evidence that such delays were intentional or in any way commercially abnormal. Indeed, the report makes this sweeping caveat:
"GAO could not verify much of the information provided. The U.S. companies interviewed requested that GAO not discuss their specific problems with other U.S. firms or with foreign suppliers. Also, U.S. companies were not required to provide GAO with documented information. Moreover, GAO did not assess whether the practices of foreign suppliers were common business practices or whether they would violate any laws or international agreements." In other words, the GAO's findings and a couple of bucks will buy you a beer.
To the extent that Japanese equipment companies do supply their domestic market first, it is largely a matter of the way new equipment is developed in Japan. Chipmakers there tend to work closely with equipment companies in evaluating and "debugging" new machines. By contrast, U.S. semiconductor companies have traditionally maintained an arms-length relationship with their suppliers. Thus, Japanese companies buy equipment at an earlier stage of product development than American companies do. The Japanese approach has its advantages—early access to new technology—but it also requires a substantial commitment to working with equipment that is not yet fully operational.
"It really cuts both ways," says George Gilder, author of Microcosm. "I mean, do you really want to have that leading-edge piece of equipment that doesn't quite work perfected on your line? You have to have a very good relationship with a company to want to do that. It's not that big an advantage."
Sematech's assistance to equipment suppliers is premised on a "food chain" theory, according to which noncompetitiveness in the equipment industry leads inexorably to noncompetitiveness "up the food chain" in the chip industry. Even if this theory is faulty (and certainly the sensationalistic version peddled by Sematech, with its sinister Japanese conspiracies, is pure hokum), there is nonetheless a general consensus that Sematech has been doing some useful work, both in evaluating new equipment and improving working relations between chipmakers and suppliers.
"The major impact of Sematech is the communication that has been opened up between manufacturers and suppliers, allowing them to sit across the table in the board room and ask whether the manufacturers' needs are being met," says Eric Winkler, a spokesman for the Semiconductor Equipment and Materials Institute, a trade association. "Sematech has served as a conduit to allow our members access to information about the industry that would otherwise not be available to them." Now that suppliers and manufacturers have begun talking, however, Winkler says he isn't sure Sematech is still necessary to promote that cooperation; the move toward closer relations may just continue on its own.
Indeed, if U.S. semiconductor producers truly feel threatened by a growing dependence on foreign-owned suppliers, they can do something about it without government aid. After all, total U.S. purchases of semiconductor equipment and materials came to several billion dollars last year. Chipmakers can easily use their purchasing decisions to ensure a continued U.S. supplier base, if they think this is a priority. Sematech's $100 million a year in government-subsidized contracts may help a few favored suppliers, but overall Sematech can add very little to what private industry is already capable of doing for itself.
While Sematech has spent the past four years worrying about linewidths and equipment-supplier market shares, the U.S. chip industry has quietly gone about making a very impressive comeback. Since 1987, the combined U.S. share of the total world semiconductor market, adjusted for exchange-rate fluctuations, has been holding steady at around 35 percent. In 1990, U.S. companies actually gained market share on the Japanese.
To accomplish this turnaround, U.S. companies ignored just about everything Sematech's supporters have ever said about semiconductor competitiveness. The conventional wisdom held that commodity memory chips were the key to success in semiconductors generally. American chipmakers, though, have based their comeback on the growing market for complex logic chips. The conventional wisdom held that staying ahead in the chip business was possible only through constant incremental improvements in manufacturing technology. American companies instead concentrated on their strengths in innovative chip design. The conventional wisdom held that only vertically integrated giants or cartel-like consortiums could go head-to-head with the Japanese. The American resurgence, however, has been led by small, entrepreneurial start-ups.
Recall that commodity memories like DRAMs were supposed to be the "technology driver" upon which the whole future of the industry hinged. To quote from the Defense Science Board report that helped launch Sematech:
"The U.S. semiconductor industry may very soon, in fact, be competitive only in very small, 'specialty' segments of the overall market. This situation has arisen partly because of loss, in some areas, of technological leadership, resulting in an inability to compete with high-quality products in commodity markets."
Of course, this doom-and-gloom has not come to pass. Japanese companies do still dominate the market for commodity memory chips, holding a 62-percent share, compared to 23 percent for U.S. firms. DRAMs, though, have become a very ugly business. Not only are there a number of new Japanese entrants, but the Koreans and Taiwanese have also jumped into the game. Furthermore, sales of memory chips sank 17 percent last year. So more and more companies are chasing less and less money. As it turns out, the exodus of American companies from this high-anxiety, low-margin market—decried at the time by the industrial-policy crowd—looks in retrospect like a smart business move. (Interestingly, Intel, a major backer of Sematech and of industrial policy in general, was one of the first companies to walk away from DRAMs.)
Meanwhile, American companies have been thriving in those supposedly marginal "specialty" markets derided by the Defense Science Board. U.S. companies currently hold 48 percent of the market for complex logic chips, compared to 43 percent for the Japanese, and the American lead is increasing. Worldwide sales in this area jumped some 15 percent last year and are now one-third larger than sales of commodity memory chips. And unlike look-alike commodity chips, the distinctive features of logic chips allow their sellers to command big price premiums that translate into high profit margins.
What happened? Why were the DRAM devotees so wrong? The answer has to do with a revolution in the process of chip design that has dramatically accelerated the product development cycle. Through the use of "silicon compilers"—powerful software that automates major aspects of chip design—a small team of engineers using desktop computer workstations can now accomplish in a few months what droves of their colleagues using bulky centralized mainframes would have taken years to complete.
Faster, cheaper chip design has triggered an explosion of new products made for specialized applications. "Because of the new design tools, there has been over a tenfold rise in the number of chip designs generated every year—from around 10,000 in the mid-'80s to over 100,000 today," says Gilder. "And all these new designs tend to be unique and thus for higher value-added products."
As a result, generic chips needing customizing software are giving way to already-customized hardware. This growing specialization of production means that design innovation, rather than manufacturing process, has become the key to creating new value for customers.
"We have just gone through a period over the past 30 years where fielding competitive electronic products has in large measure been determined by the ability to innovate in semiconductor manufacturing," explains Andrew Rappaport, president of The Technology Research Group and a leading industry consultant. "Today, though, competitiveness is much more determined by being able to transform broadly available semiconductor technology into some kind of useful product."
This novel situation may be described as a "silicon glut." The race to cram more and more transistors onto a single chip continues, but the field is crowded, the pace is brutal, and the rewards of winning are greatly diminished. "The ability to manufacture semiconductors has evolved to the point that marginal improvements in [the] manufacturing process don't necessarily contribute to increased value in all semiconductors," says Rappaport. Furthermore, "these improvements spread so quickly that the advantages to the company or country that is first to achieve this marginal advantage are very short-lived."
The competitive edge in chipmaking now belongs to companies that can take advantage of the silicon glut, not those that simply add to it. By focusing their resources on specialized, design-intensive logic chips, American companies have exploited the glut and cashed in accordingly.
When high-volume, standardized production was the name of the game, it made sense to think that large, vertically integrated companies had a competitive advantage. Commodity memories still adhere to this production model, so it's not surprising that Japanese conglomerates dominate the market. The growing prominence of design-intensive chips, however, gives the advantage to smaller or more nimble companies that can respond quickly and innovatively to changing market conditions. The silicon glut has played into the strengths of Silicon Valley's entrepreneurial start-up culture.
Indeed, much of the recent growth in the American semiconductor industry has come from new companies. If you take a look at the companies with the highest returns on equity last year, you will see names like Altera, Cirrus Logic, Cypress Semiconductor, Weitek, and Xilinx—names that no one had ever heard of back in the mid-1980s, when DRAMs were lost and the sky was falling. All of these companies have made their money by coming up with specialized products, particularly design-intensive logic devices.
Of these five, only Cypress actually manufactures its own chips. The other companies are "fabless" chipmakers; these firms contract out production of their designs to other chipmakers with excess fab capacity or to specialized "foundries" that only make other companies' chips. Rappaport, a strong (and controversial) booster of the fabless chipmakers, notes that "so long as aggregate investment in semiconductor manufacturing technology worldwide is large enough to continue the evolution of technology in a predictable and rapid way, then there's very little reason for a company exploiting that technology to control the investment in how that manufacturing improvement occurs."
Rappaport cites Xilinx, a company that makes logic devices that customers can program (and reprogram) for themselves: "Although the company farms out production mostly to Japan, it retains all the intellectual assets that have been created around its chip architectures. The low-value, commodity aspects have been farmed out to Japan, where the fabs make very low margins on the work they do for Xilinx. Xilinx, meanwhile, has increased margins and volumes on its own business, and therefore has more to invest in its own R&D."
Sematech has been at best irrelevant to this exciting revitalization of American chipmaking. Indeed, to the extent that Sematech has had any impact at all, it has actually hindered these positive developments by favoring older, more-established companies over innovative newcomers.
There is a fault line in the industry that separates the established, billion-dollar giants—the "dinosaurs," as T.J. Rodgers calls them—from newcomers like Cypress and the fabless companies. In contrast to the entrepreneurial dynamism on the newcomer side, the establishment side is characterized by sluggishness and even stagnation. (Intel, with its commanding position in microprocessors, is a spectacular exception.)
Six of the eight largest U.S. chipmakers lost money last year. Advanced Micro Devices, with $1.1 billion in annual sales, has made a profit only two out of the past six years; National Semiconductor, with $1.7 billion in annual sales, has been profitable only once in the past six years. In addition to their financial woes, the established giants also share another common trait: They all belong to Sematech.
When asked about niche companies like Cypress and the fabless chipmakers, the Council on Competitiveness's Daniel Burton gives the typical pro-industrial policy response. "My hat's off to them," he says, "but I think that especially in the semiconductor market not everyone can be a niche player." And Sematech, it seems, is designed to subsidize the companies that eschew niches.
But even with a restricted membership—its dues structure favors large companies—Sematech was supposed to generate "spillovers" that would benefit not only the larger chip companies but the entire U.S. economy. Yet Sematech's members appear to have kept spills to a minimum. Specifically, in testimony before Congress last July, Rodgers accused Sematech of 1) giving its members unfair advantages through "technology holdback" agreements and 2) using "kickback" schemes to funnel money back to members.
Rodgers tells the following story: "Back in 1989 my engineers were visiting a company called Westech, which makes wafer polishing equipment. They came back and told me that there was a piece of equipment in a back room they weren't allowed access to. When they asked about the equipment, all they got were evasive answers. I then called the president and V.P. of sales, but I got the same waffling answers."
The next year, Rodgers solved the mystery when he became involved as an expert witness in litigation between Sematech and Travis County, Texas. (Sematech was claiming that as a "charitable organization" it was exempt from local taxes.) Rodgers was able to get access to subpoenaed documents, including an R&D contract between Sematech and Westech regarding the equipment that Cypress had been unable to purchase.
According to Rodgers, the contract contained an explicit requirement that Westech withhold equipment developed under this contract "for a period of one year from the time of normal introduction" from all companies except Sematech members. Rodgers says he saw another similar contract with Westech, as well as one with Applied Materials, a major equipment supplier. Sematech admits that these "holdback" agreements existed and still defends them. "Since members were paying $100 million [for new technology], they should get the first chance to buy and use it," says Sematech spokesman Buddy Price. Sematech's current policy, however, allows companies contracting with Sematech to sell to anyone at any time.
Rodgers also objects to the way Sematech's equipment R&D contracts benefit Sematech members. In a number of "equipment improvement projects," Sematech has purchased newly developed machines and installed them at the fabs of member companies. In exchange for free use of a machine, the member evaluates it and reports back to Sematech. (At the end of the project, the member has the option of buying the machine from Sematech at a discounted price.) In other words, Sematech members are getting state-of-the-art equipment, free or on the cheap, that they might well have bought anyway.
Using his access to court documents, Rodgers got the details on one such deal, which involved the installation of an advanced wafer-etching machine from Applied Materials at Intel. For assisting in the project, Intel got a $1.5 million piece of equipment for free. It also received $700,000 to defray installation costs and another $1.2 million to evaluate the machine. This bag of goodies was equivalent to a 23-percent reduction of Intel's yearly Sematech dues.
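For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch. The three dollar amounts are the figures reported above; Intel's actual annual dues are not disclosed anywhere in the record, so the final number is merely an inference from the stated 23-percent equivalence.

```python
# Back-of-the-envelope check on the Intel deal described above.
# The three dollar amounts are the figures reported in the article;
# Intel's actual annual dues are NOT disclosed, so the final number
# is an inference from the stated 23-percent equivalence.

equipment = 1_500_000    # wafer-etching machine received free
installation = 700_000   # payment to defray installation costs
evaluation = 1_200_000   # payment to evaluate the machine

total_benefit = equipment + installation + evaluation
print(f"Total benefit to Intel: ${total_benefit:,}")  # $3,400,000

# If $3.4 million equals 23 percent of yearly dues, the implied dues are:
implied_dues = total_benefit / 0.23
print(f"Implied annual dues: ${implied_dues:,.0f}")  # roughly $14,782,609
```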
Other examples can be cited. National Semiconductor received a chemical vapor deposition system (worth around $2 million) from Applied Materials and a vertical furnace (worth over $400,000) from SVG. In the largest such project to date, Sematech installed 14 GCA steppers (about $2 million apiece) at four different members' fabs.
These examples suggest that the $120 million figure for the annual industry contribution to Sematech's budget may be misleading; when the giveaways are deducted from dues, the government proportion of the consortium's budget may rise substantially. Unfortunately, this public-private partnership considers its financial records proprietary. When REASON filed a Freedom of Information Act request for the audited annual reports Sematech is required by law to submit to the secretary of defense, we were first bounced from the GAO to the DOD and back, then told that Sematech is considered a government contractor—not a government agency—and therefore isn't subject to FOIA requirements. As of this writing, REASON has yet to obtain the records, although our efforts are continuing; congressional scrutiny during the upcoming hearings on Sematech's reauthorization could also turn up the elusive financial reports.
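The point about the funding split can be made concrete with a small illustrative calculation. In the sketch below, the $100 million federal contribution and the $120 million industry dues figure come from the article; the giveaway totals are purely hypothetical placeholders, since the real numbers are precisely what the unobtainable financial reports would reveal.

```python
# Illustrative sketch of how netting giveaways out of dues shifts the
# government's effective share of Sematech's budget. The $100M federal
# and $120M industry figures appear in the article; the giveaway totals
# below are hypothetical placeholders, not reported numbers.

def government_share(federal: float, dues: float, giveaways: float) -> float:
    """Government share of net funding once giveaways are deducted from dues."""
    return federal / (federal + (dues - giveaways))

FEDERAL = 100_000_000
DUES = 120_000_000

# Taking the industry dues figure at face value:
print(f"Nominal government share: {government_share(FEDERAL, DUES, 0):.1%}")
# -> 45.5%

# Hypothetically, if equipment giveaways returned $30M to members:
print(f"Effective share: {government_share(FEDERAL, DUES, 30_000_000):.1%}")
# -> 52.6%
```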
Sematech portrays itself as helping "the U.S. semiconductor industry" take on the Japanese. In fact, though, Sematech looks much more like a clique of large, established, high-profile companies using government money to fend off not just foreign competition but also up-and-coming rivals here at home. Burton, of the Council on Competitiveness, says that Sematech has made U.S. companies more competitive with their domestic rivals, not just with foreign firms. It "is not a trade protection group," he insists.
Buzzwords like "precompetitive" notwithstanding, Sematech is yet another example of government meddling in an industry to pick winners and losers. And as usual, the bureaucrats have backed the wrong horse. Within the sunrise industry of microelectronics, the government has managed to locate and subsidize the sunset companies, to the detriment of those young and dynamic companies that represent the industry's future.
But even if Sematech is high-tech pork barrel, what about national security? Maybe a government-funded consortium doesn't make sense from an economic point of view, but isn't it worth spending some money to preserve a high-volume chip manufacturing base in this country to service our defense needs—particularly in light of reports that all that wonderful Gulf War weaponry was chock full of Japanese semiconductors?
Sematech would certainly have you think so. Flip through its PR literature, and you'll find constant references to national defense. One old annual report goes so far as to feature this quotation from Shintaro Ishihara's The Japan That Can Say No, blown up and set against a red background: "Should Japan decide to sell its chips to the Soviet Union instead, that would instantly alter the balance of military power." Sematech even has a martial flag: It's a reworking of the coiled rattlesnake, "Don't Tread on Me" flag, except this time the snake has 14 rattles. More seriously, Sematech got its government funding in large part on the strength of national security concerns, as reflected in the Defense Science Board report.
Even before the Cold War ended, this line of argument was utterly without merit. Now it borders on the disingenuous. Simply put, the United States is not now, nor in the foreseeable future will it be, militarily "dependent" on imported semiconductors or vulnerable to supply disruptions.
In the first place, the chips in which the Japanese are dominant—commodity memories mass-produced for the commercial market—are of limited military significance. The semiconductors that fly aboard advanced aircraft and missiles are highly specialized devices designed and tested to withstand radiation exposure, dramatic temperature fluctuations, and other extreme conditions completely irrelevant to production for the commercial market. This kind of specialized production remains an American specialty.
The chips that do the real heavy lifting on high-tech weapons systems are not commodity memories but complex logic chips. Devices that can compute trajectory, control guidance systems, recognize targets, and so forth contribute the real systems value to "smart" weapons—not bulk memory. The American lead in these products remains undisputed.
Furthermore, even in DRAMs the U.S. military has plentiful sources of supply. Among American companies, Texas Instruments, Motorola, and Micron all sell DRAMs commercially. Additionally, IBM and AT&T are large captive producers; they could certainly provide chips if the need arose. A number of Japanese manufacturers—NEC, Mitsubishi, and Fujitsu—make DRAMs on U.S. soil. Finally, if the Defense Department wants to import chips, it can turn to suppliers in Europe, Korea, and Taiwan in addition to Japan. It is pure nonsense to think that the United States could get cut off from all of these sources.
Sematech's original plan was to take government assistance only for the first five years, after which it would be self-sufficient. Like many of Sematech's plans, though, this one has changed.
With federal funding due to expire at the end of 1992, the consortium has decided that five more years of "partnership" with the Defense Department will be needed. The new five-year plan envisions continued funding levels of $100 million a year. This time, Sematech will put a greater emphasis on software and so-called computer-integrated manufacturing—yet another change of direction.
Whatever Sematech's future, its past has at least served to reaffirm some tried-and-true rules regarding government intervention:
• Rule number one: Whenever government decides to step in and "help" an industry, the effect, whether intentional or not, is usually to preserve the status quo and stifle beneficial change. This isn't because bureaucrats are stupid; it's because of the nature of politics. Government naturally favors interests with political clout, which means interests that are well-organized and well-funded. Accordingly, the political contest between industry giants—with their trade associations and Washington offices and PR offensives—and the entrepreneurial start-ups that are trying to upend them will always be a skewed one.
• Rule number two: "Strategic" industries are a dime a dozen. Every decent lobbyist can come up with several plausible-sounding reasons why the industry he represents is a linchpin of American economic strength and must therefore be preserved at all costs. The only real validation of such claims, though, is ongoing wealth-creation and growth. And if an industry meets this definition of "strategic," it doesn't need government help.
• Rule number three: Patriotism is the last refuge of scoundrels. National security may indeed take precedence over economic considerations, but arguments that the free market is undermining us militarily should be assessed skeptically. In most cases, what is at stake is the security of special interests, not the nation.
• Rule number four: Nothing lasts forever, but "temporary" federal assistance comes close. Whenever government does intervene in an industry, there is almost irresistible pressure for it to remain there. Not only do beneficiaries within the industry become addicted to government support, but bureaucrats become convinced that the industry can't run without them.
Sematech demonstrates that these rules apply just as much to high-tech industries as to agriculture, textiles, steel, automobiles, or any other sector of the economy. With these lessons learned, it's time to pull the plug on Sematech—if rule number four will allow it.
Brink Lindsey is an attorney who represents foreign clients in international trade matters.
The post DRAM Scam appeared first on Reason.com.
Now, however, free-traders are on the offensive with two major initiatives: 1) multilateral talks to liberalize global trade under the Uruguay Round of General Agreement on Tariffs and Trade (GATT) negotiations; and 2) a proposed North American free-trade agreement involving the United States, Canada, and Mexico. Though these are separate and distinct sets of negotiations, their fates became linked in the recent congressional battle over extending fast-track authority.
Fast-track authority allows the president to negotiate trade agreements that are then subject to an up-or-down vote in Congress (no congressional amendments to the agreement are allowed). Fast track is essential if these negotiations are to proceed, since no country will bargain seriously with us if the resulting agreement can be picked apart on Capitol Hill. Opposition to extension, spearheaded by organized labor, was fierce; nevertheless, heavy White House pressure enabled both the Uruguay Round and the North American FTA to clear this preliminary hurdle. Whether agreements can actually be reached, and whether these agreements can get through Congress, remain highly uncertain.
The current free-trade strategy suffers from more than these practical difficulties, however. The strategy is flawed in its basic conception. Whether unwittingly or through an excess of cleverness, supporters of free trade have adopted the same basic assumptions as those that underlie the protectionist position. As a result, these supporters are unlikely to succeed in any significant opening of markets. Furthermore, this strategy may actually end up making matters worse.
The case for free trade, when made properly, doesn't depend on whether other countries adopt like policies. In other words, it pays for us to maintain open markets for foreign imports and investment even when our trading partners refuse to reciprocate. Similarly, the government should allow Americans to enjoy cheap imports even when their low price is due to foreign government subsidies. In sum, free trade enriches and invigorates our economy regardless of whether the rest of the world engages in "fair" trade, however that term is defined.
This unilateralist position, however, isn't advocated by supporters of free trade, at least not within the public-policy establishment. Instead, the official free-trade position—as evidenced by both the Uruguay Round and the North American FTA—is to negotiate with other countries for reciprocal reductions in trade-impeding and trade-distorting measures. The underlying assumption here is that free trade is worthwhile only if everybody does it. More specifically, these negotiations proceed on the premise that open-import markets are the price a country must pay to obtain freer access for its exports.
Thus, in the terminology of GATT, trade liberalization is expressly characterized as a "concession," as if it's contrary to national interest, has no intrinsic merit, and can only be justified by reciprocal concessions. Likewise, free-trade agreements are based on a one-for-one swap of market access; it's presumed that a country has no interest in removing its trade barriers unless another country agrees to reciprocate. In this regard, it's telling that the metaphor of disarmament is now commonly used in discussing trade negotiations. Carla Hills, the U.S. trade representative and the leading official voice of free trade, has repeatedly pledged that she won't engage in "unilateral disarmament" during trade talks.
This "imports bad, exports good" mind-set is pure mercantilism and is flatly incompatible with a proper understanding of international trade. Contrary to mercantilist thinking, the biggest gains from trade occur on the import side, since through open markets we are able to buy cheaper and better products than we could make for ourselves. Exports, on the other hand, are desirable primarily because they allow us to pay for more imports. The reciprocity-based approach now adopted by the free-trade side rests on economic assumptions inimical to those who truly support open markets and free trade.
Of course, it's always possible that a policy can work well in practice even when it sounds rotten in theory. Indeed, the reciprocity approach to free trade has a plausible argument to support it: Unilateral free trade is desirable in and of itself, to be sure, but it's just as clear that multilateral free trade is preferable, not only for the world as a whole but also for our own country. A unilateral and unconditional declaration of open markets on our part, though, would rob us of the leverage we need to convince other countries to drop their mercantilist policies. So let's reduce our own trade barriers, but let's do so in a way that persuades other countries to do likewise. Maybe this approach relies on mercantilist assumptions, but if you're going to try to persuade mercantilist countries to change their ways you have to speak their language.
This argument, though plausible, won't bear up under scrutiny. To understand why, it's necessary to look more closely at the background of the current free-trade initiatives. GATT, since its founding in 1947, has served as the basic institutional framework for the postwar era's relatively open world trading system (relative, that is, to the 1930s and 1940s). GATT imposes limits on what member countries can do to impede or distort trade and provides a mechanism for negotiating further reductions in trade restrictions. The key to this mechanism is the "most-favored-nation" principle that says any member country must extend "concessions" to all GATT members on an equal basis.
GATT's primary mission has been to reduce tariffs on manufactured goods, and in that narrow task it has been highly successful. During the course of seven negotiating rounds, average duty rates have dropped from over 40 percent when GATT was founded to around 5 percent today. Unfortunately, loopholes in GATT's coverage and the rise of nontariff trade barriers have severely undermined its effectiveness. GATT is now commonly denigrated as the "General Agreement to Talk and Talk."
From the beginning, GATT made an exception for trade in agricultural products. This was primarily at the insistence of the United States, which needed high tariffs and import quotas to maintain its system of farm price supports. And starting in the 1960s, international trade in textiles has been subject to an increasingly complex web of quantitative restrictions, first under the Short Term Arrangement, then the Long Term Arrangement, and finally the Multifiber Arrangement (currently in its fourth incarnation and about to be extended for a fifth). In these areas, then, the free-trade principle has been abandoned entirely in favor of "managed trade" alternatives.
In addition to opening up these explicit loopholes, countries have also managed to outflank GATT. In other words, they have replaced old-fashioned tariffs and quotas with new, more subtle means of protectionism. Beginning in the 1970s, the United States and the European Community began resorting to "voluntary export restraints" to limit import competition. Under these arrangements, foreign exporters "agree" to place quantitative limits on their shipments to the country in question. Since these restraints are allegedly voluntary, they fall outside GATT strictures against government-imposed import quotas. The United States has used voluntary export restraints to control imports of color televisions, automobiles, steel products, and machine tools.
Meanwhile, the 1980s saw the rise of "unfair trade" laws as a protectionist device. The most prominent of these laws is the antidumping law, under which the government imposes special duties on companies that it says sell for less in the export market than they do at home. Again, the United States and the E.C. have been the most aggressive users of this law, but proliferation is now well under way; at last count, 28 countries had adopted antidumping laws, including even the Soviet Union. In an ironic twist, other countries are now beginning to turn these laws against U.S. and E.C. producers.
By the mid-1980s, then, the effectiveness and prestige of GATT had fallen to an all-time low. The Reagan administration, rather than allowing GATT to trail off into oblivion, decided to commit the United States to leading a major new negotiating round designed to rejuvenate the GATT system. Thus, under strong American encouragement, an eighth round of talks was formally launched in 1986 at a meeting of GATT members in Punta del Este, Uruguay (hence the "Uruguay Round"). The United States pursued an ambitious negotiating agenda; it put forth proposals to bring agriculture and textiles under multilateral control and to extend GATT coverage to such novel areas as intellectual property rights and international trade in services. The Bush administration continued on this same tack, maintaining that the successful completion of the round was its number-one trade policy priority.
It now appears, however, that the final outcome of the Uruguay Round will fall well short of initial expectations. The round was supposed to conclude in December of last year, but negotiations broke down over the E.C.'s refusal to reduce agricultural export subsidies. Talks have started up again, but because the U.S. fast-track negotiating authority was due to expire at the end of May, everything was put on hold by the battle in Congress over fast-track extension. Even now that fast-track authority has been restored, agriculture is by no means the only subject that remains to be settled; there are 15 separate negotiating areas, and in most of them there are significant, and in some cases huge, differences that must be bridged. Thus, there is every indication that if agreements in these areas can finally be reached, they will contain only modest, incremental reforms, not the sweeping liberalization envisioned earlier in the process.
There is little reason to think that future rounds can do any better. Even if the United States were totally committed to the goal of worldwide free trade—and at present, of course, there are deep political divisions over whether achieving this goal is desirable—it quite simply lacks the clout to bring the other 96 GATT member countries along with it. Economic power is too evenly distributed these days and mercantilist policies are too deeply entrenched. Recall that trade negotiations are now discussed in terms of "disarmament." Try to picture arms-control talks involving 97 countries, and you'll have a fairly good idea of the likelihood of major breakthroughs in GATT.
As a parallel to negotiations in GATT, the Reagan administration also pursued liberalization on a bilateral or regional basis by entering into free-trade agreements, or FTAs. The United States negotiated its first free-trade agreement—with Israel—in 1984, though here foreign-policy rather than economic considerations were primary. In 1988, the United States signed an FTA with Canada, its biggest single trading partner. Here again, the Bush administration is following in its predecessor's footsteps, having announced plans for a North American free-trade zone including Canada and Mexico.
Free-trade agreements offer a workable method for achieving significant liberalization between the signatory countries. Unlike GATT with its 97 members, a meaningful FTA is much easier to negotiate, given that it involves only two or three countries. The U.S.-Canada FTA, for example, provides for the elimination of all tariffs within 10 years. Also included, for the first time ever in a trade agreement, are rules guaranteeing market access in a wide range of service industries. Furthermore, the agreement prohibits most import and export restrictions on energy trade.
Nevertheless, FTAs have serious limitations. First, even under the best conditions they will still leave substantial trade barriers in place. The United States retains the right to bring antidumping and other unfair-trade cases against Canada; U.S. dairy quotas are still intact; the Canadian beer industry continues to receive protection; and the agreement leaves major industries such as trucking, railroads, and shipping uncovered. In any agreement with Mexico, restrictions on foreign investment in Mexico are sure to remain in force. On the U.S. and Canadian sides, there would assuredly be mechanisms to discourage the relocation of labor-intensive manufacturing operations into low-wage Mexico.
Such holdover protectionism isn't the main problem, though; the bigger difficulty is that there simply aren't that many viable FTA partners. Consequently, attempts to push the FTA concept beyond its limited range are likely to lead in one of two directions. First, there is the prospect of watered-down agreements, with perhaps tariff elimination and a few cosmetic changes here and there, but otherwise leaving the status quo comfortably in place. This might be what a U.S.-E.C. free-trade agreement would look like.
The second, more dangerous possibility is that the United States will negotiate FTAs that are in fact managed-trade accords, with market access in various sectors doled out on a quota basis. Such full-scale cartelization of trade would be far worse than the mess we have now, and it's by no means unthinkable that we could wind up with such agreements, particularly with Japan.
In sum, the reciprocity-based free-trade strategy provides only limited benefits; it tinkers at the margins. Meanwhile, it diverts political energy away from dismantling all the protectionist policies from which our country currently suffers. Worse, it actually reinforces existing trade barriers and even increases the likelihood that politicians will add new ones.
In the first place, trade negotiations strengthen current mercantilist policies by turning them into "bargaining chips." Even the most blatantly indefensible measures are shielded from reform in order to avoid "unilateral disarmament." A perfect example of this can be seen in last year's budget agreement. Congress agreed to cut farm subsidies by roughly $15 billion from projected levels over the next five years. But lawmakers expressly conditioned many of these cuts on achieving a trade agreement that reduces E.C. subsidies. If this condition isn't met, up to $7 billion in subsidies could be reinstated.
The reciprocity approach also increases the chances of additional protectionism. Starting with a policy that conditions improved access to American markets on reciprocal improvements in other countries, it's only a small step to conditioning existing market access on liberalization abroad. Indeed, Washington has already taken this step. Under heavy pressure from the protectionist Congress, both the Reagan and Bush administrations have made frequent use of Section 301 of the trade law and a number of related provisions, all of which authorize retaliation against our trading partners unless they reduce barriers to American exports.
The United States has managed to wring some rather minor concessions out of other countries by means of such threats. On the other hand, in some instances intimidation has failed and the United States has actually instituted new trade barriers as punishment. It is entirely possible that one of these disputes could someday blow up into a large-scale trade war. By accepting the principle of reciprocity, free-traders have fallen into a trap in which it becomes extremely difficult to resist ever-more-aggressive applications of that principle.
More generally, the reciprocity-based free-trade strategy helps to frame the whole trade debate in terms that favor the protectionist lobby. The special interests that seek a protectionist bailout rarely admit that they were out-competed by their foreign rivals. Rather, they claim that they are the victims of "unfair competition." "We aren't afraid to face foreign competition," they say. "But it has to be on a level playing field." A policy of trade negotiations lends credence to this ploy by focusing attention on other countries' import barriers and "unfair" practices. Protectionists need only point to the latest U.S. negotiating agenda to make a case that the playing field is indeed unfairly slanted against American companies.
Logically, of course, this argument doesn't wash. Trade barriers abroad have little or no connection to whether U.S. industries can beat out foreign competition in the American market. The Big Three aren't losing out to Honda, Toyota, and Nissan because they are unable to export to Japan. Likewise, "unfair competition" is generally a bogeyman. The death of the U.S. consumer-electronics industry can't be blamed on Japanese "dumping" or industrial policy. Japanese companies dominate the market for the simple reason that they offer high-quality products at reasonable prices. Nothing unfair about that.
Logic, however, isn't what matters here; rhetoric is. Protectionists need some sort of cover to disguise their special-interest pleading. The reciprocity approach to free trade, by ceding to protectionists the "fairness" issue, helps to give them the cover they need.
Advocates of free trade should therefore get off the reciprocity bandwagon. Rather than worrying about the rest of the world, we should urge the unilateral elimination of all U.S. trade restrictions. Such a dramatic shift in policy would not only be best for the American economy generally but would also promote U.S. exports. Moreover, unilateral free trade may actually do more to encourage liberalization abroad than reciprocity ever could. By opening our markets unconditionally, we would allow American companies to purchase components and materials at the lowest possible prices, thus lowering the companies' costs and making them more competitive in export markets. In addition, when we import more, our trading partners earn more dollars in foreign exchange, thereby allowing them to buy more goods and services from American firms.
And if the United States did adopt an unconditional free-trade policy, its calls for open markets around the world would gain the moral authority that they now lack. Instead of trying to cajole, browbeat, and coerce other nations into curtailing their mercantilist policies, our approach would be much simpler, more eloquent, and ultimately more persuasive: We would practice what we preach and lead by the power of example. No doubt many countries would continue in their foolish and destructive policies. The experience of the past two years in Eastern Europe, however, should teach us not to discount the possibility of dramatic moves toward freedom, nor to underestimate the importance of role models in inspiring and guiding those moves.
What are the chances that the United States will embrace unilateral free trade? At present, absolutely nil. The chances won't get any better, though, until supporters of open markets give up the reciprocity temptation. It's time to concentrate our energies on perfecting liberty here at home.
Brink Lindsey is an international trade attorney with Willkie Farr & Gallagher in Washington, D.C.
The post Reciprocity for Disaster appeared first on Reason.com.
In Agents of Influence, Pat Choate adds an embellishment to the Japanophobia that characterizes so much of current protectionist rhetoric. The Japanese are no longer just destroying our economy; now, as a means to that goal, they are subverting the integrity of our political system as well. The book points fingers and names names, and as a result it has stirred up a fair amount of controversy. But like most conspiracy theories, Choate's thesis doesn't hold up under scrutiny.
Choate was until recently a policy analyst at TRW and is a longtime advocate of protectionist policies. He claims that the Japanese now spend $400 million a year in this country on lawyers, lobbyists, and public relations, manipulating political processes and public opinion to favor Japanese over American economic interests. This massive infiltration campaign, says Choate, "threatens our national sovereignty."
But the villains in this book aren't so much the Japanese themselves, who after all merely play the American game of influence buying as they find it. The real bad guys are the American "agents of influence" who choose to represent Japanese interests. Choate heaps particular scorn on former U.S. officials who now argue the free-trade position for Japanese and other foreign clients and "have supported the progressive cheapening—even the fundamental corruption—of the value of national service that used to guide the conduct of our public life."
In the first place, Choate completely mischaracterizes the nature and effectiveness of Japanese influence. He gives the impression that the 1980s were a time when the United States, under the spell of foreign-paid lobbyists, tore down its trade barriers, while meekly refusing to criticize the protectionist policies of Japan and other trading partners.
In fact, precisely the opposite occurred. The last decade marked a sharp increase in U.S. protectionism, as well as a growing intolerance of obstacles that hindered U.S. exports to other countries' markets. From 1980 to 1988, the percentage of imports into the United States subject to substantial trade restrictions rose from 12 percent to 23 percent. Washington aimed many of these restrictions specifically at Japan. During the '80s, the United States forced Japan into imposing export limits on automobiles, steel, and machine tools. Over the same period, the United States also levied special "anti-dumping" duties on dozens of different Japanese products. In addition, the threat of anti-dumping liability forced price increases on an untold number of other Japanese goods.
At the same time, the United States hammered away relentlessly at real and perceived market barriers in the Japanese economy. Most notably, Washington named Japan an "unfair trader" under "Super 301" and threatened it with sanctions unless it removed trade barriers on supercomputers, satellites, and forest products. As an adjunct to the Super 301 process, the United States and Japan held shotgun negotiations pursuant to the so-called Structural Impediments Initiative. In these talks the United States called for changes in such purely domestic policies as antitrust enforcement, public works spending, and regulation of the distribution system, on the ground that these policies indirectly impeded U.S. exports. Japan's subjection to Super 301 is a telling comment on the relative clout of Japanese influence: the European Community, with which the United States has as many trade disputes as it does with Japan, escaped targeting, simply because the E.C. never would have tolerated such browbeating.
The truth, then, is that the Japanese presence in American politics has been essentially a defensive one, aimed at countering the move toward greater protectionism. There have been a few tactical successes in this effort, but the general pattern has been one of steady reverses.
All of this raises a deeper objection to Choate's thesis: namely, that the Japanese political presence, considered on the whole, has been beneficial rather than harmful. For the United States to maintain open markets helps Japanese and other foreign exporters, to be sure. But more important, it helps American consumers by giving them the freedom to buy the best products at the best prices; it also helps American industry by promoting greater productivity and efficiency under the spur of foreign competition. On matters of trade policy, then, there is a congruence between foreign "special interests" and America's long-term general interest.
Until recently, there was very little organized private lobbying in favor of free trade. The main identifiable beneficiaries of open markets—consumers—are far too dispersed to make an effective lobbying force. Free trade has therefore had to depend on ideology and foreign-policy concerns for its support. By contrast, those who stand to gain from high import barriers—industries facing foreign competition—know who they are and can easily organize to apply concerted political pressure. Accordingly, in the game of insider politics, the protectionists have long held the advantage. The results can be seen in our trade policies on agriculture, textiles, and steel, where powerful domestic lobbies have succeeded in exempting themselves from the laws of the marketplace, beggaring the rest of the country for their own private benefit.
But the rise of foreign lobbies has helped to counteract this built-in bias. Furthermore, a growing portion of U.S. industry depends on foreign sources for components and materials and therefore has a vested interest in open markets. Thus, foreign producers and downstream U.S. industries, as well as American importers and retailers, now form a coalition of interests with sufficient political resources to at least challenge the cozy entrenchment of the protectionist lobby.
John Stuart Mill once noted that "a good cause seldom triumphs unless someone's interest is bound up in it." The good cause of free trade, always befriended by economists, is at last attracting some support with political muscle.
The trench warfare of special-interest lobbying is admittedly an ugly sight, whether the checks to pay for it are cut here or in Japan. The current system is riddled with abuses—in particular, the "revolving door" between public service and private lobbying. But these problems aren't confined to foreign lobbying; they apply also to domestic special interests and on a much larger scale. To single out the "Japan Lobby," as Choate does, is to engage in invidiously selective indignation.
There is a very simple solution to any problems caused by foreign influence over trade policy. All we need to do is tear down the many barriers that still hinder access to our markets and stop spoiling for a trade war with other countries. The "Japan Lobby" would close shop overnight.
Brink Lindsey is an attorney specializing in international trade regulation with Willkie Farr & Gallagher in Washington, D.C.
The post Japanese Agent Man appeared first on Reason.com.