You might not be able to tell by looking at it on the page, but deregulation has become a four-letter word in Washington. In October’s vice presidential debate, Sen. Joe Biden (D-Del.) practically spat it out: “If you need any more proof positive of how bad the economic theories have been, this excessive deregulation, the failure to oversee what was going on, letting Wall Street run wild, I don’t think you needed any more evidence than what you see now.” Speaker of the House Nancy Pelosi (D-Calif.) echoed the sentiment in her floor speech before the first vote on the bailout bill: “It’s really an anything-goes mentality. No regulation, no supervision, no discipline.”
In reality, regulation as a whole has thrived under President George W. Bush. Between 2001 and fiscal year 2009, the federal regulatory budget increased 65 percent in real terms, to about $17.2 billion. (For more see “Bush’s Regulatory Kiss-Off,” page 24.)
But what about Wall Street in particular? What specific acts of deregulation are being blamed for the financial crisis, and what role if any did they play? Let’s look at the accusations one by one:
1) The partial repeal of the Glass-Steagall Act in 1999 allowed commercial banks to get involved in risky investments, such as mortgage-backed securities.
The Glass-Steagall Act of 1933 prohibited investment banks from acting as commercial banks, and vice versa. Signed by Bill Clinton (who continues to defend the legislation), the Gramm-Leach-Bliley Act of 1999 repealed those aspects of the law. Many on the left blame at least part of our current woes on that move. With the repeal, Barack Obama said in a March economic address, “we have deregulated the financial services sector, and we face another crisis.”
In fact, multiple exemptions to Glass-Steagall had been granted for years before Gramm-Leach-Bliley was signed into law. Most European financial markets, not normally known as more “deregulated” than the U.S., never separated commercial and investment banks in the first place. And there is no correspondence between institutions that benefited from the repeal and those that recently collapsed. Institutions that didn’t take advantage of the Glass-Steagall repeal, such as Lehman Brothers and Bear Stearns, were the ones that failed most spectacularly, in part because they lacked the stability provided by commercial banking deposits.
If anything, Gramm-Leach-Bliley may have softened the blow. The George Mason economist Tyler Cowen argues that Gramm-Leach-Bliley made way for more diversity in the financial sector, and “so far in the crisis times the diversification has done considerably more good than harm.” Under the Glass-Steagall rules, Bank of America and J.P. Morgan Chase would not have been able to acquire Merrill Lynch and Bear Stearns. Nor would Goldman Sachs and Citigroup have their current unified form, which may have helped them survive.
There is a significant body of academic work supporting this idea. The Rutgers economist Eugene Nelson White, for example, has found that national banks with security affiliates—the sort of institutions Glass-Steagall was designed to prevent—were much less likely to fail than banks without affiliates.
2) The Commodity Futures Modernization Act of 2000 guaranteed that high-risk tools such as credit default swaps remained unregulated, opting instead to encourage a “self-regulation” that never happened.
In late September, Securities and Exchange Commission (SEC) Chairman Christopher Cox estimated the worldwide market in credit default swaps—pieces of paper insuring against the default of various financial instruments, especially mortgage securities—at $58 trillion, compared with $600 billion in the first half of 2001. This is a notional value; only a small fraction of that amount has actually changed hands in the market. But the astounding growth of these instruments contributed to the over-leveraging of nearly all financial institutions.
In the late 1990s, the fight over these and other exotic new derivatives pitted a committed regulator named Brooksley E. Born, head of the Commodity Futures Trading Commission, against the powerhouse triumvirate of Federal Reserve Chairman Alan Greenspan, Treasury Secretary Robert E. Rubin, and Securities and Exchange Commission Chairman Arthur Levitt Jr. Unsurprisingly, Greenspan, Rubin, and Levitt won. The result was the Commodity Futures Modernization Act of 2000, which gave the SEC only limited anti-fraud oversight of swaps and otherwise relied on industry self-regulation. The Washington Post has closely chronicled the clash, concluding that “derivatives did not trigger what has erupted into the biggest economic crisis since the Great Depression. But their proliferation, and the uncertainty about their real values, accelerated the recent collapses of the nation’s venerable investment houses and magnified the panic that has since crippled the global financial system.” In other words: The absence of a regulation didn’t cause the crisis, but it may have exacerbated it.
Part of the problem was a technicality. Instruments such as credit default swaps aren’t quite the same thing as futures, and therefore do not fall under the Commodity Futures Trading Commission’s purview. But the real issue was that Greenspan, Rubin, and Levitt were concerned that the sight of important figures in the financial world publicly warring over the legality and appropriate uses of the derivatives could itself create dangerous instability. The 2000 law left clearinghouse and insurance roles to self-regulation. Without a clearinghouse, the market for credit default swaps was opaque, and no one ever really knew how extensive or how worthless the derivatives were.
In congressional testimony on October 23, Greenspan seems to have admitted error: “Those of us who have looked to the self-interest of lending institutions to protect shareholders’ equity, myself included, are in a state of shocked disbelief,” he told the House Committee on Oversight and Government Reform. But Greenspan still wasn’t convinced that regulation is the solution: “Whatever regulatory changes are made, they will pale in comparison to the change already evident in today’s markets,” he said at the same event. “Those markets for an indefinite future will be far more restrained than would any currently contemplated new regulatory regime.”
3) A 2004 rule change by the SEC permitted big firms to keep too much debt on their balance sheets.
In 2004, the international Basel Committee on Banking Supervision issued Basel II, an accord on banking regulation. In its wake, the SEC revised its regulations to allow five broker-dealer firms with more than $5 billion in capital—Lehman Brothers, Bear Stearns, Merrill Lynch, Goldman Sachs, and Morgan Stanley—to participate in a voluntary program that changed the way their debt was calculated. The existing net-capital rules required firms to keep their debt-to-net-capital ratios below 12-1 and to issue warnings if they started to get close to that. Under the new rules, broker-dealers increased these ratios significantly. Merrill Lynch, for instance, hit 40-1. This was possible because the rule changed the formula for risk calculations and instituted more subjective, labor-intensive SEC oversight in place of hard and fast guidelines. “They constructed a mechanism that simply didn’t work,” former SEC official Lee Pickard told The New York Sun on September 18. “The SEC modification in 2004 is the primary reason for all of the losses that have occurred.”
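Why those ratios matter can be shown with a back-of-the-envelope calculation (the arithmetic below is purely illustrative and not drawn from any firm’s actual balance sheet): the higher the debt-to-net-capital ratio, the smaller the decline in asset values needed to wipe out a firm’s equity entirely.

```python
# Illustrative sketch: how a debt-to-net-capital ratio determines the
# asset-price decline that exhausts a firm's equity. Numbers are hypothetical.

def wipeout_decline(debt_to_capital_ratio):
    """Fraction by which total assets must fall to erase all net capital.

    With capital normalized to 1, assets = debt + capital = ratio + 1,
    and equity is gone once assets lose 1 unit of value.
    """
    return 1.0 / (debt_to_capital_ratio + 1.0)

# Compare the old 12-1 ceiling with the 40-1 level Merrill Lynch reached.
for ratio in (12, 40):
    drop = wipeout_decline(ratio)
    print(f"At {ratio}-1 leverage, a {drop:.1%} asset decline erases all capital")
```

At 12-1 the cushion is a decline of roughly 7.7 percent; at 40-1 it shrinks to about 2.4 percent, which is why the leveraged-up firms were so fragile when mortgage assets soured.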
The SEC believed that it could more effectively and subtly manage risk under the new regime. Instead, the five firms took advantage of the new scattered oversight to extend themselves further than they could under the previous rules. In short, the commission—not to mention quite a few banks—bit off more than it could chew. As The Wall Street Journal editorialized in October: “As for the SEC, if commissioners took on a massive burden in 2004 without realizing they had signed up to safeguard the world’s financial system, then they overreached. But they sure didn’t ‘deregulate.’ ”
There is a sense in which the change was a deregulation. It did, after all, loosen a rule. But it did so in the context of what Adam C. Pritchard of the University of Michigan Law School calls a “rather Rube Goldberg apparatus for regulating financial services, with regulatory functions allocated to an alphabet soup of regulatory agencies” full of perverse regulatory incentives—including several new incentives in the rule change itself that encouraged dishonesty and obfuscation in SEC reporting. This is the kind of false, constrained deregulation that gives free markets a bad name.
It may actually have been a new regulation that kicked off the crisis. In 2007, the Financial Accounting Standards Board issued a statement about mark-to-market practices, FAS 157, clarifying the ways that publicly traded companies should value their assets. Under the new rules, assets had to be valued at the price they could fetch if sold today. If the market for a particular type of asset, such as credit default swaps, happened to be illiquid for the moment, the accounting value was now zero, regardless of any inherent worth. This rule change was a follow-on to the Sarbanes-Oxley reforms of 2002, where the idea was to stop companies from inserting optimistic guesses about the worth of their investments into their balance sheets as Enron had done, a practice that was sometimes affectionately described as “mark-to-make believe.”
When investments that probably have long-term value are assessed at zero in the short term, balance sheets and capital ratios turn ugly and panicked selling ensues to make them prettier. (For more on this, see “Anatomy of a Breakdown,” page 26.) This is what happened with collateralized debt obligations, stuffed with subprime mortgages, which suddenly lacked for buyers. Steve Forbes put it best: “How can you mark to market something when there’s no longer any market?”
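The mechanics can be sketched with a toy balance sheet (all figures invented for illustration): under a strict mark-to-market rule, an asset whose market has frozen is carried at zero, even if it is still generating payments and likely has long-term value.

```python
# Illustrative sketch (hypothetical figures): the same portfolio valued at
# face value versus marked to the last observable market price, as under a
# strict FAS 157-style mark-to-market rule.

portfolio = [
    # (asset, face_value, last_market_price)
    ("Treasury bonds",       100, 100),   # deep, liquid market
    ("Corporate bonds",      100,  95),   # trading at a modest discount
    ("Subprime-backed CDOs", 100,   0),   # no current bids: marked to zero
]

face_total = sum(face for _, face, _ in portfolio)
marked_total = sum(price for _, _, price in portfolio)

print(f"Face value of portfolio:   {face_total}")
print(f"Mark-to-market value:      {marked_total}")
```

On these made-up numbers the marked value is 195 against a face value of 300; it is that kind of sudden write-down, rippling through capital ratios, that triggered the forced selling described above.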
4) Through all this time, Fannie Mae and Freddie Mac were allowed to run wild.
Unlike the banks and other market institutions that went down in the crisis, the government-sponsored enterprises Fannie and Freddie were always creatures of the federal government. As such, they were regulated less than their fully private counterparts when it came to such crisis-impacting phenomena as derivatives trading and capital requirements. Because of their size and politicized culture, they were able to fend off periodic attempts at reform. And because of the government’s implicit (and eventually explicit) guarantees of their multi-trillion-dollar portfolios, they lacked the discipline of worrying about failure.
Letting Freddie and Fannie get away with murder wasn’t deregulation. It was bad governance. And letting deregulation take the primary blame for a credit-fueled housing bubble and its aftermath isn’t an argument. It’s misdirection.
Katherine Mangu-Ward is an associate editor at reason.