Many popular and academic commentators identify deregulation as a cause of the 2007-08 financial crisis. Some argue that the Gramm-Leach-Bliley Act of 1999 (GLBA) and the Commodity Futures Modernization Act of 2000 (CFMA) removed barriers to risk-taking by commercial and investment banks, while others contend that these statutes limited regulators’ ability to respond to changing market conditions. A more general argument is that stringent regulation of banking from the New Deal to the late 1970s produced a quiet period in which there were no systemic banking crises, but subsequent deregulation led to crisis-prone banking.
This Article examines the deregulation hypothesis in detail and concludes that it is incorrect. GLBA and CFMA did not remove existing restrictions that would have prevented the principal practices implicated in the subprime crisis, but instead codified the status quo. Although the two statutes prevented regulators from banning affiliations between commercial banks and securities firms and curbing over-the-counter derivatives markets, those actions would likely not have prevented the crisis or significantly reduced its severity.
The Article further argues that the era of stable banking was the result of a benign and predictable macroeconomic environment, not regulation of deposit interest rates. That era ended with the severe inflation and interest rate volatility of the 1970s. Policy makers had either to ease restrictions on the interest rates banks could pay their depositors or to force savers to lend to banks at negative real rates of return. Interest rate risk caused both bank failures and bank deregulation.
Paul G. Mahoney, Deregulation and the Subprime Crisis, 104 Virginia Law Review 235–300 (2018).