Proponents of narrow banking have argued that lender-of-last-resort policies by central banks, along with deposit insurance and other government interventions in the money markets, are the primary causes of financial instability. However, as we show in this post, non-bank financial institutions (NBFIs) triggered a financial crisis in 1772 even though the financial system at that time had few banks and deposits were not insured. NBFIs profited by funding risky, longer-dated assets with cheap short-term wholesale funding and, when they eventually failed, the authorities felt compelled to rescue the financial system.
Did the 2007-09 financial crisis or the regulatory reforms that followed alter how banks change their underwriting standards over the course of the business cycle? We provide some simple, “narrative” evidence on that question by studying the reasons banks cite when they report a change in commercial credit standards in the Federal Reserve’s Senior Loan Officer Opinion Survey. We find that the economic outlook, risk tolerance, and other real factors generally drive standards more than financial factors such as bank capital and loan market liquidity. Those financial factors have mattered more since the crisis, however, and their importance increased further as post-crisis reforms were phased in during the mid-2010s.
U.S. households accumulated record-high levels of debt in the 2000s and then began a process of deleveraging following the Great Recession and financial crisis.