The man often called the father of 19th-century central banking, Walter Bagehot, is best known for his maxim that, as former chairman of the Federal Reserve Ben Bernanke summarizes, “In a [financial] panic, [central banks should] lend freely at a penalty interest rate to solvent borrowers on good collateral.” This function is what Bagehot meant when he said central banks should act as lender of last resort. Bagehot’s famous book, "Lombard Street," was noteworthy for its critique of the Bank of England’s hesitancy to use this power to mitigate earlier financial crises (in part because, in another echo of the 2008 crisis, the central bank failed to recognize the “magnitude” of changes wrought in a system that had been “fit to regulate a few millions, and yet quite inadequate when it is set to cope with many millions”). Bagehot’s dictum still applies today, although in a world where most credit intermediation is done in the capital markets via securitization, rather than traditional bank lending, it makes more sense to describe the central banker’s role as the “dealer of last resort,” rather than lender.
But the other part of Bagehot’s maxim is equally important: In their 21st-century role as counterparty/dealer/insurer of last resort, central bankers must not simply use their balance sheets indiscriminately to provide a liquidity backstop during downturns. They must also be prepared to proactively charge private market participants variable risk premiums commensurate with the risk of the underlying activity being backstopped. It is questionable whether central banks are doing an adequate job with respect to the latter function, and the global economy could therefore face a reckoning as big as or worse than 2008 as a consequence. To make this case, some background is necessary.
In a recent review of the 2008 financial crisis, commentator Gillian Tett of the Financial Times highlighted a fatal flaw of the securitization model, which seemed like such a great means of dispersing, rather than accentuating, risk: “The idea was a modern twist on the old adage, ‘a problem shared is a problem halved.’ In the past, banks had gone bust when borrowers defaulted because the pain was concentrated in one place; slicing and dicing spread the pain among so many investors that it would be easier to absorb. Or so the theory went. But there was a catch. Since the techniques that bankers were using to slice and dice the loans were desperately opaque, it was hard for anyone to know who held the risks. Worse still, because bankers were so excited about repackaging debt, they were stimulating a new mania for making loans, seemingly with government blessing. What all this financial innovation concealed was an old-fashioned credit boom, particularly in American subprime mortgages.”
By the end of the 2008 lending cycle, many of these tranched subprime mortgages had become the financial equivalent of nuclear waste. And because all private banking and non-banking counterparties knew what kind of toxic junk was loaded up on their collective balance sheets, and quite legitimately assumed that their counterparties were in a comparably dire position, the credit markets froze. Nobody trusted anybody else. That is, until the Fed (the one institution able to produce dollars at will and therefore never at risk of insolvency) was willing to act as “dealer” or “counterparty of last resort,” thereby unfreezing the system. Recognition that only the monopoly supplier of currency (whether dollars, euros, yen, pound sterling, etc.) can credibly backstop a credit system is a major reason why central banks have remained a focal point in today’s existing financial architecture, however much many decry central banking overreach, or dream of a world full of privately created cryptocurrencies, free of government control.
Exploring the extent to which our global financial system has changed in the past 10 years, Martin Wolf, chief economics commentator at the Financial Times, recently posed the question: “Have politicians and policymakers tried to get us back to the past or go into a different future?” Wolf concluded that powerful interests acted as much as possible to restore the status quo ante, even though on the face of it, the gravity of the 2008 financial crisis argued for more robust wholesale structural changes going forward. Self-interested parties certainly played a large role in getting us back to the past, although Nicholas Gruen, CEO of Lateral Economics, has also made the case that would-be financial reformers in aggregate failed to achieve any kind of intellectual consensus “beyond a vision of reform that was already stale at its zenith in the 1980s and 90s.” That left the ground wide open for the preservation of the status quo ante as the default option. In the aftermath of the crisis, there were no figures of the stature and intellectual magnitude of J.M. Keynes, who, along with Harry Dexter White, helped to construct the new post-war financial and monetary architecture known as Bretton Woods, which superseded the pre-WWII system and whose success contributed to decades of unparalleled financial stability.
Why not, then, simply go back to a Bretton Woods-type model if it worked so well in the past? For one thing, Bretton Woods rested in part on a now non-existent gold standard, in a world characterized by regulated and restricted capital flows, a far cry from our 21st-century turbo-charged financial world of floating fiat currencies and almost total capital mobility.
Another major problem preventing its resurrection is that the financial system has evolved considerably, especially in the last three decades or so, notably in regard to credit intermediation. In the U.S., for example, Professors L. Randall Wray and Eric Tymoigne estimate that as much as 75 percent of all credit intermediation is now done via securitization rather than traditional bank lending. Likewise, a significant proportion of credit generation now comes from traditionally non-financial corporations (e.g., GM, GE), which means that simply reviving Glass-Steagall is no “one size fits all” panacea: it covered a bank like Wells Fargo, but it did not regulate the leasing activities of General Motors Acceptance Corporation (GMAC). Glass-Steagall was designed for a time when commercial banks largely monopolized lending, and the only “slicing and dicing” that occurred took place in butcher shops or kitchens. By contrast, much of today’s credit issuance comes from non-bank financial institutions via an interconnected system of securities-trading dealers, who in turn are backstopped by central banks.
The failure to appreciate how far our credit system had diverged from the classic banking models taught in Economics 101 textbooks (as well as the sheer speed and magnitude at which the crisis spread after Lehman’s demise) meant that solutions crafted during the early stages of the crisis were often improvised and “worked” only to the degree that the actions undertaken by central bankers corresponded with the underlying structural realities of the current financial system. Professor Perry Mehrling was among the first economists to recognize that the Federal Reserve was largely able to unblock the frozen credit system when it placed itself in the role of “dealer of last resort” to ensure not only the liquidity, but the very solvency, of securities markets that had locked up. As I noted in an analysis of Mehrling’s book, "The New Lombard Street," a few years ago:
“In essence, [Mehrling] proposed a modern day version of the old ‘Bagehot Rule’ — lend freely, but at a high rate, in a crisis. Mehrling argued that simply flooding the system with money market liquidity, which is what the Fed initially did, failed to mitigate the intensifying financial crisis, because it wasn’t getting to the capital markets. That’s why we need a credit insurer of last resort, to put a floor on the value of the best collateral in the system. In Mehrling’s view, the 21st century equivalent of the Bagehot Rule should be: Insure freely but at a high premium. The Fed, in other words, should be backstopping the market for securitized products simply because the government is the only entity which can freely create new net financial assets and thereby cover the potential insurance liabilities during a crisis, in a way which AIG clearly could not.”
As the monopoly suppliers of currency, central banks are in an ideal position to provide liquidity to markets or address solvency issues, in a manner that a private sector entity cannot. By the same token, the Fed (to take the most prominent central bank) must charge premiums that properly reflect the risks involved in the underlying transaction.
One prominent illustration from the 2008 crisis is the case of the insurance company AIG and the role that credit default swaps (CDSs) played in its demise. A CDS is an instrument used by a buyer of corporate or sovereign debt, designed to protect against loss arising from default by the issuer of the bonds. In theory, the swap acts like an insurance policy, the only difference being that (in the words of Mehrling) insurance is “organized as a network of promises to pay in the event that someone else doesn’t pay whereas our own world [the credit default swap] is organized as a network of promises to buy in the event that someone else doesn’t buy.” Of course, as we learned from the AIG fiasco, it is impossible to act as a credible writer of insurance if you don’t have the financial resources to make good on the payment if and when disaster strikes. Unable to make good on the payments arising from the swaps, AIG eventually had to be bailed out.
By contrast, the Treasury/Federal Reserve (it’s useful to think of them as a unified whole in this instance) is uniquely placed to make the CDS a credible instrument, as it can always create the dollars required to make good the payment in the event of default (or financial accidents). But for the CDS system to work going forward, the Fed (or any other central banker/dealer of last resort) has to “charge” the right premium to reflect the risks being undertaken by the parties who enter into a contract to buy and sell the CDSs. And if that means charging such an extortionate premium that the underlying activity (or event) isn’t undertaken, so much the better for financial stability.
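To make the pricing logic concrete, here is a deliberately stylized one-period sketch in Python of what “charging the right premium” might look like. The default probability, loss severity, and penalty multiple are all hypothetical assumptions for illustration, not anything a central bank actually computes this way:

```python
# Illustrative sketch only: a stylized one-period CDS premium,
# not a market pricing model. All parameter values are hypothetical.

def fair_cds_spread(default_prob: float, loss_given_default: float) -> float:
    """Actuarially fair one-period spread: expected loss per unit of notional."""
    return default_prob * loss_given_default

def penalty_spread(default_prob: float, loss_given_default: float,
                   penalty_multiple: float = 2.0) -> float:
    """A Bagehot-style 'high premium': the fair spread scaled by a penalty
    loading, so that riskier underlying activity becomes costly to insure."""
    return penalty_multiple * fair_cds_spread(default_prob, loss_given_default)

# A hypothetical risky tranche: 5% chance of default in the period,
# 60% of notional lost if default occurs.
fair = fair_cds_spread(0.05, 0.60)   # ~0.03, i.e. roughly 300 basis points
penal = penalty_spread(0.05, 0.60)   # ~0.06, i.e. roughly 600 basis points
print(f"fair spread: {fair:.4f}, penalty spread: {penal:.4f}")
```

The point of the penalty multiple is the one made above: if the loading makes the protection expensive enough, the underlying risky activity may simply not be undertaken, which is itself a stabilizing outcome.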
Consider the example of earthquake insurance. A recent report in the New York Times on this subject quoted Dave Jones, the California insurance commissioner, who expressed alarm that just 13 percent of California homeowners have earthquake insurance. Part of that is likely complacency, given the relative paucity of serious earthquakes over the past few decades. But it also reflects the sheer cost of the insurance, in particular the huge attendant risks and costs associated with a disaster on the scale of, say, the 1989 earthquake that rocked the Bay Area. Today, if one buys an expensive home in a San Francisco neighborhood such as the Marina, the likelihood is that earthquake insurance premiums will be exorbitant. As a result, many homeowners who have bought properties there post-1989 have simply elected to go uninsured, accepting that in the event of another major earthquake, there will be no insurance to cover the loss. The risk here is borne solely by the homeowner, rather than the insurance company. Bad for the former, good for the latter in terms of mitigating a spillover financial impact beyond the individual homeowner (although problematic for any California banks with real estate exposure in those areas).
Ideally, one would prefer that people not buy homes in areas prone to earthquakes, or construct new developments near the floodplains of New Orleans or Houston. Unfortunately, many people wish to live in these areas (because they are pleasant when not subject to flooding or earthquakes). If home buyers do wish to purchase there, notwithstanding the considerable risks to the property, those risks should be properly reflected in the insurance premiums. If the premiums are sufficiently high that they influence the behavior of the actors concerned (either by deterring the purchase, or by prompting buyers to knowingly assume the risk entirely), then the risk premium is likely priced correctly. At the very least, it will filter out a number of ill-suited buyers who lack the means to cover the potential loss, in the way that (unfortunately) was not the case in 2008. In the case of the uninsured buyers, we have a simple case of caveat emptor, whereby the affected party bears the loss entirely, with no attendant socialization of risk.
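The deterrence argument can be reduced to simple arithmetic. The sketch below, using entirely hypothetical figures for an earthquake-zone home, shows how a premium loaded above expected annual loss can tip a buyer's decision:

```python
# Hypothetical numbers throughout: how a risk-reflecting premium
# can deter a purchase in a disaster-prone area.

def expected_annual_loss(event_prob: float, home_value: float,
                         damage_fraction: float) -> float:
    """Probability of the disaster in a year, times the loss if it happens."""
    return event_prob * home_value * damage_fraction

def buyer_proceeds(premium: float, max_acceptable: float) -> bool:
    """The buyer goes ahead only if the quoted premium is tolerable."""
    return premium <= max_acceptable

value = 2_000_000                                # hypothetical home price
eal = expected_annual_loss(0.01, value, 0.5)     # ~$10,000/yr expected loss
premium = 1.5 * eal                              # loading for tail risk/capital
# Premium (~$15,000) exceeds what this buyer will tolerate ($8,000),
# so the risky purchase is deterred rather than subsidized.
print(premium, buyer_proceeds(premium, max_acceptable=8_000))
```

If instead the buyer proceeds uninsured, the loss stays with the buyer alone, which is precisely the caveat emptor outcome described above.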
Other financial reform ideas that advocate higher equity requirements for banks are excellent proposals, but insufficient in and of themselves, for two reasons. First, non-bank financial institutions also extend credit, and those institutions won’t be subject to the same rigorous “set aside” capital requirements. Second, while higher equity requirements (or higher capital adequacy ratios) provide buffers that help a financial institution absorb losses incurred in the normal course of business (without resort to bailouts), they do not proactively restrict or disincentivize the activity that gives rise to the need for those higher equity safeguards in the first place. By contrast, a central bank that charges sufficiently high risk premiums could, in theory, help to curb that activity simply by virtue of the cost of the premiums demanded.
Which raises another point discussed by Gillian Tett: “It pays to remember that the roots of the word ‘credit’ comes from the Latin ‘credere,’ meaning ‘to believe’: finance does not work without faith.” The same applies to regulation. Central bankers should approach financial innovations with much more skepticism. They must insist that the private credit practitioners advocating further innovation demonstrate that it is consistent with broader public purpose. Innovation for innovation’s sake is not a reason to allow it.
More fundamentally, central bankers and other public officials have in the past been notoriously prone to “regulatory capture,” as Professor Bill Black, a former Savings & Loan regulator, has illustrated. The degree to which this insidious tendency toward accommodating, rather than seriously regulating, persists will obviously affect the ability of central bankers to act credibly as dealer/counterparty/insurer of last resort going forward. There is a theoretical elegance to the “dealer of last resort” concept, and it also works with the grain of the existing financial architecture (and is therefore less disruptive). But it won’t work if the “gamekeepers” continue to let the animals run riot. And if the world’s monetary authorities fail during the next crisis, the preservation of the prevailing financial architecture will become politically unsustainable. Inevitably, a much more wholesale restructuring of finance is coming if central bankers do not embrace this counterparty role as an umpire, rather than an enabler. And this time, the self-interested parties won’t have a hope of stopping it.
Marshall Auerback is a market analyst and commentator.