Austerity never works: Deficit hawks are amoral -- and wrong

The 1 percent and the financial class caused the Great Recession. So why do we keep allowing them to shape policy?

Published May 5, 2013 4:00PM (EDT)

Grover Norquist, President of Americans for Tax Reform (AP/Yuri Gripas)

In this, the fifth year of a prolonged downturn triggered by a financial crash, the prevailing view is that we all must pay for yesterday’s excess. This case is made in both economic and moral terms. Nations and households ran up unsustainable debts; these obligations must be honored — to satisfy creditors, restore market confidence, deter future recklessness and compel people and nations to live within their means.

A phrase often heard is “moral hazard,” a concept borrowed by economists from the insurance industry. In its original usage, the term referred to the risk that insuring against an adverse event would invite the event. For example, someone who insured a house for more than its worth would have an incentive to burn it down. Nowadays, economists use the term to mean any unintended reward for bad behavior. Presumably, if we give debt relief to struggling homeowners or beleaguered nations, we invite more profligacy in the future. Hence, belts need to be tightened not just to improve fiscal balance but as punishment for past misdeeds and inducement for better self-discipline in the future.

There are several problems with the application of the moral hazard doctrine to the present crisis. It’s certainly true that under normal circumstances debts need to be honored, with bankruptcy reserved for special cases. Public policy should neither encourage governments, households, enterprises or banks to borrow beyond prudent limits nor make it too easy for them to walk away from debts. But after a collapse, a debt overhang becomes a macroeconomic problem, not a personal or moral one. In a deflated economy, debt burdens undermine both debtors’ capacity to pay and their ability to pursue productive economic activity. Intensified belt-tightening deepens depression by further undercutting purchasing power generally. Despite facile analogies between governments and households, government is different from other actors. In a depression, even with high levels of public debt, additional government borrowing and spending may be the only way to jump-start the economy’s productive capacity at a time when the private sector is too traumatized to invest and spend.

The idea that anxiety about future deficits harms investor or consumer confidence is contradicted by both economic theory and evidence. At this writing, the U.S. government is able to borrow from private money markets for 10 years at interest rates well under 2 percent and for 30 years at less than 3 percent. If markets were concerned that higher deficits 5 or even 25 years from now would cause rising inflation or a weaker dollar, they would not dream of lending the government money for 30 years at 3 percent interest. Consumers are reluctant to spend and businesses hesitant to invest because of reduced purchasing power in a weak economy. Abstract worries about the federal deficit are simply not part of this calculus.

“Living within one’s means” is an appealing but oversimplified metaphor. Before the crisis, some families and nations did borrow to finance consumption — a good definition of living beyond one’s means. But this borrowing was not the prime cause of the crisis. Today, far larger numbers of entirely prudent people find themselves with diminished means as a result of broader circumstances beyond their control, and bad policies compound the problem.

After a general collapse, one’s means are influenced by whether the economy is growing or shrinking. If I am out of work, with depleted income, almost any normal expenditure is beyond my means. If my lack of a job throws you out of work, soon you are living beyond your means, too, and the whole economy cascades downward. In an already depressed economy, demanding that we all live within our (depleted) means can further reduce everyone’s means. If you put an entire nation under a rigid austerity regime, its capacity for economic growth is crippled. Even creditors will eventually suffer from the distress and social chaos that follow.

Look more closely at moral hazard, distinguishing ex ante from ex post, and you will find that blame is widely attributed to the wrong immoralists. Governments and families are being asked to accept austerity for the common good. Yet the prime movers of the crisis were bankers who incurred massive debts in order to pursue speculative activities. The weak reforms to date have not changed the incentives for excessively risky banker behaviors, which persist.

The best cure for moral hazard is the proverbial ounce of prevention. Moral hazard was rampant in the run-up to the crash because the financial industry was allowed to make wildly speculative bets and to pass along risks to the rest of the society. Yet in its aftermath, this financial crisis is being treated more as an object lesson in personal improvidence than as a case for drastic financial reform.

Austerity and its alternatives

The last great financial collapse, by contrast, transformed America’s economics. First, however, the Roosevelt administration needed to transform politics. FDR’s reforms during the Great Depression constrained both the financial abuses that caused the crash of 1929 and the political power of Wall Street. Deficit-financed public spending under the New Deal restored growth rates but did not eliminate joblessness. The much larger spending of World War II — with deficits averaging 26 percent of gross domestic product for each of the four war years — finally brought the economy back to full employment, setting the stage for the postwar recovery.

By the war’s end, the U.S. government’s public debt exceeded 120 percent of GDP, almost twice today’s ratio. America worked off that debt not by tightening its belt but by liberating the economy’s potential. In 1945, there was no panel like President Obama’s Bowles-Simpson commission targeting the debt ratio a decade into the future and recommending 10 years of budget cuts. Rather, the greater worry was that absent the stimulus of war and with 12 million newly jobless GIs returning home, the civilian economy would revert to depression. So America doubled down on its public investments with programs like the GI Bill and the Marshall Plan. For three decades, the economy grew faster than the debt, and the debt dwindled to less than 30 percent of GDP. Finance was well regulated so that there was no speculation in the public debt. The Department of the Treasury pegged the rate that the government would pay for its bonds at an affordable 2.5 percent. The Federal Reserve Board provided liquidity as necessary.

The Franklin Roosevelt era ushered in an exceptional period in the dismal history of debt politics. Not only were banks well regulated, but the government used innovative public institutions such as the Reconstruction Finance Corporation to recapitalize banks and industrial enterprises and the Home Owners’ Loan Corporation to refinance home mortgages. Chastened by the catastrophe of the reparations extracted from Germany after World War I, the victorious Allies in 1948 wrote off nearly all of the Nazi debt so that the German economy could recover and then sweetened the pot with Marshall Plan aid. Globally, the Bretton Woods accord created a new international monetary system that limited the power of private financiers, offered new public forms of credit and biased the financial system toward economic expansion.

In 1936, John Maynard Keynes provocatively called for “the euthanasia of the rentier.” He meant that once an economy was stabilized into a high-growth regime of managed capitalism, combining low real interest rates with strictures against speculation, and using macroeconomic management of the business cycle to maintain full employment, capital markets would efficiently and even passively channel financial investment into productive enterprise. In such a world, there would still be innovative entrepreneurs, but the parasitic role of a purely financial class reaping immense profits from the manipulation of paper would dwindle to insignificance. Legitimate passive investors — pension funds, life insurance companies, small savers, and the proverbial trust accounts of widows and orphans — would reap decent returns, but there would be neither windfalls for the financial middlemen nor catastrophic risks imposed by them on the rest of the economy. Stripped of the hyperbole, this picture describes the orderly but dynamic economy of the 1940s, 1950s and 1960s, a time when finance was harnessed to the public interest, true innovators were rewarded, most investors earned merely normal returns and windfall speculative profits were not available — because the rules of the game gave priority to investment in the real productive economy.

In today’s economy, which is dominated by high finance, small debtors and small creditors are on the same side of a larger class divide. The economic prospects of working families are sandbagged by the mortgage debt overhang. Meanwhile, retirees can’t get decent returns on their investments because central banks have cut interest rates to historic lows to prevent the crisis from deepening. Yet the paydays of hedge fund managers and of executives of large banks that only yesterday were given debt relief by the government are bigger than ever. And corporate executives and their private equity affiliates can shed debts using the bankruptcy code and then sail merrily on.

Exaggerated worries about public debt are a staple of conservative rhetoric in good times and bad. Many misguided critics preached austerity even during the Great Depression. As banks, factories and farms were failing in a cumulative economic collapse, Andrew Mellon, one of America’s richest men and Treasury secretary from 1921 to 1932, famously advised President Hoover to “liquidate labor, liquidate stocks, liquidate farmers, liquidate real estate ... it will purge the rottenness out of the system. High costs of living and high living will come down. People will work harder, live a more moral life.” The sentiments, which today sound ludicrous against the history of the Depression, are not so different from those being solemnly expressed by the U.S. austerity lobby or the German Bundesbank.

The great conflation

Austerity economics conflates several kinds of debt, each with its own causes, consequences and remedies. The reality is that public debt, financial industry debt, consumer debt and debt owed to foreign creditors are entirely different creatures. The prime nemesis of the conventional account is government debt. Public borrowing is said to crowd out productive private investment, raise interest rates and risk inflation. At some point, the nation goes broke paying interest on past debt, the world stops trusting the dollar and we end up like Greece or Weimar Germany. Deficit hawks further conflate current increases in the deficit caused by the recession itself with projected deficits in Social Security and Medicare. Supposedly, cutting Social Security benefits over the next decade or two will restore financial confidence now. Since businesses don’t base investment decisions on such projections, those claims defy credulity.

Until the collapse of 2008, most government debts were manageable. Spain and Ireland, two of the alleged sinner nations, actually had low ratios of debt to gross domestic product. Ireland ran up its public debt bailing out the reckless bets of private banks. Spain suffered the consequences of a housing bubble, later exacerbated by a run on its government bonds. The United States had a budget surplus and a sharply declining debt-to-GDP ratio as recently as 2001. In that year, thanks to low unemployment and increasing payroll tax revenues, Social Security’s reserves were projected to increase faster than the claims of retirees.

The U.S. debt ratio rose between 2001 and 2008 because of two wars and gratuitous tax cuts for the wealthy, not because of an excess of social generosity. The deficit then spiked mainly because of a dramatic falloff in government revenues as a result of the recession itself. The sharp increase in government debt was the effect of the collapse, not the cause.

The United States and other nations had far higher ratios of public debt to GDP at different points in their histories, and those debts did not prevent prosperity — as long as other sensible policies were followed. Britain’s debt was well over 200 percent of GDP after the Napoleonic Wars, on the eve of the Industrial Revolution. It rose to more than 260 percent at the end of World War II, a period that ushered in the British economy’s best three decades of performance since before World War I.

Along with government borrowing, consumer debt is the other villain of the orthodox account. Supposedly, people went on a borrowing binge to finance purchases they couldn’t afford, and now the piper must be paid. This contention is a half-truth that leaves out two key details.

One is the worsening economic situation of ordinary families. In the first three decades after World War II, wages rose in lockstep with productivity. As the economy, on average, became more prosperous, that prosperity was broadly shared. American consumers took out mortgages to buy homes (with very low default rates) but engaged in little other borrowing. However, earnings stagnated in the 1970s, and that trend worsened after 2001. Nearly all the productivity gains of the economy went to the top 1 percent. Wages began to lag because of changes in America’s social contract. Unions were weakened. Good unemployment insurance and other government support of workers’ bargaining power eroded. High unemployment created pressure to cut wages. Corporations that had once been benignly paternalistic became less loyal to their employees. Deregulation undermined stable work arrangements. Globalization on corporate terms made it easier for employers to look for cheaper labor abroad.

During this same period, housing values began to increase faster than the rate of inflation, as interest rates steadily fell after 1982. Many critics ascribe the housing bubble to the subprime scandal, but in fact subprime loans accounted for just the last few puffs. The rise in prices mostly reflected the fact that standard mortgages kept getting cheaper, thanks to a climate of declining interest rates. Low-interest mortgage loans meant that more people could become homeowners and that existing homeowners could afford more expensive houses. With 30-year mortgages at 8 percent, a $2,000 monthly payment finances about a $275,000 home. Cut mortgage rates to 4 percent and the same payment finances roughly a $420,000 home. Low interest rates bid up housing prices. And the higher the paper value of a home, the more one can borrow against it. (It’s possible to temper asset bubbles with regulatory measures, such as varying down-payment requirements or cracking down on risky mortgage products. But the Fed has resisted using these powers.)
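The arithmetic behind that comparison is the standard formula for a fully amortizing fixed-rate loan. Below is a minimal sketch of it, using the illustrative $2,000 payment above and assuming a 30-year term; the figures are rounded and purely illustrative.

```python
# Sketch of the standard fixed-rate amortization arithmetic used above.
# A monthly payment m at monthly rate r over n months supports a principal of
#   m * (1 - (1 + r) ** -n) / r

def affordable_principal(monthly_payment, annual_rate, years=30):
    """Loan principal that a fixed monthly payment supports on a fully amortizing loan."""
    r = annual_rate / 12      # monthly interest rate
    n = years * 12            # number of monthly payments
    return monthly_payment * (1 - (1 + r) ** -n) / r

for rate in (0.08, 0.04):
    print(f"{rate:.0%} for 30 years: a $2,000 payment finances about ${affordable_principal(2000, rate):,.0f}")
# Roughly $273,000 at 8 percent and $419,000 at 4 percent: the same payment
# supports a far larger loan, and that extra buying power gets bid into house prices.
```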

The combination of these two trends — declining real wages and inflated asset prices — led the American middle class to use debt as a substitute for income. People lacked adequate earnings but felt wealthier. A generation of Americans grew accustomed to borrowing against their homes to finance consumption, and banks were more than happy to be their enablers. In my generation, second mortgages were considered highly risky for homeowners. The financial industry rebranded them as home equity loans, and they became ubiquitous. Third mortgages, even riskier, were marketed as “home equity lines of credit.”

State legislatures, meanwhile, paid for tax cuts by reducing funding for public universities. To make up the difference, they raised tuition. Federal policy increasingly substituted loans for grants. In 1980, federal Pell grants covered 77 percent of the cost of attending a public university. By 2012, this was down to 36 percent. Nominally public state universities are now only 20 percent funded by legislatures, and their tuition has trebled since 1989. By the end of 2011, the average student debt was $25,250. In mid-2012, total outstanding student loan debt passed a trillion dollars, leaving recent graduates weighed down with debt before their economic lives even began. This borrowing is anything but frivolous. Students without affluent parents have little alternative to these debts if they want college degrees. But as monthly payments crowd out other consumer spending, the macroeconomic effect is to add one more drag to the recovery.

Had Congress faced the consequences head-on, it is hard to imagine a deliberate policy decision to sandbag the life prospects of the next generation. But this is what legislators at both the federal and state levels, in effect, did by stealth. They cut taxes on well-off Americans and increased student debts of the non-wealthy young to make up the difference. The real debt crisis is precisely the opposite of the one in the dominant narrative: efficient public investments were cut, imposing inefficient private debts on those who could least afford to carry them.

During this same period, beginning with the Reagan presidency, other government social protections were weakened and employer benefits such as retirement and health plans became less reliable. People were thrown back on what my colleague Tamara Draut calls “the plastic safety net” of credit card borrowing. In short, debt became the economic strategy of struggling workaday Americans. For the broad middle class, the ratio of debt to income increased from 67 percent in 1983 to 157 percent in 2007. Mortgage debt on owner-occupied homes increased from 29 percent to 47 percent of the value of the house. When housing values collapsed, debt ratios increased further.

From the 1940s through the 1970s — a period when real wages and homeownership rates steadily rose — the habit of the first postwar generation had been to pay down mortgages until homes were owned free and clear and then to use the savings to help finance retirement. By contrast, the custom of the financially strapped second postwar generation, who came of age in the 1970s, 1980s and 1990s, was to keep refinancing their mortgages, often taking out cash with a second mortgage as well.

Increasingly, young adults facing income shortfalls turned to credit cards and other forms of short-term borrowing. By 2001, the average household headed by someone between 25 and 34 carried credit card debt of over $4,000 — twice as much as in 1989 — and was devoting a quarter of its income to interest payments. As Senator Elizabeth Warren of Massachusetts has documented, most of the debt increase went to life’s basic necessities, not luxuries. As health insurance coverage dwindled, the biggest single category was medical debt.

As a matter of macroeconomics, the practice of borrowing against assets sustained consumption in the face of flat or falling wages — until the music stopped. When housing prices began to tumble, the use of debt to finance consumption did not just halt; the process went into reverse as households had to pay down debt. Rising unemployment compounded the damage. Consumer purchasing power took a huge hit, and the economy has yet to recover from this.

According to the Federal Reserve, household net worth declined by 39 percent from 2007 to 2010. The ratio of debt to household income has declined from a peak of 134 percent in 2007 to about 114 percent in 2012, and it is still falling. Borrowing to sustain consumption is no longer viable.

After the fact, it is too facile to cluck that people who suffered declining earnings should have just consumed less. As a long-term proposition, stagnant wages and rising debts were a dubious way to run an economy, but in a short-run depression, paying down net debt only adds to the deflationary drag. The remedy, however, is not to redouble general austerity but to restore household purchasing power and decent wages with a strong recovery.

The real villain of the story is financial industry debt. During the boom years, investment banks, hedge funds, commercial banks with “off-balance-sheet” liabilities and lightly regulated hybrids such as the insurance giant American International Group (AIG) were typically operating with leverage ratios of 30 to 1 and in some cases of more than 50 to 1. “Leverage” is a polite word for borrowing. In plain English, they borrowed $50 for every $1 of their own capital. They incurred immense debts, substantially in very short-term money-market loans that had to be refinanced daily. In the case of AIG, which underwrote credit default swaps (a kind of insurance but with no reserves against loss), the leverage was literally infinite. When panic set in, the access to credit dried up in a matter of days.
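To see how little margin for error such balance sheets left, consider a stylized calculation (the numbers are illustrative, not drawn from any particular firm): the more an institution borrows against each dollar of its own capital, the smaller the decline in asset values needed to wipe that capital out.

```python
# Illustrative only: how far asset values must fall to erase all capital
# at a given leverage ratio (assets relative to the firm's own capital).

def loss_that_erases_capital(leverage):
    """Fractional decline in asset values that wipes out equity at a given leverage ratio."""
    return 1.0 / leverage

for leverage in (10, 30, 50):
    print(f"{leverage}-to-1 leverage: a {loss_that_erases_capital(leverage):.1%} fall in asset values erases all capital")
# 10-to-1 -> 10.0%, 30-to-1 -> 3.3%, 50-to-1 -> 2.0%
```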

With the collusion of credit rating agencies that blessed their opaque and risky securities with triple-A ratings, these financial engineers sold their toxic products to investors around the world. Sometimes the financial engineers even borrowed money to bet against the same securities they created — marketing them as sound investments while they shorted their own creations. When the boom turned out to be a bubble, the highly interconnected financial system crashed, with trillions of dollars in collateral damage to bystanders.

Innovation, investment and speculation

Apologists for the recent crash argue that all financial innovations are virtuous and that all investments are in a sense speculative. An entrepreneur, after all, is defined as someone who takes a risk. An investor gambles that an enterprise will flourish. Damp down speculation via financial regulation and you will snuff out innovation. As Edward Chancellor, the historian of speculation, archly observed, “The line separating speculation from investment is so thin that it has been said both that speculation is the name given to a failed investment and that investment is the name given to a successful speculation.”

However, a closer look reveals that speculation is not the same as ordinary enterprise. Three telltale features differentiate speculation, especially the most toxic kind, from productive forms of investment. First, speculation is typically done with borrowed money. In finance, there is nothing new under the sun. The financial innovations of recent decades were all variations on techniques that were familiar to 13th-century Venice, the Dutch Republic, Elizabethan England and early America — and all involved very high degrees of borrowing. The degree of leverage was typically concealed or disguised, and for good reason. If the pyramiding and true risks had been understood by investors, they would not likely have parted with their money.

Second, speculations are usually bets on short-term fluctuations in prices or temporary asset inflation (or, in the case of short selling, temporary deflation). Often the speculation itself is designed to promote that inflation. This is known in the trade as “pump-and-dump.”

Third, speculation is all about quick killings. The speculator is often a middleman positioned to exploit privileged knowledge or an outsider with a very short time horizon hoping to game market trends. As Keynes astutely observed, productive investment entails “forecasting the prospective yield of assets over their whole life,” while speculation is merely “forecasting the psychology of the market.” A popular expression on Wall Street during the last financial bubble was “IBGYBG,” which stood for “I’ll Be Gone, You’ll Be Gone” — meaning, “Let’s do this deal before the rubes figure out the game, then quickly cash in and get out before it collapses.”

Nearly all of the supposedly innovative abuses that crashed the financial system in 2008 had antecedents in earlier centuries — before computers: extreme leverage, collateralized debt obligations, speculation in derivatives, insider trading, off-balance-sheet special purpose vehicles and shadow banks not backed by deposits or proper equity. The schemes just went by different names.

Government bond futures were traded almost as soon as the Venetian republic issued debt securities, before 1300. These were the first derivatives, and like all derivatives, they provided an opportunity for concealed leverage and insider trading. On 17th century financial exchanges in Amsterdam and elsewhere, options and futures in products as diverse as whale oil, sugar, silks and herring were used both to hedge investments and to speculate in paper. Securitized loans appeared in the 1600s and regularly recurred. Off-balance-sheet vehicles would have been familiar to William Duer, the failed speculator in Bank of the United States shares, who financed his stock manipulations in the 1790s with personal notes of credit totaling some $30 million. In the 1920s, bank loans to foreign governments were regularly converted to bonds and sold off to unsuspecting clients, often with the sponsoring banks betting against them.

More than 170 years ago, American speculators like Jacob Little, the original Great Bear of Wall Street, and Daniel Drew, known as Ursa Major, were selling stock they didn’t own, hoping to drive down the price so they could then buy it back at a profit — recognizable today as short selling. They were known as bears because they “sold the skin of the bear before they caught the bear.”

Little and Drew were only employing a technique whose first recorded use was on the Amsterdam stock exchange in 1609 by a Flemish speculator named Isaac Le Maire. The 21st century’s shadow banks, unregulated hedge funds and outfits like AIG had exact counterparts in 19th century financial institutions known as agency houses, which made loans but took no deposits, thus evading reserve requirements. This practice was refined in the 1890s with the invention of trust companies, which did most of what banks did but without federal or state charters or reserve requirements. Call loans from brokers to investors who played the stock market on margin date to the 1830s. All of these schemes recurred with new creative concealment in the 1920s. The common elements were extreme leverage, insider trading, misrepresentation of risks to investors and manipulation of prices.

Defenders of financial speculation contended that the fruits of the financial engineering of the 1980s and 1990s — which would lead to the collapse of 2008 — were valuable innovations that increased the economy’s liquidity (a polite word for leverage) and hence made the economy more efficient. The extensive technical literature on the market-enhancing benefits of liquidity was ignorant of economic history and attributed the latest forms of disguised risk to the marvels of the computer. But these techniques were not novel at all: Each one was an Internet-age variation of centuries-old scams. As former Fed chairman Paul Volcker — no radical — observed, the last useful financial innovation was the ATM.

It’s true that all participants in a market economy take risks. But non-speculative investments are of an entirely different character. Patient investors may hope for asset inflation in the sense of capital gains, but they typically anticipate merely a normal rate of return, not a windfall. If investors guess wrong and the investment loses money, they are not contributing to a wider financial disaster. The loss is simply their own. An ordinary manufacturer, wholesaler or proprietor of a small business may borrow money to finance inventory or expansion, but not to play financial markets. All businesses face risks, say, of a bad year or an innovative competitor. But these are fundamentally different from the risks of highly leveraged financial speculation.

Even the occasional outlier entrepreneur, such as a Steve Jobs or a Bill Gates, may earn immense profits, but these derive from genuine productive innovation, not financial speculation. A true venture capitalist who invests his own money in the hope that an innovator will yield high returns is another creature altogether from the leveraged buyout artist looking for a fast gain and tax breaks by using borrowed money to flip control of a company to which he adds little or no value.

By the same token, ordinary commercial bankers never got filthy rich and never crashed the economy. A bank that pays its depositors 4 percent and charges its business borrowers 7 percent will hire loan officers who extend credit with great diligence and care, not traders operating on inside tips and formulas. A bank seeking a normal rate of return to meet expectations of its shareholders and pay for its operating costs cannot afford more than an occasional loan loss. If the bank conducts its business prudently, it has a reasonable expectation that most of its commercial loans will be repaid. Commercial banks typically have leverage ratios of 8 or 10 to 1. Their own capital cushions their lending and tempers their recklessness. Their actions are straightforward and transparent to bank examiners. Even though the business of taking deposits and making commercial loans is leveraged and incurs risks, it is not speculative. If anything, it is rather humdrum. The trouble began when ordinary bankers started envying hedge funds.
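A back-of-the-envelope illustration, using made-up round numbers consistent with the spread and leverage just described, shows why such a bank can absorb only an occasional loan loss.

```python
# Stylized commercial bank (illustrative numbers only): roughly 10-to-1 leverage,
# paying depositors 4 percent and charging business borrowers 7 percent.
capital  = 10.0    # the bank's own capital
deposits = 90.0    # funding from depositors, paid 4%
loans    = 100.0   # commercial loans, earning 7%

interest_income  = loans * 0.07        # 7.0
interest_expense = deposits * 0.04     # 3.6
gross_margin     = interest_income - interest_expense  # 3.4 a year, before operating costs

print(f"annual gross margin: {gross_margin:.1f}")
print(f"loan-loss rate that consumes the year's margin: {gross_margin / loans:.1%}")   # 3.4%
print(f"loan-loss rate that wipes out the bank's capital: {capital / loans:.1%}")      # 10.0%
```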

Homeowners, likewise, may hope that the value of the house increases faster than the general rate of inflation. If it does, that is frosting on the cake. But the cake is what economists call the use value of having an investment that accumulates equity and is also a place to live. Financial speculators — the inventors of the subprime daisy chain — spoiled this system of slow, steady and broadly distributed property wealth accumulation for at least a generation of Americans.

The Glass-Steagall Act of 1933 was a work of political genius and financial radicalism because it separated the speculative part of the economy from the real part. The law constructed a wall between commercial banking and investment banking. Speculators were free to gamble to their hearts’ content, as long as they put only their own money at risk. The rest of the financial economy was freed to perform its essential but less lucrative daily functions of channeling capital to productive investment. With a well-regulated banking sector doing its job, the real economy of the regulated era had no difficulty financing its expansion.

Since the inception of modern capitalism, the central challenge of financial policy in a market economy has been to keep capital costs low for the real economy of factories, farms, consumers and entrepreneurs without allowing that same cheap money to promote asset bubbles and other forms of purely speculative windfall gain. More often than not, financial policy has failed that challenge. Either it has allowed or promoted cheap credit without adequate controls on excessive leverage and speculation, or it has kept credit too tight generally, constraining speculation but choking off the productive economy. Often it has oscillated between those two poles.

Many commentators contend that the great policy error of the decade before the collapse was to allow interest costs to drop to very low levels. That climate of cheap money supposedly bid up asset levels, engendered speculative uses of credit, and fairly invited the crash. That, however, is exactly the wrong lesson to draw. The real economy — as opposed to the financial one — needs cheap capital in order to grow. The lesson of the era of managed capitalism is that the economic sweet spot is the combination of plentiful credit and tight regulation, so that low interest rates finance mainly productive enterprise. The mistake of Federal Reserve chairman Alan Greenspan, of chief economic advisers Robert Rubin and Lawrence Summers, and of others was not to loosen money; it was to loosen regulatory constraints on its speculative use. And this was no innocent technical mistake: It was the result of relentless industry pressure for deregulation coupled with the financial sector’s success in installing allies in key government posts, regardless of whether the administration was nominally Republican or Democratic.

Today’s fiscal alarms are less a legitimate economic concern than an expedient way to starve and stifle government, preserve a lucrative if toxic business model and assure that even minute amounts of inflation do not disturb the comfort of creditors.

The core claim is that budget discipline is the royal road to recovery. However, in a deflated economy, recovery is the precondition for fiscal balance. In the usual framing of the debate, not only are the cause and effect backward, but several distinct issues are being deliberately blurred. They are:

  • How to bring about a rapid and sustainable economic recovery
  • How to relieve private debt burdens that are prolonging the downturn, such as mortgage debt and student debt
  • How to achieve an acceptable level of public debt once the crisis is behind us
  • How to set a level of public spending adequate to address social needs that have only been intensified by the recession’s hardships and budget cuts
  • How to finance those social needs
  • How best to address projected imbalances in our two largest and most redistributive programs of social insurance, Social Security and Medicare
  • How to restore adequate regulation so that the productive economy can have the low interest costs that it needs to encourage growth without promoting the next round of reckless financial speculation

The austerity scenario blurs the short term with the long term, confuses the issue of social insurance reform with the question of the best recovery strategy, makes improbable claims about what is depressing business and consumer confidence, and inverts cause and effect. The level of public spending and the degree of budget balance are two entirely separate issues. A mistaken premise is that high levels of public spending produce high deficits. But a government can have declining domestic spending and rising deficits, as Ronald Reagan showed. Conversely, a country can opt for high spending and low deficits. The Nordic nations, for instance, have prudent fiscal policies yet devote almost half of their GDP to social spending. They pay for that spending with taxes. The real issues are the best path to recovery from crisis, the desired level of budget balance and social spending for the long term, and how that spending is paid for. In a deflated economy, an increase in the short-term deficit to finance investment is better medicine than austerity.

Generational justice reconsidered

At stake in these debates is our economic future. A huge part of the austerity crusade has been based on moral claims of generational justice. We are said to be selfishly passing along massive public debts to our children and grandchildren. As these debts come due and payable, interest rates and taxes will rise, and future generations will suffer reduced living standards because of our own profligacy and shortsightedness. This story has become a staple of popular imagery and political rhetoric. Even the relatively liberal New Yorker magazine, in its October 8, 2012, issue, depicted an elderly man literally taking candy from a baby. As Judd Gregg, a former senator from New Hampshire, warned, “This issue [debt] represents the potential fiscal meltdown of this nation and it absolutely guarantees if it's not addressed that our children will have less of a quality of life than we've had; that they will have a government they can't afford, and that we will be demanding so much of them in the area of taxes that they will not have the money to send their kids to college or buy that home or just live a good quality life.”

The economics of this story are just about backwards. The well-being of our children and grandchildren in 2023 or 2033 is not a function of how much deficit reduction we target or enforce in this decade but of whether we get economic growth back on track. If we cut the deficit, reduce social spending and tighten our belts as the deficit hawks recommend, we will condemn the economy to stagnant growth and flat or declining wages. That will indeed leave the next generation a lot poorer. The existing debt will loom larger relative to the size of the real economy, and there will be too few public funds to invest in the education, employment, job-training and research outlays that our children and grandchildren need.

In the absence of these social supports that gave earlier generations the American promise of upward mobility, young adults will be thrown back on a private, familial welfare state. As Mitt Romney recommended during the 2012 campaign, more young people will borrow from their parents — a splendid strategy if you have affluent parents. Family financial help already gives the children of the affluent a big head start and leaves others to either do without or to incur private debts that indeed lower living standards by burdening young families with interest and repayment obligations. As social resources are starved for funds, the private welfare state enables the affluent to pass along economic advantages to their children in everything from the schools they attend and the enrichment programs in which they partake to the gift of graduating college debt-free, the subsidy of unpaid internships that give a boost up the career ladder and help with down payments on starter homes. Class lines harden, and the children of the nonrich become increasingly disadvantaged. A starved public sector further reduces society’s opportunity institutions.

As noted, the financing of higher education — the great equalizer — has been shifted dramatically from grants-in-aid and cheap public universities to high tuitions and burdensome student loans. The jobs available to the young today are far less likely than a generation ago to include good benefits such as health insurance and pensions. With two-tier wage systems, the incomes of the young are disproportionately lower than those of workers generally. Even though low interest rates seemingly make homeownership a bargain, the inflation in housing prices that occurred in previous decades put housing out of reach for many young families. Recent graduates carrying large student loans have difficulty qualifying for mortgages. Even during the boom years, while homeownership rates were rising generally, they were declining for young adults. Between 1980 and 1990, the homeownership rate for people aged 25 to 34 fell from 52 percent to 45 percent. It rebounded slightly in the hot housing market of the 2000s, only to fall back after the crash. What is destroying the living standards and life prospects of young adults (at least those without rich parents) is not the current deficit or the projection of Social Security costs two or three decades into the future but the bad policies of the present and recent past and the failure to pursue recovery policies.

The effects of prolonged recession extend from young parents to their own children. The work of the Harvard pediatric researcher Jack Shonkoff and others demonstrates the cascading impact of unemployment, income loss and the juggling of multiple jobs on child rearing and on children’s well-being. Parents are less available to be with children and less effective when they are present, and older children are pressed into service to care for younger siblings. Parents are less likely to read to children, to be consistent and loving role models and disciplinarians, to work closely with schools, to be attentive to children’s health and wellness issues, and to be emotionally at peace themselves. There are predictable and documented increases in child abuse and domestic violence.

This is the first postwar recession in which all levels of government have cut rather than increased countercyclical outlays necessary to serve both social and economic purposes. The effect has been concentrated on low-income families. The bipartisan welfare reform program Temporary Assistance for Needy Families, approved by Congress and signed by President Clinton in 1996, was intended to push welfare recipients into work. But it was enacted when the economy was close to full employment and assumed the availability of jobs. Today, with unemployment around 8 percent (and the real number double that when we count people who have dropped out of the work force and part-timers who want but can’t find full-time jobs), welfare no longer provides aid on the basis of need to all who qualify. Only about a quarter of people who are eligible for the program actually get benefits.

Young families are being denied access to the asset accumulation that their parents and grandparents enjoyed. Asset poverty, in turn, affects economic well-being throughout the life course. It means less of a savings cushion for temporary reverses, less money to help one’s children get a good education and less socked away for a decent retirement. This is the real generational injustice of the current crisis. None of it has anything to do with the national debt or the projected shortfall in Social Security. The budget cutting demanded by deficit hawks deprives government of the resources necessary to improve the lives of young adults and families right now.

Despite the scapegoating of Social Security and Medicare, the failure to apply the right remedies to the crisis also harms the older generation. The Federal Reserve is using very low interest rates to keep the economy from sliding further. But near-zero interest rates leave the elderly with almost no return on their savings. Meanwhile, the fiscal crisis has caused state and local governments to cut or underfund pensions for civil servants while private industry has been trimming its labor costs for two decades by phasing out traditional pension plans in favor of plans in which all the risk is borne by workers. The typical worker near retirement age has 401(k) savings sufficient for only a few years of retirement. Though labor force participation rates have generally declined in a climate of high unemployment, increasing numbers of Americans in their seventies are taking typically low-wage jobs, just to make ends meet.

The median income of elderly Americans in 2010 was just $25,704 for men and $15,072 for women. Almost two-thirds of Americans over age 65 rely on Social Security for at least 70 percent of their income. If Social Security and Medicare are cut, this hardship will only increase. Poverty rates among Americans over age 65, after declining steadily since the 1960s, are now once again higher than among the working-age population. Decent treatment of the elderly is also a form of generational justice. Despite a lot of rhetoric about “greedy geezers” harming the young, both generations are victims of bad economics. The real conflict is not old versus young but the top 1 percent versus the rest of society.

The choices we face

The received wisdom today is deeply conservative in three distinct and mutually reinforcing respects. The orthodoxy is conservative in the political sense that creditor self-interests predominate; conservative as a perverse pre-Keynesian economics that ignores the lessons of the past 80 years and promotes self-perpetuating deflation; and conservative in that most of the proposed remedial measures would balance accounts by undermining public programs that are necessary for a more egalitarian form of capitalism.

In principle, we could restore economic growth and fiscal equilibrium with a restructuring of past debts, higher levels of taxing and spending, constraints on the speculative license of creditors, and expansions of the public realm. This alternative is largely absent from the discourse. For financial elites, the splendid irony of the current austerity crusade is that the very people whose financial engineering caused the collapse — people who never much liked an effective public sector or programs like Social Security — are now using the ensuing recession to justify a severe assault on the countervailing public institutions needed to keep their own immense economic and political power in check.

So the world faces a momentous choice: austerity or recovery. Unfortunately, the debate is mostly the sound of one hand clapping. Creditor self-interest dominates public discourse to an extent not seen since the period after World War I, when the victorious nations imposed punitive reparations on Germany and inflicted tight money policies on their own citizens, condemning Europe to two decades of economic misery and seeding a second world war. (Today’s German government, oblivious to the irony, is taking its revenge.) Center-right governments and their business allies are using the alleged fiscal crisis as a pretext for long-sought cuts in social spending that have nothing to do with the causes of the collapse or with its cure. Meanwhile, as the real economic crisis deepens, center-left parties seem unable to propose anything better than a little less of the retrenchment advocated by their political adversaries.

Cut through the welter of detail and the enduring questions are these: After a financial catastrophe, will unrealistic creditor claims be permitted to hobble the future, or will policies emphasize economic recovery? Will defaults on debt be disorderly, inflicting wider economic damage, or will debt relief be carefully restructured in service of efficient renewed growth? Will there be double standards, as in bailouts for banks and corporations but not for homeowners? And will rules be put in place both to ensure wide availability of credit at moderate interest costs and to prevent future abuses so that we get restored growth without repeating the cycle of speculation, bubble, and collapse?

If the austerity-mongers prevail, we will be condemned to debtors’ prison. If we can understand and act on these challenges, we can surmount the current bout of deflation, restore broad prosperity and prevent recurring crisis.

Excerpted from "Debtors' Prison" by Robert Kuttner. Copyright © 2013 by Robert Kuttner. Excerpted by permission of Knopf, a division of Random House, Inc. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher. 


By Robert Kuttner

Robert Kuttner’s new book is “The Stakes: 2020 and the Survival of American Democracy.” He is co-editor of The American Prospect and teaches at Brandeis.


