After the mid-1970s progress against poverty stalled. The 1973 oil crisis ushered in an era of growing inequality interrupted only briefly by the years of prosperity during the 1990s. Productivity increased, but, for the first time in American history, its gains were not shared by ordinary workers, whose real incomes declined even as the wealth of the rich soared. Poverty concentrated as never before in inner city districts scarred by chronic joblessness and racial segregation. America led western democracies in the proportion of its children living in poverty. It led the world in rates of incarceration. Trade union membership plummeted under an assault by big business abetted by the federal government. Policy responded by allowing the real value of the minimum wage, welfare benefits, and other social protections to erode. The dominant interpretation of America’s troubles blamed the War on Poverty and Great Society and constructed a rationale for responding to misery by retrenching on social spending. A bipartisan consensus emerged for solving the nation’s social and economic problems through a war on dependence, the devolution of authority, and the redesign of public policy along market models.
The years after the mid-1970s witnessed a confrontation between massive urban structural transformation and rightward moving social policy that registered in a reconfigured and intensified American poverty in the nation’s cities. It is no easy task to define an American city in the early twenty-first century. Fast-growing cities in the post-war Sun Belt differ dramatically from the old cities of the Northeast and Midwest as any drive through, for example, Los Angeles and Philadelphia makes clear. Nonetheless, all the nation’s central cities and their surrounding metropolitan areas experienced transformations of economy, demography, and space that resulted in urban forms without precedent in history. These transformations hold profound implications for poverty as both fact and idea, and they underscore the need to understand poverty as a problem of place as well as persons. A long tradition of social criticism—from nineteenth-century advocates of slum clearance through the “Chicago school” of the 1920s to the most cutting-edge urban theory of the twenty-first century—presents poverty as a problem of place. In one version, which has dominated discussions, conditions in places—most notably, substandard housing—produce, reinforce, or augment poverty. In an alternate version, poverty is a product of place itself, reproduced independent of the individuals who pass through it. Both versions help explain the link between poverty and the multisided transformation of metropolitan America.
The first transformation was economic: the death of the great industrial city that flourished from the late nineteenth century until the end of World War II. The decimation of manufacturing evident in Rust Belt cities resulted from both the growth of foreign industries, notably electronics and automobiles, and the corporate search for cheaper labor. Cities with economic sectors other than manufacturing (such as banking, commerce, medicine, government, and education) withstood deindustrialization most successfully. Those with no alternatives collapsed, while others struggled with mixed success. Some cities such as Las Vegas built economies on entertainment, hospitality, and retirement. As manufacturing withered, anchor institutions, “eds and meds,” increasingly sustained the economies of cities lucky enough to house them; they became, in fact, the principal employers. In the late twentieth century, in the nation’s twenty largest cities, “eds and meds” provided almost 35 percent of jobs. As services replaced manufacturing everywhere, office towers emerged as the late twentieth century’s urban factories. Services include a huge array of activities and jobs, from the production of financial services to restaurants, from high-paid professional work to unskilled jobs delivering pizza or cleaning offices. Reflecting this division, economic inequality within cities increased, accentuating both wealth and poverty.
The second kind of urban transformation was demographic. First was the migration of African Americans and white southerners to northern, midwestern, and western cities. Between World War I and 1970, about seven million African Americans moved north. The results, of course, transformed the cities into which they moved. Between 1940 and 1970, for example, San Francisco’s black population multiplied twenty-five times and Chicago’s grew five times. The movement of whites out of central cities to suburbs played counterpoint. Between 1950 and 1970, the population of American cities increased by ten million people while the suburbs exploded with eighty-five million.
The idea that the white exodus to the suburbs represented “flight” from blacks oversimplifies a process with other roots as well. A shortage of housing; urban congestion; mass-produced suburban homes made affordable with low interest, long-term, federally insured loans; and a new highway system all pulled Americans out of central cities to suburbs. At the same time, through “blockbusting” tactics, unscrupulous real estate brokers fanned racial fears, which accelerated out-migration. In the North and Midwest, the number of departing whites exceeded the incoming African Americans, resulting in population loss and the return of swaths of inner cities to empty, weed-filled lots that replaced working-class housing and factories—a process captured by the great photographer Camilo Jose Vergara with the label “green ghetto.” By contrast, population in Sun Belt cities such as Los Angeles moved in the opposite direction. Between 1957 and 1990, the combination of economic opportunity, a warm climate, annexation, and in-migration boosted the Sun Belt’s urban population from 8.5 to 23 million.
A massive new immigration also changed the nation and its cities. As a result of the nationality-based quotas enacted in the 1920s, the Great Depression, and World War II, immigration to the United States plummeted. The foreign-born population reached its nadir in 1970. The lifting of the quotas in 1965 began to reverse immigration’s decline. Immigrants, however, now arrived from new sources, primarily Latin America and Asia. More immigrants entered the United States in the 1990s than during any other decade in its history. These new immigrants fueled population growth in both cities and suburbs. Unlike the immigrants of the early twentieth century, they often bypassed central cities to move directly to suburbs and spread out across the nation. In 1910, for example, 84 percent of the foreign born in metropolitan Philadelphia lived in the central city. By 2006 the proportion had dropped to 35 percent. New immigrants have spread beyond the older gateway states to the Midwest and South, areas from which, prior to 1990, immigrants largely were absent. Thanks to labor market networks in agriculture, construction, landscaping, and domestic service, Hispanics spread out of central cities and across the nation faster than any other ethnic group in American history. This new immigration has proved essential to labor market growth and urban revitalization. Again in metropolitan Philadelphia, between 2000 and 2006, the foreign born accounted for 75 percent of labor force growth.
A New York City research report “concluded that immigrant entrepreneurs have become an increasingly powerful economic engine for New York City…foreign-born entrepreneurs are starting a greater share of new businesses than native-born residents, stimulating growth in sectors from food manufacturing to health care, creating loads of new jobs and transforming once-sleepy neighborhoods into thriving commercial centers.” Similar reports came in from around the nation from small as well as large cities and from suburbs.
Suburbanization became the first major force in the spatial transformation of urban America. Although suburbanization extends well back in American history, it exploded after World War II as population, retail, industry, services, and entertainment all suburbanized. In the 1950s, suburbs grew ten times as fast as central cities. Even though the Supreme Court had outlawed officially mandated racial segregation in 1917 and racial exclusions in real estate deeds in 1948, suburbs found ways to use zoning and informal pressures to remain largely white until late in the twentieth century, when African Americans began to suburbanize. Even in suburbs, however, they clustered in segregated towns and neighborhoods. Suburbs, it should be stressed, never were as uniform as their image. In the post-war era, they came closer than ever before to the popular meaning of “suburb” as a bedroom community for families with children. But that meaning had shattered completely by the end of the twentieth century, as a variety of suburban types populated metropolitan landscapes, rendering distinctions between city and suburb increasingly obsolete. The collapse of the distinction emerged especially in older inner ring suburbs where the loss of industry, racial transformation, immigration, and white out-migration registered in shrinking tax bases, eroding infrastructure, and increased poverty.
Gentrification and a new domestic landscape furthered the spatial transformation of urban America. Gentrification may be defined as the rehabilitation of working-class housing for use by a wealthier class. Outside of select neighborhoods, gentrification by itself could not reverse the economic and population decline of cities, but it did transform center city neighborhoods with renovated architecture and new amenities demanded by young white professionals and empty-nesters who had moved in. At the same time, it often displaced existing residents, adding to a crisis of affordable housing that helped fuel homelessness and other hardships.
The new domestic landscape resulted from the revolutionary rebalancing of family types that accelerated after 1970. In 1900 married couples with children made up 55 percent of all households, single-mother families 28 percent, empty-nesters 6 percent, and nonfamily households (mainly young people living together) 10 percent, with a small residue living in other arrangements. By 2000 the shift was astonishing. Married couple households now made up only 25 percent of all households, single-mother families 30 percent, empty-nesters 16 percent, and nonfamily households 25 percent. (The small increase in single-mother families masked a huge change. Earlier in the century they were mostly widows; by century’s end they were primarily never married, divorced, or separated.) What is stunning is how after 1970 these trends characterized suburbs as well as central cities, eroding distinctions between them. Between 1970 and 2000, for example, the proportion of suburban census tracts where married couples with children comprised more than half of all households plummeted from 59 percent to 12 percent, while in central cities it fell from 12 percent to 3 percent. In the same years, the proportion of suburban census tracts where single mothers composed at least 25 percent of households jumped an astonishing 440 percent—from 5 percent to 27 percent—while in central cities it grew from 32 percent to 59 percent. The share of census tracts with at least 30 percent nonfamily households leaped from 8 to 35 percent in suburbs and from 28 to 57 percent in cities. These changes took place across America, in Sun Belt as well as Rust Belt. Truly, a new domestic landscape eroding distinctions between city and suburb had emerged within metropolitan America. Its consequences were immense. The rise in single-mother families living in poverty shaped new districts of concentrated poverty and fueled the rise in suburban poverty.
Immigration brought young, working-class families to many cities and sparked revitalization in neighborhoods largely untouched by the growth and change brought about by gentrification.
Racial segregation also transformed urban space. The first important point about urban racial segregation is that it was much lower early in the twentieth century than late. In 1930 the neighborhood in which the average African American lived was 31.7 percent black; in 1970 it was 73.5 percent. No ethnic group in American history ever experienced comparable segregation. Sociologists Douglas Massey and Nancy Denton, with good reason, described the situation as “American apartheid.” In sixteen metropolitan areas in 1980, one of three African Americans lived in areas so segregated along multiple dimensions that Massey and Denton labeled them “hypersegregation.” Even affluent African Americans were more likely to live near poor African Americans than affluent whites were to live near poor whites. Racial segregation, argued Massey and Denton, by itself produced poverty. Areas of concentrated poverty, in turn, existed largely outside of markets—any semblance of functioning housing markets had dissolved, financial and retail services had decamped, jobs in the regular market had disappeared. Concentrated poverty and chronic joblessness went hand in hand. Public infrastructure and institutions decayed, leaving them epicenters of homelessness, crime, and despair. Even though segregation declined slightly in the 1990s, at the end of the century, the average African American lived in a neighborhood 51 percent black, many thousands in districts marked by a toxic combination of poverty and racial concentration. This progress reversed in the first decade of the twenty-first century. “After declining in the 1990s,” reported a Brookings Institution study, “the population in extreme-poverty neighborhoods—where at least 40 percent of individuals lived below the poverty line—rose by one-third from 2000 to 2005–09.”
Despite continued African American segregation, a “new regime of residential segregation” began to appear in American cities, according to Massey and his colleagues. The new immigration did not increase ethnic segregation; measures of immigrant segregation remained “low to moderate” while black segregation declined modestly. However, as racial segregation declined, economic segregation increased, separating the poor from the affluent and the college educated from high school graduates. Spatial isolation marked people “at the top and bottom of the socioeconomic scale.” The growth of economic inequality joined increased economic segregation to further transform urban space. America, wrote three noted urban scholars, “is breaking down into economically homogeneous enclaves.” This rise in economic segregation afflicted suburbs as well as inner cities, notably sharpening distinctions between old inner ring suburbs and more well-to-do suburbs and exurbs. Early in the twenty-first century, as many poor people lived in suburbs as in cities, and poverty was growing faster in the suburbs.
In the post-war decades, urban redevelopment also fueled urban spatial transformation. Urban renewal focused on downtown land use, clearing out working-class housing, small businesses, and other unprofitable uses, and replacing them with high-rise office buildings, anchor institutions, and expensive residences. The 1949 Housing Act kicked off the process by facilitating city governments’ aspirations to assemble large tracts of land through eminent domain and sell them cheaply to developers. The Act authorized 810,000 units of housing to re-house displaced residents; by 1960, only 320,000 had been constructed. These new units of public housing remained by and large confined to racially segregated districts and never were sufficient in number to meet existing needs. “Between 1956 and 1972,” report Peter Dreier and his colleagues, experts in urban policy, “urban renewal and urban freeway construction displaced an estimated 3.8 million persons from their homes” but rehoused only a small fraction. The costs of urban renewal to the social fabric of cities and the well-being of their residents were huge. Urban renewal “certainly changed the skyline of some big cities by subsidizing the construction of large office buildings that housed corporate headquarters, law firms, and other corporate activities” but at the price of destroying far more “low-cost housing than it built” and failing “to stem the movement of people and businesses to suburbs or to improve the economic and living conditions of inner-city neighborhoods. On the contrary, it destabilized many of them, promoting chaotic racial transition and flight.”
Neither the War on Poverty nor Great Society slowed or reversed the impact of urban redevelopment and racial segregation on the nation’s cities. President John F. Kennedy finally honored a campaign pledge in 1962 with a federal regulation prohibiting discrimination in federally supported housing—an action that “turned out to be more symbolic than real” on account of weak enforcement. In the 1968 Fair Housing Act, President Lyndon Johnson extended the ban on discrimination, and the practices that produced it, to the private housing market. Unfortunately, weak enforcement mechanisms left it, too, inadequate to the task throughout the 1970s and 1980s.
For the most part, the War on Poverty and Great Society rested on an understanding of poverty as a problem of persons, or, in the case of community action, of power, but less often of place. Opportunity-based programs addressed the deficiencies of individuals, not the pathologies of the places in which they lived. This hobbled their capacity from the outset. The conservatives who seized on the persistence of poverty to underscore and exaggerate the limits of the poverty war and Great Society retained this individual-centered understanding of poverty as they developed a critique of past efforts and a program for the future, neither of which was adequate to the task at hand.
America’s urban slide into deep racial segregation, concentrated poverty, deindustrialization, physical decay, and near-bankruptcy coincided with the manifest failures of public policy, notably in urban renewal, and in the efforts of government to wage war on poverty. No matter that the story as popularly told was riddled with distortions and omissions. This narrative of catastrophic decline and public incompetence produced the trope of the “urban crisis,” which, in turn, handed conservatives a gift: a ready-made tale—a living example—to use as evidence for the bundle of ideas they had been nurturing for decades and which emerged triumphant by the late 1970s.
The Conservative Ascendance
The growth of urban poverty did not rekindle compassion or renew the faltering energy of the Great Society. Instead, a war on welfare accompanied the conservative revival of the 1980s. City governments, teetering on the edge of bankruptcy, cut social services; state governments trimmed welfare rolls with more restrictive rules for General Assistance (state outdoor relief); and the federal government attacked social programs. As President Ronald Reagan famously remarked, government was the problem, not the solution. These actions reduced the availability of help from each level of government during the years when profound structural transformations in American society increased poverty and its attendant hardships.
Several sources fed the conservative restoration symbolized by Ronald Reagan’s election as president in 1980. Business interests, unable to compete in an increasingly international market, wanted to lower wages by reducing the influence of unions and cutting social programs that not only raised taxes but offered an alternative to poorly paid jobs. The energy crisis of 1973 ushered in an era of stagflation in which public psychology shifted away from its relatively relaxed attitude toward the expansion of social welfare. Increasingly worried about downward mobility and their children’s future, many Americans returned to an older psychology of scarcity. As they examined the sources of their distress, looking for both villains and ways to cut public spending, ordinary Americans and their elected representatives focused on welfare and its beneficiaries, deflecting attention from the declining profits and returns on investments that, since the mid-1970s, should have alerted them to the end of unlimited growth and abundance.
Desegregation and affirmative action fueled resentments. Many whites protested court-ordered busing as a remedy for racial segregation in education, and they objected to civil rights laws, housing subsidies, and public assistance support for blacks who wanted to move into their neighborhoods while they struggled to pay their own mortgages and grocery bills. White workers often believed they lost jobs and promotions to less qualified blacks. Government programs associated with Democrats and liberal politics became the villains in these interpretations, driving blue-collar workers decisively to the right and displacing anger away from the source of their deteriorating economic conditions onto government, minorities, and the “undeserving poor.”
Suburbanization, the increased influence of the South on electoral politics, and the politicization of conservative Protestantism also fueled the conservative ascendance. “Suburbia,” political commentator Kevin Phillips asserted, “did not take kindly to rent subsidies, school balance schemes, growing Negro migration or rising welfare costs. . . . The great majority of middle-class suburbanites opposed racial or welfare innovation.” Together, the Sun Belt and suburbs, after 1970 the home to a majority of voters, constituted the demographic base of the new conservatism, assuring the rightward movement of politics among Democrats as well as Republicans and reinforcing hostility toward public social programs that served the poor—especially those who were black or Hispanic. The “middle class” became the lodestone of American politics, the poor its third rail.
Prior to the 1970s, conservative Christians (a term encompassing evangelicals and fundamentalists) largely distrusted electoral politics and avoided political involvement. This stance reversed in the 1970s when conservative Christians entered politics to protect their families and stem the moral corruption of the nation. Among the objects of their attack was welfare, which they believed weakened families by encouraging out-of-wedlock births, sex outside of marriage, and the ability of men to escape the responsibilities of fatherhood. Conservative Christians composed a powerful political force, about a third of the white electorate in the South and a little more than a tenth in the North. By the 1990s they constituted the largest and most powerful grassroots movement in American politics. In the 1994 elections, for the first time a majority of evangelicals identified themselves as Republicans. Although the inspiration for the Christian Right grew out of social and moral issues, it forged links with free-market conservatives. Fiscal conservatism appealed to conservative Christians whose “economic fortunes depend more on keeping tax rates low by reducing government spending than on social welfare programs that poor fundamentalists might desire,” asserted sociologists Robert Wuthnow and Matthew P. Lawson. The conservative politics that resulted fused opposition to government social programs and permissive legislation and court decisions (abortion, school prayer, gay civil rights, the Equal Rights Amendment, teaching evolution) with “support of economic policies favorable to the middle-class”—a powerful combination crucial for constructing the electoral and financial base of conservative politics.
Two financial sources bankrolled the rightward movement of American politics. Political action committees mobilized cash contributions from grassroots supporters while conservative foundations, corporations, and wealthy individuals supported individual candidates, organized opposition to public programs, and developed a network of think tanks—including the American Enterprise Institute, the Heritage Foundation, and the libertarian Cato Institute—designed to counter liberalism, disseminate conservative ideas, and promote conservative public policy. Within a year of its founding in 1973, the Heritage Foundation had received grants from eighty-seven corporations and six or seven other major foundations. In 1992 to 1994 alone, twelve conservative foundations holding assets worth $1.1 billion awarded grants totaling $300 million. In 1995 the top five conservative foundations enjoyed revenues of $77 million compared to only $18.6 million for “their eight political equivalents on the left.”
As well as producing ideas, conservative think tanks marketed them aggressively. Historian James Smith writes that “marketing and promotion” did “more to change the think tanks’ definition of their role (and the public’s perception of them)” than did anything else. Their conservative funders paid “meticulous attention to the entire ‘knowledge production process,’ ” represented as a “conveyor belt” extending from “academic research to marketing and mobilization, from scholars to activists.” Their “sophisticated and effective outreach strategies” included policy papers, media appearances, advertising campaigns, op-ed articles, and direct mail. In 1989 the Heritage Foundation spent 36 percent of its budget on marketing and 15 percent on fundraising. At the same time, wealthy donors countered the liberal politics of most leading social scientists with “lavish amounts of support on scholars willing to orient their research” toward conservative outcomes and a “grow-your-own approach” that funded “law students, student editors, and campus leaders with scholarships, leadership training, and law and economics classes aimed at ensuring the next generation of academic leaders has an even more conservative cast than the current one.”
Conservative politics fused three strands: economic, social, and nationalist. The economic strand stressed free markets and minimal government regulation. The social emphasized the protection of families and the restoration of social order and private morality. Where the state intervened in the right to pray or in religiously sanctioned gender relations, it opposed federal legislation and the intrusion of the courts. Where the state sanctioned or encouraged family breakdown and immoral behavior, as in abortion or welfare, it favored authoritarian public policies. Militant anti-communism composed the core of conservatism’s nationalist strand, fusing the other two in opposition to a common enemy. It favored heavy public spending on the military and focused on both the external enemy—the Soviet Union—and the internal foe—anyone or anything suspected of promoting a socialist takeover of America. With the collapse of the Soviet Union, the bond holding together the social and economic strands of conservatism weakened, replaced at last by a new enemy, militant Islam embodied in Iraq and Iran and in the Taliban and Al Qaeda.
Conservatives triumphed intellectually in the 1980s because they offered ordinary Americans a convincing narrative that explained their manifold worries. In this narrative, welfare, the “undeserving poor,” and the cities they inhabited became centerpieces of an explanation for economic stagnation and moral decay. Welfare was an easy target, first because its rolls and expense had swollen so greatly in the preceding several years and, second, because so many of its clients were the quintessential “undeserving poor”—unmarried black women. Welfare, it appeared, encouraged young black women to have children out of wedlock; discouraged them from marrying; and, along with generous unemployment and disability insurance, fostered indolence and a reluctance to work. Clearly, it appeared, however praiseworthy the intentions, the impact of the War on Poverty and the Great Society had been perverse. By destroying families, diffusing immorality, pushing taxes unendurably high, maintaining crippling wage levels, lowering productivity, and destroying cities, they had worsened the very problems they set out to solve.
Even though these arguments were wrong, liberals failed to produce a convincing counter-narrative that wove together a fresh defense of the welfare state from new definitions of rights and entitlements, emergent conceptions of distributive justice, ethnographic data about poor people, and revised historical and political interpretations of the welfare state. This inability to synthesize the elements needed to construct a new narrative and compelling case for the extension of the welfare state was one price paid for the capture of poverty by economists and the new profession of public policy analysis. It resulted, as well, from a lack of empathy: an inability to forge a plausible and sympathetic response to the intuitive and interconnected problems troubling ordinary Americans: stagflation; declining opportunity; increased taxes and welfare spending; crime and violence on the streets; and the alleged erosion of families and moral standards.
Excerpted from “The Undeserving Poor: America’s Enduring Confrontation with Poverty,” by Michael B. Katz. Copyright © 2013 by Michael B. Katz. Reprinted by arrangement with Oxford University Press, a division of Oxford University. All rights reserved.