BOOK EXCERPT

Coke made us all obese: McDonald's, high-fructose corn syrup and the sick, super-sized strategy to make you fat

When soda makers switched to high-fructose corn syrup, it cost pennies to super-size drinks -- and our waistlines

Published January 4, 2015 4:30PM (EST)


Excerpted from "Citizen Coke: The Making of Coca-Cola Capitalism"

Coke’s growth in the final decades of the twentieth century was literally littered with waste, yet much of this pollution, especially the aluminum and plastic, remained out of sight, tucked away in landfills many citizens never saw. Packaging, however, was just one of the obfuscated problems of perpetual growth. Many other unpleasant by-products of Coca-Cola’s conquest were hidden from view by 2000. Coke’s bottlers, for example, relied on petroleum-guzzling trucks that emitted large quantities of greenhouse gases into the atmosphere. By 2006, over 200,000 Coke trucks puttered around the world, burning millions of gallons of fossil fuels to bring Coke to market. Likewise, for many years Coke’s countless coolers and refrigerators pumped chlorofluorocarbons (CFCs) into the air, contributing to the depletion of the earth’s ozone. All this was done to push a luxury item.

But if many of Coke’s pollutants remained out of sight, one unwanted by-product of growth became conspicuously abundant after 1985: human fat deposits. As Coke’s consumers indulged in supersized soda binges, downing more and more sugary beverages each year, their bodies began to reflect the costs of excess. In response, a growing group of consumer health advocates began to attack Coke and other soft drink firms for making people fat. Their claim could not have been more damning. Coke itself was garbage, they complained, a “junk” food that was contributing to a growing obesity epidemic. Coke at one time might have been a simple treat, a “pause that refreshes,” but by the 1990s it had become a staple of the average American’s diet, consumed throughout the day, channeling more calories into people’s bodies than they needed.

The statistics seemingly said it all. Annual per capita consumption of caloric soft drinks in the United States had more than tripled since 1955. A country that had consumed 11 gallons per person annually in the 1950s downed over 36 gallons fifty years later. That meant the average American in 2000 packed away over 35 pounds of sweetener a year from soft drinks alone. This was a problem.

What had happened? How did this dramatic increase in soft drink consumption come about? How did Coke and other soft drinks become such significant contributors to citizens’ corpulence?

* * *

The story of the binge begins in the mid-1970s, when sugar prices were once again fluctuating wildly. In December of 1974, Congress allowed the Sugar Act to expire, ending the almost three-decade-old quota system designed to protect U.S. sugar growers and keep consumer prices steady by controlling how much sugar came into the United States each year. The Sugar Users Group, a new lobbying agency consisting of confectioners and other major sugar buyers, including Coke, had campaigned for the quota collapse, mistakenly believing that the removal of federal protections would give Coke and other soft drink firms access to cheaper sugar. For years, Coke had been prohibited from purchasing sugar from overseas at “dump” prices that were lower than the price of duty-paid sugar. Now Coke believed it was “in the driver’s seat for the time being” and looked forward to capitalizing on a deregulated market.

The buyer’s bonanza, however, fizzled. As protective barriers came down, prices skyrocketed, approaching 60 cents a pound by the end of 1974. With consumer markets growing in Asia and other parts of the developing world, sugar was in high demand, and producers across the globe, flush with new buyers, continued to raise their prices. Overproduction, however, caused a dramatic drop in prices in 1975, threatening to bankrupt U.S. growers, who claimed they could not sell at prices below the duty-free market price. To protect American sugar producers, the government moved to reinstate a quota system in 1976, prompting protests from industrial sugar users. They wanted a return to a stable sugar market, but they did not want to pay the higher prices that such stabilization required.

Coke and its industry partners were tired of this roller coaster ride. They felt captive to the ebb and flow of global sugar market trends, and they had little faith that the volatility would ever subside. They wanted out. If there were any alternative to sugar that would free Coke from its dependence on this unpredictable trade market in the United States, the company was willing to try it.

Fortunately for Coke, there was a new sweetener on the horizon: high-fructose corn syrup. Since the 1920s, corn refiners in the American Midwest had experimented with processing cornstarch to produce a thick golden syrup with a molecular composition similar to sucrose, or table sugar. Up through the mid-1960s, however, commercial users had largely been dissatisfied with the off-taste of these corn-based sweeteners. The sugar price scares of the 1970s sparked new interest and investment, and refineries got to work improving their processing systems. Leading the charge was the Clinton Corn Processing Company of Clinton, Iowa, which in 1967 had developed a corn-derived sweetener that was as sweet as, if not sweeter than, sucrose and had no unpleasant aftertaste. Clinton made the new syrup using a patented bacterial enzyme called an isomerase (first isolated in Japan) capable of transforming glucose molecules (extracted from cornstarch) into sweeter fructose molecules. The Clinton sweetener proved far superior to its predecessors in taste, and in the 1970s the company invested heavily to mass-produce its corn syrup.

The name for the new syrup was a bit of an exaggeration. After all, high-fructose corn syrup 55, the main varietal of corn sweetener used in soft drinks, contained 55 percent fructose compared to about 45 percent glucose. Table sugar, on the other hand, is just a glucose molecule bound to a fructose molecule—50/50. The “high” label for high-fructose corn syrup, in other words, connoted a difference of only five percentage points between the new syrup and table sugar. In the case of high-fructose corn syrup 42, a sweetener used in some canned fruit products and ice creams, the labeling was even more misleading. This sweetener actually contained less fructose than regular sugar (roughly 42 percent).

Clinton’s success was contingent upon federal aid. High-fructose corn syrup could only undersell sugar because corn was cheap, and corn was cheap because the government had made it so. During the Great Depression, the USDA began administering acreage-reduction loan programs for corn production through the Agricultural Adjustment Act, hoping to keep the excesses of American agribusiness locked up in silos and out of retail outlets. In short, the government was paying farmers to produce less. The goal was to support commodity prices by limiting supply at a time when farmers were struggling to make enough money to feed their families. The American taxpayer financed the subsidy system, though not through a visible sales tax. Rather, the USDA’s Commodity Credit Corporation allocated tax revenue held by the U.S. Treasury to pay for loans to farmers producing surplus corn during bumper-crop years. The collateral corn was held in federal repositories, collectively referred to as the “ever-normal granary,” until prices rose enough for farmers to sell at a profit.

The USDA’s intervention made sense during the hard times of the 1930s, but under pressure from a consolidating agricultural sector that had grown accustomed to this cushion, the programs were extended after World War II. As with the protective policies that allowed domestic sugar growers to expand in the twentieth century, the corn support programs from the 1930s to the 1970s allowed large-scale American agribusinesses to increase their productive capacity without suffering serious financial losses. Big farmers sank government loan payments into new machines, hydrological systems, and nitrogen fertilizers that helped them intensify their land use. They benefited from technical training offered by the USDA’s cooperative extension service, which taught farmers how to use high-yielding hybrid corn in the 1940s. These new varietals, created through crossbreeding techniques developed by government researchers in the early twentieth century, proved incredibly prolific in the crowded monocrop cultures of the American Midwest. Between 1945 and 1971, corn production increased by 166 percent. The government’s goal of curbing production had failed.

This was more corn than Americans could possibly consume. Each year the government’s stockpile of agricultural surplus grew, not just of corn but also of other commodities supported by similar New Deal loan programs. By 1952, the federal government held roughly $1.3 billion worth of agricultural surplus stocks in storage facilities across the country.

In the early 1970s, this excess flooded the U.S. market as the federal government dismantled its New Deal programs, recognizing that they were no longer fulfilling their original intentions. The nation was entering an energy crisis, and inflation coupled with a stagnant economy was driving the price of consumer goods ever higher. President Richard M. Nixon’s agriculture secretary, Earl Butz, believed that the USDA was actually hurting the country through its outdated agricultural policies. The government was paying people not to produce at a time when food prices were skyrocketing. This seemed absurd. Butz thus proposed a comprehensive overhaul of agricultural policy, hoping to use the country’s agricultural bounty to curb inflationary trends.

Urging American farmers to “get bigger or get out,” Butz abandoned New Deal policies that coupled price supports with acreage reduction. In their place, the Agriculture and Consumer Protection Act of 1973—thereafter called the Farm Bill—implemented a system that favored production-stimulating bounty payments over loan programs designed to keep farm surpluses from flooding consumer markets. Now farmers received government payments with no strings attached. They could collect subsidies and push as much of their produce onto the open market as they wanted.

The consequences of Butz’s policy were predictable. Because the new bounty program dismantled the ever-normal granary, the glut of agribusiness—a superfluity that for so many years had piled high within federally financed silos hidden in America’s heartland—now came pouring into consumer markets all across the country, and prices correspondingly dropped, from over $3 a bushel at the end of 1974 to less than $2 a bushel just three years later. There would be price fluctuations in the coming years, but by the end of 1986, buyers could purchase a bushel of corn for about $1.50.

All this cheap corn meant big business for America’s corn refineries. Clinton Corn Processing Company, A. E. Staley, and Archer Daniels Midland, the three major corn refiners in the Midwest, were beside themselves with joy. Here was a golden opportunity to make lots of money; sugar prices were rising, and raw inputs for the corn refining business were in free fall. By 1978, they could offer high-fructose corn syrup at a price 10 to 15 percent lower than sugar produced from cane or beets. All they needed were buyers.

* * *

At first, it was not clear whether the big sugar users would commit to the revolution. As it did with every decision, Coke approached the prospect of switching suppliers with caution. This was a radically new product, a syrupy gel produced in a laboratory rather than a field. Would consumers buy this stuff? Coke was not sure, so in the summer of 1974 it ran an experiment, changing the formula for its noncola beverages (Sprite, Mr. Pibb, and Fanta) to include 25 percent high-fructose corn syrup. The plan was to test the cheaper sweetener in its less popular beverages before tampering with its flagship brand. Encountering no backlash from consumers, Coke gradually made the switch to 100 percent corn syrup in all of its noncola beverages in the late 1970s. This decision excited high-fructose corn syrup producers, such as Archer Daniels Midland and Clinton, who believed soft drink giants would soon commit to huge corn syrup contracts. A year later, Coke approved 50 percent corn syrup for its number-one-selling product, Coca-Cola, and by 1985 it had made the switch to 100 percent corn syrup in all of its cola and noncola beverages sold within the United States.

Coke’s adaptability had once again paid dividends. When high-fructose corn syrup came online, Coca-Cola did not have to sell off sugar-processing plants because it did not own any. It simply switched to new suppliers. The federal government had changed the rules of the game in the sweetener business, and Coke was ready to follow the winning team.

Coca-Cola’s sweetener swap ensured the success of high-fructose corn syrup. Coke was by far the largest consumer of caloric sweeteners in the country, and, as such, its imprimatur mattered. Soon confectioners and other food businesses of all kinds followed Coke’s lead, switching to 100 percent corn syrup by the mid-1980s. A new era of sweet excess had begun.

High-fructose corn syrup helped Coke dramatically increase its syrup sales. Rather than lower prices to reflect multimillion-dollar savings in production costs (said to be some $20 million for every one-cent decrease in the cost of sweeteners in 1978), Coke looked to sell greater quantities of its beverages to its consumers at marginally higher prices. In "The Omnivore’s Dilemma," journalist Michael Pollan explained Coke’s mindset in the 1980s: “Since a soft drink’s main raw material—corn sweetener—was now so cheap, why not get people to pay just a few pennies more for a substantially bigger bottle? Drop the price per ounce, but sell a lot more ounces. So began the transformation of the svelte eight-ounce Coke bottle into the chubby twenty-ouncer.”

McDonald’s, Coke’s largest customer in the 1990s with over 14,000 restaurants worldwide in 1993 (3,654 of them overseas), was the real mastermind of this supersized sales strategy. In 1993, a McDonald’s retail strategist named David Wallerstein first introduced the concept of supersizing. The system was exploitative, but few consumers understood the math. Coke found that people would pay a few dimes more for a supersized product, even if that larger serving contained just 2 or 3 cents’ worth of additional sweetener; nearly all of the upcharge was profit. Because high-fructose corn syrup was so cheap, it paid to go big. As a result, soft drink companies and retail distributors created new beverage packaging, first shooting for 20-ounce containers and, by the mid-1990s, encouraging consumers to purchase 64-ounce soda buckets. The result was a dramatic increase in per capita caloric soft drink consumption, which rose from 28.7 gallons in 1985 to 36.8 gallons in 1998.

The problem with the high-fructose gorge was that Americans were not suffering from caloric deficits. They were consuming calories their bodies did not need. In 1950, per capita consumption of caloric sweeteners had topped 100 pounds per person (almost twice the per capita sugar intake of 49.2 pounds in 1885), but by the end of the 1980s, annual per capita consumption had risen to over 125 pounds. The upward trend continued into the 1990s, and by 2000, average annual per capita consumption of caloric sweeteners in the United States reached 152.4 pounds. Citizens who had once thrived on less than 50 pounds of caloric sweeteners a year were by the twenty-first century consuming over three times that much. The country’s subsidized superfarms were not recharging citizens’ underfilled caloric reservoirs; they were fueling an unhealthy trend toward overconsumption of carbohydrate-rich sweeteners.

Society had changed. No longer were Americans working long hours in the field as they had in the nineteenth century. By the 1990s, the country was more urbanized than ever before. When Pemberton first created Coke in 1886, less than 30 percent of the population lived in cities, but that figure had risen to 54.1 percent by 1920 and 75.2 percent by 1990. In part, this was the result of government agricultural policies that benefited large agribusinesses over small farms. No piece of legislation was more instrumental in bringing about this change than the Agricultural Adjustment Act (AAA), passed in 1933, which channeled capital to rich farmers owning large plantations in the rural South and Midwest. Tenants leasing land from these owners never saw AAA money. Elite planters hoarded the government funds, using the accumulated federal dollars to mechanize their operations and reduce their demand for agricultural labor. Thus, policies designed to help farmers perversely depopulated rural America and ushered in a new era of big agribusiness growth. A nation of farmers had become a nation of city workers.

In the city, Americans found new kinds of employment in the late twentieth century. By the 1990s, the typical nonfarm laborer in America worked in an office, not a factory. In 1994, 80 percent of all city workers in America were employed in service-sector industries. These jobs were typically less labor-intensive than factory work, meaning workers expended fewer calories in the average workday. Nor were Americans burning these extra calories outside the workplace. Few walked to work; most made long commutes from suburban homes to inner-city offices. Between 1960 and 2000, the number of people traveling to work by personal automobile nearly tripled (from 40 million to roughly 110 million), with the average round-trip approaching fifty minutes by the twenty-first century. After a long afternoon on the road, few Americans engaged in physical activity once they were off the clock. A CDC study conducted in 1991 found that roughly 60 percent of Americans engaged in virtually no physical leisure activity after work, and less than 20 percent of the population reported exercising daily in 2003. Life in the United States had become sedentary.

Even as Americans’ caloric demands declined, their access to cheap calories increased. Commodity support programs kept food prices down just as Americans’ incomes were rising. As a result, consumers could buy the basic food staples they needed with a smaller portion of their salaries. In the 1930s, the average American spent almost 25 percent of disposable income on food purchases, but by 2000, this figure had dropped to roughly 10 percent. Americans were not suffering from want. They had all the food they could possibly desire. In this world of abundance, Coke was a particularly potent source of excess calories few Americans needed.

Despite meeting and then exceeding the country’s food needs, the profligate agricultural machine grew larger and larger. What made this gluttonous growth acceptable to consumers initially was the fact that its material costs were kept out of sight. Describing the continued political success of Butz’s agricultural support program in the 1990s, food journalist Betty Fussell contended that “Americans don’t believe in what they can’t see, and the superstructure of American agribusiness that controls the production of corn is as invisible and pervasive as the industrial products of corn.” The government payments that stimulated corn overproduction, totaling billions of dollars by the late 1980s ($5.7 billion in 1983 alone), came from general tax funds, not itemized sales tax, thus limiting consumers’ exposure to the cost of agricultural subsidies.

Over time, however, consumers’ waistlines exposed the expensive storage costs that allowed the oversupplied corn market to function. Far from receiving nutritional benefits from the supersize revolution, consumers functioned as the new repositories of agricultural surplus. Consumers’ bodies became jam-packed silos, replacements for the federal repositories that had once helped maintain scarcity by keeping excess corn off retail shelves. Consuming ever-greater quantities of calories each year, Americans became bigger and bigger. According to National Health and Nutrition Examination surveys conducted by the CDC, only 14.1 percent of Americans were considered obese (defined as a body mass index [BMI] of 30.0 or greater; BMI is weight in kilograms divided by the square of height in meters) between 1971 and 1974, compared to 22.4 percent by the early 1990s. In 2008, over 34 percent of the U.S. population had a BMI over 30. Consumers were taking in more carbohydrates than their bodies needed, and as a result, many Americans were turning excess sugars into fat.

Americans did not bear the burden of obesity equally. Minority communities, for example, consistently reported higher rates of obesity. In 2010, the CDC found that roughly half (49.5 percent) of all adult non-Hispanic African Americans were obese, compared to 34.3 percent of non-Hispanic whites. The obesity rate for Mexican Americans was also high, rising above 40 percent that year. While researchers continue to debate the reasons for such stark disparities, the link between race and poverty appears to be at the heart of the divide. Studies showed that many low-income minority communities were “obesogenic environments” in which fresh, local food was hard to find. Furthermore, many people in these communities did not earn a living wage, so even where a farmers’ market existed, they could not afford the money or the time to buy produce priced well above cheap, ready-made fast food. In short, for impoverished minorities craving cheap calories, a McDonald’s hamburger washed down with a Coke seemed like a low-cost way to satiate a hungry stomach.

The only problem was that such satiation was illusory. Heavy sugar consumption resulted in a short-term buzz, as glucose and fructose molecules in the bloodstream caused the brain to release the pleasure-inducing chemical dopamine. But the chemical surge soon wore off, leaving the consumer with a strong urge to down more sugar. It was a vicious cycle, a tragic addiction. The result was bigger, not satiated, stomachs.

The consequences of sweetener binging were more than cosmetic. Distended stomachs were symptomatic of other health problems associated with excessive caloric intake, and diabetes was perhaps the most serious. Between 1980 and 2000, the prevalence of diabetes among citizens aged 0 to 44 doubled, rising from 0.6 percent of the population to 1.2 percent (over 2 million people). Again, low-income minority communities were disproportionately affected by the disease. The CDC also registered increases in diagnoses for citizens aged 65 to 74, reporting a rise in rates from 9.1 percent to 15.4 percent over the same period. By 1996, adult-onset diabetes, a condition believed by most physicians to be linked to overconsumption of sugar and caloric sweeteners, had been renamed type 2 diabetes because so many children were exhibiting symptoms of the disease. In the early years of the twenty-first century, then, it appeared that diabetes, a condition that had once affected a small fraction of the population, was quickly becoming an American epidemic.

America’s fat problem had financial repercussions. Estimated medical costs of treating ailments associated with obesity rose dramatically, from an estimated $78.5 billion in 1998 to over $147 billion in 2008. Obesity presented consumers with immediate fiscal costs (in the form of medical payments) and physical costs (in the form of health problems) that forced them to scrutinize an agro-industrial complex that had for decades enriched Coke and other big food and beverage firms.

Coke could hardly claim it was an insignificant contributor to the crisis. Gallon sales of soft drinks had exploded during the 1980s and 1990s, and longtime Coke customers were consuming greater quantities of carbonated beverages than ever before. The USDA reported that the quantity of caloric soft drinks consumed by the average adult increased by 53 percent between 1972 and 1998. This rise in per capita consumption helped make soft drinks containing caloric sweeteners the “largest single food source of calories in the U.S. diet” in 2004, according to the Journal of the American Medical Association. Coke was a major source of the country’s obesity problem.

Excerpted from "Citizen Coke: The Making of Coca-Cola Capitalism" by Bartow J. Elmore. Published by W.W. Norton and Co. Inc. Copyright 2015 Bartow J. Elmore. Reprinted with permission of the publisher. All rights reserved.


By Bartow J. Elmore

Bartow J. Elmore teaches history at the University of Alabama.
