The "Cadillac" health plan is a myth

A doctor explains that limiting the scope of employer-based health insurance has long been a dream of the establishment

Published August 16, 2013 7:37PM (EDT)

(Reuters/Rebecca Cook)

What would a gold-plated Cadillac health care plan look like to you? By the sound of it, you might think it’d cover such superfluities as annual trips to Swiss medical spas, cosmetic surgery on demand and weekly thermal seaweed wraps.

Alas, that appears to be far from the case. As the New York Times reported last week, municipal unions across the country are facing pressure to accept worse health care plans before the so-called “Cadillac tax” on expensive plans takes effect in 2018. Not surprisingly, workers are displeased at the prospect of being downgraded to inferior plans with more out-of-pocket spending.

Before concluding that such luxurious plans are the sole preserve of supposedly greedy city workers, keep in mind that by one health economist’s estimate, as many as 75 percent of employer health plans could fall under the “Cadillac” umbrella over the next decade. Reasonably good insurance – you know, the kind that covers all of your medical needs without making you pay an arm and a leg out-of-pocket every time you use it – has stealthily become the new Cadillac. However much the reputation of General Motors may have fallen in recent years, this seems like a bit of a stretch.

A reasonable observer might be tempted to ask: What on earth is going on here? Whence came this health care Cadillac?

To begin, the nationwide trend towards lower-value health insurance – now enshrined in law in the form of the Cadillac tax – is not, contrary to appearances, some newfangled, desperate cost-saving measure. On the contrary, limiting the scope of employer-based health insurance has been a dream of health policy wonks for decades.

The academic taproots of the idea stretch back at least to 1968, with the publication of the economist Mark Pauly’s highly influential paper “The Economics of Moral Hazard” in the American Economic Review. “The effect of an insurance which indemnifies against all medical care expenses,” Pauly wrote, “is to reduce the price charged to the individual at the point of service from the market price to zero.” As a result, “no individual will be motivated to restrain his own use … since the incremental benefit to him for excess use is great, while the additional cost of his use is largely spread over other insurance holders.” Although others had noted this phenomenon previously, Pauly’s contribution was to calculate the loss to society resulting from this “excess use,” which he found to be quite substantial.
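To make the logic concrete (a standard textbook simplification, not Pauly’s own figures): if full insurance drops the price a patient faces from the market price P to zero, and the quantity of care consumed rises from Q to Q′ as a result, the societal loss from that extra, low-value care is roughly the familiar deadweight-loss triangle:

Welfare loss ≈ ½ × P × (Q′ − Q)

The wider the gap between what care costs and what the insured patient pays at the point of service, the bigger the triangle.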

Thus was the idea of health care “moral hazard” born – otherwise known as the doctrine of “it’s included, so we might as well take it!” We might more colloquially call it the “salad bar” hypothesis.

From “moral hazard,” in turn, came the idea that the fundamental “original sin” of the American health care system was not our failure to enact universal health care, but our creation of a system of tax-favored, overly generous employer-based insurance. Several years after the Pauly paper, and along this same line of thinking, the prominent economist Martin Feldstein – the future chairman of President Reagan’s Council of Economic Advisers – argued in another important paper that “American families are in general overinsured against health expenses,” and that the right medicine was more deductibles and coinsurance.

It was not until the early 1980s, in the face of persistently rising health care costs, that this academic concern with “moral hazard” began to hit the pocketbooks of Americans: Between 1982 and 1984, for instance, the percentage of insurance plans requiring some deductible for hospitalization more than doubled, from 30 percent to 63 percent. During the 1990s, however, the centrality of “cost sharing” was sidelined as corporatized “managed care” took center stage, and insurance companies sought to decrease costs (and thereby increase profits) by limiting care on the supply side.

But with the widespread revolt against managed care in the late 1990s (and with health care costs continuing to rise in spite of it), “moral hazard” again became a prime target for “reform.” The solution was in line with the recommendations of Pauly and Feldstein: higher-deductible insurance. President George W. Bush’s 2003 Medicare prescription drug law gave us the “Health Savings Account” (HSA) – a tax-free account that can be used for out-of-pocket medical expenses, but which must be attached to a high-deductible insurance plan. “Consumer-driven” health care plans – little more than high-deductible plans with the capacity to be coupled with an HSA – quickly became all the rage.

Yet even for those outside these new plans, cost sharing soared over the decade. Copayments for expensive drugs skyrocketed, sometimes to the tune of hundreds or even thousands of dollars, while the percentage of employer plans with deductibles of $1,000 or more tripled.

But this was not enough – the alleged culprit for our continuously rising health care costs was still that dreaded “moral hazard.” Once you paid for the salad bar, after all, why not heap on larger and larger portions of colonoscopies, all-nighters in the emergency room or chemotherapy infusions? By 2008, John McCain was essentially centering his entire health care platform on the vilification of those “gold-plated Cadillac plans,” as he called them, which covered, among other things, such absurdities as “transplants.”

And in 2010, despite opposition from House Democrats and organized labor, a somewhat limited but still highly controversial “Cadillac tax” made its way into the Affordable Care Act: a 40 percent excise tax on the value of family plans above $27,500, and of individual plans above $10,200.
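To see what that means in practice (the plan value here is hypothetical; the thresholds and the rate come from the law): a family plan valued at $30,500 would exceed the $27,500 threshold by $3,000, triggering an excise tax of

0.40 × $3,000 = $1,200

The tax falls only on the portion of the plan’s value above the threshold, not on the plan as a whole.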

The Age of the Cadillac, it seemed, was finally coming to a close. Health care “consumers” would now have to play with their own damn money. A golden age of improved quality and reduced costs was about to dawn.

Unfortunately, however, we didn’t live happily ever after.

For over these very same years, the idea that we were an “overinsured” nation had begun to collide with several lines of hard evidence. First, the very existence of a large societal net loss from “overinsurance” was called into question in the health economics literature, most prominently by the economist John Nyman. Nyman argued that while there may be some cost to moral hazard, it was far smaller than what Pauly, Feldstein and others had calculated. We use more health care when we are sick, he demonstrated, not because health care becomes free, but because we gain “income” in the form of insurance transfers, which we rationally use to pay for “otherwise unaffordable procedures.”

Second, an accumulating body of data in the medical literature suggested that “underinsurance,” not “overinsurance,” was the more serious problem facing Americans. More and more families were seeing higher and higher percentages of their income consumed by health care costs, including out-of-pocket medical expenses. Indeed, we were increasingly going bankrupt as a result of those expenses: according to one study, 62 percent of all bankruptcies in 2007 had a medical cause, and in a majority of those cases the people going bankrupt actually already had health insurance. Not much evidence for “overinsurance” in that department, either.

And looking at the problem from another angle, it became increasingly clear that the rise in deductibles, copays and coinsurance was having significant effects on our very health. As I described in a recent article, there are clear downsides to becoming “Copay Country.” There is evidence, for instance, that cost sharing may increase the risk of death for the most vulnerable, perhaps because it reduces spending on both “inappropriate” and “appropriate” care. In one recent experiment that randomized patients to copays or to full coverage for prescriptions after they suffered heart attacks, those with full coverage took their medications more regularly – and had fewer strokes and other cardiovascular complications as a result. And all told, full coverage produced no overall increase in costs.

Yet, in the face of this growing body of solid evidence, we simply continue to hear more and more about how patients (apologies, “consumers”) need to have more “skin in the game” – as if having their physical hearts, lungs or – in some instances – their skin on the line wasn’t enough already.

So don’t be surprised if you find, in the coming years, that your semi-reasonable-quality health insurance – perhaps won through hard-fought negotiation, or maybe earned in exchange for lower pay or other benefits – has somehow transformed itself into a taxed “Cadillac” plan. From the perspective of much of the health policy establishment, you’ve had it too good, for too long, already. And however nice it might sound, they say we simply can’t afford truly universal health care, with comprehensive benefits for all and with little or no cost sharing (even if other industrialized countries magically manage to do so, at a lower price, and with better results).

Instead, it’s time we tightened our belts, stopped our whining and traded in our Cadillacs – or, depending on which metaphor you find less loathsome, finally stepped away from the salad bar.


By Adam Gaffney

Adam Gaffney is a physician and writer in Massachusetts. He blogs at www.theprogressivephysician.org and is on Twitter at @awgaffney.


