Medicaid

Debunking the 100,000 Medicaid Deaths Myth

Partisan pundits are misreading statistical estimates and misrepresenting the science to suggest that Trump's Medicaid cuts will kill 100,000 people. That claim doesn’t survive scrutiny.

"More Americans will die—at least 100,000 more over the course of the next decade," wrote Yale law professor Natasha Sarin in a June 9 Washington Post column about the Medicaid cuts in President Donald Trump's One Big Beautiful Bill Act.

"That isn't hyperbolic," Sarin added. "It is fact."

The average reader might be inclined to believe Sarin, who holds a Harvard Ph.D. in economics as well as a Harvard law degree, and served in the Treasury Department during the Biden administration. But contrary to her characterization, her claim is hyperbolic, and it is not "fact."

Sarin's assertion reflects a fundamental misunderstanding of the concept of "statistical lives saved." In particular, she and several other prominent journalists misinterpreted a recent working paper published by the National Bureau of Economic Research (NBER).

As a professional debunker of bad research, I can say with some authority that the authors of that study, Dartmouth economist Angela Wyse and University of Chicago economist Bruce D. Meyer, wrote an excellent paper—a rarity among academic studies these days. But the University of Chicago's press office trumpeted the paper's findings, declaring, "Medicaid expansion under the Affordable Care Act saved about 27,400 lives between 2010-22," which is highly misleading. 

That take was echoed in coverage of the study by major news outlets. "The expansion of Medicaid has saved more than 27,000 lives since 2010, according to the most definitive study yet on the program's health effects," reported Sarah Kliff and Margot Sanger-Katz in The New York Times. Their May 16 article was headlined "As Congress Debates Cutting Medicaid, a Major Study Shows It Saves Lives." 

The story was also picked up by Time ("Medicaid Expansions Saved Tens of Thousands of Lives, Study Finds"), NPR ("New Studies Show What's at Stake if Medicaid Is Scaled Back"), NBC News ("Proposed Medicaid Cuts Could Lead to Thousands of Deaths, Study Finds"), and several other news outlets. These journalists either didn't read the study, didn't understand it, or willfully misrepresented its findings for partisan reasons. 

In the past, conservative opponents of Medicaid have been equally guilty of misconstruing academic research to support their policy views. That is what happened with the most famous study on the subject, The Oregon Experiment—Effects of Medicaid on Clinical Outcomes, which The New England Journal of Medicine (NEJM) published in 2013. The NBER and NEJM papers offer a similar account of Medicaid's impact on health, but both have been misinterpreted.

Let's start with the widely reported claim that the expansion of Medicaid during the Obama administration "saved about 27,400 lives" between 2010 and 2022. Where did that figure come from? Wyse and Meyer's paper estimated that the Medicaid expansion reduced mortality among eligible adults by between 0.40 percent and 4.52 percent, compared to similar populations in states that refused the Medicaid expansion.

How did Sarin conclude that "over 100,000 and likely closer to 200,000" preventable deaths will be caused by the One Big Beautiful Bill Act and the expiration of the premium tax credits? She cited evidence that those changes would cause 16 million Americans to lose health insurance. But applying the numbers from Wyse and Meyer's paper, which Sarin relied upon, suggests the coverage loss would actually cause between about 3,000 deaths (a 0.4 percent decline in 10-year mortality) and 37,000 deaths (a 4.52 percent decline).
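To see where those bounds come from, here is a minimal back-of-the-envelope sketch. The baseline 10-year mortality rate of the people losing coverage is not stated in the column or the paper; the figure of roughly 5 percent used below is an assumption backed out of the numbers above, included only for illustration.

```python
# Back-of-the-envelope check of the 3,000-to-37,000 range cited above.
# ASSUMPTION: baseline 10-year mortality of about 5 percent for the
# 16 million people losing coverage. This figure is inferred from the
# article's numbers, not taken from Wyse and Meyer's paper.

people_losing_coverage = 16_000_000
baseline_10yr_mortality = 0.05                      # assumed; see note above
baseline_deaths = people_losing_coverage * baseline_10yr_mortality  # ~800,000

low_effect, high_effect = 0.0040, 0.0452            # paper's mortality-reduction range

print(f"Low estimate:  {baseline_deaths * low_effect:,.0f} deaths")   # ~3,200
print(f"High estimate: {baseline_deaths * high_effect:,.0f} deaths")  # ~36,000
```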

The University of Chicago press release and the journalists who picked it up at least multiplied correctly: 27,400 is indeed the midpoint of the Wyse and Meyer estimate of lives saved by the Affordable Care Act's Medicaid expansion. But this is misleading for a more fundamental reason. The sum of statistical lives saved vastly exceeds the number of actual lives.

Think of all the things that have saved your life. Every breath you take, every heartbeat, every car and lightning bolt that didn't hit you. Yet, you're only alive once. Even if we restrict ourselves to the effects of government programs, the total statistical lives saved by all programs is far greater than the population.

So why isn't the mortality rate negative? With bodies coming out of graveyards like Night of the Living Dead? Because these same government programs also take many statistical lives. Wyse and Meyer show only one side of the ledger—the reduction in mortality among people who gain Medicaid eligibility. On the other side are the statistical lives lost among the people the money is taken from and in the programs that are cut.

"Cowards die many times before their deaths," said Shakespeare's Julius Caesar. "The valiant never taste of death but once." But coward or valiant, you will die many statistical times before your death.

Statistical lives saved or lost are a debased currency, because each actual life gets counted multiple times. And citing only the good side of the ledger makes the trade-off impossible to evaluate.

When New York Times readers are told that "the expansion of Medicaid has saved more than 27,000 lives since 2010," they are misled into imagining 27,000 people who would be dead if it weren't for the Medicaid expansion—27,000 actual lives saved. Misinterpreting statistical lives saved in this manner is a common partisan tactic that turns rational policy debates into moral finger-pointing.

Then there's the crucial issue of uncertainty. The 95 percent "confidence interval" reported by Wyse and Meyer ranged from 4,500 to 50,000 statistical lives saved. That 95 percent is a pro forma calculation required by journal editors, with little relation to the actual uncertainty. It assumes that all the approximations and models the authors used were errorless and that there were no data errors.

For example, the authors lost track of over 400,000 people in their sample, 14 percent of the total, and had to guess whether they lived or died. The uncertainty from that guess is not reflected in the width of the 95 percent interval. They assumed that no one moved to another state or moved out of the income range for expanded Medicaid eligibility. They had to estimate from historical trends what the mortality rate would have been without Medicaid expansion.

None of this is meant as criticism. This type of economic analysis is challenging, and approximations and models are often necessary. But the point is that you cannot take the pro forma confidence interval at face value. Far more than 5 percent of papers claiming 95 percent confidence turn out to be false. It's plausible that Medicaid expansion saved no statistical lives—even looking only at the positive side of the ledger—and it's plausible it saved over 50,000 statistical lives.
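For a sense of scale, here is a quick check using only the figures already quoted. Note that it takes the reported interval at face value, which, as argued above, understates the real uncertainty.

```python
# How the publicized point estimate relates to the reported interval.
ci_low, ci_high = 4_500, 50_000                 # reported 95 percent interval

print(f"{(ci_low + ci_high) / 2:,.0f}")         # 27,250 -- roughly the publicized 27,400
print(f"{ci_high / ci_low:.0f}x")               # ~11x spread from the low end to the high end
```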

Now turn to the negative side of the ledger. After the Medicaid expansion, total expenditures increased by more than $1 trillion. That spending also costs statistical lives, because the same money could have been allocated to other potentially lifesaving programs, such as vaccinations, suicide prevention, mental health services, drug treatment facilities, screening for cardiovascular risk factors, and replacing old cars with newer, less polluting, and safer models.

Alternatively, the money could have remained in taxpayers' bank accounts, which also could promote good health. Mortality declines with income. Even if the Medicaid expansion were a cost-effective way to reduce mortality, you would still have to consider the other side of the ledger.

Medicaid expansion may not have been a cost-effective way to reduce mortality, as a large share of health care spending is not targeted toward saving lives. Think of all your medical bills, including the ones insurance paid, and ask how many were for matters of life or death. Most likely, you purchased health care for symptom relief, a speedier recovery, an improved quality of life, and preventative care.

The lifesaving medical measures with the biggest impact, such as vaccinations and antibiotics, are relatively cheap. The Medicaid expansion may have relieved financial stress and made the program's beneficiaries more physically comfortable, which are better criteria for evaluating its impact.

Now consider the 2013 NEJM study trumpeted by conservatives, which examined various health measures. It found that Medicaid enrollment resulted in large and statistically significant improvements in patients' subjective estimates of their health and quality of life, as well as significant reductions in their financial stress. But it did not find a statistically significant impact on mortality.

One typical health finding concerned the impact of Medicaid access on average Framingham Risk Scores—which predict the risk of strokes, heart attacks, and other cardiovascular events—over 10 years. The Oregon study was unable to determine whether Medicaid coverage led to a moderate increase or a moderate decrease in the score. Conservative commentator Avik Roy, writing about the study in Forbes, opined that the Oregon study "showed no difference in health outcomes" at the one-year mark. That is using absence of evidence as evidence of absence. The Oregon study left plenty of room for Medicaid coverage to have caused significant improvements in health, including a reduction in Framingham Risk Scores as large as 19 percent. It is true that it was also possible that Medicaid coverage did no good or even harmed these health measures.

The Oregon study did not find a statistically significant cardiovascular benefit from Medicaid access. But that does not necessarily mean there was no benefit; it just means this study failed to find one. The basic message of the NEJM study is the same as the upshot of the NBER paper: We don't know much about the health benefits of getting Medicaid coverage, other than that they aren't large enough to show up in overall mortality statistics without painstaking analysis of gigantic samples that depends on approximations and assumptions—and sometimes not even then.

If Medicaid expansion had saved 100,000 lives over 10 years, that would have been obvious without careful statistical work. It would also mean a cost of over $10 million per statistical life saved, when there are far cheaper ways to save statistical lives, and when extracting $10 million from taxpayers or other programs could easily cost more than one statistical life.
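The $10 million figure follows directly from the numbers already cited: the expansion's more than $1 trillion in added spending divided by the claimed 100,000 lives. A minimal check:

```python
# Cost per statistical life saved, if more than $1 trillion in spending had
# saved 100,000 lives over the decade, as the claim being debunked would imply.
total_spending = 1_000_000_000_000      # dollars, "more than $1 trillion"
lives_saved = 100_000

print(f"${total_spending / lives_saved:,.0f} per statistical life")   # $10,000,000
```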

Both studies were well-designed, with clear descriptions of the data and methodology, and provided valuable insights. In both cases, the researchers preregistered their hypotheses, an essential procedure to make statistical results credible. Both sets of authors devoted more energy to trying to falsify their conclusions than to building them up. 

The two studies are more valuable in combination than individually. The NEJM study had the advantage of random assignment and detailed individual data. The NBER paper had a much larger sample size and time interval. Both found significant benefits to Medicaid recipients, although they did not establish that these benefits were any greater than could have been obtained by simply giving each recipient several thousand dollars per year. Neither study convincingly answered whether Medicaid improved health or saved statistical lives.

Both studies examined a range of issues and provided important insights to people interested in health care policy. They just didn't lead to a simple conclusion like "Medicaid is worthless" or "Medicaid is a cost-effective way to save real lives." Unfortunately, their nuanced findings were misrepresented by partisan journalists looking to score cheap political points.