Survivorship Bias, Sample Sizes, and the Oregon Medicaid Study

I think most coverage of the Oregon Medicaid Study [gated] has been bad. Very bad. I wanted to flag one way that it has been especially bad.

We don’t do very much U.S. domestic politics on the Smoke-Filled Room, but I think the broader methodological issues here are worth highlighting. So, for those who don’t obsessively follow wonkish U.S. policy debates, a bit of background. When Oregon expanded Medicaid coverage a few years back, it did so via a lottery. That allowed researchers to compare outcomes between those who received Medicaid and those who did not. And they found no statistically significant improvement on several metrics of physical well-being (cholesterol, blood pressure, etc.). They did find statistically significant improvements in mental health (principally depression) and financial health (apparently from avoiding catastrophic health expenditures). In general, the physical metrics moved in the expected direction (e.g., lower blood pressure), just not far enough to be statistically distinguishable from zero. This could either be true evidence of a null relationship between insurance and health outcomes, or it could be a sign that the study was too small to capture changes in those outcomes. If you look at the study, the fact that something like 22 out of 25 metrics move in the expected (healthier) direction, even if they don’t move far that way, suggests to me that Medicaid does improve health outcomes (see the quick sign test below). But that’s a separate issue.
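
To make that intuition concrete, here is a minimal sign-test sketch. The 22-of-25 count is my rough tally from the paper, not an official figure, and the test treats the metrics as independent coin flips, which they are not, so take it as suggestive only:

```python
from math import comb

# Sign test: if Medicaid truly had zero effect, each metric would be
# equally likely to move in the "healthier" or "unhealthier" direction.
n, k = 25, 22  # rough tally: 22 of 25 metrics moved the healthier way

# Probability of seeing 22 or more of 25 move that way by chance alone
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"P(>= {k} of {n} by chance alone) = {p_value:.6f}")  # ~0.000078

# Caveat: the metrics are correlated, not independent coin flips,
# so this is suggestive rather than rigorous.
```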

Ross Douthat of the New York Times and Megan McArdle of the Atlantic are two smart conservative writers. Both are forced to acknowledge that the Oregon Medicaid Study shows Medicaid coverage generates strong financial and mental-health benefits for recipients, but both ask, rhetorically: wasn’t this about saving lives? Douthat writes, “The health care law was sold, in part, with the promise (made by judicious wonks as well as overreaching politicians) that it would save tens of thousands of American lives each year.” McArdle, drawing on the same rhetorical playbook, stresses, “[W]e heard that 150,000 uninsured people had died between 2000 and 2006.” See, classic liberal over-promising and under-delivering: you told us poor people would live, not that they would be less depressed and more financially secure.

The important thing is that the Oregon Medicaid Study was a “post-treatment” survey. I’m using “treatment” in the jargon-y sense: assignment via lottery to either the “treatment condition” of receiving Medicaid for two years or the “control condition” of remaining uninsured for two years. It’s right there on the first page of the article: “Approximately 2 years after the lottery, we obtained data from 6387 adults who were randomly selected to be able to apply for Medicaid coverage and 5842 adults who were not selected.” To be even more precise, and requiring Douthat and McArdle to turn to the second page of the article, they collected this data via in-person interviews.

Let’s just stop right here. Dead people tell no tales, so they were not included in the study. The study surveyed only those people who lived to talk at the end. Medicaid could have saved 1,000 lives in Oregon and this research design would not have noticed. Or Medicaid could have killed 1,000 people. Same thing. This is what we like to call survivorship bias. It’s so simple, I don’t see the need to belabor the point (though a toy simulation follows below).
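
To see the mechanics, here is a toy simulation. The death rates are invented and wildly exaggerated purely for illustration, but they show how a survivors-only survey can miss a large mortality effect entirely:

```python
import random

random.seed(0)

N = 6000                 # people per arm, roughly matching the study
DEATH_CONTROL = 0.10     # made-up 2-year death rate without insurance
DEATH_TREATED = 0.05     # made-up rate with Medicaid: half as many die

def simulate_arm(death_rate):
    """Return the measured outcome for survivors only --
    the dead are never interviewed."""
    outcomes = []
    for _ in range(N):
        if random.random() >= death_rate:  # survived to the interview
            # Among survivors, the measured metric (say, blood pressure)
            # is identical in both arms by construction.
            outcomes.append(random.gauss(120, 15))
    return outcomes

control = simulate_arm(DEATH_CONTROL)
treated = simulate_arm(DEATH_TREATED)

# The survey sees no difference on the metric it measures...
print(f"control survivors: {len(control)}, mean BP {sum(control)/len(control):.1f}")
print(f"treated survivors: {len(treated)}, mean BP {sum(treated)/len(treated):.1f}")
# ...even though the treatment saved roughly N * 0.05 = 300 lives.
```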

But let’s imagine the study had been designed differently. At this level of power, would we have noticed? A little quick math: about 20% of Americans are uninsured, studies suggest being uninsured is associated with about 20,000 additional deaths a year nationally (U.S. population ~300 million), and the control group was about 6,000 people. The expected number of “excess” deaths in this study is the rate of excess deaths caused by lack of insurance, per uninsured person per year, times the number of people in the control group. That gets us about 2 excess deaths per year, or 4 excess deaths over the two years under study. I’d be very surprised if this study could discern, in a statistically significant way, whether Medicaid saved lives, even without survivorship bias. (The death rate in the United States is 799.5 per 100,000, meaning that out of our 6,000 folks, we’d expect about 96 deaths in these two years.)
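
Here is that arithmetic spelled out, plus a rough power calculation (two-proportion z-test, normal approximation) under the same assumptions; none of this comes from the study itself, only from the figures quoted above:

```python
from math import sqrt, erf

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

UNINSURED = 0.20 * 300e6     # ~60 million uninsured Americans
EXCESS_PER_YEAR = 20_000     # estimated national excess deaths per year
N = 6000                     # people in the control group
YEARS = 2

excess_rate = EXCESS_PER_YEAR / UNINSURED        # per uninsured person per year
expected_excess = excess_rate * N * YEARS
print(f"expected excess deaths in the study: {expected_excess:.1f}")  # ~4

baseline = 799.5 / 100_000 * YEARS               # 2-year death probability
print(f"expected baseline deaths among {N}: {baseline * N:.0f}")      # ~96

# Power of a two-sided two-proportion z-test (alpha = 0.05) to detect the
# treated arm having ~4 fewer deaths out of ~96.
p0 = baseline
p1 = baseline - excess_rate * YEARS
se = sqrt(p0 * (1 - p0) / N + p1 * (1 - p1) / N)
z = (p0 - p1) / se
power = 1 - norm_cdf(1.96 - z) + norm_cdf(-1.96 - z)
print(f"power: {power:.1%}")  # ~6%: barely above the 5% false-positive rate
```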

My point: the Oregon findings in no way rule out the possibility that 20,000 Americans a year die from lack of insurance, and that Medicaid might save them. This is true solely because of survivorship bias, though the sample-size problem makes it doubly true.

4 thoughts on “Survivorship Bias, Sample Sizes, and the Oregon Medicaid Study”

  1. Why should we assume deaths would be significant if the variables measured, such as hypertension, were not? Should we assume that the logic of people dying without health care indicates a radical departure from non-fatal injuries, or are you claiming that the sample is just too small to make any good claims about outcomes?

  2. No, because if someone died from end-stage renal disease secondary to hypertension, for example, their (hypertension) metrics would not be reflected in the data, as health measures of dead Medicaid recipients were not included in the outcome analysis. Serious flaw.

  3. Hi: Oregon is actually a two-year study, which both Ross and I have followed over the years, as it’s obviously a big deal for health reform. Had you been following it, you would have known that in the first phase (released in 2011), they did look at mortality, not from interviews, but by matching hospital discharge and mortality data. The mortality data showed no significant differences. So while the survivorship-bias argument is indeed simple, it is not correct.

    • Thanks for flagging this point. I read your discussion of the 2011 survey but hadn’t gone back to look at those articles in detail, and that certainly was my mistake, though I don’t think it changes my conclusion. The NEJM variant of the study is very brief and not so helpful, but the QJE variant and NBER working paper are more detailed. Here is the link to the NBER paper, since I’m not sure the QJE is accessible to everyone (http://www.nber.org/papers/w17190.pdf). It states, “Mortality – although important and objectively measured – is very low in our population; only about 0.8 percent of the controls die over the 16 month study period. Not surprisingly, Panel A shows that we do not detect any statistically significant improvement in survival probability.” Unless I’m reading the coefficients incorrectly, the size of the substantive effect of winning the lottery (0.00032 for the reduced-form OLS and 0.001 for the two-stage least squares) is right about what you would expect given 20,000 additional deaths per year nationally for the uninsured. So, while the 2011 study certainly invalidates my jest that there could be widespread disparities in deaths, it does not invalidate Emily’s more subtle point about survivorship bias above, and obviously the point about study power remains valid. It is simply not fair to draw from either study the conclusion that Medicaid/health-care expansion would not save 20,000 people a year. The only way this data would have a statistically significant coefficient for mortality is if the uninsured “excess” deaths were much larger than 20,000 annually. If you had to create an estimate of how many lives would be saved nationally, it would probably be about 20,000, though the confidence intervals would be quite wide.
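
For what it’s worth, here is that back-of-the-envelope comparison spelled out. The coefficients are as quoted from the NBER paper; the 20%/20,000 figures are the same rough assumptions used in the post:

```python
# Back-of-the-envelope: is the NBER paper's mortality coefficient in the
# same ballpark as "20,000 excess deaths a year" would predict?

UNINSURED = 0.20 * 300e6    # ~60 million uninsured (rough, as in the post)
EXCESS_PER_YEAR = 20_000    # national excess deaths per year, if true
STUDY_MONTHS = 16           # the paper's study period

# Predicted mortality reduction for someone who actually gains coverage
predicted = EXCESS_PER_YEAR / UNINSURED * (STUDY_MONTHS / 12)
print(f"predicted effect of coverage: {predicted:.5f}")  # ~0.00044

# Reported coefficients, as quoted from the NBER working paper
reduced_form_ols = 0.00032  # effect of winning the lottery (intent to treat)
two_stage_ls = 0.001        # effect of gaining Medicaid (2SLS)

# The 2SLS estimate and the prediction are the same order of magnitude --
# consistent with 20,000 excess deaths a year, but far too imprecisely
# estimated to confirm or rule that figure out.
print(f"reduced form: {reduced_form_ols}, 2SLS: {two_stage_ls}")
```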
