I think most coverage of the Oregon Medicaid Study [gated] has been bad. Very bad. I wanted to flag one way that it has been especially bad.
We don’t do very much U.S. domestic politics on the Smoke-Filled Room, but I think the broader methodological issues are worth highlighting. So, for those who don’t obsessively follow wonkish U.S. policy debates, a bit of background. When Oregon expanded Medicaid coverage a few years back, it did so via a lottery. That allowed researchers to compare outcomes between those who received Medicaid and those who did not. And they found no statistically significant improvement on several metrics of physical well-being (cholesterol checks, blood pressure checks, etc.). They did find statistically significant improvements in terms of mental health (principally depression) and financial health (apparently from not having catastrophic health expenditures). In general, physical metrics moved in the expected direction (lower blood pressure, say), just not far enough in that direction to be statistically distinguishable from zero. This could either be true evidence of a null relationship between insurance and health outcomes, or it could be a sign that the study was too small to capture changes in those outcomes. If you look at the study, the fact that something like 22 out of 25 metrics move in the expected (healthier) direction, even if they don’t move far that way, suggests to me that Medicaid does improve health outcomes. But that’s a separate issue.
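The “22 out of 25” intuition can be made concrete with a quick sign test. This is a sketch using my own rough count, not anything computed in the study itself, and it treats the metrics as independent coin flips, which they surely aren’t (blood pressure and cholesterol are correlated); still, it shows why so many same-direction moves are suggestive:

```python
from math import comb

n, k = 25, 22  # metrics measured; rough count moving in the "healthier" direction

# Under a null of no effect, each metric is equally likely to move either way.
# One-sided sign test: P(at least 22 of 25 fair coins land the same, chosen way).
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"one-sided sign-test p-value: {p_value:.6f}")  # ~8e-5
```

A null effect producing that lopsided a pattern by chance would be a roughly one-in-ten-thousand event, even when no single metric clears significance on its own.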
Two smart conservative writers are Ross Douthat of the New York Times and Megan McArdle of the Atlantic. Both are forced to acknowledge that the Oregon Medicaid Study shows Medicaid coverage generates strong financial and mental health benefits for Medicaid recipients, but argue rhetorically: Wasn’t this about saving lives? Douthat asks, “The health care law was sold, in part, with the promise (made by judicious wonks as well as overreaching politicians) that it would save tens of thousands of American lives each year.” McArdle, drawing on the same rhetorical playbook, stresses, “[W]e heard that 150,000 uninsured people had died between 2000 and 2006.” See, classic liberal over-promising and under-delivering. You told us poor people would live, not that they would be less depressed and more financially secure.
The important thing is that the Oregon Medicaid Study was a “post-treatment” survey. I’m using “treatment” in the jargon-y way: I just mean assignment via lottery to either the “treatment condition” of receiving Medicaid for two years or the “control condition” of continuing insurance-free for two years. It’s right there on the first page of the article: “Approximately 2 years after the lottery, we obtained data from 6387 adults who were randomly selected to be able to apply for Medicaid coverage and 5842 adults who were not selected.” To be even more precise, and requiring Douthat and McArdle to turn to the second page of the article, the researchers collected this data via in-person interviews.
Let’s just stop right here. Dead people tell no tales. Hence they were not included in the study. The study surveyed only those people who lived to talk at the end. Medicaid could have saved 1000 lives in Oregon and this research design would not have noticed. Or Medicaid could have killed 1000 people. Same thing. This is what we like to call survivorship bias. It’s so simple, I don’t see the need to belabor the point.
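If you want to see survivorship bias in miniature, here’s a toy simulation. Every number in it is made up for illustration (the mortality rates, the lives-saved figure); the point is only that a survey administered to survivors contains zero information about deaths, no matter how large the mortality effect is:

```python
import random

random.seed(0)

N = 6000                 # people per arm, roughly the study's scale
P_DIE_CONTROL = 0.0160   # hypothetical two-year death rate, uninsured
P_DIE_TREATED = 0.0153   # hypothetical rate with Medicaid (~4 lives saved per 6000)

def run_arm(p_die):
    """Simulate one arm; return (deaths, survivors available to interview)."""
    deaths = sum(random.random() < p_die for _ in range(N))
    return deaths, N - deaths

deaths_c, interviewed_c = run_arm(P_DIE_CONTROL)
deaths_t, interviewed_t = run_arm(P_DIE_TREATED)

# The post-treatment, in-person survey only ever sees the survivors.
print(f"control: {deaths_c} deaths, {interviewed_c} interviewed")
print(f"treated: {deaths_t} deaths, {interviewed_t} interviewed")
# Whatever mortality gap exists lives entirely in the people who are
# absent from both interview samples -- the design cannot observe it.
```

You could crank `P_DIE_CONTROL` up tenfold and the interview data would still contain no mortality outcome at all, just slightly fewer respondents.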
But let’s imagine the study had been designed differently. At this sample size, would we have noticed? A little quick math: about 20% of Americans are uninsured, studies suggest being uninsured is associated with about 20,000 additional deaths a year nationally (U.S. population ~300 million, so roughly 60 million uninsured), and the control group was about 6,000 people. The expected number of “excess” uninsured deaths in this study is the per-person rate — 20,000 / 60 million, or about 1 in 3,000 per year — times the total number in the control group. That gets us about 2 excess deaths per year, or 4 over the two-year study period. I’d be very surprised if this study could discern, in a statistically significant way, whether Medicaid saved lives. (The death rate in the United States is 799.5 per 100,000 per year, meaning that out of our 6,000 folks we’d expect about 96 deaths over these two years regardless.) Even without survivorship bias.
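That power argument is easy to check numerically. A sketch using the post’s own back-of-envelope figures (none of this is from the study itself): roughly 96 baseline deaths expected per 6,000-person arm over two years, versus ~4 excess deaths from being uninsured, compared with a simple two-proportion z-test:

```python
from math import sqrt

n = 6000                   # people per arm
p_base = 96 / n            # two-year death rate, ~96 deaths per 6000 people
excess = 4 / n             # ~4 excess deaths over two years from lack of insurance
p_uninsured = p_base + excess

# Two-proportion z-test: size of the expected effect relative to its noise.
se = sqrt(p_base * (1 - p_base) / n + p_uninsured * (1 - p_uninsured) / n)
z = excess / se
print(f"expected z ~ {z:.2f}")  # ~0.3, far below the ~1.96 needed for p < 0.05
```

An expected z-statistic around 0.3 means the true effect, even if real, sits deep inside the noise; you’d need arms of hundreds of thousands of people before 4-in-6,000 excess mortality became reliably detectable.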
My point: the Oregon findings in no way rule out the possibility that 20,000 Americans a year die from lack of insurance, and that Medicaid might save them. That is true because of survivorship bias alone, though the sample-size problem makes it doubly true.