Hospital Inpatient Care: Taking a Closer Look at the PLOS Study Results

Last Wednesday, as speculation swirled about how Republicans might replace the Affordable Care Act and as the Fed raised interest rates, starting its clock for planned increases over the next two years, this story grabbed front-page headlines in The New York Times: “Go to the Wrong Hospital and You’re 3 Times More Likely to Die” by noted health journalist Reed Abelson.

Her story focused on the results of a study that compared hospital inpatient performance on 24 factors for 22 million discharges. She wrote: “Not all hospitals are created equal, and the differences in quality can be a matter of life or death. In the first comprehensive study comparing how well individual hospitals treated a variety of medical conditions, researchers found that patients at the worst American hospitals were three times more likely to die and 13 times more likely to have medical complications than if they visited one of the best hospitals.” Wow. Say it ain’t so!

The methodology of the Boston Consulting Group-led study is quite rigorous: ‘We used information from 16 independent data sources, including 22 million all-payer inpatient admissions from the Healthcare Cost and Utilization Project (which covers regions where 50% of the U.S. population lives) to analyze 24 inpatient mortality, inpatient safety, and prevention outcomes. We compared outcome variation at state, hospital referral region, hospital service area, county, and hospital levels. Risk-adjusted outcomes were calculated after adjusting for population factors, co-morbidities, and health system factors.’

The research team found ‘The amount of variability in health outcomes in the U.S. is large even after accounting for differences in population, co-morbidities, and health system factors…Even after risk-adjustment, there exists large geographical variation in outcomes. The variation in healthcare outcomes exceeds the well-publicized variation in US healthcare costs. On average, we observed a 2.1-fold difference in risk-adjusted mortality outcomes between top- and bottom-decile hospitals. For example, we observed a 2.3-fold difference for risk-adjusted acute myocardial infarction inpatient mortality. On average a 10.2-fold difference in risk-adjusted patient safety outcomes exists between top and bottom-decile hospitals, including an 18.3-fold difference for risk-adjusted Central Venous Catheter Bloodstream Infection rates. A 3.0-fold difference in prevention outcomes exists between top- and bottom-decile counties on average; including a 2.2-fold difference for risk-adjusted congestive heart failure admission rates. The population, co-morbidity, and health system factors accounted for a range of R2 between 18–64% of variability in mortality outcomes, 3–39% of variability in patient safety outcomes, and 22–70% of variability in prevention outcomes.’

In their concluding comments, the authors wrote: ‘These findings suggest that: 1) additional examination of regional and local variation in risk-adjusted outcomes should be a priority; 2) assumptions of uniform hospital quality that underpin rationale for policy choices (such as narrow insurance networks or antitrust enforcement) should be challenged; and 3) there exists substantial opportunity for outcomes improvement in the US healthcare system.’

While these findings are attention-grabbing, and the study is highly credible, it’s unlikely to change how individuals select hospitals or to facilitate meaningful comparisons, for several reasons:

Study limitations: The analysis was limited to inpatient programs and appropriately included encounter data from Medicare as well as commercial plans. Adjustments for risk factors, patient severity, and regional variation were factored in, and conclusions were reached using regression analytics, a dependable statistical framework for quantifying associations. Not surprisingly, the researchers found huge variability within the same hospital when comparing the quality of inpatient care across its array of programs. But that’s only part of the story: most hospitals offer a wide range of outpatient services constituting 20 times more encounters than their inpatient programs and more than 40% of revenues. So beyond acknowledging the obvious, that every hospital offers some inpatient programs that are better than others, there’s more to hospitals than inpatient programs.

Measure bias: The BCG study team chose 24 measures for its comparison. Based on these, their findings and conclusions are defensible, and contrasting the results of their top decile against their bottom decile is attention-grabbing. But what if other measures were used? And what if another analytic strategy was used instead of comparing decile to decile? Leapfrog uses its own framework covering 15 categories with more than 70 measures, Truven Health Analytics uses 15 in its Top 100 assessment, and CMS employs 64 measures in its Star Ratings. And report cards used by others, like Healthgrades and US News and World Report, use different measures and weighting formulae and get wildly different results, because each measures different things in different ways. Even the prestigious Baldrige recognition must be seen in context: in awarding Memorial Hermann Sugar Land Hospital of Sugar Land, Texas its 2016 recognition, it applied its own distinctive methodology. Better than others, or just different? It’s not surprising, then, that there’s no shortage of publicly accessible information about hospital quality, nor of inconsistencies in its findings. This study accounts for two of every three inpatient discharges in hospitals, so it’s impressive. But it’s not likely to be THE breakout method that becomes the gold standard for reporting on hospital quality. There are too many methods, and each is unique.

Usefulness: Report cards about hospital quality have been around for 35 years, since they were first introduced by the Health Care Financing Administration (HCFA, now known as the Centers for Medicare and Medicaid Services). Each year, the number and types of report cards increase. This study by BCG advances the discussion a bit further. But the authors agreed to withhold disclosure of data about particular hospitals, so the study essentially advances academic interest in associating high-quality inpatient hospital care with patient volume but does little to help consumers know anything about the hospitals in their communities. This study points to the obvious: lower-volume inpatient programs carry a higher risk of quality and safety lapses compared with higher-volume programs. But it’s unlikely to change how consumers, physicians, insurers, and employers assess their hospitals.

Publisher consideration: Another irony of this study was its publisher, PLOS, a not-for-profit open-access publishing platform used by academics and researchers to advance their thinking sans the more rigorous standards of traditional health services research channels like Health Affairs and others. PLOS espouses: “Openness Inspires Innovation. It's the way we think science and publishing should be. PLOS was founded in 2001 as a nonprofit Open Access publisher, innovator and advocacy organization with a mission to accelerate progress in science and medicine by leading a transformation in research communication.” The fact that the study appears in this medium rather than others adds to its intrigue factor.

So, this study will, no doubt, escalate interest in measuring and comparing the quality of hospital care. It’s a frequent theme: just last July, the release of CMS Star Ratings for hospitals sparked controversy because its methodology failed to adequately address social determinants of health that impact patient outcomes. That’s the new world order of transparency in which hospitals now compete.

The key takeaway from this study is this: each stakeholder in healthcare, especially doctors, hospital executives, and boards, must recognize that there are few secrets about hospital quality. A citizen can go online 24/7 and compare results from Healthgrades, CMS Hospital Compare, Leapfrog, and many others. Outcomes and adherence to evidence-based best practices are measurable and publicly accessible. Ditto the utilization volume that’s associated with better results. Advertising “quality” without the hard evidence upon which the assertion is based seems misleading. And the validity and reliability of the measures used to rate hospitals are becoming more sophisticated and predictive: it’s a new day for hospital transparency.

The PLOS study adds to a timely discussion about transparency for hospitals and the added burden of capturing and reporting data to the myriad media through which reports are available. As Repeal and Replace takes center stage in the political arena, perhaps policy-makers should consider ways to make it easier for individuals to access valid and reliable information about hospital performance and help ease the financial burden for hospitals faced with growing requirements for data collection and reporting.

I hope hospital-specific data from the PLOS study becomes available so I can compare their top-decile vs. bottom-decile analytics for each measure. I’ll know more, but it’s not likely to change how my health insurance defines in- and out-of-network coverage or alter the admitting privileges of the physicians I use.

For consumers like me, until hospital quality is defined systematically and measured consistently, it’s buyer beware! 

Paul

P.S. In next Monday’s Report, we’ll recap the year for healthcare investors and lenders and what to watch in 2017.

Sources:

  • Barry L. Rosenberg, Joshua A. Kellar, Anna Labno, David H. M. Matheson, Michael Ringel, Paige VonAchen, Richard I. Lesser, Yue Li, Justin B. Dimick, Atul A. Gawande, Stefan H. Larsson, Hamilton Moses III. “Quantifying Geographic Variation in Health Care Outcomes in the United States before and after Risk-Adjustment.” PLOS, December 14, 2016. http://dx.doi.org/10.1371/journal.pone.0166762
  • “First Release of the Overall Hospital Quality Star Rating on Hospital Compare.” https://www.cms.gov/Newsroom/MediaReleaseDatabase/Fact-sheets/2016-Fact-sheets-items/2016-07-27.html
  • CMS Hospital Compare https://www.cms.gov/medicare/quality-initiatives-patient-assessment-instruments/hospitalqualityinits/hospitalcompare.html
  • Leonardi MJ, McGory ML, Ko CY. “Publicly available hospital comparison web sites: determination of useful, valid, and appropriate information for comparing surgical quality.” Arch Surg. 2007;142:863-869.
  • CMS Compare websites include: Nursing Home Compare; Physician Compare; Medicare Plan Finder; Dialysis Compare; and Home Health Compare.
  • Wang DE, Tsugawa Y, Figueroa JF, Jha AK. Association Between the Centers for Medicare and Medicaid Services Hospital Star Rating and Patient Outcomes. JAMA Intern Med. 2016;176(6):848-850. doi:10.1001/jamainternmed.2016.0784. http://archinte.jamanetwork.com/article.aspx?articleid=2513630
  • Trzeciak S, Gaughan J, Mazzarelli A. Association Between Medicare Summary Star Ratings and Clinical Outcomes in US Hospitals. Journal of Patient Experience. 2016;3(1). doi:10.1177/2374373516636681. http://jpx.sagepub.com/content/3/1/2374373516636681.abstract