From a Randomized Controlled Trial on Hospital Readmissions
William Hersh, MD, Professor and Chair, OHSU
Blog: Informatics Professor
Twitter: @williamhersh
Our news and science feeds are so filled these days with everything artificial intelligence (AI), from large language models to their impacts on society, that we may miss important studies on other informatics topics, some of which may have lessons for AI. This is the case for a recent randomized controlled trial (RCT) on a hospital readmissions initiative (Donzé, 2023) and an accompanying editorial putting it in larger perspective.(Wachter, 2023)
Some may remember about a decade ago, when “data analytics” was all the rage, and health insurance payors were noting with alarm the growing rate of hospital readmissions. The cost and frequency of readmissions were highlighted in a study finding that as many as 20% of hospital admissions were readmissions within a month of a previous discharge.(Jencks, 2009) Before this, several hospital discharge planning programs had been studied and found to reduce readmissions.(Naylor, 1994; Coleman, 2006; Jack, 2009) This situation led the US Centers for Medicare and Medicaid Services (CMS) to implement the Hospital Readmissions Reduction Program (HRRP) as part of the Affordable Care Act. Starting in 2012, the HRRP required public reporting of readmission rates for three common diseases: myocardial infarction, heart failure, and pneumonia, and penalized hospitals with unusually high readmission rates.
Around the time that the HRRP was implemented, the Health Information Technology for Economic and Clinical Health (HITECH) Act was incentivizing the adoption of the electronic health record (EHR). This provided unprecedented new sources of data, and every predictive analyst set out to build models that used EHR data to identify the patients most likely to be readmitted, with the goal of following them more closely and averting readmissions. Numerous studies were published using models based on EHR data to predict patients at risk for readmission.(Amarasingham, 2010; Donzé, 2013; Gildersleeve, 2013; Shadmi, 2015)
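To make concrete what these EHR-based models typically look like, here is a minimal sketch of a readmission risk model: a logistic regression fit to synthetic data. The features (prior admissions, length of stay, comorbidity count) and all of the data are invented for illustration; this is not the model from any of the cited studies.

```python
# Illustrative sketch only: a toy logistic-regression readmission model on
# synthetic data, in the spirit of the EHR-based models cited above. Feature
# names and coefficients are hypothetical, not drawn from the literature.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Hypothetical EHR-derived features: admissions in the prior year,
# length of stay (days), and a comorbidity count.
X = np.column_stack([
    rng.poisson(1.0, n),        # prior admissions
    rng.gamma(2.0, 2.0, n),     # length of stay
    rng.integers(0, 6, n),      # comorbidity count
])

# Synthetic 30-day readmission labels generated with a plausible signal.
logits = -2.5 + 0.6 * X[:, 0] + 0.1 * X[:, 1] + 0.3 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# Discrimination (AUC) is the metric most of these studies report.
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"Held-out AUC: {auc:.2f}")
```

Note what such a model does and does not deliver: a well-discriminating risk score for each patient, but no intervention. That gap is exactly the point of the discussion that follows.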
Despite the plethora of predictive models, few interventions have actually been undertaken that demonstrate improved outcomes of care. One study found that a readmission risk tool intervention reduced the risk of readmission for patients with congestive heart failure but not for those with acute myocardial infarction or pneumonia.(Amarasingham, 2013) Another observational study found that readmissions did decline initially with the implementation of the new rule and for the targeted diseases, but less so for other conditions.(Zuckerman, 2016) Others have noted that the program has had marginal benefit (Ody, 2019) and may have redirected resources that might otherwise have been devoted to other quality improvement efforts.(Cram, 2022)
Earlier this year, an RCT was published that assessed a multimodal care intervention aimed at reducing readmissions (Donzé, 2023). Carried out in four medium-to-large teaching hospitals in Switzerland, the study implemented the best-known predictive model for risk of readmission yet found no benefit for an intervention that included it. As noted in the accompanying editorial, just because we can predict something does not mean we can necessarily do something about it.(Wachter, 2023)
Why is this RCT pertinent to AI? Mainly because just being able to predict diagnoses or outcomes is not enough. I have written about this myself in this blog over the years. Whether we are talking about predictive analytics, next-generation data science, or AI, no matter how sophisticated our models or compelling our predictive abilities, we must demonstrate how these systems impact outcomes, whether improved patient health or healthcare system processes.
How do we demonstrate the value of AI in health and healthcare? First, we must implement these systems in the real world. A great deal is being written about the promise and challenges of implementing AI in clinical settings.(Hightower, 2023) But even implementing AI in the real world is not enough. We must also demonstrate that AI leads to better outcomes, whether improved health or disease treatment for patients or better delivery of healthcare services. One way to think about this is through the continuum of translational research. As with all biomedical advances, we start with the basic science, demonstrating value in the laboratory, which in this case means evaluation on curated data sets. The next step is to implement systems in real-world healthcare or community settings. Clearly these are complex interventions.
Ultimately, however, we must demonstrate experimentally that health or healthcare is improved by the AI intervention. The best experimental evidence comes from controlled experiments, ideally RCTs. And granted, such trials may be more complicated than the classic RCT of comparing a medication versus a placebo. These RCTs may involve complex designs, and results may be difficult to interpret if the trial does not show benefit. But building the evidence base for AI is essential, and studies like this from Donzé et al. demonstrate that even the best predictive models may not translate into better outcomes.
For cited references in this article, see original source. Dr. Hersh is a frequent contributing expert to HealthIT Answers.