William A. Hyman
Professor Emeritus, Biomedical Engineering
Texas A&M University, w-hyman@tamu.edu
Clinical Decision Support (CDS) usually takes the form of software-generated, patient-specific recommendations based on patient attributes in the EMR. A recent study takes a somewhat different approach, using EMR data to identify hospital patients who are at risk of death but offering no treatment suggestions; it simply says the patient has an elevated risk of dying. It is not reported whether any treater who received the alert was surprised by it. In this regard, 32% of the alerts were for patients in the emergency department, where it might be expected that they were at greater risk. Alerts that are not perceived as telling the provider something they didn't already know are exactly the kind that lead to alert fatigue. In addition, there is as yet no demonstration that the alerts in this study did any good.
In a software training phase, in which the software itself was “learning,” only 5% of the patients who would have triggered an alert actually died. This was higher than in the no-alert group, in which less than 1% died. But is that 5% figure, which is in effect the alert's positive predictive value, enough to warrant generating an alert? Note also that software based on learning generally lacks underlying science: it can discover apparent correlations, but it does not develop an understanding of why those correlations exist.
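To put those percentages in perspective, the sketch below works through the arithmetic of positive predictive value and relative risk. The cohort size and alert rate are purely illustrative assumptions, not figures from the study; only the 5% and the roughly 1% come from the text above.

```python
# Illustrative arithmetic only: cohort size and alert rate are assumed,
# the death rates are the percentages quoted in the article.
cohort = 10_000          # hypothetical number of hospitalized patients
alert_rate = 0.10        # hypothetical fraction of patients who trigger an alert

alerted = int(cohort * alert_rate)
not_alerted = cohort - alerted

deaths_alerted = int(alerted * 0.05)          # "5% of patients who would have triggered an alert actually died"
deaths_not_alerted = int(not_alerted * 0.01)  # "less than 1%" in the no-alert group, taken here as 1%

ppv = deaths_alerted / alerted                # positive predictive value of the alert for death
relative_risk = ppv / (deaths_not_alerted / not_alerted)

print(f"Alerted: {alerted} patients, {deaths_alerted} deaths (PPV = {ppv:.0%})")
print(f"Not alerted: {not_alerted} patients, {deaths_not_alerted} deaths")
print(f"Relative risk in the alerted group: {relative_risk:.1f}x")
```

On these assumed numbers the alert identifies a group with roughly five times the baseline risk, yet 95 of every 100 alerts fire for a patient who will not die; both facts matter when deciding whether the alert earns its interruption.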
A more general question is what the treater is supposed to do with such an alert when it offers no suggestions or recommendations. Presumably they are supposed to try extra hard to prove the alert wrong, which assumes they would not otherwise try that hard. Alternatively, they might give up on the patient. If the former is correct, and they act more vigorously and are successful, then patients who were predicted to die do not. What does that do to belief in the alert's validity? Maybe its real value is self-congratulation, with the game being to prove the alert wrong. This might suggest that many more patients should be predicted to die, to ensure that the treaters don't let it happen.
Barring outright alert fraud, it also seems desirable for such software to continue to “learn” in the interest of becoming more accurate. In that case the software would learn that its prediction of death under a patient's particular circumstances turned out to be incorrect, i.e., a patient the software said would die did not. If the software did indeed learn, it would stop predicting that such patients would die and stop issuing alerts. If the alerts actually do help patients survive, then stopping them would return those patients to the un-alerted, likely-to-die pool. Further learning would later discover that these patients were dying without alerts, return them to the alert group, and start the alerting again. This looks like an endless cycle. It also suggests that, as a patient, where in the cycle you enter the system can affect whether or not you get the extra effort needed to keep you from dying. Ideally you would want to become a patient, if you had to be one, just after your group re-entered the alert pool, since this would produce the maximum response to the alert. Of course that would be a matter of chance, especially if you were entering the hospital through the emergency department.
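The cycle can be made concrete with a toy simulation. The sketch below is not the study's model; it simply assumes that alerted patients receive extra effort that lowers their death rate, and that the model is periodically retrained on observed outcomes alone, so the alert's own success erases the signal that produced it. All numbers are made up for illustration.

```python
import random

random.seed(0)

TRUE_RISK = 0.05         # assumed death risk of the subgroup with no alert-driven effort
TREATED_RISK = 0.01      # assumed death risk when an alert triggers extra effort
ALERT_THRESHOLD = 0.02   # the retrained model alerts only if recently observed risk exceeds this
PATIENTS_PER_ROUND = 2000

alerting = True  # the model initially flags this subgroup
for round_number in range(8):
    risk = TREATED_RISK if alerting else TRUE_RISK
    deaths = sum(random.random() < risk for _ in range(PATIENTS_PER_ROUND))
    observed_rate = deaths / PATIENTS_PER_ROUND
    print(f"round {round_number}: alerting={alerting}, observed death rate={observed_rate:.3f}")
    # "Learning" step: the next model alerts only if the outcomes it just saw look risky,
    # with no accounting for whether an intervention suppressed those outcomes.
    alerting = observed_rate > ALERT_THRESHOLD
```

Run as written, the alert switches on and off round after round, which is exactly the instability described above. Breaking the cycle requires the learning step to reason about the counterfactual, what would have happened without the alert, rather than only the observed outcomes.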
To actually be helpful, automated alerts have to meet several criteria. One is that the alert be accurate across a broad spectrum of patients and/or recognize patients who are outside its domain. Second, the alert must tell the provider something they didn't already know. Third, the alert must drive specific behaviors that benefit the patient, typically in the form of recommended or suggested actions. At the risk of trivializing the issue, consider a system that alerts clinical staff that women in labor will soon have a baby. Such a system might exhibit close to 100% accuracy, but is it helpful? Does it change anything? Or is it just annoying? The utility of alerts, and how the provider is supposed to use them, is something that needs thought before the alert system is built.
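The first criterion, recognizing patients who fall outside the model's domain, is at least partly implementable. The sketch below shows one minimal approach, a range check against what was seen in training; the feature names and bounds are hypothetical and not drawn from any particular product or study.

```python
# A minimal out-of-domain guard: decline to alert on patients whose features fall
# outside the ranges seen during training. Feature names and bounds are hypothetical.
TRAINING_RANGES = {
    "age_years": (18, 90),
    "heart_rate_bpm": (40, 160),
    "systolic_bp_mmhg": (70, 220),
}

def in_training_domain(patient: dict) -> bool:
    """Return True only if every expected feature lies within its training range."""
    for feature, (low, high) in TRAINING_RANGES.items():
        value = patient.get(feature)
        if value is None or not (low <= value <= high):
            return False
    return True

patient = {"age_years": 97, "heart_rate_bpm": 88, "systolic_bp_mmhg": 135}
if in_training_domain(patient):
    print("Within the model's training domain; a risk score may be meaningful.")
else:
    print("Outside the training domain; suppress the alert or flag the uncertainty.")
```

Range checks are crude compared with formal out-of-distribution detection, but even this much would keep a system from issuing confident predictions about patients it has effectively never seen.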