William A. Hyman
Professor Emeritus, Biomedical Engineering
Texas A&M University, w-hyman@tamu.edu
AHRQ has released a Funding Opportunity Announcement (FOA) for research addressing diagnostic errors. Diagnostic error is noted to be a complex and data-poor arena with a lack of reliable incidence information and a weak understanding of contributing factors.
Not addressed in the FOA is the distinction between errors that should not have been made and those that are within the state of the art given the underlying uncertainty. This distinction can be lost to hindsight bias: once subsequent events show that a diagnosis was wrong, it becomes “obvious” to blame the diagnostician. A better measure is to ask whether the information available, or readily obtainable, was adequate for a typical practitioner to make a correct diagnosis. While still subject to bias, this is equivalent to asking whether another practitioner given the same information would have reached the correct diagnosis. The question can be further complicated by asking whether a more expert practitioner given the same information would have made a correct diagnosis, which in turn forces us to ask how many levels of expertise there are. In this regard it may or may not be relevant that the world’s greatest expert would have gotten the diagnosis right even though the majority of more-or-less ordinary practitioners would not have.
It may be tempting to see Artificial Intelligence (AI) and Computer Diagnostic Systems (CDS) as a way to bring more consistent and expert analysis to the diagnostic task. However, AI/CDS can suffer from the same limitations as human diagnosticians. Such systems can have varying levels of expertise depending on how they were built and what data sets were used to train them. While it may be unpleasant to contemplate mediocre, or worse, advice from a computer, this remains the reality. Of course, the default system claim is that the user isn’t really supposed to rely on the CDS but instead evaluate its output using their own expertise. But this puts us right back in the discussion above of available information and varying levels of expertise.
Diagnostic error is certainly a challenging topic. However, we need to avoid calling a wrong diagnosis an error when most practitioners would have reached the same conclusion. In this regard, in the case of medical devices we distinguish between user error (the user did it) and use error (an error occurred but the cause remains open). In diagnosis we need to make a similar distinction, say between avoidable and unavoidable error.