William A. Hyman
Professor Emeritus, Biomedical Engineering
Texas A&M University, w-hyman@tamu.edu
AHRQ has released another RFP for the development of Clinical Decision Support (CDS) modules as part of its CDS Connect program. The premise of CDS Connect is that rule-based CDS systems can be locally developed and then shared as CDS “artifacts”. Note that rule-based (practice guideline) CDS is distinct from AI (machine learning) based systems: the former have an explicit, published basis in a set of rules, while the latter are data driven without a known logical basis. This distinction is reflected in the fact that AI-based CDS is regulated by the FDA while open (reviewable) rule-based CDS is not. Depending on the complexity of the underlying guideline, it is not always clear why making the rule “computable” is necessary, especially if it requires operator actions. CDS that runs automatically on existing EHR data is generally preferred, although pop-up fatigue and other usability issues still have to be managed.
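To make the distinction concrete, here is a minimal sketch of what a “computable” rule might look like when run automatically against EHR data. The patient fields and the simplified statin rule are illustrative assumptions for this sketch, not an actual CDS Connect artifact:

```python
# Minimal sketch of a rule-based ("computable guideline") CDS check.
# The patient fields and the simplified statin rule are illustrative
# assumptions, not an actual CDS Connect artifact.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Patient:
    age: int
    has_diabetes: bool
    on_statin: bool

def statin_recommendation(p: Patient) -> Optional[str]:
    """Return an advisory string if the rule fires, else None.

    Each condition traces directly to a stated guideline criterion,
    which is what makes the logic reviewable -- unlike a learned model,
    whose decision basis cannot be read off the code.
    """
    if p.has_diabetes and 40 <= p.age <= 75 and not p.on_statin:
        return "Consider statin therapy per primary-prevention guideline."
    return None  # rule did not fire; no alert is shown

if __name__ == "__main__":
    print(statin_recommendation(Patient(age=58, has_diabetes=True, on_statin=False)))
```

The point of the sketch is that every input comes from data already in the record, so the rule can run without operator action; whether that translation from published guideline to code is worth the effort is the question raised above.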
AHRQ envisions locally developed CDS being shared with and adopted by other users, subject to the limitation that some artifacts carry IP protection. In some cases the artifacts were developed with AHRQ funding. Little information is available on how widely these artifacts have been adopted elsewhere, or on follow-up research into their value.
The RFP repeats the concept of local development and sharing, and requires the incorporation of the “Five Rights” framework of delivering the right information, to the right person(s), using the right format, in the right channel, and at the right time during the workflow. Five is a popular number for short lists of requirements, possibly because most of us have five fingers on each hand. Other numbers are sometimes seen, such as 3 or 7 or, for the more verbose, 10. Such short lists suffer from the common limitation that reducing a complex problem to a few bullet points is inherently incomplete. In the present case the list stops at least one item short of what actually matters. Since the presumed purpose of delivering the information is to make a difference in patient outcomes, stopping at providing information at the “right time” falls short of the real goal, but has the attractiveness of ending the list at things that can, and maybe will, actually be measured. Measuring actual outcome improvements is far more difficult than measuring activity (as we well know from Meaningful Use), and it receives less emphasis in this RFP.
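The critique can be seen in a hedged sketch of the Five Rights expressed as the fields of a CDS notification (the field names are hypothetical, not from any standard schema): nothing in such a structure records whether the alert changed the patient's outcome, which is exactly the missing item.

```python
# Illustrative sketch: the "Five Rights" as fields of a CDS notification.
# Field names are hypothetical, not drawn from any standard schema.
# Note that no field captures whether the alert improved the outcome --
# the measurement the five-item list stops short of.

from dataclasses import dataclass

@dataclass
class CdsNotification:
    information: str    # right information: the guideline-derived advice
    recipient: str      # right person: e.g., the ordering physician
    format: str         # right format: e.g., "inline-alert", "order-set"
    channel: str        # right channel: e.g., "EHR", "portal", "SMS"
    workflow_step: str  # right time: e.g., "order-entry", "chart-review"

alert = CdsNotification(
    information="Consider statin therapy per primary-prevention guideline.",
    recipient="ordering-physician",
    format="inline-alert",
    channel="EHR",
    workflow_step="order-entry",
)
```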
It seems curious to me that relying on locally developed CDS is considered a desirable approach to deploying useful CDS. One might wonder about the expertise that goes into such a local project, and about the degree of design and test discipline applied in creating it. One might also wonder how feedback on the CDS recommendations is obtained and processed, and how the CDS is maintained and, when necessary, revised (or abandoned) in response to poor results or new information. CDS developed under a grant also often suffers the fate of being over when the grant is over.