William A. Hyman
Professor Emeritus, Biomedical Engineering
Texas A&M University, w-hyman@tamu.edu
The ONC has released via the Federal Register a Request for Information (RFI) on the EHR Reporting Program, which was established by the 21st Century Cures Act of 2016. An overarching question that arises in this regard is: given that health IT must achieve certification, given that numerous problems are still at hand, and given that much data still needs to be collected, are the certification criteria and processes, including field audits, achieving their objectives? It is noted in this regard that part of the disconnect is the voluntary and uneven reporting of EHR problems and difficulties. This might be contrasted with medical device adverse event reporting, which is mandatory.
Those that will ultimately be doing the reporting addressed in the RFI are health IT developers, which here means those with certified EHRs. Such reporting will not be voluntary, since it will be a condition of certification. Clinicians will be pleased to see that they are not included in this new round of reporting. The Act required consideration of five areas of interest: security; interoperability; usability and user-centered design; conformance to certification testing; and “other categories”, as appropriate to measure the performance of certified EHR technology. One might note that “other” is rather open-ended, but I endorse the inclusion of “other” on all lists as a reminder that the enumerated factors never tell the whole story. The Act also lists additional categories of possible interest, including enabling users to order and view lab, imaging, and other diagnostic tests, and exchanging data with clinical registries, medical devices, HIEs, other providers, and governmental agencies. For the five required areas, there is a focus on what kinds of information would support effective comparative shopping by providers, including acquisition, upgrades, and customization.
Thus, the reporting would address measurable differences between products and functionalities appropriate to specific settings, would not be unduly burdensome to users or to small and start-up developers, and would support analysis of industry trends with respect to interoperability and other types of user experience. I particularly like the term “unduly burdensome”, which does not rule out the reporting in fact being burdensome. One’s opinion on “unduly” no doubt depends on whether you are the one requiring the information or the one who has to provide it. One approach ONC suggests for finding the appropriate middle ground is to ask whether existing resources that already hold user or developer data could be used to reduce the need for collecting new information. This reminded me that when I lived in Texas a law was passed providing that no state agency could ask a hospital for any information that the hospital had already given to some other state agency. While this might sound reasonable in principle, it requires, at least in part, that a hospital refuse to provide such repetitive information, thereby taking an adversarial position that might not always be wise in dealing with regulators.
It can also be noted that in the absence of automatic data collection, the only way for developers to learn about the user experience is to ask or observe users, thus letting the burden spill over onto them. However, automatic user reporting, it is suggested, may be achievable from user audit data and related monitoring tools. This suggests that in addition to designing the product to perform user tasks, the design would have to include self-monitoring, which is itself a design challenge and therefore another kind of burden. In this regard ONC asks how data can be collected without creating or increasing burden on providers and, on the flip side, which particular reporting mechanisms, if any, should be avoided. These questions are expanded upon for usability, an area in which existing EHRs have done a famously bad job.
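To make concrete what that self-monitoring burden might look like, here is a minimal sketch in Python of the kind of instrumentation a developer could have to add so that routine use generates reportable audit data. It is purely illustrative; the event names, fields, and log format are hypothetical and are not drawn from the RFI or from any certified product.

```python
import json
import time
import uuid
from datetime import datetime, timezone

# Hypothetical sketch of self-monitoring: capturing user-interaction audit
# events that could later be aggregated into usability measures. The schema
# below is illustrative only, not any certified EHR's actual audit format.

def audit_event(user_role: str, task: str, clicks: int, seconds: float) -> dict:
    """Package one user-interaction measurement as an audit record."""
    return {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_role": user_role,      # e.g., "physician", "nurse"
        "task": task,                # e.g., "order_lab_test"
        "click_count": clicks,
        "task_seconds": round(seconds, 2),
    }

def write_audit_log(event: dict, path: str = "usability_audit.jsonl") -> None:
    """Append the event to a local JSON-lines log for later aggregation."""
    with open(path, "a") as f:
        f.write(json.dumps(event) + "\n")

if __name__ == "__main__":
    start = time.monotonic()
    # ... user completes a lab-order workflow; the application counts clicks ...
    elapsed = time.monotonic() - start
    write_audit_log(audit_event("physician", "order_lab_test", clicks=14, seconds=elapsed))
```

Even this toy version hints at the added work: deciding which tasks to time, where to store the log, and how to aggregate and report it are all design decisions layered on top of the product's clinical functions.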
The RFI notes that poor usability of health IT systems can contribute to clinician burden and physician burnout and to user error, and may create risks to patient safety. It should be noted here that in order to have products certified, health IT developers must already attest that they employed a user-centered design process and must report the results of usability testing. Such self-attestation seems to have had limited positive effect.
Specific questions are also asked about interoperability, again including how it can be measured and whether any existing data sets might contain useful information. The “other” category and the secondary set of issues receive some attention as well.
Knowing what works and what doesn’t, and having measurable and reported criteria, are key elements in meaningful comparative shopping, yet the information available for such shopping in the EHR arena has been inadequate. This has resulted in users not only hating their first EHR but often hating their replacement EHR as well, or perhaps hating it even more. Moreover, badly designed health IT is unlikely to achieve the benefits that have been claimed for it.