William A. Hyman
Professor Emeritus, Biomedical Engineering
Texas A&M University, w-hyman@tamu.edu
Various EHRs require a user to “validate” some elements of information (data) before they are fully accepted. Depending on the design of the EHR, information that needs to be validated may be presented in a different color or otherwise identified, and it may be erased or hidden after some time if it is not validated, or when it is actively rejected.
An important question in this regard is: what is the basis for the user’s determination that an item of information is or is not valid? The ability, or inability, to make this determination arises in two basic forms. One is when the user truly has, or believes they have, independent information that the recorded data is or is not correct. For example, a user may be “sure” that they are looking at Mr. Smith’s chart and therefore reject information pertaining to Mrs. Jones. Or a bedside nurse may be contemporaneously present and know that a lead was off when the EHR automatically recorded data from that lead. Such data would then not be validated. Manually entered data can also be subject to confirmation, i.e., is what you typed what you meant to type? However, self-confirmation of manually entered data is known to be an unreliable process.
A second scenario is when the validation step occurs remotely, in either space or time. In this case the validator may not have independent information to compare to the recorded data, nor would they have direct knowledge of any issues associated with the source of that data. They therefore may not actually be able to confirm that the data is correct. At best they may be able to decide that the data appears to be reasonable, and perhaps then give little thought to the possibility that it is actually wrong. The user may also reject data that appears to them to be wrong based on general or specific conflicting information, yet in some cases such data might actually be correct.
In both of the above scenarios, wrong data that is validated goes on to become part of the record that is actually used, and may also be subsequently processed by an alarm or decision support system.
At least some obviously wrong data would be amenable to automatic detection rather than relying on the user. For example, the value of a vital sign that is well outside of population expectations, or is inconsistent with the individual patient’s previous values, could be detected and flagged for human inspection. Improper data structure can also be detected automatically via software. For example, alpha entries in a numeric field should not be accepted. Similarly, a data field might have an anticipated number of characters, such that an entry with fewer characters could be flagged. (Hopefully the software is designed such that the data field cannot accept more characters than anticipated.) At a somewhat higher level of software design, other information could be brought to bear, such as the compatibility of an entry with sex or age. Of course, data that passes these kinds of screening may in fact still be incorrect.
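To make this concrete, the following is a minimal sketch, in Python, of how such automatic screening might be implemented. All of the field names, limits, and rules here (heart_rate, the range 20–250, the pregnancy_test example, and so on) are hypothetical illustrations, not clinically vetted values and not the behavior of any particular EHR.

```python
from dataclasses import dataclass


@dataclass
class NumericFieldRule:
    """Hypothetical screening rule for one numeric EHR field."""
    name: str
    population_min: float  # plausible range across the patient population
    population_max: float
    max_delta: float       # largest plausible change from the prior value
    min_chars: int         # anticipated entry width
    max_chars: int


# Example rule; the limits are illustrative, not clinically vetted.
HEART_RATE = NumericFieldRule(
    name="heart_rate", population_min=20, population_max=250,
    max_delta=60, min_chars=2, max_chars=3,
)


def screen_entry(raw: str, rule: NumericFieldRule,
                 previous: float | None = None) -> list[str]:
    """Return a list of flags; an empty list means the entry passed
    screening, which, as noted above, does not prove it is correct."""
    flags = []
    text = raw.strip()

    # Structural check: alpha entries in a numeric field are not accepted.
    if not text.replace(".", "", 1).isdigit():
        return [f"{rule.name}: non-numeric entry {raw!r} rejected"]

    # Field-width checks: flag fewer characters than anticipated; ideally
    # the field itself would refuse to accept more than anticipated.
    if len(text) < rule.min_chars:
        flags.append(f"{rule.name}: entry shorter than anticipated")
    if len(text) > rule.max_chars:
        flags.append(f"{rule.name}: entry longer than anticipated")

    value = float(text)

    # Population-level range check.
    if not rule.population_min <= value <= rule.population_max:
        flags.append(f"{rule.name}: {value} outside population expectations")

    # Consistency check against this patient's previous value.
    if previous is not None and abs(value - previous) > rule.max_delta:
        flags.append(f"{rule.name}: {value} inconsistent with prior {previous}")

    return flags


def demographic_check(entry_name: str, patient_sex: str) -> list[str]:
    """Higher-level check: compatibility of an entry with patient sex.
    The single rule below is a hypothetical example."""
    if entry_name == "pregnancy_test" and patient_sex == "male":
        return [f"{entry_name}: entry incompatible with recorded sex"]
    return []


if __name__ == "__main__":
    print(screen_entry("72", HEART_RATE, previous=70))   # [] -- passes screening
    print(screen_entry("7O", HEART_RATE))                # non-numeric, rejected
    print(screen_entry("400", HEART_RATE, previous=75))  # out of range and inconsistent
    print(demographic_check("pregnancy_test", "male"))   # incompatible with sex
```

Note that, consistent with the point above, an empty flag list means only that an entry survived screening; it does not establish that the value is correct.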
Given the limitations of true validation of data, we can question its actual purpose. One possibility is that it is an attempt to transfer responsibility from the technical data generator and recorder (the hardware and software) to the user, so that if the data turns out to be wrong, and adversely impacts care, the system provider can seek to blame the validator. In this regard, users should resist the assertion that they are validating data when they actually are not.
The terminology of validation should also be changed, at least with respect to data for which the user does not have independent information and therefore cannot actually provide confirmation. This is more than semantics, because people may come to believe that they are actually validating data, and that the data in the EHR is therefore necessarily correct. Rather than “validated”, perhaps the proper term is the somewhat awkward “not de-validated”.