By Keith Boone, Healthcare Standards
Twitter: @motorcycle_guy
A very long time ago (at the beginning of my career in HealthIT), I got to do some really cool work on the front end and processing infrastructure for a set of machine learning and linguistic services that would automatically extract problems, medications, allergies, and procedures, and, for problems, even code the diseases into a subset of SNOMED CT. This was before most people had even heard of SNOMED CT, so that took some significant effort. We also did some work in the ICD-9-CM coding space, based on a software product that the company I worked for had purchased, which was essentially the life’s work of a physician / informaticist.
The product worked remarkably well from a technical perspective, and had a pretty decent precision/recall curve. In fact, as configured, the system was shown to do as well as professional ICD-9-CM coders.
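For readers less familiar with the metric, here is a minimal, purely illustrative sketch of how precision and recall are computed for a coding task. The counts are hypothetical and not drawn from the product described above.

```python
# Illustrative only: hypothetical counts for a single ICD-9-CM code.
true_positives = 85    # system proposed the code and the coder agreed
false_positives = 10   # system proposed the code but the coder rejected it
false_negatives = 15   # coder assigned the code but the system missed it

# Precision: of the codes the system proposed, how many were right.
# Recall: of the codes that should have been assigned, how many it found.
precision = true_positives / (true_positives + false_positives)   # ~0.89
recall = true_positives / (true_positives + false_negatives)      # 0.85

print(f"precision={precision:.2f}, recall={recall:.2f}")
```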
The biggest challenge it had was basically two-fold:
- It needed to be able to incorporate expert feedback to refine future results (we simply hadn’t had the time to develop that feature).
- The original product couldn’t explain how it got to a particular result, although subsequent ones could do a bit better.
To put it simply, it couldn’t argue for itself or accept any corrections.
ML and AI are often “black boxes”. Most of what people are talking about with regard to AI today are implementations of some form of neural network. Can anyone really explain what the weights and connections in a neural net mean? That is a hard AI problem. Machine learning algorithms have their own sets of “hidden parameters” that drive their outputs, and those cannot always be easily explained.
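To make that concrete, here is a toy two-layer network with entirely made-up weights. The numbers below fully determine the answer, yet nothing about any individual weight tells a reviewer why a given input was scored one way rather than another.

```python
import math

# Toy network with made-up weights; the numbers drive the answer,
# but no single weight "means" anything a reviewer could audit.
W1 = [[0.7, -1.2], [0.3, 0.9]]   # input -> hidden weights
W2 = [1.5, -0.8]                 # hidden -> output weights

def predict(x):
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W1]  # ReLU
    score = sum(w * h for w, h in zip(W2, hidden))
    return 1 / (1 + math.exp(-score))  # probability-like output

print(predict([1.0, 0.5]))  # ~0.39 -- but *why* 0.39? The weights don't say.
```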
Yet we’re expected to trust these things. And they are applied to hard problems that humans only solve correctly 90% of the time, where the machines do only slightly better. How do you develop trust in something that’s wrong in 1 out of 20 cases? And yet we can trust a human, because a human can explain their reasoning, even when the result turns out to be wrong.
Interestingly enough, even when computers do only as well as humans, the two together often do better than either alone, because the humans may understand nuance that the computer misses, and vice versa.
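One common way to get that combined benefit is to let the system auto-accept only its high-confidence suggestions and route the rest to a human coder. The sketch below is hypothetical: the `score_codes` function, the threshold, and the example codes are stand-ins, not anything from the product described above.

```python
# A sketch of routing low-confidence suggestions to a human reviewer.
# score_codes() is a hypothetical function returning (code, confidence) pairs.

CONFIDENCE_THRESHOLD = 0.90

def triage(note_text, score_codes):
    auto_accepted, needs_review = [], []
    for code, confidence in score_codes(note_text):
        if confidence >= CONFIDENCE_THRESHOLD:
            auto_accepted.append(code)
        else:
            needs_review.append((code, confidence))
    return auto_accepted, needs_review

# Usage (with a stub in place of a real model):
codes, review_queue = triage(
    "Patient presents with chest pain...",
    lambda text: [("786.50", 0.97), ("401.9", 0.62)],
)
```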
If you are trying to implement AI (or ML), consider:
- How will you work in user feedback about the quality of the proposed solutions?
- How will you explain to the user why a given result is good? (A rough sketch touching on both points follows this list.)
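As an entirely hypothetical sketch of both points, a suggestion can carry the evidence that produced it and a place to record the reviewer’s verdict, so corrections can be fed back into later training or rule tuning. The field names and example codes here are illustrative assumptions, not a real product’s API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Suggestion:
    code: str                                   # proposed code (e.g., ICD or SNOMED CT)
    confidence: float                           # system's confidence in the suggestion
    evidence: List[str] = field(default_factory=list)  # text spans that drove the result

@dataclass
class Feedback:
    suggestion: Suggestion
    accepted: bool                              # did the expert agree?
    corrected_code: Optional[str] = None        # what they chose instead, if anything

# The reviewer sees the evidence ("why this result is good"), and the verdict
# is stored so a later model or rule revision can learn from the correction.
fb = Feedback(
    suggestion=Suggestion("401.9", 0.62, evidence=["history of hypertension"]),
    accepted=False,
    corrected_code="401.1",
)
print(fb)
```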
This article was originally published on Healthcare Standards and is republished here with permission.