AI: The Prescriber

By Matt Fisher, Healthcare Attorney
LinkedIn: Matthew Fisher
X: @matt_r_fisher
Host of Healthcare de Jure – #HCdeJure

Artificial intelligence has been the subject of no shortage of hype about its capabilities. As AI has evolved over the past few years, it has often been touted as the force that will finally drive major changes in healthcare. First it was that AI would replace radiologists, then that it would drive care management and efficiency because it can interpret so much data at once, and then that it could take over a number of administrative functions. Kernels of truth exist within all of those statements about the impact that AI can have, and is having, in healthcare. However, many concerns still exist.

Skimming the Surface of Concerns

A number of very technical concerns likely still exist in the fine details of how AI functions. While those specifics are beyond the scope of this discussion, the underlying concerns likely feed into some of the more broadly understood, or at least more widely discussed, drawbacks of AI.

Bias
A significant concern is the ongoing question of just how much bias exists within different AI models. It should be widely understood that bias in many forms has long existed in healthcare, with those issues coming to the fore as awareness and attention to the problem have grown. However, removing long-held biases is not a simple or quick task. Further, if AI models are trained on and built with current understandings, then those biases are likely being baked into the AI models unintentionally (at least hopefully unintentionally).

Bias creates problems within healthcare because it can steer care away from individuals who need it or discount the experiences of certain populations. Entrenching those problematic outcomes in a system whose users will mostly, if not entirely, fail to understand how it reaches decisions could cause those biases to grow in unexpected ways or, at a minimum (which is still not good), perpetuate a current state of care that does not treat all patients equally.

Limited Data Sets
Connected to the issue of bias is the limited scope of the data sets often used to train AI models. Given the importance of data for AI to learn from, attention has shifted to the scope of the data sets being utilized. Some of the questions include where the data sets come from, whose data is included, and how the data is vetted. Too often, the answers to those questions reveal that the data sets are quite homogeneous.

When AI models are not trained on diverse data sets, nuances between different populations are not picked up. If diverse backgrounds and experiences aren't reflected in the training data, the models may not be able to see the distinctions between populations, which can translate into recommendations that are not appropriate for everyone.

Hallucinations
One final issue to raise is the ongoing occurrence of hallucinations in the output of AI models. A hallucination is a seemingly accurate statement or recommendation that, when checked, does not make sense or is simply wrong. In healthcare, a hallucination could have serious negative consequences for a patient. These issues must be resolved before AI is put to any serious use that touches on patients.

What happens when a hallucination occurs and there is no oversight or double-checking by a human? That could easily happen if AI systems are deployed in broad use, since everyone in healthcare is already stretched so thin.

A Bill Before Its Time

With that high-level discussion of AI concerns as a backdrop, a new bill proposed in Congress is clearly a bit ahead of its time. The bill would modify the Food, Drug, and Cosmetic Act (the statute giving the FDA its authority) to allow AI and machine learning technology to qualify as a practitioner that could prescribe drugs, if authorized by both the FDA and the state where the prescribing would occur. The bill's stated goal of addressing care gaps and access to care is a valid concern, but can or should anyone trust AI to prescribe drugs accurately?

The bill is clearly just the tip of the iceberg in terms of implementation. A whole host of regulations would be needed to implement the concept, starting with the FDA. Given the depth and detail of how the FDA regulates medical devices and pharmaceuticals, it is reasonable to expect that the agency would not casually authorize AI or machine learning models to prescribe drugs. Strict standards would likely be required, but the FDA is still in the early days of working out how to regulate AI appropriately and meaningfully. The ongoing, self-driven evolution of the technology means old paradigms do not directly apply, but how can safety be assured under those circumstances?

The complicated and nuanced nature of the issue would call for close collaboration among the players connected to AI. That would and should be a deliberate process that does not rush into any final decision.

The next step is figuring out what each state would do. The bill also defers to state decisions on granting prescribing authority (a necessity, since each state licenses clinicians, an issue that is a completely different discussion). Which states would jump at allowing AI to prescribe? If it were allowed, what guardrails or restrictions would be implemented? Those are a lot of unknowns, since there has not been much, if any, public discourse on the issue yet.

Aside from implementation, which is a big and complex part of the proposed bill, there are a number of other related questions. One big one is who will bear the liability for harm caused by an AI prescriber? Are the technology companies creating the AI prepared to be held liable as a licensed clinician would be? Any clinician knows that a lawsuit can arise at any time and will sweep in pretty much anyone who was in any way, shape, or form connected to delivering care to the impacted individual. How will those costs factor into the cost of the AI model? And how will professional liability insurance carriers approach writing a policy to cover that type of professional service?

On a parallel line of thought, would the technology company creating the AI try to push the liability onto the customer using the AI model as part of the delivery of care? Would a contract provision attempting to shift liability hold up to court scrutiny? Those are open questions that would take time, and likely considerable cost, to resolve into a consistent answer. Existing case law and precedent, as well as product liability statutes, should not be discounted, as they could easily shape how those questions are determined.

The Future is Coming

While it may currently be too early for AI to get a license, the day when that issue needs to be addressed head-on is likely coming soon. Given that reality, it is important to begin considering now what that world should look like and to start preparing. If the groundwork can be laid now, it can help shape future discussions and hopefully avoid a rush to fill a vacuum if and when the issue is forced.

This article was originally published on The Pulse blog and is republished here with permission.