Joshua Liu, MD
Co-founder & CEO at SeamlessMD
LinkedIn: Joshua Liu
X: @joshuapliu
Co-host: The Digital Patient Podcast
Musings and Insights
We worry about AI errors in healthcare, but forget that we already accept a certain level of human error. This gives us a clear starting point for deciding if AI is good enough.
→ AI scribes: we worry that AI will hallucinate things that the patient said…
… but forget that humans can mishear or forget what was said.
→ AI patient summaries: we worry that physicians will just sign off on discharge summaries without going through the charts themselves for accuracy…
… but forget that physicians sign off on medical trainee summaries without personally going through the charts.
And as much as AI can miss something… how long do you expect busy residents or medical students to spend poring over every inpatient note and data point? They’re bound to miss something.
So of course there is a risk in using AI… just as there is a risk in human-delivered care.
Which is why I find it strange when the media highlights car accidents from autonomous vehicles instead of comparing actual accident rates between AI and human drivers.
We should not demand perfection from AI before we use it.
Rather, we should expect at minimum that AI (or any technology, for that matter) is as safe and effective as the average human clinician before using it at scale.
Humans don’t have a 0% error rate, so it doesn’t make sense to hold AI to a standard of perfection either.
Now where it gets challenging is how to measure risk when the actual risks may differ between the AI and human approach.
E.g. an AI scribe may be more likely to make hallucination errors (e.g. occasionally document a made up detail) whereas a human is more likely to make omission errors (e.g. forget to document a detail).
So it’s important to have clear frameworks for how we think about effectiveness and safety for these different use cases.
And have a clear baseline expectation that’s set not by perfection, but by the currently acceptable human clinician standard.
And of course, aim to get better and better going forwards.
— Joshua Liu (@joshuapliu) June 13, 2024
The Digital Patient
The Digital Patient takes an “edu-taining” approach to all things digital patient care. On this show, hosts Dr. Joshua Liu and Alan Sardana talk with healthcare, technology, and innovation leaders about the latest advancements in digital health, trends in digital transformation, and strategies for optimizing the patient experience.