By Micky Tripathi, ONC
LinkedIn: Micky Tripathi
X: @ONC_HealthIT
X: @mickytripathi1
Recently I had the opportunity to testify before the House Energy and Commerce Committee. During this second hearing on artificial intelligence (AI), I shared work that I’m doing on behalf of HHS to support President Biden’s Executive Order on harnessing the potential and mitigating the risks of AI. I’m grateful for the opportunity to discuss this work with the Committee. My oral testimony follows, and my full written testimony is also available on the Energy and Commerce Committee website.
Chair Rogers, Ranking Member Pallone, Members of the Committee, thank you for the opportunity to testify today on the Department of Health and Human Services’ efforts to promote the responsible use of artificial intelligence and machine learning in health care, public health, and human services.
I’m Micky Tripathi, and I’m the head of the Office of the National Coordinator for Health Information Technology. I come to this role having served in the Department of Defense many years ago and, for the past twenty years, in the private sector working on electronic health record implementation, interoperability, and data analytics. ONC is a staff division in the Office of the Secretary and is responsible for advancing the federal government’s health information technology efforts and catalyzing adoption of secure, interoperable health IT systems across the entire health ecosystem. In addition to my formal role running ONC, Secretary Becerra has tasked me with co-leading HHS’ efforts in artificial intelligence.
HHS’ mission is to enhance the health and well-being of all Americans, by supporting effective health and human services and by fostering sound, sustained advances in the sciences underlying medicine, public health, and social services. We’re looking at AI through the same mission lens – foster responsible use of AI to improve people’s lives.
There are many ways in which AI will affect health care. As called for in the President’s recent executive order, we’ve launched a department-wide task force looking at eight different areas, including healthcare delivery, human services, and R&D.
We at the department are AI optimists; AI-based technologies have the potential to accelerate innovation, increase market competition, ameliorate health inequities, reduce clinician burn-out, and improve care and the care experience for patients. However, there are lots of potential downsides as well, so we also believe that our approach to AI in health care should be “don’t trust without verifying.” It’s vital that we both seize the promise AND manage the risks.
It’s important to note that we’re not starting from scratch.
- FDA has approved almost 700 AI-enabled devices for use in the market, and as this Committee is aware, is working on a pre-determined change control plan approach to AI-enabled software devices in collaboration with international partners.
- NIH is co-leading, with the Department of Energy, planning and development of critical components of the National AI Research Resource infrastructure.
- The Office for Civil Rights published a draft rule emphasizing that non-discrimination provisions of ACA Section 1557 also apply to AI-enabled tools.
- The Centers for Medicare & Medicaid Services has implemented rules regarding the use of AI-enabled tools in Medicare Advantage medical necessity checks and coverage determinations, and it will begin audits starting in 2024.
Going even further, I’m delighted to announce that just this morning the department released ONC’s HTI-1 Final Rule, which is a significant step in establishing responsible use of AI across the industry. The HTI-1 rule has specific provisions to promote transparency and risk management of AI-based technologies used in health care delivery based on what we call the FAVES principles: fairness, appropriateness, validity, effectiveness, and safety.
To give some context, a key role that my agency plays in health care is certifying the electronic health record systems that are now used by 97% of hospitals and almost 80% of physician offices across the country. EHRs are a key enabler of AI in health care. They’re the source of more and more of the data that feeds machine learning systems, and EHRs are also where AI works behind the scenes in user interfaces and workflows to influence day-to-day decision-making that directly affects patient lives. For these reasons, ONC started working on this from the day I took this job.
The HTI-1 regulation empowers clinicians, first and foremost, by requiring EHR vendors to establish transparency about the AI-based models embedded in their products, including making available a standardized “nutrition label” to help advance explainability of the AI operating in their software. This rule also complements other efforts across the department, by addressing areas not covered by FDA regulations and by helping providers comply with OCR non-discrimination requirements.
Shining light on where and how AI is operating in EHR systems will put health care providers in a better position to do what they try to do every day – use information they trust to make the best decisions for and with their patients. We’ve heard from providers concerned that AI is a “black box,” which is hindering their adoption of these innovative technologies. Our rule is designed to spur adoption by using transparency and risk management to instill public trust and confidence.
AI opens up vast opportunities to improve our country’s health care, public health, and social service capabilities to better serve the American people. HHS is already taking action to motivate responsible use of AI in these critical areas. Thank you again, for the opportunity to discuss our efforts with you today.
This article was originally published on the Health IT Buzz and is syndicated here with permission.