The Physicians Make Decisions Act is AI Regulation Done Right

By Dr. Tim Wetherill, Chief Clinical Officer, Machinify

Over the last decade, AI's escalating impact has taken shape across a variety of industries, and perhaps no vertical has felt it as much as healthcare. AI has contributed to game-changing innovations reshaping how care is documented and delivered, but it has also sparked fierce debates about where AI belongs and where it doesn't. Introduced last fall, the "Physicians Make Decisions Act" (PMDA) has moved front and center in some of these conversations, so it's important that those in the industry understand what it is, who it could affect, and its implications for the future of AI in healthcare.

Explaining the PMDA

California became the first state to adopt the PMDA in the fall of 2024. The legislation is meant to clarify where and how it is acceptable for providers to use AI in clinical decision making. Where to draw that line has grown murkier in recent years as AI has become more advanced and useful in decision support, raising the question of who is accountable for decisions about a patient's care: an AI model or the physician providing that care.

The PMDA makes clear that ultimate decision-making authority must remain with providers, and it includes several provisions designed to keep providers accountable. Those include language that keeps legal and ethical responsibility with providers, regardless of whether their recommendations are informed by AI tools. Provider organizations are also required to disclose the role of AI in decision-making processes to patients, ensuring transparency. In addition, the legislation mandates the establishment of review boards to verify that AI models are being used effectively and without bias. It also prohibits AI models from making final clinical decisions without a provider's approval, particularly in high-stakes scenarios.

The idea behind the legislation is not to be overly restrictive, but rather to create a preemptive framework that allows for the use of AI in certain healthcare contexts without sacrificing proper oversight.

What Does the PMDA Mean for Healthcare's Relationship with AI?

While it's important to take care not to stifle innovation, the PMDA is a necessary tool at this stage of AI's implementation to prevent overreach. By establishing guardrails for where AI can and can't be used, and by keeping providers as the ultimate decision makers, the PMDA provides reassurance that humans won't be overshadowed by unproven machines.

It should also give patients the confidence to engage with AI-assisted care, knowing that their providers remain at the helm. The legislation's transparency provisions will similarly build trust by preventing opaque platforms from obscuring the role AI is really playing.

Patients aren't the only ones who need to be confident in AI for it to be used effectively. The PMDA will also help providers rely on AI as a tool that supplements their work, since they'll be able to collaborate with it without the fear of losing control. And because the PMDA requires oversight of AI, developers will be incentivized to prioritize safety and neutrality, introducing tools that better serve diverse patient populations and reduce bias so that providers can act on their recommendations with confidence.

Thus far, three states have followed California's lead in implementing the PMDA, and several others are considering similar measures. As the legislation continues to pick up steam, the PMDA could influence federal and even global standards for AI in healthcare, positioning the U.S. as a leader in responsible AI use.

Challenges and Criticisms

The PMDA is a step in the right direction, but there are challenges to consider. Establishing the required oversight boards will take significant resources, and as with any AI regulation, tech companies argue that rules like these will slow the development and deployment of life-saving technologies. And because the PMDA is being adopted piecemeal, one state at a time, there's potential for uneven implementation that creates confusion for providers and AI developers alike.

What’s next?

Concerns about the PMDA are valid, but careful regulation of AI is essential to ensure that providers retain the autonomy they need and that patient care isn't sacrificed. That may run counter to the tech industry's traditional "move fast and break things" mantra, but moving slowly and deliberately is essential in a setting as high-stakes as healthcare.

The PMDA strikes the right balance: it preserves AI's place in healthcare without allowing it to run wild and make decisions that could have devastating outcomes. It's a system of checks and balances that keeps providers in control, and how those checks and balances are implemented and evolve will shape how the healthcare industry uses AI for the rest of this decade.