By Scott Johnson, Chief Technology Officer, Cognizant TriZetto Healthcare Products
LinkedIn: Scott Johnson
LinkedIn: Cognizant
In the healthcare industry, efficiency and productivity are crucial for improving patient experiences and reducing operational costs. GenAI has quickly come to play an important role in streamlining administrative tasks and processes, allowing physicians to focus on clinical decision-making and patient care. While applications of GenAI, such as automated claims management and prior authorization, promise significant efficiency gains, a major roadblock still stands in the way: safely incorporating sensitive patient data into systems built on large language models (LLMs).
Does this mean healthcare organizations are locked out of the generative AI revolution? Absolutely not.
By addressing security challenges, healthcare organizations can pave the way for widespread adoption of GenAI technology and unlock its full potential in the industry. Here are three key areas that are crucial to successfully integrating GenAI into healthcare workflows:
Ensuring Secure Access: The Role of Role-Based Access Control (RBAC)
The concept of role-based access control (RBAC) is a familiar measure in the healthcare industry. It ensures that only authorized personnel can access specific types of patient information, based on their role. This same principle extends to generative AI applications. Software systems need robust controls to determine precisely who, both internally and externally, has the authorization to interact with different types of patient information.
While building a private LLM from scratch is theoretically possible, it presents a monumental undertaking in terms of complexity and time. Instead, if organizations strategically adapt existing security and privacy protocols, they can leverage the immense potential of this technology while still abiding by the current data privacy regulations that prohibit sharing PII/PHI.
The question arises: Do organizations need to completely rebuild elaborate RBAC systems from scratch for every generative AI application? Thankfully, the answer is no.
Modern software can integrate with and utilize existing security frameworks to control access. Instead of reinventing the wheel, organizations can reuse existing systems to access relevant data sets and apply all the necessary access rules. This ensures that every data access activity is validated as appropriate for those who have clearance to see and use patient data, and that it remains safe and compliant with all privacy regulations. This collaborative approach saves time and resources while maintaining the highest security standards.
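To make this concrete, the sketch below shows one way an existing role-to-permission mapping could gate every data retrieval that feeds an LLM prompt. The roles, permission categories, and data-access function are hypothetical placeholders rather than any specific product's API; the sketch assumes the organization already maintains such a mapping in its identity system.

```python
# Minimal sketch: reuse an existing role-to-permission mapping as a gate
# in front of an LLM-backed workflow. Roles, categories, and the record
# structure below are illustrative assumptions only.

from dataclasses import dataclass

# Existing RBAC policy, e.g. mirrored from the organization's identity system.
ROLE_PERMISSIONS = {
    "claims_analyst": {"claims", "eligibility"},
    "care_manager": {"claims", "eligibility", "clinical_notes"},
    "support_agent": {"eligibility"},
}

@dataclass
class User:
    user_id: str
    role: str

def authorized(user: User, data_category: str) -> bool:
    """Return True only if the user's role permits this data category."""
    return data_category in ROLE_PERMISSIONS.get(user.role, set())

def fetch_for_prompt(user: User, data_category: str, record_id: str) -> str:
    """Gate every retrieval that could feed an LLM prompt behind the RBAC check."""
    if not authorized(user, data_category):
        raise PermissionError(f"{user.role} may not access {data_category}")
    # Placeholder for the existing, already-audited data access layer.
    return f"<{data_category} record {record_id}>"

if __name__ == "__main__":
    analyst = User("u123", "claims_analyst")
    print(fetch_for_prompt(analyst, "claims", "C-42"))    # allowed
    # fetch_for_prompt(analyst, "clinical_notes", "N-7")  # would raise PermissionError
```

The point of the pattern is that the generative AI layer never bypasses the access rules already in place; it simply calls through them.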
Transforming Patient Care: The Power of Retrieval-Augmented Generation for Efficiency
Advanced AI applications require vast amounts of organizational data, including sensitive patient medical histories. However, directly integrating personal patient information, like names and health details, isn't always straightforward, especially as healthcare professionals prioritize safely and efficiently accessing patient data. Synchronizing sensitive data becomes a logistical hurdle, and ensuring compliance with HIPAA security and privacy controls is paramount. This creates a knowledge gap: the AI model simply lacks the necessary information to operate effectively.
The solution lies in a concept called “retrieval-augmented generation.” This process enhances the AI's ability to combine an organization's data with the reasoning engine of the LLM, allowing the model to draw on relevant context without compromising security or privacy. Here’s how it works: non-confidential information that’s readily available, such as clinical help guides, procedural manuals and technical support bulletins, is indexed and stored in a specialized database without exposing sensitive patient data. Relevant passages are then retrieved and used to guide the LLM’s prompts alongside a small number of examples, or “few-shot prompts.”
By utilizing retrieval-augmented generation technology, healthcare organizations can benefit from “few-shot prompts” and embedding techniques. This allows the generation of outputs relevant to personally identifiable information (PII) or protected health information (PHI) without directly exposing the entire data set or resorting to traditional model training methods. In essence, the LLM learns to “reason” about protected data by drawing insights from the readily accessible information within the prompts and specialized database.
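As an illustration of the retrieval and prompt-assembly step described above, here is a minimal sketch. The sample documents, the keyword-overlap scoring (a simple stand-in for an embedding model and vector database), and the prompt layout are assumptions for demonstration only; the key property is that only non-confidential reference material, never raw patient records, is indexed or placed in the prompt.

```python
# Minimal retrieval-augmented generation sketch. The documents, the scoring
# function, and the prompt layout are illustrative assumptions; no PHI enters
# the index or the prompt.

# Non-confidential reference material that gets indexed, per the approach above.
DOCUMENTS = [
    "Prior authorization for imaging requires the ordering provider's NPI and a CPT code.",
    "Claims resubmission: correct the denial reason code before resubmitting within 90 days.",
    "Eligibility checks should confirm plan coverage dates and member group number.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: count of shared lowercase words. A real system would use embeddings."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most relevant non-confidential passages."""
    return sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(question: str) -> str:
    """Assemble a prompt from retrieved passages plus the user's question."""
    context = "\n".join(f"- {passage}" for passage in retrieve(question))
    return (
        "Answer using only the reference passages below.\n"
        f"Reference passages:\n{context}\n"
        f"Question: {question}\n"
    )

if __name__ == "__main__":
    # The assembled prompt would then be sent to the LLM; only generic
    # reference material, never raw patient records, travels with it.
    print(build_prompt("What is needed for prior authorization of imaging?"))
```

In a production system, the retrieval step would typically rely on embeddings and a vector store, and the assembled prompt would be sent to the LLM together with the RBAC-validated request context described in the previous section.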
Staying Compliant: Navigating the Regulatory Landscape with Confidence
Compliance with regulations has always been a top priority for healthcare organizations. The introduction of GenAI adds a layer of complexity to the equation. Industry experts fully anticipate new regulations and laws that focus on the use of generative AI tools and solutions, especially when it comes to protecting patient data.
While GenAI might be a relatively new technology for many healthcare companies, adhering to existing data protection and privacy regulations is the first step to compliance. As mentioned, the ability to securely retrieve data and manage role-based access control plays a crucial role in achieving this compliance.
Working with a trusted partner with deep expertise in both healthcare data regulations and generative AI development becomes critical. The right partner can provide support in other vital areas of a generative AI program, such as the overarching data strategy (including data integration beyond just PII/PHI), platform implementation and operation, and ensuring ethical use of the technology.
By prioritizing data security, access control, and ethical use, generative AI can revolutionize healthcare workflows, leading to improved efficiency, cost savings, and ultimately, better patient outcomes. While GenAI presents exciting possibilities for the future of healthcare, its successful implementation requires a collaborative effort. Healthcare organizations, technology providers, and regulatory bodies must work together to ensure responsible development and deployment.