By Jon Moore, MS, JD, HCISPP, Chief Risk Officer and SVP, Clearwater, and Thomas Graham, Ph.D., VP and Chief Information Security Officer, Redspin, a Division of Clearwater
Twitter: @ClearwaterHIPAA
Artificial intelligence (AI) is a tool, and like any tool, it can be applied to benevolent or malevolent ends. As with any novel technology, there is an eagerness to adopt and use it, even without a full understanding of how best to deploy it or what hazards it may pose. This issue is especially salient with AI: our interconnected world, combined with our growing reliance on information technology, amplifies both the potential rewards and the potential risks of its use to an unprecedented degree.
Most conversations around the implications of AI have only scratched the surface; for all the benefits, the inherent risks must be weighed alongside them. For example, AI runs on today's computing technology and power, but quantum computing is expected to become increasingly practical in the near future. Combining AI with quantum computing could produce one of the greatest leaps forward in our understanding and capabilities. That same leap, however, will also affect the mechanisms we currently use to protect information, such as current public-key encryption, potentially negating many existing safeguards.
AI Challenges in Healthcare
AI has the potential to revolutionize healthcare by improving diagnosis, treatment, and patient outcomes. It also has the potential to improve access to care and reduce costs. However, AI is not a magic solution; it is a tool that requires careful deployment to ensure the best possible outcomes and to avoid the worst.
One of the most significant challenges in the healthcare industry is data management. Healthcare data is vast, diverse, and often incomplete or inaccurate. AI can help clinicians sift through this data and extract meaningful insights that improve patient outcomes. Without proper implementation, however, AI may lead to incorrect diagnoses or inappropriate treatments.
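To make the data-quality point concrete, here is a minimal sketch, in Python with scikit-learn, of one basic discipline: measuring how much data is missing and handling the gaps explicitly in the training pipeline rather than silently dropping records. The file, feature names, and label are hypothetical placeholders, not a real clinical dataset.

```python
# A minimal sketch of guarding against incomplete data before model
# training. Column names and the CSV file are hypothetical placeholders.
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

df = pd.read_csv("patient_records.csv")   # hypothetical extract
X = df[["age", "systolic_bp", "hba1c"]]   # hypothetical features
y = df["readmitted_30d"]                  # hypothetical label

# Surface how much data is missing before trusting any model built on it.
print(X.isna().mean())

# Make the handling of gaps an explicit, reviewable pipeline step.
model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("clf", LogisticRegression(max_iter=1000)),
])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```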
Furthermore, AI algorithms can perpetuate existing biases or create new ones. AI learns from the data it is fed, so if the data is biased, the algorithm will be too. This issue is critical in healthcare, where biases can lead to incorrect diagnoses, inadequate treatment, and systemic inequities.
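One simple way to surface the bias problem is to evaluate a model's error rates separately for each demographic group rather than in aggregate. The sketch below, an illustration rather than a complete fairness audit, compares false negative rates across a hypothetical protected attribute; a large gap between groups is a warning sign that the model under-serves one population.

```python
# A minimal sketch of one bias check: comparing false negative rates
# across a demographic attribute. All values here are illustrative.
import numpy as np

def false_negative_rate(y_true, y_pred):
    """Fraction of true positives the model missed."""
    positives = y_true == 1
    if positives.sum() == 0:
        return float("nan")
    return float(((y_pred == 0) & positives).sum() / positives.sum())

# In practice y_true / y_pred come from a held-out test set, and
# `group` is a protected attribute recorded alongside each example.
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

for g in np.unique(group):
    mask = group == g
    print(g, false_negative_rate(y_true[mask], y_pred[mask]))
```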
Another challenge with AI in healthcare is ensuring privacy and security. Healthcare data is highly sensitive and must be protected from unauthorized access or misuse. AI applications that use patient data must comply with strict privacy regulations and maintain the highest data security standards. Special attention must be paid to where the data resides and with whom it is shared. Traditional security controls need to be in place around AI-enabled applications, and new controls must be developed and implemented to protect the AI models themselves.
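As one illustration of such a control, patient identifiers can be pseudonymized with a keyed hash before records are shared with an AI pipeline, so raw identifiers never leave the covered environment. This is a minimal sketch under assumed conditions: key management, audit logging, and the surrounding access controls are deliberately out of scope, and the environment variable name is a placeholder.

```python
# A minimal sketch of pseudonymizing patient identifiers before data
# is shared with an AI pipeline, using a keyed hash (HMAC) so the raw
# IDs are not recoverable without the key.
import hashlib
import hmac
import os

# In practice the key would come from a managed secret store, not code;
# PSEUDONYM_KEY is a placeholder name.
SECRET_KEY = os.environ["PSEUDONYM_KEY"].encode()

def pseudonymize(patient_id: str) -> str:
    """Return a stable, non-reversible token for a patient identifier."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "MRN-000123", "hba1c": 7.2}
safe_record = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(safe_record)
```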
Healthcare providers must carefully consider how they implement and deploy AI applications to ensure that they improve patient outcomes while minimizing risk. Ultimately, the ethical and responsible use of AI in healthcare will lead to a safer, more efficient, and more equitable healthcare system.
AI Challenges to the Department of Defense
While AI holds enormous positive potential, it can also disrupt the mechanisms and safeguards currently in place for information protection, undermining the level of protection they provide. From a risk perspective, this is concerning, as nation-states and coordinated threat actor groups continuously seek better ways to circumvent existing protections.
One of the key mechanisms that currently enable this infiltration is phishing. As AI capabilities mature, nation-states and threat actors could, and will, use AI to create more convincing campaigns. For example, AI could be configured to pull in vast amounts of information from publicly available sources (such as social media), previously exfiltrated data, and other repositories, then craft targeted campaigns around that data that lend the attack an air of legitimacy. This scenario is especially concerning given that among the largest targets of these adversaries are the networks and programs that support the Department of Defense (DOD).
One recent DOD initiative, the Cybersecurity Maturity Model Certification (CMMC), was created to mitigate risks to the organizations that support the DOD. The CMMC was launched because nation-states were creating working models of DOD technologies before the DOD itself could field them. Using AI, a threat actor could feed the currently required configuration settings into an AI engine and identify threat vectors that those configuration items do not account for. Another concern is the prevalence of open-source development, with source code readily available; AI could be used to analyze that code and identify unknown vulnerabilities (zero days) that have not yet been addressed.
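The same gap-analysis idea cuts both ways: defenders can programmatically diff deployed configurations against a required baseline to find exposures before an adversary does. The sketch below illustrates the concept with a few invented settings; the names are placeholders, not drawn from an actual CMMC control catalog.

```python
# A minimal sketch of baseline gap analysis: diffing a deployed
# configuration against required settings to surface gaps before an
# adversary (AI-assisted or not) finds them. Setting names are invented.
required_baseline = {
    "password_min_length": 14,
    "mfa_required": True,
    "tls_min_version": "1.2",
}

deployed = {
    "password_min_length": 8,
    "mfa_required": True,
    # tls_min_version missing entirely
}

def find_gaps(baseline: dict, actual: dict) -> list[str]:
    """Report settings that are absent or weaker than the baseline."""
    gaps = []
    for setting, expected in baseline.items():
        if setting not in actual:
            gaps.append(f"{setting}: not configured (expected {expected})")
        elif actual[setting] != expected:
            gaps.append(f"{setting}: {actual[setting]} (expected {expected})")
    return gaps

for gap in find_gaps(required_baseline, deployed):
    print(gap)
```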
To address these evolving AI-enabled threats, cybersecurity professionals first need to understand them, and then integrate that knowledge into developing the next generation of protections for organizations. The longstanding notion is that cybersecurity professionals are always trying to catch up to malicious actors. By better understanding and leveraging AI themselves, these professionals can be better equipped to deal with the next generation of threats.
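As a small illustration of defenders leveraging AI, the sketch below trains a naive text classifier to flag likely phishing messages. The inline dataset is illustrative only; a real deployment would need substantial labeled data, careful evaluation, and ongoing retraining as attacker tactics evolve.

```python
# A minimal sketch of an AI-assisted defense: a naive phishing
# classifier trained on labeled message text using scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; real training data would be far larger.
messages = [
    "Your account is locked, verify your password here immediately",
    "Wire transfer required today, click this link to confirm",
    "Team meeting moved to 3pm, agenda attached",
    "Quarterly report draft ready for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(messages, labels)

# Score a new, unseen message.
print(clf.predict(["Urgent: confirm your password at this link"]))
```

Even a toy model like this illustrates the broader point: the same pattern-recognition capabilities that make AI a potent offensive tool can, in the right hands, strengthen the defense.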