By Matt Fisher, General Counsel, Carium
Twitter: @matt_r_fisher
Twitter: @cariumcares
Host of Healthcare de Jure – #HCdeJure
Security will only ever be as good as the design and implementation behind it. Acknowledging that reality is important to help ensure that security is actually built into systems and that data are actually protected. But what lies behind those concepts? That is where the nuance arises.
Focusing on design and implementation was brought to the fore by a recent research report about security connected to Fast Healthcare Interoperability Resources (FHIR) APIs. The report (of which this description is a gross oversimplification) found flaws in the way FHIR was incorporated into applications designed to encourage or promote the flow of data. The issues are not a flaw or failure of FHIR itself, but of the way in which it is incorporated and rolled out by app developers. The distinction is an important one to note because it should help drive attention to the right areas: namely, development and implementation.
FHIR and Security
Looking at the report addressing security and FHIR is a good starting point. FHIR is a standard that sets common data formats and elements with the intention of promoting the flow of data; its entire basis is to promote the exchange of data. What should be clear from that definition is that FHIR is not a security protocol, nor is it even meant to address what happens with the data from that perspective. Instead, as defined, FHIR is focused on standardization and on encouraging the flow of data.
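To make the standardization point concrete, here is a minimal sketch of what retrieving a FHIR resource looks like in Python. The server URL and resource ID are placeholders rather than references to any real deployment; the point is that any conformant server, from any vendor, returns the same standardized structure.

```python
import requests

# Hypothetical FHIR server base URL; real deployments publish their own endpoints.
FHIR_BASE = "https://fhir.example.com/r4"

# FHIR standardizes the format: a Patient resource uses the same element
# names and structure no matter which vendor's server returns it.
response = requests.get(
    f"{FHIR_BASE}/Patient/123",  # "123" is a placeholder resource ID
    headers={"Accept": "application/fhir+json"},
)
patient = response.json()

# Standard elements such as resourceType and birthDate appear in the same
# shape on every conformant server, which is what enables data exchange.
print(patient.get("resourceType"))  # "Patient"
print(patient.get("birthDate"))
```

Notice that nothing in this exchange is about security; the standard governs the shape of the data, not who may retrieve it.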
Security then becomes a separate layer beyond FHIR. If security is a separate layer, then the development of appropriate security protections falls on the developer. That distinction is highlighted in the response from HL7 (the developer of FHIR) to the report, which discusses the implementation flaws of the third parties taking advantage of the opportunities FHIR provides.
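In practice, that separate layer is usually OAuth 2.0, often profiled as SMART on FHIR: the application first obtains an access token from an authorization server and then attaches it to each FHIR request. The sketch below uses placeholder endpoints and credentials; it simply shows how the authorization step sits apart from the FHIR call itself. Enforcing that token correctly is the developer's job, which is exactly the layer the report found implemented poorly.

```python
import requests

# Placeholder endpoints; real deployments publish their own.
TOKEN_URL = "https://auth.example.com/oauth2/token"
FHIR_BASE = "https://fhir.example.com/r4"

# Step 1: obtain an access token from the authorization server.
# This step is defined by OAuth 2.0 / SMART on FHIR, not by FHIR itself.
token_response = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "client_id": "my-app",           # placeholder credentials
        "client_secret": "my-secret",
        "scope": "system/Patient.read",  # SMART-style scope
    },
)
access_token = token_response.json()["access_token"]

# Step 2: present the token with the FHIR request. Whether the server
# actually validates the token and limits access accordingly is an
# implementation choice made by the developer.
response = requests.get(
    f"{FHIR_BASE}/Patient/123",
    headers={
        "Accept": "application/fhir+json",
        "Authorization": f"Bearer {access_token}",
    },
)
```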
Use of FHIR in Applications
If security falls on the party using FHIR in the development of an application, then familiar concerns about the intentions and sophistication behind that development come back into play. The wide spectrum of understanding and intent among companies developing applications has been (or should be) an important topic. For applications targeted at healthcare, both the approach to entering the industry and who the targeted users are influence what rules apply when it comes to security (and privacy) requirements.
As should be understood, if an application will be produced or used by a healthcare organization to help it provide services, then HIPAA will most likely apply. A detailed discussion of how to determine when HIPAA applies and what that means from a technical perspective can be found in a prior post titled Applying HIPAA to Digital Health. To quickly summarize though, if HIPAA applies, then the company developing the application must have appropriate policies and procedures in place. HIPAA does not state specifically how an application should incorporate security or what needs to be built into the code. Additionally, the application is not compliant in and of itself. Instead, the application is a component of how the developing company complies with HIPAA.
What about when HIPAA does not apply? In that instance, a specific security regulation may not apply (or even exist). In the absence of a mandated security protocol, determining what security measures to include could be influenced by potentially applicable privacy requirements. Turning to privacy requirements, though, is likely not all that informative, since privacy requirements can exist without addressing security.
If arguably no regulation sets specific security standards, then what resources can be used to determine what constitutes an appropriate level of security? There are some agencies and industry-specific groups that promulgate standards. The standards will evolve over time (as they should, because threats change), but the key is likely to understand the industry and maintain an understanding of what others are doing. That is arguably how industry standards are both developed and confirmed, which is somewhat of a circular way to explain the concept. The creation of an industry standard also calls for participants in the industry to share information. Sharing is certainly a good path to pursue to enhance security, as everyone can learn from everyone else.
Intent Behind an Application
No matter what standards or expectations may exist, the intent of the application developer will influence what is included. If a developer wants to be able to scrape data or gain control of data, then it may pay lip service to protecting or securing data but not give enough time or attention to including appropriate degrees of security. While that approach can make sense to the developer, it leaves the assessment of the application up to the user. That creates a fraught situation because users may not be in a position to make that assessment, and it is exceedingly difficult to determine a developer's true intent from the outside.
Given the difficulty of assessing a developer's intent, enforcement or repercussions from inadequate security will not necessarily come until an issue occurs. If the application misrepresents or hides its actual purpose or operation, then claims could be possible under consumer protection laws within the purview of the Federal Trade Commission or various states. However, that means harm has already occurred and redress is being sought after the fact. Many individuals may not want to spend the time to pursue that remedy, and any benefit arrives only after the issue has already occurred.
Implementation of Applications
Leaving aside the other issues, and whether an application is designed well or not, implementation is a step that cannot be overlooked. Even the most secure application will not deliver its full security benefits if it is not put into operation and configured appropriately. A great example of that reality is the number of data breaches caused by misconfigured S3 buckets. The underlying cloud storage systems are usually well designed, but if they are not configured correctly, data become exposed.
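As a concrete illustration of the S3 point, the sketch below uses AWS's boto3 library to enable the bucket-level public access block, which shuts off the most common misconfiguration (a publicly readable bucket). The bucket name is a placeholder, and the call assumes credentials with permission to change the bucket's settings.

```python
import boto3

# Close off public access at the bucket level. The bucket name below is a
# placeholder; requires credentials allowed to modify the bucket's settings.
s3 = boto3.client("s3")

s3.put_public_access_block(
    Bucket="example-health-data-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # neutralize any existing public ACLs
        "BlockPublicPolicy": True,      # reject public bucket policies
        "RestrictPublicBuckets": True,  # limit access to authorized principals
    },
)
```

The storage service itself is sound; the breach risk comes entirely from leaving settings like these unreviewed.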
If implementation is a way to ensure security, then it must be a focus for each organization. Instead of putting an application straight into production, it is essential to review how the application will be used and check all of the settings. Taking time upfront can drive alignment with the existing protections within a system. In particular, default settings should not be relied upon: no matter what industry an application is targeted to, the defaults may not have all security features turned on or set up in a preferred manner. That is why implementation is the time to review and vet before the application goes live. Getting it right before data are actually flowing through the application in a live environment will help reduce the likelihood of a bad event occurring.
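One lightweight way to make that pre-go-live review systematic is to compare the application's effective settings against a security baseline and flag every deviation before deployment. The setting names below are purely illustrative, not drawn from any particular product.

```python
# Illustrative security baseline; the setting names are hypothetical and
# would come from the organization's own hardening checklist.
REQUIRED_BASELINE = {
    "enforce_tls": True,
    "audit_logging": True,
    "allow_anonymous_access": False,
    "session_timeout_minutes": 15,
}

def audit_settings(effective: dict) -> list[str]:
    """Return a finding for every setting that deviates from the baseline."""
    return [
        f"{key}: expected {expected!r}, found {effective.get(key)!r}"
        for key, expected in REQUIRED_BASELINE.items()
        if effective.get(key) != expected
    ]

# Example: a configuration left on vendor defaults fails the audit, which
# is exactly the review that should happen before go-live.
vendor_defaults = {"enforce_tls": True, "allow_anonymous_access": True}
for finding in audit_settings(vendor_defaults):
    print("FAIL:", finding)
```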
Security Front and Center
All of the discussion around applications just means that security must take a front and center position. Trust has to be earned and constantly reviewed. As stated at the beginning, security can only be as good as design and implementation (along with a number of other elements). It all then comes back to asking questions, increasing awareness of mandates or expectations, and being deliberate in each step of the process.
This article was originally published on The Pulse blog and is republished here with permission.