Former DoD Cyber-security Analyst Addresses HIPAA Security Challenges
By Anthony Brino, Associate Editor
Government Health IT
HHS’s Office for Civil Rights begins enforcement of the HIPAA Omnibus Rule on Sept. 23, 2013, and beyond regulatory compliance, the HIMSS Privacy and Security Committee is encouraging health organizations to adopt sound risk management practices and privacy and security standards by 2014.
According to Mac McMillan, a former Department of Defense cyber-security analyst and now CEO of the IT security consultancy CynergisTek, there’s still quite a ways to go.
As information networks in healthcare become more connected, McMillan said, privacy and security become more complex, and the demands and costs of compliance rise, especially when breaches occur.
As healthcare organizations prepare for a new wave of HIPAA audits, McMillan talks about why they should be designing security architectures from the get-go, some lessons from the 115 audits conducted in 2012, and the potential network threats to medical devices.
Q: You have been working in healthcare IT privacy and security for about a decade now. How do you think health organizations are doing?
A: I think it varies across the industry. Some of our larger institutions, I think, are a little bit further along. That’s partly because they’ve got the resources and the staff and the expertise in many cases to do that. Or they’re using outside experts to help them. I think as you go down the line in terms of size, it gets more and more difficult. That’s what we saw, quite frankly, with the results of the audits — the smallest entities are struggling the most with doing this. I think it really comes down, at the end of the day, to a resource issue, because they don’t have dedicated staff, they don’t have in many cases large IT organizations, and so a lot of the skill sets that are required are not necessarily available to them, unless they go outside.
There are some challenges out there; I think all in all, we’re doing a lot better than we were when we started nine years ago. But I think we still have quite a ways to go.
I don’t think it’s going to get any easier going forward, especially as we move into an expanded definition of what the perimeter is as we embrace BYOD and mobile applications. Prior to 2009, when HITECH came out and meaningful use began, we still had a number of organizations that did not have all of their medical records digitized. Today it’s the flip side: very few organizations haven’t digitized their data. And as those systems become more prevalent, they begin to identify new capabilities and new ways of using that information, which creates other new connections and new platforms and new modes of delivery. All of that increases complexity, and complexity is not the friend of security.
When you look at the data from the threat centers that monitor malware attacks focused on a particular industry, healthcare before 2009 was ranked somewhere around 15th or 16th. Today, it’s ranked number one. When you look at the size of the breaches that we’ve had, clearly, as we have digitized more and more of our data, we’ve had much larger breaches occur. Before, if you had a breach, it was generally very small. When it was paper-based, it was tens of records, it was hundreds of records — not tens of thousands of records, like we’ve seen with backup tapes or laptops that have gotten lost or were stolen.
Q: What did you take away from the OCR’s HIPAA audits last year?
A: When you look at the results of the audits, the biggest area of concern identified on the privacy side was authorizations — keeping track of who we are actually giving the information to and whether we have the proper authority to do that. When you look on the security side, it was things like logging and monitoring what users are doing in the environment. It was disaster recovery, in terms of availability of systems and making sure that is covered. It was conducting those risk assessments. Many of our organizations are still not conducting them, or not conducting them thoroughly enough.
The other area, of course, was accountability and encrypting mobile devices. In general there were very low marks for accountability of systems. Coupled with that is the lack of encryption we still see on some of these devices. If you don’t know where your devices are and you don’t have them protected, that’s a recipe for disaster, unfortunately. The biggest concern we still have in healthcare today is not so much hackers breaking into our systems and perpetrating complicated extortion attempts or theft of data through our systems directly; it’s still really organizations not taking proper precautions and users making mistakes and doing things they’re not supposed to. That’s what the audits showed us in spades. We really don’t know, in many cases, what’s going on in our environments, because there’s very little monitoring and auditing that’s of a proactive nature.
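Proactive monitoring of the kind McMillan describes does not have to be elaborate. The following is a minimal, hypothetical sketch of flagging unusual access patterns in an EHR audit log; the CSV layout, column names, and threshold are assumptions made for illustration and are not drawn from the interview or from any specific product.

```python
import csv

# Illustrative threshold: flag any user who views more than 200 distinct
# patient records in a single day for manual review (the number is an assumption).
DAILY_RECORD_THRESHOLD = 200

def flag_unusual_access(log_path):
    """Scan a hypothetical CSV access log with columns: date, user_id, patient_id."""
    per_user_patients = {}  # (date, user_id) -> set of patient_ids viewed
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["date"], row["user_id"])
            per_user_patients.setdefault(key, set()).add(row["patient_id"])

    # Report anyone whose daily volume of distinct records exceeds the threshold.
    return [
        {"date": date, "user_id": user, "distinct_patients": len(patients)}
        for (date, user), patients in per_user_patients.items()
        if len(patients) > DAILY_RECORD_THRESHOLD
    ]

if __name__ == "__main__":
    for hit in flag_unusual_access("ehr_access_log.csv"):  # hypothetical file name
        print(f"{hit['date']}: user {hit['user_id']} viewed "
              f"{hit['distinct_patients']} distinct patient records")
```

A routine, automated check like this is only a starting point, but it is the sort of review that turns audit logs from a forensic record into a proactive control.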
Q: How do you think organizations are doing in ensuring their IT is secure from the beginning, before a roll-out, which is something you and others have talked a lot about at industry conferences?
A: What we’re talking about is actually taking a step back before we buy systems and thinking about how we’re going to use that system, what information is going to be in that system, what that system is going to be communicating with, where it’s going to sit in our environment, and thinking about the types of inherent features that we need that system to have to help us be compliant and to protect that data. We have some organizations out there that I think are beginning to do a good job, because they’re beginning to realize the benefits of doing that ahead of time — engineering the security in, as opposed to retrofitting it later. It always costs more later, unfortunately, and making good decisions when buying systems will save a lot of headaches down the road.
But then we have, unfortunately, a lot of organizations that have a very decentralized buying apparatus. They still have non-IT folks who can go out there and purchase equipment, or influence decisions around or above the IT organization. And unfortunately, in the long run, they will pay for that, because if something occurs, the cost of a breach today is far higher than the cost of protection. Having to retrofit a system for security is much more costly than doing it properly upfront.
We’ve done several assessments in the last month or so where we’ve actually looked at whether people are buying systems today that have the right features to help them be compliant. In other words, do they have the capability to audit, to back up, to encrypt the backup, to set granular permissions with respect to user identity, and so on? Are they thinking about disaster recovery when they buy the systems, or are they just buying a system and assuming it’s always going to be up and running, which operators tend to do more often than not? What we’re finding is that we’re still not doing a good job of that. We’re still not putting vendors through their paces, in terms of interrogating their solutions prior to selection — things like encryption. We’re still buying systems whose built-in encryption doesn’t use a FIPS 140-2 [Federal Information Processing Standards] validated algorithm. Automatically, that makes the system ineligible for safe harbor. These are the kinds of criteria we could build into our processes to weed out the vendors that are serious about working in healthcare from the ones that aren’t.
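Putting vendors through their paces can be as simple as writing the selection criteria down and scoring every candidate against them before a contract is signed. The sketch below encodes a few of the criteria McMillan mentions (audit logging, encrypted backups, granular permissions, FIPS 140-2 validated encryption, disaster recovery) as a hypothetical checklist; the field names, the choice of hard requirements, and the example vendor are assumptions for illustration, not a formal standard.

```python
from dataclasses import dataclass, fields

# Illustrative procurement criteria drawn from the interview; which ones count as
# hard requirements is an assumption for the example, not a formal standard.
@dataclass
class VendorResponse:
    name: str
    supports_user_level_audit_logs: bool
    supports_encrypted_backups: bool
    supports_granular_permissions: bool
    encryption_fips_140_2_validated: bool  # relevant to breach "safe harbor"
    documented_disaster_recovery: bool

HARD_REQUIREMENTS = [
    "supports_user_level_audit_logs",
    "encryption_fips_140_2_validated",
]

def evaluate(vendor: VendorResponse) -> dict:
    """Report which criteria a vendor meets and which hard requirements fail."""
    results = {f.name: getattr(vendor, f.name) for f in fields(vendor) if f.name != "name"}
    hard_failures = [c for c in HARD_REQUIREMENTS if not results[c]]
    return {"vendor": vendor.name, "criteria": results, "hard_failures": hard_failures}

if __name__ == "__main__":
    candidate = VendorResponse(
        name="ExampleEHR",  # hypothetical vendor
        supports_user_level_audit_logs=True,
        supports_encrypted_backups=True,
        supports_granular_permissions=True,
        encryption_fips_140_2_validated=False,
        documented_disaster_recovery=True,
    )
    print(evaluate(candidate))
```

Even a checklist this simple forces the encryption question to be asked before selection rather than after a breach.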
Q: So there’s no evidence that any medical devices, like pacemakers, have been hacked, or of attempted hacks. But you’ve said these devices are vulnerable, especially if they’re connected to wireless networks?
A: More and more of our medical devices now communicate with the network, and more often than not over a wireless network, as opposed to a direct connection. What that means is that our wireless networks need to be more secure than they’ve needed to be in the past, because there are essentially three risks with medical devices. The first and most important one, of course, is patient safety. Any interruption of that wireless network can affect the patient connected to it. Before, when those devices were not networked, it was not quite the same scenario. The second area of concern is the integrity of the data itself, in terms of the availability of those segments of the network and those devices. And last but not least is the integrity of the devices themselves with respect to their susceptibility to compromise from other vectors, such as malware, and what that can do, not only to those devices or that segment of the network, but also to the rest of the network if it’s not properly managed.
Q: And how were those vulnerabilities discovered?
A: They’re discovered when networks go down and part of the network is affected, and obviously you know about it immediately. When these devices are networked, via the wireless network, and something happens to that part of the network so that it’s no longer operating properly, or it’s affected in some way, then obviously you’re going to know that immediately. That’s a lot different than somebody actually, like we’ve seen of late at some of the [trade] shows, demonstrating how to hack a medical device. So far, knock on wood, to my knowledge, we’ve not seen an actual case of somebody deliberately attacking a patient or attacking medical devices for the purpose of harming anyone. What we’ve seen so far is just the normal types of events that hospitals suffer all the time — outages, or things that affect the network or parts of the network, due to malware, etc., which are more indirect, indiscriminate types of events.
This article was originally published on Government Health IT and is used here with permission. You can hear Mac McMillan discuss these issues on the May 21, 2013 broadcast of MU Live! The show will air at 2 pm ET on our Internet radio station, HealthcareNOWradio.com.