One of the recurring questions I get from (more sophisticated) clients about security risk mitigation and safeguards is: what’s the standard for acceptable risk? This is, in effect, the question of what counts as compliance with the Security Rule overall, and with each Security Rule safeguard as well.
“Acceptable risk” is, ideally, supposed to be whatever an organization decides it is, after appropriate assessment and analysis of the probability and severity of harmful incidents, and after implementing whatever safeguards the organization considers reasonable and appropriate. Healthcare organizations don’t really have the last word on this, however, and another answer is that “acceptable risk” is whatever OCR decides it is if they audit you. This is not a very comforting standard for those who just want to know what to do, but there are better ways to address the problem.
Certainly, at a minimum, you need to be sure you cover all the specifications in the Security Rule, either directly or “addressably” where appropriate. You should also certainly be informed by OCR’s audit protocols and the results of OCR enforcement actions. I’d also strongly recommend checking against NIST authorities where applicable; while these are not legally required for non-Federal organizations (except by contracts with Federal agencies, where applicable), they are definitely persuasive. They are also lengthy, generally aimed at larger organizations, and often don’t take private-sector realities into account. (That last bit was editorial comment which does reflect the views of the management.)
Even with these authorities taken into account, there will be many situations where it will be difficult to decide what’s an “adequate” safeguard and an “acceptable” risk, and this can be especially problematic if there’s a significant cost or burden difference between a “decent” solution which reduces risk to a significant degree, and a “better” solution which reduces risk even further. Either one might be “reasonable and appropriate” under the Security Rule; or then again, might not be. Short of an OCR audit, how do you decide which way to go?
There’s a useful risk management distinction which can be applied here, between “mere compliance” and “loss prevention,” which in the healthcare space in particular I prefer to broaden to “prevention of harm”: “loss prevention” typically focuses on losses to the organization under analysis, while “prevention of harm” takes a broader perspective and includes harm both to the organization and to third parties. This broader perspective is more consistent with traditional healthcare risk management strategies, which definitely include assessment of potential harms to patient health and safety, as well as with the Breach Notification Rule, which is based on the concept of mitigation of third-party harms.
(A particularly nauseating – and I mean that literally – example of the distinction between acceptable risk for compliance and harm prevention in the food sector can be found here: Food Sickens Millions as Company-Paid Checks Find It Safe.)
The distinction between the two is not really as simple as a cost difference. As a strategy “mere compliance” works a lot like a checklist: Make sure you’ve got something for every specification, make sure you’ve got the basic documentation in place as required, and don’t go beyond the four corners of the Security Rule or the basic assessment activities it requires. Then, if something bad happens, hope that your safeguard selection and documentation will withstand scrutiny. This might well be adequate for “mere compliance” – OCR might let you know, after the fact.
A “harm prevention” strategy, on the other hand, would include whatever is necessary for compliance, but go beyond to consider the range of potential harms which could come from a security failure. Most interpretations of the Security Rule seem to take a very narrow view that it is all about protecting the confidentiality of information, but I think that is too narrow, and that the Rule does allow for consideration of other types of risk, and balancing against them. This is, I think, part of the point of the “reasonable and appropriate” analysis which is supposed to inform safeguards determinations. In fact, I think there is a good argument that the Security Rule assumes a “harm prevention” standard, but as far as I know that hasn’t been articulated by OCR.
A “harm prevention” approach, then, would consider not only information-related risks, but also health and safety risks. Even within the category of information-related risks, it would take a weighted approach, and consider the severity of potential harms to third parties. Again, this is consistent with the “reasonable and appropriate” standard, as well as the “addressable specifications” standard where that applies. Safeguards would then be selected based on the degree to which they are projected to prevent serious potential harms, and trade-offs would be made based on harm prevention rather than “mere compliance.” This might mean that some “mere compliance” checklist items get short shrift – I’ve seen this in some organizations – but it also might well mean that the likelihood of reportable security breaches would be substantially lower. This in turn would reduce the likelihood of an OCR audit.
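The weighting difference between the two strategies can be sketched in a few lines of code. This is purely illustrative: the scales, the example risks, and the likelihood-times-severity scoring are hypothetical conventions borrowed from generic risk-matrix practice, not anything prescribed by OCR or the Security Rule. The point is only that the same risk inventory can rank differently once third-party harm is weighed.

```python
# Toy sketch: contrast a "mere compliance" scoring (organizational loss only)
# with a "harm prevention" scoring that also weighs harm to patients and
# other third parties. All numbers and scenarios are hypothetical.

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int            # 1 (rare) .. 5 (frequent)
    org_severity: int          # harm to the organization, 1 .. 5
    third_party_severity: int  # harm to patients / third parties, 1 .. 5

def compliance_score(risk: Risk) -> int:
    """Checklist view: weigh only losses to the organization itself."""
    return risk.likelihood * risk.org_severity

def harm_prevention_score(risk: Risk) -> int:
    """Broader view: weigh the worst potential harm to anyone affected."""
    return risk.likelihood * max(risk.org_severity, risk.third_party_severity)

risks = [
    Risk("stolen unencrypted laptop", likelihood=3,
         org_severity=1, third_party_severity=5),
    Risk("server room flood", likelihood=1,
         org_severity=4, third_party_severity=1),
]

# The same inventory produces different top priorities under each strategy:
by_compliance = max(risks, key=compliance_score)
by_harm = max(risks, key=harm_prevention_score)
print(by_compliance.name)  # the flood tops the org-loss-only ranking
print(by_harm.name)        # the laptop tops the harm-prevention ranking
```

Here the stolen laptop scores low on an organizational-loss view but dominates once severity of harm to patients is counted, which is exactly the reprioritization a “harm prevention” strategy is meant to produce.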
The approach taken by any given organization will be determined by the culture and resources of the organization, of course, and as I noted “harm prevention” definitely needs to cover everything included in “mere compliance,” even if it emphasizes some items less and includes additional risks not included under a narrow interpretation of the Security Rule. But while “harm prevention” may cost more in the short run, my own suspicion is that for most if not all healthcare organizations, in the long run it is likely to be far more cost-effective.