InfoSec Risk-Shifting and Consumers

One of my pet peeves (I have quite a few) is the way we tend to use the term “risk management” as if it had a generally accepted meaning everybody understands. For infosec and most other IT professional purposes, “risk” generally means a hazard associated with IT usage, more formally described as a function of the probability of an event with negative consequences occurring and the potential severity of the resulting harm.

From an IT and infosec professional’s POV, “risk management” is what you do to reduce the likelihood of an identified, potential negative event or class of events, its harmful consequences, or both. Safeguards and controls are selected depending on whether their associated cost is reasonably proportionate to the expected benefits in reducing risks.
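This cost-benefit test is often operationalized as an annualized loss expectancy (ALE) comparison: a safeguard is justified if the expected loss it prevents exceeds its cost. A minimal sketch, with all figures invented for illustration (nothing here comes from the post or any standard):

```python
def ale(annual_rate_of_occurrence: float, single_loss_expectancy: float) -> float:
    """Annualized loss expectancy: expected yearly loss from a given risk."""
    return annual_rate_of_occurrence * single_loss_expectancy

def safeguard_is_cost_effective(ale_before: float, ale_after: float,
                                annual_safeguard_cost: float) -> bool:
    """Under the standard model, a control is justified only when the
    risk reduction it buys exceeds what it costs to run."""
    return (ale_before - ale_after) > annual_safeguard_cost

# Illustrative numbers: a breach expected once every 5 years costing $200,000,
# reduced to once every 20 years by a control costing $40,000/year.
before = ale(0.2, 200_000)   # $40,000/year expected loss without the control
after = ale(0.05, 200_000)   # $10,000/year expected loss with it
print(safeguard_is_cost_effective(before, after, 40_000))  # False: control rejected
```

Note what the model does with the rejected control: the $10,000/year of expected loss that remains is the residual risk, and the model says nothing about who ends up bearing it.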

This concept set is a little fuzzy around the edges, but is generally accepted as a viable algorithm for IT management and infosec. (I actually don’t think this algorithm works all that well in these areas either, and I think I’ve got a solution for that, but that’s a topic for a future post.) However, I don’t think this particular algorithm is recognized and accepted by one very important IT stakeholder group: Consumers.

Consumer advocates will not find the infosec/IT professional cost-benefit model very attractive for a simple reason: It generally shifts residual risks to them. Any cost-benefit-based risk management strategy will inevitably wind up determining that some risks are not worth the cost of elimination. If this model is the legal standard of care – which it in fact is under HIPAA, GLBA, and other laws and standards – that means that an organization which has decided not to protect against such risks is not liable if a negative event in that risk range occurs. If the individual(s) affected by a negative event have no recourse, they have assumed the risk; in other words, the residual risks have been shifted to the consumer.

For an example, consider a mythical ecommerce company which gathers customer data as part of financial services it provides. The company is subject to the Gramm-Leach-Bliley Act, and so must provide security safeguards for this data. It selects these safeguards based on the standard cost-benefit model, and decides it would not be cost-effective to implement, say, two-factor authentication for access to customer transactions data. It then experiences a security incident involving theft and fraudulent misuse of customer data, through an exploit which could have been prevented by two-factor authentication.
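The mythical company’s decision can be made concrete with back-of-envelope arithmetic. All of the numbers below are mine, invented purely for illustration; none come from GLBA or any real case:

```python
# Hypothetical figures for the mythical ecommerce company (all invented).
p_breach = 0.02                      # annual probability of the credential-theft incident
company_loss_if_breach = 250_000     # company's own exposure: response costs, fines, PR
consumer_loss_if_breach = 5_000_000  # aggregate fraud harm borne by customers
two_factor_annual_cost = 150_000     # cost to deploy and run two-factor authentication

company_expected_loss = p_breach * company_loss_if_breach    # $5,000/year
consumer_expected_loss = p_breach * consumer_loss_if_breach  # $100,000/year

# The standard model compares only the company's own expected loss to the
# control's cost, so it rejects two-factor authentication here...
company_skips_2fa = two_factor_annual_cost > company_expected_loss  # True
# ...leaving the $100,000/year of expected consumer harm as residual risk
# shifted onto the customers.
```

The point of the sketch is not the specific figures but the asymmetry: a risk analysis can be perfectly rational from the company’s side of the ledger while the bulk of the expected harm sits on the other side.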

Is the company liable to the customers who have been harmed? I would say probably not, if the standard of care is set by Gramm-Leach-Bliley and the company performed a reasonably competent risk analysis whose data supported going with single- rather than two-factor authentication. (Yes, I know Gramm-Leach-Bliley doesn’t provide for a cause of action, but trust me, I could write up a complaint using the regulatory standard to set the negligence standard of care.) I’d also say it probably isn’t exposed to regulatory penalties from the FTC, for the same reason.

If you’re one of the consumers harmed by this incident, the fact that the company’s cost-benefit analysis justified the decision to leave you exposed, and to let you bear all the harm yourself, probably seems not just cold-hearted but insulting.

The problem is that when we look at the world as individuals (not just consumers!) we don’t do it through cost-benefit lenses, and (notwithstanding Milton Friedman, may he rest in peace) that’s probably a good thing. We consider that we have our own rights and interests, and don’t want to be harmed (materially) just to save someone else some money. And that’s what being on the receiving end of standard model risk management looks and feels like, if you’re the victim of residual risk-shifting.

I don’t know quite what the solution is for this dichotomy of perspectives; I think it is quite common in many areas – I rather suspect it is the rule rather than the exception. I do know that it makes infosec public policy and legal standards inherently unstable, because use of the standard cost-benefit model means that there will unavoidably be consumers aggrieved at being (or at least feeling) victimized, and so there will be public policy pressure by privacy and victims’ advocates to shift the risks back to the companies.

At the public policy level, I think this means we need to have robust discussions about what, exactly, we mean by “risk,” and what the trade-offs might be. At the company level, I think we need to be very careful to think through how residual risks might be shifted by the risk management strategies we adopt, and whether that in itself is acceptable.

After all, the more infosec residual risk you shift to consumers, the greater the risk you will create aggrieved plaintiffs and/or advocacy and pressure groups. In the final analysis, a low-cost infosec strategy just might wind up turning the residual risks you tried to shift into negative publicity, lawsuits and regulatory action . . .
