All organizations have limited resources: money, people and executive time. One way to allocate those scarce resources is by calculating the return on investment (ROI) for each option. There are limitations to such calculations, but we'll get into that later.
So how do we calculate the ROI of an investment in measures that address information security risks (which I'll call "cyber" from here on)?
An ROI Formula
Let’s start by defining ROI. Investopedia defines it as: "Return on Investment (ROI) is a performance measure, used to evaluate the efficiency of an investment or compare the efficiency of a number of different investments. ROI measures the amount of return on an investment, relative to the investment’s cost. To calculate ROI, the benefit (or return) of an investment is divided by the cost of the investment. The result is expressed as a percentage or a ratio."
The return on investment formula therefore is: ROI = (Gain from Investment - Cost of Investment) / Cost of Investment
In the above formula, "Gain from Investment" refers to the proceeds obtained from the sale of the investment of interest. Because ROI is measured as a percentage, it can be easily compared with returns from other investments, allowing one to measure a variety of types of investments against one another.
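As a quick sketch of the formula above (the function name and the sample numbers are my own, purely for illustration):

```python
def roi(gain, cost):
    """Return on investment as a fraction: (gain - cost) / cost."""
    return (gain - cost) / cost

# A hypothetical investment of $100,000 that returns $120,000 yields a 20% ROI.
print(f"{roi(120_000, 100_000):.0%}")  # 20%
```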
Why should this apply to investments in cyber?
How to Calculate the ROI of Information Security
We could (and probably should) look at the investment in cyber overall, but for purposes of this post I'll discuss the cost of additional tools, services and personnel to address a recently identified risk. (I’m going to use common language rather than get hung up on semantic discussions about which words to use per ISO.)
For example, the cyber risk created by the acquisition of robots for the organization's warehouse has been identified and assessed by the experts (the CISO and CRO, with the concurrence of the CIO) as high, a level beyond what they believe the organization should take. Note that I use "take risk" rather than "accept risk" as it is more true to real life and the decisions we have to make.
They worked with business managers to reach this decision and based the risk assessment on how a breach would affect enterprise objectives. The business managers value the negative consequences of a breach at $10 million and the CISO says that the likelihood of that significant a breach due to vulnerabilities in the robot automation is currently 5 percent.
They have requested an investment of $250,000 per annum, saying that amount is necessary to bring the risk to acceptable levels. The CISO believes that would bring the likelihood of a significant breach ($10 million) down to 2 percent. In this case, we would modify the ROI calculation so that it is based on the reduction in risk rather than the gain from the investment.
If we accept that the current risk is a $10 million loss at a 5 percent likelihood (an expected loss of $500,000), and that the risk after the investment is in place is $10 million at a 2 percent likelihood (an expected loss of $200,000), then the reduction in expected loss is $300,000. Against the $250,000 investment, this calculates as an ROI of 20 percent, which sounds like a great investment. But would spending an additional quarter of a million dollars be a good business decision?
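The arithmetic above can be sketched as follows (the variable names are mine; the dollar figures and likelihoods come from the example):

```python
impact = 10_000_000        # valued consequence of a significant breach
likelihood_before = 0.05   # current likelihood per the CISO
likelihood_after = 0.02    # likelihood after the proposed investment
investment = 250_000       # annual cost of the tools, services and personnel

expected_loss_before = impact * likelihood_before            # $500,000
expected_loss_after = impact * likelihood_after              # $200,000
risk_reduction = expected_loss_before - expected_loss_after  # $300,000

# ROI based on the reduction in expected loss rather than a sale gain.
roi = (risk_reduction - investment) / investment
print(f"Risk reduction: ${risk_reduction:,.0f}, ROI: {roi:.0%}")
# Risk reduction: $300,000, ROI: 20%
```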
A couple of questions:
- While the CRO and CISO say that the risk is outside acceptable levels, is it really?
- Would the likelihood of a significant breach really be reduced to 2 percent? Or is that simply the reduction in the risk from this particular vulnerability?
Taking each in turn: $10 million is a lot and would look bad in the newspapers. But if the organization has annual revenue of $4 billion and net earnings of $350 million, is a $10 million loss truly significant? I suggest the enterprise could shrug off such a loss fairly easily. On the other hand, there might be serious follow-on consequences to an incident.
Top management and the board should have serious conversations that focus not only on acceptable losses, but also on what investors and regulators might consider a reasonable level of cyber defense, detection and response. Any definition of ‘risk appetite’ should probably be based on the likelihood of a serious breach, rather than on the amount of loss.
The Cost of Doing Business
Let me start the discussion of the second question with a story. Some years ago, a partner and manager from PwC visited me. They suggested that my company acquire software from them that would address an information security exposure we had. I agreed that the software would indeed be of value as it would close a small open window in our infrastructure. But, I also informed them it was not a good investment because not only did we have other windows open, but the lock on the front door was broken.
When you have multiple vulnerabilities, the possibility of a breach remains high until all (or close to all) of them have been closed down.
The second story comes from a presentation by the security expert Bigman, who alarmed us all by explaining how easy it is to hack almost any organization. His message, in short:

- Computers are not secure.
- IoT is even less secure; we should call it the internet of insecure devices.

The best counter-measure is to isolate your systems, but that is not always practical.
In light of the two stories above, would the $250,000 investment really reduce the risk of a breach with a significant effect on the achievement of enterprise objectives? Or is it just closing a small window?
I think we should stand back and consider the likelihood of a breach as a result of an incident taking advantage of any of our vulnerabilities with an impact valued at, say, $10 million (if that is the most you could sustain). Is that likelihood acceptable? If not, what likelihood is acceptable? How much risk are you willing to take?
Given that, how can you get to an acceptable level? How many windows would you have to close? Which ones, and at what cost? Is there a better solution? In fact, what are all your options, including actions that have nothing to do with cyber, such as removing your intellectual property and putting it under your bed, or changing business strategies?
I remain unconvinced the ROI on cyber is really as high as it may seem at first glance. Rather, I am starting to think that at some point it is better to consider cyber risk a “cost of doing business.”
If you can’t actually reduce the likelihood of a breach, can you at least increase the likelihood of prompt detection and response? Can you get to where a prudent individual would say you have a reasonable level of investment in cyber?
What do you think?