Selling the value of culture isn’t easy. Especially in a cybersecurity program. I’ve found security professionals to be among the loudest complainers about terrible security cultures within their organizations. But, ironically, they also tend to be the first ones to throw up their hands when it comes to changing those cultures. Sometimes the reason behind this feeling of helplessness is the unpleasant truth that, as much as companies say they take security seriously, InfoSec teams and CISOs often lack the political juice to effect fundamental change. Other times the reason has more to do with the fact that people don’t come equipped with a command line interface. That tends to make them more or less unfathomable to security teams used to working with technology systems. Changing the unconscious biases and values that make up organizational culture seems about as likely as writing a shell script that will make your server kiss you and really mean it. So why bother?
Because culture eats strategy for breakfast, as Peter Drucker said and I’m so fond of quoting. InfoSec programs that expect to achieve their security objectives must address culture or fail. But selling people on the idea of cultural change and transformation means justifying spending money on it, money that might otherwise go elsewhere. You have to make the case that the money will be better spent on culture than on that next-gen firewall you’re considering, or that big data analytics project.
To that end, this post is dedicated to demonstrating the financial return on investment that comes from changing security culture, making it stronger. Organizations with stronger cultures tend to make decisions that prioritize security, even against other priorities like efficiency, productivity, or profitability. Organizations with strong security cultures tend to believe that, even if security negatively impacts such priorities in the short term, the organization will make up those losses in the long run by experiencing fewer and less damaging security incidents. Weaker security cultures tend to go the other way: security is not the first priority and may be quickly shouted down as a concern when other priorities are on the line. That doesn’t mean security is not important, just that it’s not as important as getting projects done on time, making it easier for people to do things, or saving money. An organization’s security culture influences the individual security decisions people make. So we can begin with a simple question: how much do “bad” security decisions (the ones in which security is not as important as other priorities) cost an organization?
I’ll answer this question, and by extension the question of how financially valuable a strong security culture is, by building a simple model. Models are made up of structures, assumptions, and data. They are not all that difficult to understand, but it’s always important to make these things explicit. Showing your work lets people follow your logic and argue with you if they think you are wrong.
Let’s start with the basics. I’m going to use a simple version of a Monte Carlo simulation to build and run this model, which just means I’m going to define some scenarios (including the likelihood that certain things will happen in each scenario) and then use a computer to simulate those scenarios taking place hundreds of times. That will allow me to produce some expected ranges of outcomes, given the assumptions of the model. If you are interested, Investopedia has a good, high-level Monte Carlo video tutorial.
There’s an important caveat when building this sort of model. We will not get a single number as a result. Because we’re dealing with probabilities, we have to factor in uncertainty. Which means we have to express our results in terms of ranges and not the number. Imagine you’re playing craps in a casino (Monte Carlo simulations are named after that city’s famous casino). You may know that the most likely number you’ll get rolling two fair six-sided dice is 7, because more combinations of the dice add up to 7 than to any other total. Rolling a 2 or a 12 will be less likely because there are fewer combinations that equal those numbers. If you’re thinking in averages, in terms of the number, and you expect to see a 7 but don’t expect to see snake-eyes or a 12, you’re going to have a bad night. That may sound like common sense, but think about how often we reduce security risk to a single score or dollar figure. It’s fascinating to me how an industry that loves to spend time in Las Vegas can so regularly fall into the trap of deterministic (“give me the number”) thinking when it comes to security risk analysis. But that’s another post…
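If you’d like to see the dice intuition in code, here’s a quick Python sketch (purely illustrative, not part of the model itself):

```python
import random
from collections import Counter

random.seed(42)  # make the rolls reproducible

# Roll two fair six-sided dice 100,000 times and tally the sums.
rolls = Counter(random.randint(1, 6) + random.randint(1, 6) for _ in range(100_000))

most_common_sum, _ = rolls.most_common(1)[0]
print(most_common_sum)              # 7 -- the most likely total
print(rolls[2] > 0, rolls[12] > 0)  # True True -- the rare totals still happen
```

Run it with different seeds: 7 always dominates, but the 2s and 12s never disappear. That’s the whole point of thinking in ranges.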
To build my cultural risk model I have to define a few things and then make some assumptions about them. In plain words, my model looks like this:
Organizations make a certain number of decisions each year that may impact information security. Some of those decisions will involve conflicts between security and other priorities. The organization’s security culture will influence how often security is prioritized during these conflicts. If security is not prioritized, there’s a chance the decision may result in a security incident. Every security incident will result in some level of financial loss for the organization, resulting in some total annual loss across all incidents.
There are limits to these assumptions, and you may have already spotted a few concerns. For example, a bad security decision one year may not result in an incident happening in the same year. But we want to keep things a little simpler here, so we may note that objection but agree that, in general, the model makes sense. We can even represent the model graphically.

The next step is to break down the components of the model and make some explicit assumptions about the parameters of each component. For example, how many security-related decisions do you expect your organization to make in a given year? Again, for simplicity’s sake let’s assume that your organization makes about one such decision each working day. It might be a formal decision about whether to install a new device on the network. Or it might be an individual decision about whether or not to click the link in an unfamiliar email you just received. We can debate whether 260 decisions per year is too many or too few (that’s the value of making your assumptions explicit) but we won’t do it here. What we will do is assume there’s some variability in that number. We don’t make exactly one decision each weekday. Sometimes we make two, sometimes we don’t make any. But we feel comfortable estimating that we’ll make about 260 decisions each year, give or take 25-50. We can graph that too, which shows us that we’re looking at an expected range of roughly 180 to 330 security-relevant decisions made by the organization over the year.
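Here’s how that first component might look in Python. The normal distribution and the standard deviation of 35 are my assumptions; the post only commits to “about 260, give or take 25-50”:

```python
import random

random.seed(1)

def annual_decisions() -> int:
    """Sample the number of security-relevant decisions made in a year.

    Assumes a normal distribution centered on 260 (one per working day)
    with a standard deviation of 35 -- my reading of 'give or take 25-50'.
    The normal shape is a modeling choice; the post doesn't name one.
    """
    return max(0, round(random.gauss(260, 35)))

samples = sorted(annual_decisions() for _ in range(10_000))
low, high = samples[100], samples[-101]  # roughly the 1st and 99th percentiles
print(low, high)  # close to the 180-330 range described in the text
```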
Now according to our model, we would expect some portion of these decisions to be security no-brainers, like should we restrict access to the new HR database we are installing? But other decisions might not be so cut and dried, like should we grant that security exception the e-commerce team is asking for so that they can add new features to the customer-facing portal? Or should I finish the security testing on my code even if it means delaying product delivery? We can estimate how many times these conflicts will come up. In this model I estimate that they come up about half the time, but never less often than 1 out of 10 decisions, and never more often than 8 out of 10. That adds another layer of possibilities to the model, this one affecting how many decision conflicts the organization will have to deal with annually.
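This layer fits a triangular distribution, which takes exactly the three values given above: a minimum, a maximum, and a most-likely value. The triangular shape itself is my assumption:

```python
import random

random.seed(2)

def conflict_fraction() -> float:
    """Fraction of decisions that involve a security-vs-other-priority conflict.

    Never below 1 in 10, never above 8 in 10, most likely about half.
    """
    return random.triangular(0.1, 0.8, 0.5)

fracs = [conflict_fraction() for _ in range(10_000)]
print(round(sum(fracs) / len(fracs), 2))  # mean near (0.1 + 0.5 + 0.8) / 3, about 0.47
```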
As I discussed earlier, how each of these conflicts plays out, and whether the resulting decision prioritizes security or prioritizes something else, will be heavily influenced by the enterprise security culture. A strong culture will make security the most important consideration (or at least an equally weighted one) more often than a weak security culture, resulting in more decisions that favor security and fewer that are “bad” from a security perspective. To add this factor to the model, I’ll again keep it simple. I’ll assume that in a weak culture, security will come out on top in a conflict 25% of the time, in about 1 out of every 4 decisions. By contrast, a strong security culture will choose security over competing priorities 75% of the time. Not every decision will go the security team’s way, but more of them will. The end result is the two cultures producing two different ranges for the number of “bad” security decisions made, as the next two graphs illustrate.
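In code, each contested decision becomes a coin flip weighted by culture. The 25% and 75% figures come from the post; the 130-conflict example year is mine:

```python
import random

random.seed(3)

def bad_decisions(conflicts: int, p_security_wins: float) -> int:
    """Count how many of `conflicts` contested decisions go against security."""
    return sum(random.random() >= p_security_wins for _ in range(conflicts))

# Say there are 130 conflicts in a year (about half of 260 decisions):
weak   = bad_decisions(130, 0.25)  # weak culture: security wins 25% of the time
strong = bad_decisions(130, 0.75)  # strong culture: security wins 75% of the time
print(weak, strong)  # the weak culture racks up roughly three times as many
```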
In the weaker of the security cultures I modeled, dozens upon dozens of decisions that would have otherwise prioritized security do not. This increases risk, since any of these less secure choices might result in an actual security breach. But what’s the likelihood that any one bad choice will end up triggering a full-blown incident? In the absence of empirical data, we may have to guesstimate, which is what I’ve done here. My model assumes that no fewer than 1 out of 1000 of the bad security decisions the organization makes over the year will result in a full-blown incident. I don’t specify how costly an incident will be (that comes later). I also assume that no more than 5 out of 100 bad security decisions (5%) will trigger an incident. That’s the worst things will get. On average, I expect 1% of the bad security decisions made will result in a security incident, or about 1 out of 100 times. So I’m being conservative here from a breach perspective. Even if the organization makes lots of bad security decisions, it will still be okay at least 95% of the time.
The last thing we have to figure out for the model is how much a security incident is likely to cost if one does happen. Once more, for the sake of simplicity in an already long blog post, I’m making the assumption that no incident will ever cost the organization less than $10,000 in terms of lost time, productivity, and damages due to theft or sabotage. That’s the minimum. At the other end of the scale, I’m putting a cap on incident losses of $2 million per event. The organization will never see a single incident cost more than that, even including loss of reputation, fines, etc. A typical incident I’m pegging at a cool half million dollars – I’d expect to lose $500,000 total on any given breach resulting from a bad security decision.
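These last two ingredients, like the conflict rate, fit triangular distributions. Only the minimum, most-likely, and maximum values come from the text; the triangular shape is, again, my assumption:

```python
import random

random.seed(4)

def incident_probability() -> float:
    """Chance that a single bad decision triggers an incident.

    Floor of 1 in 1,000, most likely 1 in 100, ceiling of 5 in 100.
    """
    return random.triangular(0.001, 0.05, 0.01)

def incident_cost() -> float:
    """Loss from a single incident, in dollars.

    Floor of $10,000, most likely $500,000, capped at $2 million per event.
    """
    return random.triangular(10_000, 2_000_000, 500_000)

costs = [incident_cost() for _ in range(10_000)]
print(round(sum(costs) / len(costs)))  # mean near (10k + 500k + 2M) / 3, about $837k
```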
As I’ve said, you may argue these numbers are too high or too low. It doesn’t matter. I’ve deliberately tried to err on the side of low cost, because I want to demonstrate how valuable a security culture can be even when it’s not very likely you’ll get breached, and a breach isn’t likely to cost you a tremendous amount of money.
And so we find ourselves at the point where we can run the model in all its permutations, over and over, imagining that all the different parts and probabilities I’ve described above crunch together. The organization makes decisions over the course of a year. Conflicts come up, the outcomes of which are influenced by enterprise security culture. Good security decisions are made, as are bad ones. Some of the bad ones cause security incidents and those incidents cost money. How does it all play out? See for yourself.
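Here’s a sketch of the whole simulation in Python, pulling the pieces together. The distribution shapes and the standard deviation are my readings of the figures above, not anything more authoritative:

```python
import random

random.seed(5)

def simulate_year(p_security_wins: float) -> float:
    """Simulate one year; return the total incident losses in dollars."""
    decisions = max(0, round(random.gauss(260, 35)))        # ~1 per working day
    conflicts = round(decisions * random.triangular(0.1, 0.8, 0.5))
    bad = sum(random.random() >= p_security_wins for _ in range(conflicts))
    p_incident = random.triangular(0.001, 0.05, 0.01)       # per bad decision
    incidents = sum(random.random() < p_incident for _ in range(bad))
    return sum(random.triangular(10_000, 2_000_000, 500_000) for _ in range(incidents))

RUNS = 5_000
weak   = sorted(simulate_year(0.25) for _ in range(RUNS))   # weak security culture
strong = sorted(simulate_year(0.75) for _ in range(RUNS))   # strong security culture

for label, losses in (("weak", weak), ("strong", strong)):
    mean = sum(losses) / RUNS
    p90 = losses[int(RUNS * 0.9)]
    print(f"{label:>6}: mean annual loss ${mean:,.0f}, 90th percentile ${p90:,.0f}")
```

With these numbers, the weak culture’s average annual loss comes out at roughly three times the strong culture’s, and both distributions have long, expensive tails.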
Culture matters. As you can see in the results of the model, the probabilities for annual losses from security incidents are quite different. And the only variable that has changed is the strength of the security culture. In a weaker culture, where more bad decisions are made, more money is lost. The stronger culture takes less of a financial hit. Of course there is still the possibility of incidents and losses, including very costly ones, with either culture. Security culture transformation is no more of a silver bullet guaranteeing you’ll never have a breach than that fancy new firewall you’re considering buying. But if a weak security culture means you lose millions more than if you had a strong one, you have to ask yourself, what’s the best investment?
The point of this exercise is to make the case that security culture can provide every bit as much financial value to an organization as technology can. Maybe more, if the process of transforming culture proves less expensive than the cost of new security kit. But you won’t know unless you measure that value. Many security organizations assume you can’t, and so they fall back on buying technology instead. Which, at the end of the day, might not prove to be a “good” security decision.