The challenge is dealing with the staggering costs associated with providing adequate security. Private estimates peg the U.S. Government's 2003 expenditure on the war on terrorism at $250 billion, including the (perhaps conservative) $100 billion cost of invading Iraq.
Private business picks up most of the expense associated with providing adequate security for its critical information and business processes. Stories of hackers accessing millions of credit cards and the ubiquitous 'insider threat' statistics keep information security on the front burner. But how much information security is enough? Is there any way to derive an economic benefit from a security investment?
Most of the security ROI success stories revolve around reducing the amount of time required to perform the regular tasks of maintaining a security environment: resetting passwords, updating virus files, checking detection logs for potential intrusions and maintaining correct security configuration information. The cost benefit of applying these security technologies is probably real, but measuring the return on a security investment this way sidesteps the real issue: are you any more secure as a result? And what is the real business benefit of that security?
Unfortunately, most analyses of the costs of cybersecurity look at the avoidance costs of events like the SQL Slammer worm rather than any calculation of proactive return from good security practices. This 'doom and gloom' mentality wears thin in the eyes of the security officer and her management, since it doesn't answer the question of when, how and where security dollars should be spent.
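One common way to frame the "when, how and where" question is the annualized loss expectancy (ALE) model and a simple return-on-security-investment ratio. The sketch below is purely illustrative: every dollar figure and incident rate is hypothetical, not drawn from any study cited in this article.

```python
# Hypothetical ROSI sketch: annualized loss expectancy (ALE) before and
# after a security investment. All figures are invented for illustration.

def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """ALE = expected loss per incident * expected incidents per year."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Before: say a worm outbreak costs ~$500k to clean up, expected twice a year.
ale_before = ale(500_000, 2.0)

# After better patching discipline: same cost per incident, but 0.25/year.
ale_after = ale(500_000, 0.25)

investment = 300_000  # hypothetical annual cost of the security program

# ROSI: risk reduction net of the program's cost, relative to that cost.
rosi = (ale_before - ale_after - investment) / investment

print(f"ALE before: ${ale_before:,.0f}")   # ALE before: $1,000,000
print(f"ALE after:  ${ale_after:,.0f}")    # ALE after:  $125,000
print(f"ROSI: {rosi:.0%}")                 # ROSI: 192%
```

The weakness of this framing, of course, is the inputs: loss-per-incident and incident-rate estimates are notoriously soft, which is exactly why the cost-avoidance argument wears thin with management.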
@Stake, the security consulting firm, did some innovative research back in Q4 2001 looking at the cost benefits of increased security in application development (see http://www.sbq.com/sbq/rosi/sbq_rosi_software_engineering.pdf). Testing the maxim "early detection is always cheaper in the long run," the study showed that catching security vulnerabilities in the design phase is more cost-effective than catching them in implementation, which in turn is more cost-effective than catching them in testing, and so on.
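The shape of that finding can be illustrated with relative-cost multipliers. To be clear, the specific multipliers below are invented for illustration; they are not figures from the @Stake study.

```python
# Hypothetical illustration of "early detection is cheaper": the cost of
# fixing one vulnerability, relative to fixing it at design time.
# Multipliers are invented, not taken from the @Stake study.

PHASE_COST_MULTIPLIER = {
    "design": 1,
    "implementation": 6,
    "testing": 15,
    "deployment": 60,  # patching fielded systems, incident response, PR
}

def fix_cost(defects, phase, base_cost=1_000):
    """Cost to fix `defects` vulnerabilities caught in `phase`,
    given an assumed base cost per design-time fix."""
    return defects * base_cost * PHASE_COST_MULTIPLIER[phase]

# Ten vulnerabilities caught at design time vs. after deployment:
print(fix_cost(10, "design"))      # 10000
print(fix_cost(10, "deployment"))  # 600000
```

Whatever the exact multipliers, the monotonic increase across phases is the point: the later the catch, the more parties and systems are involved in the fix.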
The results are not surprising, given what we think we know about how applications are developed. What is surprising is that all the evidence and common sense supporting the notion that it's cheaper in the long run to fix security vulnerabilities sooner rather than later is still not enough of an incentive for application developers to stop releasing insecure code. Hence the recent Microsoft SQL Server vulnerability, which led to the SQL Slammer worm and over $1 billion in direct and indirect losses. Why?
A couple of reasons. First, to whom does the detriment accrue when something like Slammer hits? Not Microsoft - if anything it gets a PR and a sales boost in a perverse way with all the publicity. You can just hear the poor IT administrator saying "Oh jeez, I didn't realize I had an outdated version of SQL Server, I guess I'd better upgrade." Economists call this situation an externality: a cost that is not borne by the producer, and that therefore gives the company no incentive to do the 'right thing' in society's eyes. For example, think of a manufacturer that dumps chemicals in the river as a byproduct of its manufacturing process. Left unchecked, only the manufacturer gains and everyone else loses as a result.
The second reason code developers will continue to release insecure code is the very fluidity of software itself and the expectation that vulnerable software can be 'patched.' Remember the Pentium flaw discovered in 1994? The flaw was the result of an implementation error that somehow managed to slip through the verification and test phase until a few million PCs had already been shipped with the flawed chip inside.
Intel actually had a software patch for the error, but this was pre-internet and nobody wanted to accept that a) the error was not that big a deal for 99 percent of the population and b) the most cost-effective way to repair the flaw was to either ignore it or install the software. Instead, Intel took a $500 million charge against earnings and agreed to replace any 'defective' Pentiums at no charge. Most significant for Intel, it changed its design, development and test processes to ensure that the problem would not happen again. In the nearly ten years since the first Pentium flaw there has not been a second, despite Intel's silicon being at least 20 times more complex, per Moore's law.
With software, the expectation today is that a patch is sufficient remedy for a defect. Imagine how things would have been different if Microsoft had been forced to fix the approximately 75,000 unpatched SQL servers à la the Pentium flaw, instead of simply posting the revised code on its web site. Until these externalities can be turned back on the originator of the vulnerability, there will be little economic incentive to do more. Of course, there are other players to consider: the ISPs who manage the broadband connections, the security administrators, and the authors of the malicious code that can most directly be traced to the damage. Sorting out who is responsible for what is a big task, but one thing is sure: right now the status quo is broken.
The economic return of security may have more to do with a cost-avoidance or risk-reduction mentality than a 'security as a key business enabler' mindset, especially in this tight-fisted, bottom-line-oriented environment. The faster the true costs of providing effective cybersecurity can be borne by those who stand to gain, the easier time information security will have getting out of the doom-and-gloom mentality and into the 'best practices' spotlight.
Robert Lonadier is the president of RCL & Associates, a Boston-based analyst and consulting firm specializing in providing implementation-ready counsel and advocacy services to senior management in information security. He can be reached at email@example.com. RCL does not currently have any relationships with the companies mentioned in this article.