A veteran CIO of a New York City-based financial services company learned in July 2002 that several vital files had vanished from one of his company’s 25 servers. An employee had gone looking for some information and failed to find it. That’s when IS discovered that there was, in fact, no company information on that particular server at all. Panicked, the CIO and his staff went into emergency mode.
They soon discovered that a hacker had found his way through their firewall and wiped out all the production files on the server, leaving chaos and a couple of strangely labeled files in his wake. Two frantic days – and 15 hours of work – later, the alien files were deleted and the missing data restored through backup tapes. But it took an additional two weeks to be sure that the hacker hadn’t accessed and tainted any of the company’s 24 other servers.
All told, the CIO (who spoke on condition that his name not be used) reported that the breach cost the company US$50,000. But when asked how he came up with that number, he said he honestly couldn’t say. Because he really wasn’t sure.
“We didn’t do a line-by-line breakdown of the costs because it didn’t seem necessary at the time,” he admits. “But consultant costs, loss of production time and overtime for the IT staff were part of it.”
Even if CSOs can quantify the cost of a breach, few executives will talk on record about it. Companies have an incentive to downplay, or downright hide, such information. “It’s embarrassing to admit that a hacker got through your firewall,” says Tina LaCroix, CISO of Aon Corp., an insurance provider. “Most companies won’t give out the real information [about breaches]. They don’t want you to know they have vulnerabilities because they make the CSO look bad.”
“No one wants to be the company on the front page of The New York Times,” says Thomas Varney, a director of technology assurance and security, who spoke on the condition that his Fortune 100 company not be named. But ignoring vulnerabilities won’t make them go away. Every day (or so it seems), another consultancy reports dire new statistics on the cost of security failures. According to the 2002 Computer Crime and Security Survey from the Computer Security Institute and the FBI, 80 percent of the 503 security practitioners surveyed acknowledged financial losses due to security breaches, but only 44 percent were willing (or able) to quantify losses.
While circling the wagons is understandable, it’s also counterproductive for the industry as a whole. “The bottom line is that CSOs are doing a pitiful job of tracking breach costs,” says Michael Erbschloe, associate senior research analyst at Computer Economics Inc., an IT investment consultancy. “They don’t want to go public with the costs or even talk about it internally. The rationale is that, if CSOs don’t know the numbers, no one else will either, which cuts down on the likelihood that their company’s reputation or stock price will take a hit.” But he cautions, “CSOs need to wake up. Start sharing data, or we’ll all be more vulnerable than we’d like.
“Every breach is different, and costs will vary from incident to incident. That’s why it’s incumbent upon the CSO to have an incident-response plan in place prior to a breach.”
Creating a methodology for quantifying as many costs associated with a breach as possible is essential. Start by determining the value of your information and assets so that you can more easily find out what you lost. Break the incident down into every conceivable category because, inevitably, every part of the business has been affected.
Hard costs, such as replacing servers or paying overtime, are easy to track. The real difficulty lies in quantifying nonattributable costs: the loss of customer trust or business. “Do more than simply calculate your physical losses,” says Craig Goldberg, president of Internet Trading Technologies. “Look at what was lost in terms of customer, shareholder and employee information. What was the cost of lost business?” And don’t forget the most serious damage: a blow to your company’s reputation. “It’s the gray areas that are usually the most significant in terms of cost but the hardest to prove,” says Goldberg.
That’s why cyberinsurance is a tough area, says Rich Mogull, research director at GartnerG2 Cross-Industry Research. Companies lack the solid actuarial formulas that would enable them to figure out risks over time, so they underprotect, or overprotect, themselves.
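The actuarial math Mogull describes is usually approximated with the classic annualized loss expectancy (ALE) calculation. The sketch below is illustrative only; the asset value, exposure factor and incident rate are invented figures, not numbers from the article.

```python
def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
    """Classic risk formula: ALE = SLE x ARO.

    Single loss expectancy (SLE) is the asset value times the fraction
    of it lost per incident; the annualized rate of occurrence (ARO) is
    the expected number of incidents per year.
    """
    sle = asset_value * exposure_factor
    return sle * annual_rate

# Hypothetical inputs: a US$200,000 server farm, 30 percent of its value
# lost per incident, two incidents expected per year.
print(annualized_loss_expectancy(200_000, 0.30, 2))  # 120000.0
```

Without solid historical breach data, the exposure factor and incident rate are guesses, which is exactly why Mogull says companies end up under- or overprotecting themselves.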
Knowing Is Half the Battle
It didn’t take long for Ron Woerner, CISSP and information security officer for the Nebraska Department of Roads, to get a phone call from his ISP when the SQL Spida worm hit his department’s systems in May. The worm found its way in from the Internet through an open SQL Server port whose administrator account had a blank password, then planted several files to help it scan for other targets through which it could spread.
“The ISP wanted to know why we were making so many SQL calls, so I got suspicious,” Woerner recalls. “I asked him to block all our SQL calls to the Internet, since it’s not a critical method of connection for us. Then I contacted our administrator for that particular system and confirmed that we were infected. At that point, I alerted our incident-response team, but I only put them on alert. The situation seemed under control, and we didn’t want to go overboard with our response. I updated our virus scanner on the infected system, found four files associated with the worm and removed them. We rebooted the server, did a sweep so everything was clean, and made sure our switch was configured to block the SQL port from our box to the Internet to prevent reinfection.”
The whole incident took two hours to handle. Since it was a relatively minor attack and Woerner had a detailed incident-response plan in place, he was able to track the breach cost easily. The worm had infected an internal server, and during the downtime necessary to contain the infection, 15 employees were unable to do work on their computers. “Average pay for those workers was US$25 an hour; they were out for two hours, so I figure it cost about US$750,” he says.
The incident’s relatively small size doesn’t diminish its importance as an example of why adding up the numbers can pay off in the end. Woerner took the US$750 figure to his CIO and used it to demonstrate the need for a security budget and the necessity of taking preventive, instead of defensive, action. If the password on the SQL application had been changed from the default or if the SQL port had been blocked, he points out, the preventive step would have taken only 10 minutes, instead of the incident taking 30 person-hours of work time away from employees, and it would have cost nothing.
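Woerner’s back-of-the-envelope figure is easy to reproduce. The helper below is a hypothetical sketch (the department didn’t publish any code), plugging in the numbers he cites:

```python
def people_downtime_cost(workers, hourly_rate, hours_down):
    """People-downtime cost: idle workers x average pay rate x outage length."""
    return workers * hourly_rate * hours_down

# Nebraska Department of Roads incident: 15 workers averaging US$25 an
# hour, idled for the 2 hours it took to contain the worm.
print(people_downtime_cost(15, 25.0, 2))  # 750.0
```

The same three inputs (head count, pay rate, outage length) cover most people-downtime estimates; the hard part, as the rest of the article argues, is everything that doesn’t reduce to this multiplication.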
Because no data or system was seriously corrupted, Woerner had to consider only system and worker downtime, two of the most basic considerations when attempting to quantify the cost of a breach. But it can quickly get more complicated.
Woerner says he could have padded the breach’s cost to underline his argument to the CIO, “but if you inflate the cost, it will come back to bite you,” he says.
The industry’s lack of a consistent model for calculating security losses often results in inaccurate loss estimates, “numbers that never would hold up in a court of law,” says Varney, who spent years doing computer forensics with the Department of Defense and the Secret Service. “A company calls up and says, ‘We’ve just been hacked. We’ve lost $1 million.’ They pull a number out of the air,” he says. “I ask how they got that number, and it turns out they’re just guessing.”
Varney says many CSOs don’t realize loss estimates are not enough to prosecute security offenders. “If the amount varies from what the prosecution presents, the defense will poke holes all over your case,” he says.
Law enforcement has minimum monetary damage requirements for prosecuting a security case. The amount depends on the jurisdiction, Varney says, but it can range from US$500 to US$500,000. The numbers must be carefully catalogued, and prosecutors must be able to prove them. Otherwise, a lawsuit might not go the way you think it should.
Case in point: In September 2001, a jury found Herbert Pierre-Louis guilty under the Computer Fraud and Abuse Act for launching a virus attack on four offices of Purity Wholesale Grocers in 1998. According to Purity, the virus shut down operations for a week and caused at least US$75,000 in damage, well over the US$5,000 minimum. But in April, a federal judge threw out the conviction, ruling that the virus had not caused enough legally recognized damage to rate as a federal crime. The breach occurred before the act was amended in 2001 to cover lost revenue from suspended operations and repair costs from interrupted service, so the damages as defined by the law did not total US$5,000, and Pierre-Louis’s conviction was nullified.
Trying to nail a hacker is just the beginning. The concept of downstream liability is also a concern, says Aon’s LaCroix. These days, viruses jump from company to company. If a company is deemed negligent in deploying adequate security, there’s a potential for third-party lawsuits from others affected afterward. “You are no longer responsible for just your own security,” LaCroix says.
Ask Ziff Davis Media. Deficient security and privacy protections cost the publishing company at least US$125,000 in August 2002, when an online subscription promotion exposed subscriber information, including credit card data, to public view. Several subscribers subsequently became victims of identity theft. In a settlement with the New York state attorney general, Ziff Davis agreed to pay a total of US$100,000 to three state governments, as well as US$25,000 in compensation to 50 customers whose credit card data was exposed during the incident. If all 12,000 subscribers whose information was revealed had provided credit card data to the company, the settlement could have reached US$18 million, according to John Pescatore, an analyst with Gartner Research.
Until someone comes up with a way to prevent breaches from happening at all (and, as we’ve pointed out in this issue, risk will never be reduced to zero; see “The Art of Uncertainty”), CSOs will have to deal with the aftermath of incidents and try to put a price on the whole shebang.
“We learned one lesson really well,” says the anonymous CIO of the New York financial services firm. “Understanding what you’re spending on security cannot be overrated.”
Criteria for Determining the Cost of a Breach
1. System downtime. What systems were out of commission and for how long?
2. People downtime. Who was unable to work, and how long were they unproductive?
3. Hardware and software. How much did it cost to replace servers, hard drives, software programs and so on?
4. Consulting fees. If you needed extra firepower while fighting an attack or for a postmortem analysis, how much did you spend on fees and other expenses?
5. Money. How much were the salaries for people affected by the breach? Consider overtime pay or trades that couldn’t be made during downtime.
6. Cost of information. What was the value of information–employee, shareholder, customer–that was stolen or corrupted? How much did retrieving the information cost?
7. Cost of lost business. Did clients take their business elsewhere? Were there opportunity costs–lost contracts or business deals–due to systems being compromised?
8. Incidentals. How much did you spend on food, lodging and transportation for the people working to fight the breach? Were there additional facilities costs, such as increased power usage?
9. Legal costs. What were potential and actual costs of litigating and investigating the incident?
10. Cost to your company’s reputation. Did you spend money on a PR campaign to control the damage?
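As a worked example, the ten criteria can be kept as a simple worksheet that yields both a defensible total and a per-category breakdown for the postmortem. Every figure below is an invented placeholder; real numbers would come from your own incident records.

```python
# Hypothetical incident-cost worksheet keyed to the ten criteria above.
# All dollar figures are placeholders for illustration only.
breach_costs = {
    "system downtime": 1_200.00,
    "people downtime": 750.00,
    "hardware and software": 3_000.00,
    "consulting fees": 5_000.00,
    "salaries and overtime": 2_400.00,
    "cost of information": 8_000.00,
    "cost of lost business": 10_000.00,
    "incidentals": 400.00,
    "legal costs": 6_000.00,
    "reputation (PR campaign)": 7_500.00,
}

total = sum(breach_costs.values())
print(f"Total estimated breach cost: US${total:,.2f}")
# Largest categories first, each with its share of the total.
for category, cost in sorted(breach_costs.items(), key=lambda kv: -kv[1]):
    print(f"  {category:26s} US${cost:>10,.2f}  ({cost / total:.0%})")
```

Itemizing this way directly addresses Varney’s complaint about numbers “pulled out of the air”: each line can be backed by an invoice, a pay stub or a documented estimate rather than a single guessed lump sum.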