Risk analysis needs a reality check

Every day, we hear about security problems lurking on our networks and are urged to fix them immediately. When we deal with that information, we’re performing risk analysis. When we run out and install every security patch we read about, we’re performing poor risk analysis.

One factor that contributes to poor risk analysis is having too much awareness of a problem. Become hypersensitized to an issue, such as security threats, and you're bound to react in ways that are out of proportion to what the facts actually call for. We're not just inundated with security information; we're overwhelmed by it. That sets us up to make poor decisions.

The reality of today's software development life cycle is that full-production releases don't come out bug-free. And quickly made, poorly tested security patches are just as likely to have bugs. Microsoft, because it releases so many patches, has hit the news with reports of updates that made things worse, but it is not alone. A few weeks ago, Apple introduced 10.2.4, a bug-fix and security update to its OS X operating system. People who installed it suddenly discovered problems with their power management and PPP stacks. Anyone can make these errors.

The complexity of systems, the difficulty of doing good quality assurance and the rush to push products out as quickly as possible have put us all on an upgrade-and-patch treadmill. But experienced network managers know that patching a working system is often worse than leaving it alone.

Why, then, do we throw normal caution and good business sense out the window when it comes to security patches? Our usual strategies of testing, containment and problem avoidance disappear, replaced by prevention and anticipatory self-defense. A company I work with rushed last week to react to the most recent sendmail security patch and ended up trashing its e-mail system, all for a bug whose worst effect was the potential to crash the mail-handling process and force a restart.

All security, all encryption, all authentication is based on probabilities, and another factor contributing to poor risk analysis is failing to pay attention to the probability of a risk actually becoming a problem. A recent paper from security researchers at Stanford showed how, in some implementations of OpenSSL, an outsider can recover the server's private key by measuring how long the server takes to respond. It's innovative and interesting research, and it will help make cryptographic software better. But the attack also requires a system with a gigahertz-precision clock sitting less than a millisecond away from the server being attacked. It is impractical in most networks and effectively impossible over the Internet. That didn't keep system managers all over the Internet from updating their OpenSSL code.
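The arithmetic behind that last point is worth spelling out. The sketch below is not taken from the Stanford paper; it simply compares an assumed per-request timing signal with assumed round-trip jitter to show why an attacker needs to be close to the target and needs a very precise clock. Every number in it is an illustrative placeholder.

```python
# Illustrative sketch only: all figures below are assumptions chosen to show
# orders of magnitude, not numbers taken from the Stanford paper.

signal_per_request_us = 1.0    # assumed timing difference the attacker must detect (microseconds)
lan_jitter_us = 50.0           # assumed round-trip jitter one LAN hop away (microseconds)
internet_jitter_us = 5000.0    # assumed round-trip jitter across the Internet (microseconds)

def samples_needed(jitter_us: float, signal_us: float) -> int:
    """Rough sample count to pull a small timing signal out of random jitter.

    With independent noise, averaging n samples shrinks the noise by roughly
    the square root of n, so the count needed grows with (jitter / signal) squared.
    """
    return int((jitter_us / signal_us) ** 2)

print("Samples needed one LAN hop away:    ", samples_needed(lan_jitter_us, signal_per_request_us))
print("Samples needed across the Internet: ", samples_needed(internet_jitter_us, signal_per_request_us))
```

Under these made-up figures, a nearby attacker needs a few thousand measurements, while an attacker across the Internet needs tens of millions. That is the intuition behind calling the remote version of the attack impractical.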

I’m not saying that patching systems is a bad idea. But network managers need to step back a second and do a real risk analysis on these perceived threats. Is the cure worse than the disease?
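For readers who want something more concrete than a gut feeling, the classic way to frame that question is annualized loss expectancy: ALE = SLE x ARO, the expected yearly cost of a risk. The sketch below applies it to a scenario like the sendmail example earlier in the column; every dollar figure and probability is a hypothetical placeholder, not data from the company involved.

```python
# Back-of-the-envelope risk comparison using the standard formula
# ALE = SLE x ARO (annualized loss expectancy = single loss expectancy
# times annual rate of occurrence). All figures are hypothetical placeholders.

def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """Expected yearly loss from one risk."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Risk of leaving the bug alone until the patch can be tested properly:
# the worst case is a crashed mail-handling process that needs a restart.
wait_and_test = ale(single_loss_expectancy=2_000,      # assumed cost of one outage and restart
                    annual_rate_of_occurrence=0.5)     # assumed: triggered once every two years

# Risk of rushing an untested patch onto a working production system:
# a patch that trashes the e-mail system outright is far more expensive.
patch_now = ale(single_loss_expectancy=25_000,         # assumed cost of a wrecked e-mail system
                annual_rate_of_occurrence=0.2)         # assumed chance the rushed patch misbehaves

print(f"Expected yearly loss if we wait and test first: ${wait_and_test:,.0f}")
print(f"Expected yearly loss if we patch immediately:   ${patch_now:,.0f}")
```

The point is not the particular numbers, which any real analysis would replace with its own estimates, but the comparison itself: the expected cost of the cure belongs on the same scale as the expected cost of the disease.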

Snyder, a Network World Test Alliance partner, is a senior partner at Opus One Inc. in Tucson, Ariz. He can be reached at [email protected].
