With the amounts of sensitive personal data held by businesses and public sector agencies these days, the issue of data security is never far from the news. Occasionally it even makes headline news, as the Heartbleed flaw in the OpenSSL cryptography library did in April.
Heartbleed made it to ‘superstar’ status as far as IT security is concerned. Among other things, it prompted a massive shutdown of the Canada Revenue Agency web site. And it caused plenty of post-incident recrimination over how the bug occurred in the first place, and how long it had been known about before action was taken.
While OpenSSL flaws have surfaced again in the last several days, Heartbleed continues to generate discussion. Now, several weeks on, some industry watchers are starting to talk about what Heartbleed taught us – or should have taught us – about computer security.
In a post on the Data Center Knowledge web site, data centre expert Bill Kleyman reminds security professionals that they need to be proactive about securing their systems. Kleyman lists some of the habits of highly successful IT organizations when it comes to protecting their data.
The first and most important thing is to have effective security policies that address the physical aspects of security as well as the digital; things as simple as keeping doors locked. “When creating a good security policy, take into consideration your entire infrastructure. This will span everything from passwords to locked and monitored server cabinets.”
It’s important to remember to monitor every part of the platform, including local and cloud-based environments. Today’s monitoring systems can aggregate firewalls, virtual services and cloud components. Enterprises shouldn’t forget to monitor the logical layer, as it continues to increase in importance.
“With that in mind, it is important to ask yourself a couple of visibility questions around your cloud and data center platform,” Kleyman says. “How well can you see data traverse your cloud? How secure is your data at rest and in motion? Can you effectively monitor traffic extending out to your end-users? Proactive monitoring can help find spikes, anomalies and even security holes in your environment.”
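The "spikes and anomalies" Kleyman mentions can be detected with even very simple baselining. The sketch below is a minimal, hypothetical illustration (the log format and traffic numbers are invented, not from any real system): it buckets request timestamps by the minute and flags any minute whose volume is far above the median, the kind of signal that proactive monitoring would surface for a human to investigate.

```python
from collections import Counter
from statistics import median

# Hypothetical access-log timestamps, already truncated to the minute.
# A real monitoring pipeline would parse these out of server or firewall logs.
requests = (
    ["2014-04-10 09:01"] * 40 + ["2014-04-10 09:02"] * 38 +
    ["2014-04-10 09:03"] * 42 + ["2014-04-10 09:04"] * 41 +
    ["2014-04-10 09:05"] * 400   # a sudden spike worth investigating
)

def find_spikes(timestamps, factor=3.0):
    """Flag minutes whose request count exceeds factor * the median count.

    A median baseline is deliberately crude but robust: the spike itself
    cannot drag the baseline upward the way a mean would.
    """
    counts = Counter(timestamps)
    baseline = median(counts.values())
    return [minute for minute, n in sorted(counts.items())
            if n > factor * baseline]

print(find_spikes(requests))
```

Running this flags only the 09:05 bucket. Production monitoring systems do far more (per-endpoint baselines, seasonality, alert routing), but the underlying idea – compare current traffic against a learned baseline and alert on outliers – is the same.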
Obviously, a big part of doing security right is using the right tools, and there are plenty of options out there, from large-scale physical appliances at the network edge to software running within the computing environment.
In one example, a large enterprise’s IPS/IDS system spotted post-Heartbleed bots and alerted admins, who shut down the services under attack, minimizing the impact on the organization. Tools like application firewalls, virtual firewalls and security services running within the IT environment can add up to strong protection and enable advanced monitoring.
In cloud environments event correlation and logging can be a challenge, but they have to be included in security planning. In the case of Heartbleed, organizations with strong correlation and logging engines were able to use the information they provided to pinpoint the source of a bot or tracking tool within their own networks, blocking the sources and denying access to sensitive resources.
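At its core, the correlation the article describes means joining events from independent log sources on a common key, usually the source address. The sketch below is a toy illustration with invented log entries and IP addresses (drawn from the reserved documentation ranges), not a real correlation engine: an address that shows up in multiple systems' logs is a much stronger lead than one noisy entry in a single log.

```python
from collections import defaultdict

# Hypothetical, simplified log entries as (source_ip, event) pairs from two
# independent systems. A real engine would parse syslog/CEF records instead.
firewall_events = [
    ("203.0.113.7", "repeated TLS heartbeat"),
    ("198.51.100.4", "allowed https"),
    ("203.0.113.7", "repeated TLS heartbeat"),
]
webserver_events = [
    ("203.0.113.7", "malformed request"),
    ("192.0.2.10", "GET /index.html"),
]

def correlate(*event_sources, min_sources=2):
    """Return source IPs seen in at least min_sources distinct log sources."""
    seen = defaultdict(set)
    for source_id, events in enumerate(event_sources):
        for ip, _event in events:
            seen[ip].add(source_id)
    return sorted(ip for ip, sources in seen.items()
                  if len(sources) >= min_sources)

print(correlate(firewall_events, webserver_events))
```

Here only 203.0.113.7 appears in both logs, so it is the address an analyst would block first – which is essentially what the organizations with strong correlation engines did after Heartbleed.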
The other piece of the puzzle is system vulnerability testing, which alerts network managers to vulnerabilities so they can take action before the bad guys do. There is plenty of technology available to perform this function, but as with everything else in security, having a well-defined process and adhering to it is the main element of success.
“Large organizations have a healthy vulnerability testing cycle,” Kleyman says. “Some do testing on a cycle, others have random ongoing testing, while others include specific application and data testing protocols. Regardless of the scenario – you’ll be much better off finding the issue before anyone else.”
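In the specific case of Heartbleed, one basic check in such a testing cycle is simply knowing which OpenSSL builds are exposed: versions 1.0.1 through 1.0.1f shipped the bug, 1.0.1g fixed it, and the older 0.9.8 and 1.0.0 branches were never affected. The sketch below encodes that version check; it is an illustration of one test in a scanning cycle, not a substitute for a real vulnerability scanner, which would probe the running service rather than trust a reported version string.

```python
import re

def heartbleed_vulnerable(version: str) -> bool:
    """True if an OpenSSL version string falls in the Heartbleed range.

    OpenSSL 1.0.1 through 1.0.1f contained the bug (CVE-2014-0160);
    1.0.1g fixed it, and the 0.9.8 / 1.0.0 branches were unaffected.
    """
    m = re.fullmatch(r"1\.0\.1([a-z]?)", version)
    return bool(m) and m.group(1) <= "f"

# Inventory check across a hypothetical fleet of reported versions.
for v in ["0.9.8y", "1.0.0l", "1.0.1", "1.0.1f", "1.0.1g"]:
    print(v, "VULNERABLE" if heartbleed_vulnerable(v) else "ok")
```

Whether a shop scans on a fixed cycle or continuously, as Kleyman describes, the goal is the same: find exposed builds like these before anyone else does.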