Design woes leaving IT folk out of control

A bad control is often a good control that doesn’t work effectively. The operative word in that statement is “effectively”: evaluation requires an understanding of the control’s intent within the context of the system and the business.

It is here that IT people often stumble when designing controls. Without an understanding of the intended result, they may craft ones that follow formulaic business rules but nevertheless miss the point, says Will O’Brien, president of the Manta Group.

In financial systems, an example of an application control is a program that records invoices on the general ledger whenever invoices are issued, in order to track the financial flow, he says.

“You can program an application control just by reading the business rules, but that’s not sufficient. You need to have a general control that is more of a manual process.

“It must be one that goes back and validates that all invoices are in fact being recorded in the general ledger,” says O’Brien. “Application controls are typically ‘owned’ by business units, but the IT department has to administer them. That’s where general controls come into play, and that’s what IT folks need to understand,” he says.
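The validating general control O'Brien describes can be sketched in a few lines. This is a hypothetical illustration, not code from the article: the field names and data shapes are assumptions, and in practice the check would run against the billing system and general ledger databases.

```python
# Hypothetical sketch of O'Brien's validating control: a periodic job
# that confirms every issued invoice was actually recorded in the
# general ledger, so a human can chase any gaps.

def find_unrecorded_invoices(issued_invoices, ledger_entries):
    """Return IDs of invoices that were issued but never hit the ledger."""
    ledger_ids = {entry["invoice_id"] for entry in ledger_entries}
    return [inv["id"] for inv in issued_invoices
            if inv["id"] not in ledger_ids]

# Illustrative data standing in for the billing system and the ledger.
issued = [{"id": "INV-001"}, {"id": "INV-002"}, {"id": "INV-003"}]
ledger = [{"invoice_id": "INV-001"}, {"invoice_id": "INV-003"}]

missing = find_unrecorded_invoices(issued, ledger)
print(missing)  # -> ['INV-002']  (these need human follow-up)
```

The point of the sketch is the division of labour: the application control records invoices automatically, while this check exists only to hand a short exception list to a person.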

According to O’Brien, a lack of understanding of general controls — which are simply controls that call for human intervention and are typically a managerial action of some kind — is pervasive within IT.

Systems that generate logs and exception reports unseen by human eyes are a typical example. Someone needs to review and analyze the information and, equally important, the information needs to be processed for human consumption.

“IT needs to make the log reviewable. That’s the trap people often fall into: if they have a log, there’s way too much information for anyone to review. If it’s just a dump of so much data, the effectiveness of the control is negated,” says Mario Durigon, senior manager within the information risk management practice at KPMG.
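Making a log reviewable, in Durigon's sense, means reducing the dump to counts and the handful of lines a reviewer must actually read. A minimal sketch, with keywords and log format invented for illustration:

```python
from collections import Counter

def exception_report(log_lines, watch_keywords=("DENIED", "FAIL", "ERROR")):
    """Reduce a raw log dump to keyword counts plus the flagged lines."""
    counts = Counter()
    flagged = []
    for line in log_lines:
        for kw in watch_keywords:
            if kw in line:
                counts[kw] += 1
                flagged.append(line)
                break  # count each line once
    return counts, flagged

# Illustrative log entries; a real system would stream these from a file.
log = [
    "2024-01-01 09:00 LOGIN OK user=alice",
    "2024-01-01 09:05 LOGIN DENIED user=mallory",
    "2024-01-01 09:06 LOGIN DENIED user=mallory",
    "2024-01-01 09:10 BACKUP OK",
]

counts, flagged = exception_report(log)
print(dict(counts))  # -> {'DENIED': 2}
```

The reviewer sees two flagged lines instead of the full dump, which is the difference between a control that works on paper and one that works in practice.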

Human intervention in the form of a response to a problem that a control has detected and/or logged is also needed. “If you have a security incident, there may be an automated control that detects it. But IT needs to have a response process in place that supports timely investigation of any unauthorized activities. It’s not enough to say, ‘yeah, we have a security hole – just wanted to let you know,’” says O’Brien.
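A response process with teeth needs a notion of timeliness. The sketch below is an assumption-laden illustration of that idea, not anything prescribed in the article: it flags detected incidents that nobody has investigated within a policy deadline (the 24-hour window is invented).

```python
from datetime import datetime, timedelta

RESPONSE_DEADLINE = timedelta(hours=24)  # assumed policy, not from the article

def overdue_incidents(incidents, now):
    """Return incidents detected but still uninvestigated past the deadline."""
    return [i for i in incidents
            if i["investigated_at"] is None
            and now - i["detected_at"] > RESPONSE_DEADLINE]

# Illustrative incident records from a hypothetical detection system.
incidents = [
    {"id": 1, "detected_at": datetime(2024, 1, 1, 8),
     "investigated_at": datetime(2024, 1, 1, 10)},   # handled promptly
    {"id": 2, "detected_at": datetime(2024, 1, 1, 9),
     "investigated_at": None},                        # still open
]

now = datetime(2024, 1, 3, 9)
print([i["id"] for i in overdue_incidents(incidents, now)])  # -> [2]
```

Without a check like this, the automated detector becomes O'Brien's "just wanted to let you know" control: it fires, and nothing happens.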

Rather than tackling controls piecemeal, system auditors recommend starting with a risk assessment of an organization’s information assets, which will provide the foundation for integrated controls design. Without a big-picture understanding of the value and priority of assets, IT departments may allocate time and resources inappropriately to controls based on the wrong criteria, such as perceived impact, visibility, ease of implementation and so on, says Aron Feuer, president of Cygnos IT security.

“We’ve seen organizations that have deployed intrusion prevention or detection systems far in advance of having put some base control elements in place. It does an organization no service to try to engage in intrusion prevention when they have not clearly identified what kind of information assets they’re trying to protect,” he says.
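The asset-first prioritization the auditors recommend can be reduced to a simple ranking exercise. The assets and scores below are entirely illustrative; the point is only that control effort follows a risk score rather than visibility or ease of implementation.

```python
# Minimal sketch of risk-based prioritization: rank information assets
# by business value times threat likelihood before deciding where
# control spending goes. All names and scores are invented examples.

assets = [
    {"name": "customer database", "value": 9, "likelihood": 7},
    {"name": "public web site",   "value": 4, "likelihood": 8},
    {"name": "internal wiki",     "value": 2, "likelihood": 3},
]

for a in assets:
    a["risk"] = a["value"] * a["likelihood"]

ranked = sorted(assets, key=lambda a: a["risk"], reverse=True)
for a in ranked:
    print(f'{a["name"]}: risk score {a["risk"]}')
# customer database: risk score 63
# public web site: risk score 32
# internal wiki: risk score 6
```

Even a crude score like this gives IT a defensible answer to "why are we protecting this first?", which is exactly what deploying intrusion prevention in a vacuum lacks.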

Placement of controls is also a frequent problem that can lead to misallocation of time and resources, says Feuer. “We see organizations that invest 70 per cent of their security dollars in the perimeter in advance of looking on the inside of their network. IT may have a clear understanding of the type of traffic occurring inside and outside that perimeter, without paying attention to the fact that controls at the entry level are only one piece of the puzzle.”

Factors such as social engineering, poor authentication or a wireless network can circumvent those controls, negating all the time and energy invested in them, Feuer adds. “So the placement of those controls, despite the fact they’re well done, is inappropriate.”

Lack of operational consistency is another widespread problem. “We often go to organizations and find there is no system around their change control, incident handling, event management and policies. You can’t guarantee the consistency or integrity of the system, irrespective of security, if you don’t have those elements in place,” says Feuer.

In the majority of their audits, Feuer’s team recommends implementing ITIL (Information Technology Infrastructure Library), a framework outlining best practices for IT service management. ITIL’s concepts help IT service providers plan repeatable, consistent processes.

At the end of the day, designing good controls requires a change of perspective for IT people that will come with time, says O’Brien. “With IT, you’re dealing with a culture of firefighters, and you’re asking them to become fire preventers, and that’s a completely different mindset,” he says.
