Security is just like quality – you’re never finished, because there’s always room for improvement.
Compounding the problem is the fact that the security threat continues to evolve; it simply moves to attack new vulnerabilities as soon as you patch old ones. Not surprisingly, in the past few months we have seen a new trend – attacks that target web applications. These applications encompass technologies that deliver information from back-end application servers, up through web servers, and finally to the end user through a browser interface. According to the most recent Symantec Internet Security Threat Report, approximately half of the vulnerabilities disclosed in the last six months of 2004 affected web applications.
Attacks on web applications go right through traditional perimeter defences such as firewalls, because the firewall is configured to allow web traffic to pass through to the web server. Even more worrisome, these attacks can also go through sophisticated security infrastructures. It might seem surprising, but Secure Channel – a key component of the federal Government On-Line (GOL) program – does not protect against these types of attacks. In fact, a particular attack can enjoy a protected ride right through an encrypted and authenticated session, and be delivered quite effectively to the vulnerability in the application. Fortunately, there are new security solutions – such as intrusion prevention – but the responsibility for this now moves to the operational group that owns the application servers.
The root cause of the problem is vulnerabilities in software. This can be commercial packaged software such as operating systems or databases, or a custom built application for a GOL service, for example.
Complex web applications are like any other piece of software; they may be riddled with bugs or contain poor coding practices that ultimately lead to vulnerabilities. These root vulnerabilities are not going away any time soon, as it is notoriously difficult to write correct software. As well, remember that complexity is the enemy of security, and software and systems are only becoming more complex.
The new web application attacks find and exploit these vulnerabilities in software. They are not the same as the older class of worms or viruses, which are mostly concerned with large-scale propagation over the Internet. The new breed is carefully crafted to break into a particular web site or application. It is because the attacks look like normal web traffic that they are able to go right through standard perimeter security controls and firewalls, directly to the application.
The application attacks typically take advantage of data input validation errors that lead to buffer overflow or command injection vulnerabilities. For example, user input such as a name might be requested on a web form. Instead of entering the expected string, the attacker enters an over-length string that could cause a server crash, or the input might be a carefully crafted command designed to query fields in, for example, a SQL database. In the case of SQL command injection, the web server merely takes the character string entered in response to an input request and passes it to the back-end application. The application now interprets this as a command from a trusted internal server and responds appropriately. This response might involve returning the results from a generic database search. These results are then dutifully forwarded by the web server to the requesting browser, with the end result that an attacker could be searching your internal database as if he were a privileged insider. This could lead directly to a severe privacy breach or identity theft if, for example, the database contained personal information about citizens being served by the application.
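The injection scenario above can be sketched in a few lines. The table, field names, and sample records here are invented for illustration, with sqlite3 standing in for the back-end database; the vulnerable version pastes user input straight into the query, exactly as described:

```python
import sqlite3

# Hypothetical citizen database standing in for the back-end application.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE citizens (name TEXT, sin TEXT)")
conn.execute("INSERT INTO citizens VALUES ('Alice Tremblay', '123-456-789')")
conn.execute("INSERT INTO citizens VALUES ('Bob Singh', '987-654-321')")

def lookup_unsafe(name):
    # Vulnerable: the user-supplied string is concatenated directly
    # into the SQL statement, so the input is interpreted as a command.
    query = "SELECT name, sin FROM citizens WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Fixed: a parameterized query treats the input strictly as data.
    return conn.execute(
        "SELECT name, sin FROM citizens WHERE name = ?", (name,)
    ).fetchall()

# A legitimate user enters a name; an attacker enters a crafted string.
attack = "' OR '1'='1"
print(lookup_unsafe(attack))  # dumps every row in the table
print(lookup_safe(attack))    # returns nothing
```

Note that the traffic carrying the attack string is an ordinary HTTP form submission, which is why it sails through a firewall configured to pass web traffic.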
The trigger for the attack cycle starts with the discovery of a vulnerability, at which point attackers quickly develop exploit code. The vulnerabilities are conveniently flagged by software vendors on patch announcement days. Shortly afterward, exploit code appears – on average less than 10 days later.
While the ultimate fix is to patch the software to remove the vulnerability, it is often difficult to do so quickly enough to stay ahead of attacks. Complex servers can be hard and costly to patch, as updates must be carefully tested before they can be rushed into service. This strategy also points to an interesting change in responsibility – it has now moved from the IT security staff taking care of the network to the application owner in the operational business group.
The trends are not good. In some cases the attacks come so quickly that there is not even a patch to plug the hole, or no signature to identify and block the malware; the enemy is inside the gates before you know it. In December 2004, for example, the Santy worm spread rapidly over the Internet only one day after the vulnerability was discovered. In this case the attack was on specific bulletin board software, phpBB, a popular open source web application. The rapid propagation in this case was assisted by automatic Google searches which the worm used to quickly identify vulnerable web sites. While you would not expect to find this particular application deployed in GOL applications, it is an instructive example.
The battle is by no means lost; it is simply time to investigate and deploy newer security solutions. A whole class of security technology known as intrusion prevention systems (IPS) is rapidly coming on stream, ready for deployment against known and unknown web attacks.
Current IPS solutions have improved dramatically from first generation solutions that evolved from intrusion detection products (IDS). IDS systems are passive devices that monitor network traffic and seek particular attack signatures; they then raise an alarm indicating you might be under attack.
There are two problems here: (a) IDS devices are typically tuned so aggressively in looking for attacks that they generate a lot of false positives, and (b) after reviewing the alerts, you are no more secure than before.
Indeed, you are perhaps more nervous after looking at all the attack signatures, but unless you harden the network you are no better off. Signatures themselves are also problematic, as they are only developed and distributed after attacks have been noticed and analysed.
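A toy sketch of signature matching makes the false-positive problem concrete. The signatures and request strings below are invented for illustration; a naive substring match flags a real injection attempt, but also raises an alarm on a perfectly harmless message:

```python
# Invented attack signatures a toy IDS might scan for in web requests.
signatures = ["' OR '1'='1", "DROP TABLE", "../../etc/passwd"]

def ids_alert(request_body):
    # Passive detection: report a match, but take no blocking action.
    return any(sig in request_body for sig in signatures)

print(ids_alert("name=' OR '1'='1"))  # True: a genuine injection attempt
print(ids_alert("comment=I had to DROP TABLE tennis from my course list"))  # True: false positive on benign text
print(ids_alert("name=Alice Tremblay"))  # False: normal traffic
```

Tuning the signature list to reduce such false alarms risks missing real attacks, and in any case the alert by itself stops nothing.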
IPS solutions, by contrast, are active devices that sit in the middle of the traffic flow. Rather than just monitoring and logging events, an IPS drops the attack packets and lets good traffic through. Modern IPS solutions don’t rely only on signature updates for attacks; they also use heuristic techniques to learn normal behaviour so anomalous events can be recognized, and they filter traffic according to other rules. These rules may require updating if new vulnerabilities are discovered, but this is a far more efficient update mechanism than attack signatures. A particular buffer overflow vulnerability, for example, may lead to thousands of variants of attack code over time, and each variant would need its own signature update. By contrast, it takes only one vulnerability-facing rule update to take care of the thousands of attacks that may follow.
This is the inherent efficiency obtained with the newer IPS approaches, which are “vulnerability facing” rather than “exploit facing”. In fact, the IPS may be tuned so precisely that it can run for perhaps years without requiring updates as it guards against both known and unknown attacks.
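A minimal sketch of the distinction, assuming a hypothetical input field that can never legitimately exceed 64 bytes: the exploit-facing signature list must grow with every new attack variant, while a single vulnerability-facing rule blocks them all:

```python
# Exploit-facing: one signature per known attack variant (invented
# placeholders here); the list must grow as new variants appear.
exploit_signatures = ["overflow_payload_v1", "overflow_payload_v2"]

# Vulnerability-facing: one rule derived from the flaw itself. The
# 64-byte limit is an assumed property of the vulnerable field.
MAX_NAME_LEN = 64

def ips_allow(name_field):
    # Any over-length input is dropped, regardless of payload contents,
    # so every present and future overflow variant is stopped.
    return len(name_field) <= MAX_NAME_LEN

print(ips_allow("A" * 5000))  # False: blocked
print(ips_allow("Alice"))     # True: passes
```

Because the rule encodes the vulnerability rather than any particular exploit, it needs no update when the next thousand payload variants appear.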