Although Web 2.0 has enriched the Internet with some great new capabilities, it has also brought some very unpleasant ones, namely a whole class of new security threats that can silently install themselves when a user visits a compromised website.
Web 2.0 gives the bad guys more “surface area” to exploit: more bandwidth, more communication channels (for example, IM and P2P) and more client-side executable options. To make matters worse, many users appear to have thrown caution to the wind when it comes to downloading untrusted content. Employees who would never download an e-mail attachment from someone they didn’t know will now add a widget to their MySpace page or play a potentially harmful YouTube clip without knowing where it came from.
It is also becoming more and more difficult to distinguish malicious sites from legitimate ones. Google recently published a paper based on research into the sites it crawls (see “The Ghost in the Browser”) and found that one in 10 websites contains a malicious payload. Most users would be hard-pressed to pick out that malicious 10 percent from a random set of search results. Once inside the firewall, these covert applications can steal confidential data, infect other machines, send spam or launch other attacks.
The “new new” threat: Botnets
The most sophisticated of these new threats are botnets: collections of software robots, known as “bots,” that run on compromised computers called “zombies” and are controlled by “bot herders” through a communications infrastructure known as “command and control,” or “C&C” for short. The value of a botnet is directly proportional to the number of machines it controls, the value of those machines (for example, .com versus .org, if data theft is the goal) and the aggregate bandwidth the botnet can command for distributed denial-of-service (DDoS) attacks.
Once a bot hijacks a PC, it starts scanning the network for other vulnerable hosts to compromise. The bot then reports back to the C&C with information on how many systems are under its control. Finally, the C&C sends instructions and payloads for the botnet to execute, which could include sending spam, committing click fraud, collecting confidential data or launching a DDoS attack.
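The three-phase lifecycle above (scan, report, execute) can be illustrated with a harmless toy simulation. This is a sketch over an in-memory model of a network, not real bot code; every name here (`SimulatedNetwork`, `bot_lifecycle`, the addresses and task strings) is hypothetical.

```python
class SimulatedNetwork:
    """A toy network model: each address maps to whether the host is vulnerable."""

    def __init__(self, hosts):
        self.hosts = hosts  # dict: address -> is_vulnerable (bool)

    def scan(self, address):
        # A "scan" here just looks up the toy vulnerability flag.
        return self.hosts.get(address, False)


def bot_lifecycle(network, addresses):
    """Mimic the three phases described above, purely as an illustration.

    Phase 1: scan for vulnerable hosts.
    Phase 2: report the count of controlled systems back to C&C.
    Phase 3: C&C responds with a task for the botnet to execute.
    """
    compromised = [a for a in addresses if network.scan(a)]          # phase 1
    report = {"controlled": len(compromised), "hosts": compromised}  # phase 2
    task = "send_spam" if report["controlled"] > 0 else "idle"       # phase 3
    return report, task


net = SimulatedNetwork({"10.0.0.2": True, "10.0.0.3": False, "10.0.0.4": True})
report, task = bot_lifecycle(net, ["10.0.0.2", "10.0.0.3", "10.0.0.4", "10.0.0.5"])
# Two of the four scanned hosts are "vulnerable," so the toy C&C assigns a task.
```

The point of the sketch is the shape of the protocol: the compromise spreads first, and only then does the C&C channel carry a small report and a task assignment, which is why the traffic footprint can stay so low.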
In the early days, botnets were typically controlled by a single C&C, so chopping off its “head” would render the botnet useless. Not anymore. These days, most botnets contain multiple C&Cs, hiding on many servers, with control being turned over to a new server every few minutes. They use a tiered infrastructure, much like a military command structure, so taking out a lower-level C&C won’t affect the rest of the botnet. In the spirit of organized crime, botnet owners are now collaborating, sharing pools of bots and C&C servers to increase fault tolerance, and they’re making more money in the process. Finally, bots are broadening their reach beyond their initial target base of desktop PCs and are now infecting servers, including e-mail and UNIX servers.
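The fault tolerance of a multi-C&C design can be shown with a small model: the bot simply walks a list of fallback servers until one answers, so taking down any single server accomplishes little. This is an assumed, simplified scheme for illustration; the server names are invented.

```python
def find_live_cnc(servers, is_reachable):
    """Return the first reachable C&C server, emulating tiered fallback.

    `servers` is an ordered fallback list; `is_reachable` stands in for a
    real connectivity check in this toy model.
    """
    for server in servers:
        if is_reachable(server):
            return server
    return None  # only a full takedown of every server severs control


# Hypothetical fallback list baked into a bot.
servers = ["cnc-1.example.net", "cnc-2.example.net", "cnc-3.example.net"]

# Defenders take the first server offline...
taken_down = {"cnc-1.example.net"}
live = find_live_cnc(servers, lambda s: s not in taken_down)
# ...and the bot quietly fails over to the second one.
```

Real botnets rotate and hide these servers far more aggressively, but the failover logic is the essential reason decapitation no longer works.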
No one knows for sure how many bots are out there, but Mi5 Networks has discovered them in approximately 65 percent of the enterprises and 100 percent of the universities we’ve worked with this year. What’s amazing to watch is the amount of activity even one bot can generate: it’s not unusual for a single bot to perform more than 1 million IP scans and send hundreds of thousands of spam-related communications in a single day. In one network of more than 8,000 PCs, for example, we found 145 bots in the first month; those bots performed more than 136 million IP scans during that time.
Bot detection and prevention best practices
The amount of C&C traffic crossing the firewall is intentionally kept very low, allowing bots to avoid detection by traditional intrusion-prevention systems and other security measures. Although some ISPs and security monitoring services can tell if significant spam or DDoS traffic is coming from an IP address space within an organization, they can’t definitively confirm whether machines within the corporate network are infected, nor which machines are generating the traffic. What’s required to pinpoint hijacked machines inside the firewall is the ability to monitor internal network traffic in addition to the data entering and leaving the enterprise. This visibility exposes how botnets spread internally, send out spam, launch DDoS attacks and so on. Ideally, a security system will also block communication out of the network from infected machines and even automatically dispatch cleanup agents.
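One simple signal that internal monitoring can extract is destination fan-out: a bot scanning for victims contacts far more distinct internal addresses than a normal workstation does. The sketch below is a minimal illustration of that idea, assuming flow records are available as (source, destination) pairs; the addresses and the threshold are invented for the example, not taken from any real product.

```python
from collections import defaultdict


def flag_scanners(flows, threshold):
    """Flag internal hosts with abnormally high destination fan-out.

    flows: iterable of (src_ip, dst_ip) pairs observed inside the LAN.
    threshold: how many distinct destinations marks a host as suspicious.
    """
    fanout = defaultdict(set)
    for src, dst in flows:
        fanout[src].add(dst)
    return sorted(src for src, dsts in fanout.items() if len(dsts) >= threshold)


# One noisy host sweeping a /24, plus a normal host talking to two servers.
flows = [("10.0.0.9", f"10.0.1.{i}") for i in range(50)]
flows += [("10.0.0.5", "10.0.2.1"), ("10.0.0.5", "10.0.2.2")]

suspects = flag_scanners(flows, threshold=20)
```

A real system would add time windows, whitelists and many other signals, but even this crude fan-out count shows why internal visibility catches what a perimeter device watching only inbound/outbound traffic cannot.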
Like most security issues, this one has no single silver bullet. The first step is to implement a layered defense (desktop plus gateway) that limits the number of bot infections. Beyond that, enterprises need early warning systems that can detect infected PCs inside their network and block those machines from communicating sensitive data back out.
According to recent research by Gartner, the Web perimeter remains the biggest unprotected border within most organizations’ networks today. Although most enterprises have URL filtering in place, fewer than 15 percent have adequate protection from Web-based malware. Gartner predicts that by the end of 2007, 75 percent of enterprises will be infected with undetected, financially motivated, targeted malware that has evaded their traditional perimeter and host defenses.
Doug Camplejohn is founder and CEO of Mi5 Networks, a vendor of Web security gateways.