Joel Snyder: When will we ever learn Web security?

Two years ago last month I wrote a column called “Learning lessons from Code Red.” Code Red had hit hard, taking over servers all over the Internet. It’s still there – we get dozens of Code Red attempts every day from a worm that’s two years old.

The worms that hit this summer, W32/Blaster (also known as Lovsan), the related W32/Welchia (also known as W32/Nachi), which exploited the same hole, and SoBig.F, spread for exactly the same reasons. Microsoft Corp. published bulletins, but people ignored them. Patches were issued, but no one applied them. The worms came in through firewalls that shouldn’t have let them in. Infected systems continued spreading the worms because we didn’t have adequate tools to contain them. Two years after Code Red, there are still fundamental problems in the way we manage and secure systems that make us vulnerable to this kind of attack.
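
That last point is easy to verify rather than assume. Here is a minimal sketch, in Python with a placeholder hostname, that checks from the outside whether the TCP ports Blaster depended on are reachable; a correctly configured firewall should refuse both. Run it only against hosts you are authorized to test.

```python
import socket

# Ports W32/Blaster relied on: TCP 135 (the RPC/DCOM exploit) and
# TCP 4444 (the command shell it opened). A border firewall should
# block both from the outside.
WORM_PORTS = [135, 4444]

def port_is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # "test.example.com" is a placeholder; substitute a host you own.
    host = "test.example.com"
    for port in WORM_PORTS:
        status = "OPEN (worm could get in)" if port_is_reachable(host, port) else "blocked"
        print(f"{host}:{port} -> {status}")
```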

The first problem concerns Internet Service Providers (ISPs). Worms spread like this partly because of the widespread availability of broadband Internet connections with no firewall in front of them. People want to learn at home, so they bring up a Windows server. Why bother with a firewall? It’s just a test box, right? ISPs traditionally have sold unfiltered bits to their customers.

At the enterprise level, we could count on firewalls. At the residential level, how much damage could a 28.8Kbps modem do? During the transition to broadband, ISPs have not changed their model. They insist on selling high-speed connections at rock-bottom prices, which is great for consumers, right up until their failure to provide adequate security for their customers brings the whole Internet to its knees.

ISPs need to reevaluate their policies on open access to customers, especially residential broadband customers who cannot be expected to firewall their own systems properly.

The second problem involves tools. Although network managers generally keep their houses in order, it’s not because they know what’s going on; it’s because the system is so over-engineered that they don’t have to know. Recent research shows that an enormous amount of Internet traffic is plain garbage: packets that should never have gotten where they are, or that should never even have been allowed to leave their original network.
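
A lot of that garbage is identifiable on sight. The sketch below, a deliberately abbreviated check rather than a complete bogon list, flags source addresses (RFC 1918 private space, loopback, multicast and the like) that should never show up at a network border, because they should never have been allowed out of their home network.

```python
import ipaddress

# Address blocks that should never appear as source addresses on the
# public Internet; seeing them at your border means someone upstream
# isn't filtering. This list is illustrative, not exhaustive.
BOGON_NETS = [ipaddress.ip_network(n) for n in (
    "0.0.0.0/8",        # "this" network
    "10.0.0.0/8",       # RFC 1918 private
    "127.0.0.0/8",      # loopback
    "169.254.0.0/16",   # link-local
    "172.16.0.0/12",    # RFC 1918 private
    "192.168.0.0/16",   # RFC 1918 private
    "224.0.0.0/4",      # multicast
)]

def is_garbage_source(addr: str) -> bool:
    """True if addr should never be a source address at the border."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BOGON_NETS)

if __name__ == "__main__":
    # Sample source addresses, as might be pulled from a border capture.
    for src in ("10.1.2.3", "203.0.113.9", "224.0.0.5"):
        print(src, "->", "garbage" if is_garbage_source(src) else "plausible")
```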

The bottom line is that we generally don’t have a good way to say who is doing what on our networks. There are lots of tools out there, from URL watchers to intrusion-detection systems to IP-layer flow tools. Most Cisco Systems Inc. routers even have flow analysis (NetFlow) built in. But few of us have installed these tools, and fewer still know how to use them.
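
Using NetFlow takes two pieces: telling the router to export flow records (on classic IOS, roughly ip route-cache flow on each interface plus a global ip flow-export destination pointing at a collector, though the exact commands vary by IOS version) and something to catch them. Below is a minimal collector sketch in Python, assuming the router exports NetFlow version 5 records to UDP port 2055 on the machine running it; the field layout follows Cisco’s published v5 format.

```python
import socket
import struct

# NetFlow v5: a 24-byte header followed by up to 30 48-byte records.
HEADER_FMT = "!HHIIIIBBH"                  # version, count, uptime, secs, nsecs, seq, engine, sampling
HEADER_LEN = struct.calcsize(HEADER_FMT)   # 24 bytes
RECORD_FMT = "!4s4s4sHHIIIIHHBBBBHHBBH"    # src, dst, nexthop, ifaces, counters, ports, flags, proto, ...
RECORD_LEN = struct.calcsize(RECORD_FMT)   # 48 bytes

def parse_v5(datagram):
    """Yield (src_ip, dst_ip, dst_port, protocol) for each flow record."""
    version, count = struct.unpack_from("!HH", datagram, 0)
    if version != 5:
        return
    for i in range(count):
        offset = HEADER_LEN + i * RECORD_LEN
        if offset + RECORD_LEN > len(datagram):
            break                           # truncated packet; stop early
        fields = struct.unpack_from(RECORD_FMT, datagram, offset)
        src, dst = socket.inet_ntoa(fields[0]), socket.inet_ntoa(fields[1])
        dst_port, proto = fields[10], fields[13]
        yield src, dst, dst_port, proto

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 2055))            # assumed export port
    while True:
        datagram, _addr = sock.recvfrom(8192)
        for src, dst, dst_port, proto in parse_v5(datagram):
            print(f"{src} -> {dst}:{dst_port} proto={proto}")
```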

Just answering the question “Who on my network is infected?” is not easy, even though infected systems stick out like sore thumbs in the traffic they generate. Network managers need to take a closer inventory of their networks and add tools that will help them monitor what’s going on with all those bits. Without data, we’re flying blind.
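
Why do they stick out? A worm-infected machine touches hundreds of distinct hosts on one port in minutes (Blaster hammered TCP 135), while a normal client talks to a handful of servers over and over. The sketch below shows that fan-out heuristic on made-up sample flows, with an illustrative threshold; in practice you would feed it the tuples a collector like the one above produces.

```python
from collections import defaultdict

# Flag hosts that contact many distinct destinations on one port:
# classic worm-scan behavior. The threshold is illustrative, not tuned.
FANOUT_THRESHOLD = 50

def find_scanners(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples."""
    fanout = defaultdict(set)               # (src, port) -> distinct destinations
    for src, dst, port in flows:
        fanout[(src, port)].add(dst)
    return {key: len(dsts) for key, dsts in fanout.items()
            if len(dsts) >= FANOUT_THRESHOLD}

if __name__ == "__main__":
    # Synthetic data: 10.0.0.9 probing TCP 135 across a /24, plus normal web traffic.
    flows = [("10.0.0.9", f"192.168.1.{i}", 135) for i in range(1, 200)]
    flows += [("10.0.0.5", "10.0.0.1", 80)] * 30
    for (src, port), n in find_scanners(flows).items():
        print(f"suspect {src}: {n} distinct hosts probed on port {port}")
```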

Snyder, a Network World Test Alliance partner, is a senior partner at Opus One in Tucson, Ariz. He can be reached at [email protected].
