The network juggling act

It has typically taken an army of IS professionals to keep computing infrastructures marching onward. In an age of highly distributed computing that increasingly supports vital business processes, IS departments expend ever more effort and energy ensuring that computing and communication infrastructures and systems stay up and running.

In fact, market researcher International Data Corp. estimates that the IS departments of typical medium-sized and large organizations devote an average of 65 per cent of their resources and effort simply to keeping IT infrastructures operational. A good portion of that is dedicated to maintaining the availability, reliability and performance of enterprise communication networks.

That’s meant being aware of network problems, then searching out and isolating failures as they occur, and ultimately repairing the fault. Since most companies haven’t designed their infrastructures to enable rapid fault resolution, the task of network management involves a great many people. The largest corporations often rely on sophisticated network, systems or enterprise management environments such as Hewlett-Packard Co.’s OpenView, IBM Corp.’s Tivoli or Computer Associates International Inc.’s Unicenter. These solutions, however, are rarely used by small- and medium-sized companies in Canada.
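The detect-then-isolate workflow described above can be sketched in a few lines. This is a minimal illustration, not any vendor’s product: the target names and ports are hypothetical, and a real monitoring tool would typically poll via SNMP or ICMP rather than plain TCP connection attempts.

```python
# Minimal sketch of a "detect, then isolate" monitoring loop.
# Targets are hypothetical; real tools poll via SNMP/ICMP.
import socket

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def poll(targets):
    """Check each (name, host, port) target; return the names that failed."""
    failures = []
    for name, host, port in targets:
        if not is_reachable(host, port):
            failures.append(name)  # candidate for fault isolation and repair
    return failures
```

In practice a loop like this runs on a schedule, and each reported failure becomes the starting point for the isolation and repair steps the article describes.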

The tool spectrum

But whether an enterprise is large or small, there is little consistency in its approach to network management. A given company may use any number and type of the tools and solutions available on the market.

“There is such an incredibly wide span of solutions – those that are incredibly complex but also incredibly functional,” said Steve Pettite, general manager of security and software with network equipment maker Enterasys Networks Inc. “Others are much simpler, but not rich. There’s no clarity. The market is totally fragmented on that front.”

At the other end of the spectrum are freeware tools, which may provide a single monitoring or management function, and scaled-down versions of more comprehensive tools designed to give network administrators a sampling of features.

Again, it’s usually a case of throwing people rather than tools at the problem in an effort to bring some semblance of management to enterprise networks. That means waiting for a failure to occur, then dispatching IS professionals to solve it. But it’s a formula for disaster, since there are never enough people. IS departments continually feel the pinch of operational budget restraint, and it’s always difficult to find and, more importantly, retain highly skilled IS professionals.

“Most people are looking to do more with their network with less people…and companies aren’t in a position to bring more people at the problem,” Pettite said. Among the biggest challenges for most IS departments, he explained, is aligning IT resources with business needs. Companies often struggle, for example, to ensure that enterprise networks enable the processes and solutions that help the business achieve its goals. Given the growing shortage of people, shrinking budgets and a lack of understanding of the tools that are available, the challenge and frustration only increase.

There’s no standard approach to how most businesses manage their networks, according to Pettite. He observed that the level of effort and investment IS departments expend on managing enterprise communication systems varies widely. Pettite likens this disparate approach to network management to how people care for their automobiles.

“Everybody takes care of their car in a different way,” he said. “It’s completely random and totally variant. One car owner might do required oil changes and standard maintenance, while another doesn’t.”

Some companies may have no network management systems or tools in place, yet their networks run with few problems. Others expend a great deal of effort on management and may see similar levels of reliability, but the difference is that they are prepared to deal with problems when they occur. The value of network management lies in the ability to react: keeping networks up and available by identifying problems and resolving them as quickly as possible.

Better than ever

The greatest contribution to enterprise network performance, reliability and availability over the past 10 years has come from technological advances in network equipment rather than from any other activity. The routers, switches and other enterprise network devices built today are technologically better than ever, as is the reliability of the data communication networks within enterprises.

“Network infrastructure components are definitely more reliable than in the past – much more reliable and stable,” Pettite said. In fact, technological advances made in high-end network infrastructure equipment quickly trickle down into the features of low-end gear.

Much of today’s network gear features built-in functions such as redundancy, failover and load balancing, which many IS departments count on as a primary means of enhancing performance and maintaining availability of the enterprise network.
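The failover idea is simple enough to sketch: when the primary path fails, traffic moves to the next redundant one. The endpoint names and the `transmit` callback below are invented for illustration; in real equipment this logic lives in firmware (protocols such as VRRP or HSRP), not in application code.

```python
# Hedged sketch of failover across redundant paths.
# Endpoint names and the transmit callback are illustrative only.
def send_with_failover(payload, endpoints, transmit):
    """Try each redundant endpoint in turn; return the one that succeeded."""
    last_error = None
    for endpoint in endpoints:
        try:
            transmit(endpoint, payload)
            return endpoint              # success: first working path wins
        except ConnectionError as exc:
            last_error = exc             # failover: try the next endpoint
    raise ConnectionError("all redundant endpoints failed") from last_error
```

Load balancing follows the same shape, except that traffic is spread across all live endpoints instead of stopping at the first one that works.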

Simplification, through the disappearance of multiprotocol traffic on many enterprise networks, has also significantly increased network stability and performance. Where IPX, DECnet, AppleTalk, SNA, VTP, ArcNet and a plethora of other unique and proprietary traffic types once coexisted, most organizations now standardize on versatile IP for all enterprise communications. Gone too, for the most part, are shared Ethernet and technologies such as token ring and FDDI, replaced by switched Ethernet technologies running at speeds from 10Mbps to beyond 1Gbps.

Protocol reduction helps simplify network topologies and makes it much easier to perform fault isolation, according to Pettite.

“Fault isolation used to be a much bigger deal, with more protocols on the network and it made for weird things happening,” Pettite said. He explained that in the days of multiprotocol networks, it was difficult to isolate faults; software-based network hardware would often appear to fail, making such problems extremely difficult to locate.

“Today, everything is point-to-point, whereas previously networks were based on shared (technologies). Protocols typically used today are IP or IPX and a bit of AppleTalk,” he said. “Fault isolation is still a big deal, but networks fail more predictably. It is easier to isolate a failure. Network topology is much more sound.” For example, user-to-service mapping, the ability to identify through a monitoring system which network resources and services are available to and/or being used by a specific user, stays available even when devices fail, Pettite added.
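The user-to-service mapping Pettite describes can be pictured as a small data structure: the monitor records which services each user relies on and which redundant devices carry each service, so losing one device does not lose the mapping. All names below (users, switches, services) are invented for illustration; this is a sketch of the idea, not any monitoring product’s design.

```python
# Sketch of user-to-service mapping that survives a single device failure.
# Users, switches and services are hypothetical examples.
from collections import defaultdict

class ServiceMap:
    def __init__(self):
        self.paths = defaultdict(set)   # service -> redundant devices carrying it
        self.usage = defaultdict(set)   # user -> services they use

    def add_path(self, service, device):
        self.paths[service].add(device)

    def record_use(self, user, service):
        self.usage[user].add(service)

    def device_failed(self, device):
        """Remove a failed device from every service's path set."""
        for devices in self.paths.values():
            devices.discard(device)

    def services_for(self, user):
        """Services the user relies on that still have at least one live path."""
        return {s for s in self.usage[user] if self.paths[s]}
```

Because each service keeps a set of redundant paths, a single failure only removes services that had no other device carrying them, which mirrors the behaviour described above.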