Wai Chong manages information systems at a health maintenance organization. Mark Silva is vice-president of network operations at a large investment firm. Mark Dahl is the distributed systems manager at the subsidiary of a global oil company. Each is responsible for very different kinds of data, but all must store rapidly increasing amounts of information. Each is looking to networked storage to solve that problem.
That means they’re grappling with whether to choose relatively inexpensive, easy-to-implement network-attached storage (NAS) or storage-area networks (SANs), which are potentially more powerful but also more expensive and harder to implement.
Managers tend to go with NAS if they have tight budgets, need to bring more storage on-line quickly and work at firms leery of fast-changing technology.
SANs are more appealing to companies that need fast data access for widely distributed users and have the money to make long-term investments in their storage infrastructures.
IT managers must weigh cost against ease of implementation and management, speed of data access, scalability, backup and fail-over capabilities and interoperability with other parts of the network. The decisions will become more urgent as the Internet and applications such as customer relationship management and enterprise resource planning generate more customer data.
Even when IT does decide on a strategy, management must be convinced that the move is worth it.
NAS usually occupies its own node on a LAN, typically an Ethernet network. In this configuration, a single server handles all data storage on the network, taking that load off the application or enterprise server. Because storage is detached from individual servers, the data is available to any user on the network. NAS is essentially plug-and-play storage built on proven Ethernet and SCSI technology.
A SAN, by contrast, is a high-speed dedicated subnetwork connecting storage disks or tapes with their associated servers. Although these components can be connected via other protocols, including SCSI or IBM’s ESCON optical fibre, SANs are most closely associated with the emerging high-speed (133M to 4.25G bit/sec.), long-distance Fibre Channel protocol.
SAN technology is designed to support disk mirroring, backup and restoration, archiving and retrieval, data migration among storage devices and sharing of stored data among servers. SANs can also be configured to incorporate subnetworks such as NAS systems.
Weighing the Cost
Chong, an information systems manager at Omni Health Corp. in Sacramento, Calif., says he wants to implement a SAN to accommodate Omni’s storage needs and its plans to give patients access to billing records over the Web. “A SAN is the best strategy for future needs,” he says.
Earlier this year, Chong began talking to Compaq Computer Corp. about building a SAN, but his bosses recently applied the brakes to the project, wanting more time to consider costs and the still-evolving SAN technology. Chong’s situation is common.
“These are expensive decisions – I spend a lot of my time thinking about storage and trying to think about it strategically,” says Silva, vice-president for network operations at State Street Corp. in Boston.
Omni’s 1 terabyte (TB) of data is currently stored on rack-mounted disks connected to individual servers via SCSI buses. This common approach is easy and relatively cheap. A Guardian 90GB SCSI RAID array from Seagate Technology Inc. costs about US$7,600, while NAS or SAN technology can cost hundreds of thousands or even millions of dollars. Capacity can be increased merely by adding SCSI host bus adapter cards to the server, daisy-chaining more devices off existing buses, adding servers – or all three.
However, each SCSI bus can support a maximum of 15 disk arrays and can stretch no farther than 75 feet from the host.
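Those limits put a hard ceiling on how far the direct-attached approach can scale. A quick back-of-the-envelope calculation, using only the figures cited in this article (15 arrays per bus, 90GB per array, about US$7,600 per array; the 1TB-equals-1,000GB rounding is an assumption for illustration), shows why:

```python
import math

# Figures cited in the article; the 1TB = 1,000GB rounding is an
# illustrative assumption, not a vendor specification.
MAX_ARRAYS_PER_BUS = 15     # SCSI bus device limit cited above
ARRAY_CAPACITY_GB = 90      # Seagate Guardian SCSI RAID array
ARRAY_COST_USD = 7_600      # approximate price cited above

# Maximum capacity of a single fully populated SCSI bus:
max_bus_capacity_gb = MAX_ARRAYS_PER_BUS * ARRAY_CAPACITY_GB    # 1,350GB

# Cost per gigabyte for this direct-attached approach:
cost_per_gb = ARRAY_COST_USD / ARRAY_CAPACITY_GB                # ~US$84/GB

# Arrays needed for Omni's current 1TB (which still fits on one bus):
arrays_for_1tb = math.ceil(1_000 / ARRAY_CAPACITY_GB)           # 12 arrays

print(max_bus_capacity_gb, round(cost_per_gb), arrays_for_1tb)
```

So a single bus tops out at roughly 1.35TB: Omni's current 1TB fits, but any significant growth means more buses, more adapter cards and more servers, which is exactly the sprawl the next paragraph describes.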
Large storage needs can quickly translate into a dense jumble of hardware, with data accessible only through individual servers. To see data on other servers, users must go through the network – a pro