Storage-area networks that use the Internet SCSI protocol are gaining acceptance with corporate IT as a supplement to, or a complete replacement for, Fibre Channel SANs.
Products supporting the new iSCSI protocol issue SCSI commands and transfer block data over existing IP networks, allowing administrators to move storage networks onto existing Ethernet LAN infrastructures or new Ethernet subnetworks instead of building and maintaining a separate Fibre Channel network.
With iSCSI, there’s no need for highly paid Fibre Channel specialists or expensive Fibre Channel switches, host bus adapters and cabling. And with IP SANs, existing Ethernet networks can be used to back up servers as well. For example, Microsoft Corp. has released a software driver that supports iSCSI-based back-ups of Windows systems.
With Gigabit Ethernet switches and high-performance processors, iSCSI is a viable option for midrange system applications as well as departmental or remote office back-ups, says Ahmad Zamer, interim chairman of the IP Storage Forum at the Storage Networking Industry Association in San Francisco.
Zamer points to cost as a major factor in iSCSI adoption decisions. A Fibre Channel switch can cost US$500 to US$2,000 per port vs. US$125 to US$150 for Gigabit Ethernet, and Fibre Channel host bus adapters cost twice as much on average as IP network adapters, he says.
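Zamer's per-port figures translate into a quick back-of-the-envelope comparison. The sketch below uses only the switch-port prices quoted above; the server count is an arbitrary illustration, and the adapter gap (Fibre Channel HBAs at roughly twice the price of IP adapters) would widen the difference further.

```python
# Switch-port cost ranges from the figures quoted above.
# The 24-server count is purely illustrative.
servers = 24

fc_port_low, fc_port_high = 500, 2000   # Fibre Channel, per port
ge_port_low, ge_port_high = 125, 150    # Gigabit Ethernet, per port

fc_range = (servers * fc_port_low, servers * fc_port_high)
ge_range = (servers * ge_port_low, servers * ge_port_high)

print(f"{servers} servers on Fibre Channel ports:   ${fc_range[0]:,}-${fc_range[1]:,}")
print(f"{servers} servers on Gigabit Ethernet ports: ${ge_range[0]:,}-${ge_range[1]:,}")
```

Even at the low end of the Fibre Channel range, switch connectivity alone costs several times the Gigabit Ethernet equivalent before adapters are counted.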
Computerworld spoke with three early adopters who have begun testing iSCSI SANs. While some problems have arisen, these users say they’ve been satisfied with both the price and the performance of the technology.
Adrian Porter, senior database administrator at 1-800 Contacts Inc. in Draper, Utah, was using direct-attached storage to back up five Hewlett-Packard Co. dual-processor ProLiant servers running SQL Server databases. But the system was running out of storage capacity every six months, and performing data migrations with every storage upgrade was costly.
Initially, Porter decided to shop for a Fibre Channel SAN. Because he already had ProLiant servers, Porter first tested HP’s Fibre Channel-based Enterprise Virtual Array, bringing a server and a copy of one of his databases to HP’s test facility in Colorado Springs. “We couldn’t even get the same performance we were seeing on direct-attached storage,” Porter says. But the problem wasn’t just with HP’s offering. “We did the same tests with EMC and got the same results,” he adds.
Porter had considered using a network-attached storage appliance instead of a SAN, but that option didn’t provide enough performance either. Unlike SANs, NAS devices route data through an intermediate server file system and transport the data in files rather than using more efficient block data transfers.
“From what our engineer saw locally, apparently if you get over a 30GB database, you start seeing degradation in performance,” he says.
In April, Porter settled on an IP SAN that included a Network Appliance Inc. fabric-attached storage (FAS) array and a NearStore nearline storage appliance.
While testing a traditional NAS appliance, “we were maxing out at 45MBps throughput,” says Porter. “With iSCSI, we’re seeing bursts of 110MBps (when performing data snapshots and replication).”
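Those throughput figures translate directly into job times. A minimal sketch, using the 30GB database size mentioned earlier as an illustrative payload and treating MBps as decimal megabytes per second:

```python
# Minutes to move a fixed payload at the sustained rates quoted above.
# The 30GB payload is illustrative; MBps is treated as 10^6 bytes/sec.
payload_gb = 30

def transfer_minutes(payload_gb, rate_mbps):
    """Minutes to move payload_gb gigabytes at rate_mbps megabytes/sec."""
    return payload_gb * 1000 / rate_mbps / 60

for label, rate in [("NAS at 45MBps", 45), ("iSCSI burst at 110MBps", 110)]:
    print(f"{label}: {transfer_minutes(payload_gb, rate):.1f} minutes")
```

At a sustained 45MBps a 30GB copy takes roughly 11 minutes; at 110MBps it drops under five, which is the kind of difference that matters inside a backup window.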
Porter says one reason for choosing iSCSI was that he could avoid spending US$50,000 on a Fibre Channel switch. “We were able to leverage the same existing infrastructure we had with Gigabit Ethernet,” he says.
Porter’s IP SAN consists of a Cisco Catalyst 4500 Gigabit Ethernet switch, Intel Corp.’s Pro/1000 T IP Storage Adapter (a Gigabit Ethernet device) and NetApp’s FAS940, a storage system that supports Fibre Channel disks internally but presents a Gigabit Ethernet and iSCSI interface to the network.
While Porter says throughput has been more than adequate, errors with the Intel host adapter cards have resulted in servers losing their mappings to target storage devices. Porter is now considering using Microsoft’s iSCSI driver and a standard Intel Gigabit Ethernet network adapter.
Porter says he was able to get 16TB of capacity out of his IP SAN versus the 3TB to 5TB he could have afforded with a Fibre Channel SAN. “And we were able to use the networks we already had in place,” Porter says.
Maintenance also has been easier. “Administering this from one central location has taken half as much time (as with Fibre Channel). It allows me to focus on the database rather than on the hardware,” he says.
PBS Takes SAN Supplement
Ken Walters, senior director of enterprise platforms for the Public Broadcasting Service in Alexandria, Va., has had a Fibre Channel SAN based on IBM storage servers for three years. He was satisfied with attaching all of his mission-critical servers to the SAN, which he uses as a high-speed back-up network. But some 100 blade servers used for development work were still using direct-attached storage because he couldn’t justify the cost of additional Fibre Channel switches and host bus adapter cards for them.
About a year ago, Walters estimated that each Fibre Channel switch would cost about US$25,000 and a Fibre Channel network adapter card for each server was about US$1,000.
An IP SAN using iSCSI devices seemed the perfect way to connect those “stranded” servers, and the IP network had a lot of spare bandwidth, Walters says. “At peak hours it’s all used, but at off-peak I have a lot of excess,” he says, noting that his TV network distributes video streams to 177 member stations via satellite.
Walters considered Cisco Systems Inc.’s MDS 9216 switch with an iSCSI blade but chose San Diego-based Stonefly Networks Inc.’s i2000 Storage Concentrator router because of its ability to pool capacity from many storage arrays and provision it dynamically. The price was also right.
“Because iSCSI was pretty new, I didn’t want to have to go to my boss and ask for a lot of money and have it not work out,” Walters says. The Stonefly product was under US$10,000 while the Cisco switch was US$48,000. “I could slide (the Stonefly product) under the radar,” he adds.
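Walters' arithmetic is easy to reproduce from the prices he quotes. A rough sketch; assuming a single Fibre Channel switch could serve all 100 blades (which understates the Fibre Channel side) and using the US$100 standard network cards he ultimately chose:

```python
# Cost to attach 100 "stranded" blade servers, using the prices
# Walters quotes. One FC switch for all 100 blades is an optimistic
# assumption on the Fibre Channel side.
servers = 100

fc_cost = 25_000 + servers * 1_000   # one FC switch + one HBA per server
iscsi_cost = 10_000 + servers * 100  # Stonefly router (under US$10,000) + standard NICs

print(f"Fibre Channel build: ${fc_cost:,}")
print(f"iSCSI/IP SAN build:  ${iscsi_cost:,}")
```

Roughly US$125,000 against US$20,000, which explains why the iSCSI option could "slide under the radar."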
Walters spent about three months load-testing the iSCSI network using IBM BladeCenter blade servers, Red Hat Inc.’s Linux, Windows 2000 and Iometer, an open-source tool for measuring I/O performance on single and clustered storage subsystems.
Walters chose Microsoft’s iSCSI initiator, but because he had heard that the processing of TCP/IP and iSCSI protocols can use up to 90 per cent of CPU cycles on servers, he tested both standard network adapters and models with special TCP/IP offload engines (ToE) from Alacritech Inc. in San Jose and Intel.
The ToE adapters, which offload TCP/IP processing overhead from the host to an onboard processor on the adapter, worked flawlessly, Walters says. But the more expensive ToE devices were unnecessary because his low-end servers never pushed more than 14MBps through the IP network. He opted for standard network interface cards at US$100 each instead of paying up to US$1,000 for ToE adapters.
“If you’ve got a Xeon processor and you’re not CPU-bound, you probably don’t need to worry about a ToE card,” he says. “I could push about 50MBps out through the (iSCSI) adapter cards, but I can’t think of many applications I’d be running that would push that much out.”
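Walters' "not CPU-bound" reasoning can be sanity-checked against a common rule of thumb that software TCP/IP processing consumes very roughly 1Hz of CPU per bit per second of throughput. The 2.4GHz clock below is an assumed figure for a Xeon of that era; this is a rough estimate, not a measurement:

```python
# Rough software TCP/IP overhead estimate using the ~1Hz-per-bps
# rule of thumb. The 2.4GHz Xeon clock speed is an assumption.
cpu_hz = 2.4e9

def tcp_cpu_fraction(rate_mb_per_s, cpu_hz):
    """Approximate fraction of one CPU consumed by TCP/IP processing."""
    bits_per_s = rate_mb_per_s * 1e6 * 8
    return bits_per_s / cpu_hz

for rate in (14, 50):  # the sustained and peak rates Walters observed
    print(f"{rate}MBps -> ~{tcp_cpu_fraction(rate, cpu_hz):.0%} of one CPU")
```

At 14MBps the estimated overhead is around 5 per cent of one CPU, and even 50MBps lands well under 20 per cent, consistent with his conclusion that a ToE card was unnecessary for these workloads.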
A Good Enough Performer
Shawn Eveleigh, a senior systems administrator at Oakville, Ont.-based Zenon Environmental Inc., has used an EMC Corp. Clariion storage array for two years to back up more than a dozen Dell Inc. and HP servers. The servers, which support more than 500 users at the developer of membrane technologies for drinking water purification and wastewater treatment, run Microsoft Exchange, Windows Server, the LiveLink document management system from Open Text Corp. in Waterloo, Ont., and Zenon’s ERP system.
With his maintenance contract coming to an end, Eveleigh began looking into the total cost of ownership of both maintaining his Fibre Channel SAN and upgrading it to accommodate growth. He concluded that maintenance and expansion would cost more than deploying a new IP SAN.
Last August, Zenon chose PeerStorage, a native iSCSI storage array from EqualLogic Inc. in Nashua, N.H.
“I don’t have e-commerce applications or high-transaction databases, so I can only attest to how iSCSI performs in a medium-size environment. And in that case, it does the job. I’ve done Jetstress tests with Exchange 2003 and very large database files, and I can’t seem to hit its limits,” he says. Eveleigh says in testing the array, he was able to get up to 40MBps throughput.
He was also able to minimize the downside risk by negotiating with the vendor. “I even got them to do a money-back guarantee,” Eveleigh says. EqualLogic agreed to take back the equipment and issue a full refund if after 30 days Eveleigh couldn’t get it to work adequately. “That alleviated our risks,” he says.
Eveleigh says the SAN installation was almost a turnkey operation. Once the physical network connections were plugged in, he spent about a half-hour configuring the proper IP addresses and another 10 minutes or so setting up storage volumes.
“The bottom line is, as long as the (host) servers see the storage and can do that at performance levels that keep up at a reasonable cost, that’s all I need,” he says.