Network World (US)
Greater Baltimore Medical Centre had a dilemma. Its IT needs were growing, but its data centre space was not. After months of dead-end negotiations with vendors in an attempt to put multiple applications on fewer, bigger boxes, the non-profit turned in the other direction: It brought in blade servers.
“There were three business problems that really drove us toward the technology: One, we had no space in our data centre, and we needed to add 30 servers. Two, we had (limited) power, and we needed to add 30 servers. And three, we were out of air conditioning capacity, or very close to it, and we needed to add 30 servers,” says Eric French, network manager at the healthcare organization.
“We needed technology that was low-output HVAC (heating, ventilation and air conditioning), had low power requirements and was very small,” he says. “Blades fit that bill.”
The medical centre, which deployed 30 HP BL20p blades earlier this year, is one of a growing number of companies turning to blade servers to get the computing power they need in smaller packages. While blades failed to take off as predicted after blade server pioneers RLX Technologies Inc. and Egenera Inc. introduced them in 2001, they are gaining momentum, analysts say.
IDC reports that U.S. blade server sales in the first quarter of this year totaled US$47 million, eclipsing the roughly US$43 million in revenue logged for all of 2002. IDC expects the market to reach US$6 billion by 2007. Blades initially were targeted at service providers and large corporations looking to pack a lot of computing power into small spaces. Today, all the major systems vendors are peddling these slices of processing power as a cost-effective way to consolidate data centre infrastructure, get rid of masses of cabling and streamline management. Blades – which are about one-eighth the size of a standard 1U server and require less power – sit in specialized chassis that enable them to share resources.
Buying one or two blades, however, won’t save you money. Individual blades today are priced about the same as comparable 1U servers, and users also must pay for the blade chassis – which at IBM Corp., for example, starts at US$2,800 – so the savings come only when customers bring in multiple blades, users say.
First-generation blades were simply stripped-down versions of standard servers and had little in the way of additional features, but that is changing as vendors add more intelligent switching, enhanced network connectivity and storage links.
While businesses are looking more seriously at this new breed of server, challenges remain. Early adopters point to benefits such as cost efficiencies, space savings and manageability, but also note that there still is work to be done in areas such as network connectivity, switching and storage.
“There are limitations now, in September 2003, but they will get relieved over time,” says Daniel Kaberon, director of computer resource management at Hewitt Associates LLC, a human resources consulting and outsourcing firm in Lincolnshire, Ill.
Hewitt uses grid software on IBM’s BladeCenter to spread the load across blades running a pension-benefit calculator engine that provides information on its Web site. Because traffic to the site can spike unexpectedly, Kaberon says, blades make it easy to meet demand. “As the workload grows, we can simply add more blades,” he says.
As for network limitations, “I could talk about limitations in network switches, but I know there are new network switches under development,” Kaberon says.

Hewlett-Packard Co., IBM and RLX all recently announced network enhancements to their blade servers to make it easier for corporate users to integrate blades into their infrastructures.
It’s these kinds of enhancements that have companies considering blades as a more integral part of their data centre architectures. Greater Baltimore Medical Centre had initial concerns about its blades because the first generation did not connect to storage-area networks (SAN). HP rectified that when it announced connectivity to SANs earlier this year. Today, Greater Baltimore’s French says the plan is to standardize on blades.
“Unless there is an absolutely compelling reason not to, meaning there’s some application that needs a PCI slot, then it will be a blade,” he says.
French says learning to run the blades was about as easy as learning to run any new server.
“The only learning curve, and I think it is probably pretty standard, is getting accustomed to the remote deployment tool. It’s one of those things where it takes a little bit of work to get it set up. But once you’ve got it set up it runs like a champ,” he says.
French says he’s seeing cost savings, too, especially related to manpower.
“What it used to take two days to do . . . with the remote-deployment tool and the blades, we can do in two hours,” he says. “Every (blade) server has a significant savings in manpower. We’re getting to the point where we’re ready to turn over (blade management) to our operations staff, and our network people won’t even deal with that. (This can) free them up for other projects.”
Financial trading systems company Nyfix says it brought in blades at the end of 2001 because it wanted to get a jump on any technology hurdles that came with the new servers.
“We knew we wanted to adopt blade technology, and we knew there was a steep learning curve, so we wanted to get some experience in operating and configuring blades,” says John Knuff, vice president of network engineering at the company in Stamford, Conn. “We’re glad we were early adopters because now we’ve learned some of the tricks of how to use them and when not to use them.”
The best time to incorporate blades is when the same configuration is needed across multiple servers, Knuff says.
“Sometimes we bring up several clients a month, and we can scale very quickly,” he says. “We can throw in a couple more blades and get them configured and have a new client up and running in a day. The advantage is speed. A disadvantage would be if you want to talk to four different networks on a single server, then I wouldn’t use blades.”
For Cambridge Health Alliance in Massachusetts, blades meant the ability to quickly add support for a critical ambulatory-care application and to do so on Linux. The alliance’s choice reflects a larger trend: IDC says blade buyers are choosing Linux at a disproportionately high rate. The operating system represented 15 per cent of the overall server market in the first quarter of 2003, but 57 per cent of the blade server market over the same period.
After determining that buying blades would save the organization US$1 million in infrastructure costs over five years, the alliance settled on BladeFrame from Egenera, which hooks into the organization’s SAN and automatically moves application load among the blades within the system.
“Our plan is to put smaller applications into logical groups (on BladeFrame) to more efficiently manage the servers and the applications,” says Judy Klickstein, CIO and vice president of IT. She says that currently the alliance has small applications on isolated servers that typically run at only 50 per cent capacity. Moving those applications to the blades – where capacity can be shared – would enable the alliance to get more out of its resources, she says.
Nyfix’s Knuff says he’s seeing payback in space savings.
“With the blades we’re able to get up to 10 GHz per rack unit. With our legacy Sun (Microsystems Inc.) Solaris platform, we’re actually at 0.5 GHz per rack unit. It’s a twentyfold improvement,” he says. “Since data centre space in the metropolitan New York City area is so expensive, that’s where we get a big payback: compute density.”
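Knuff’s density claim reduces to simple arithmetic; a minimal check, using only the per-rack-unit figures he cites (the variable names are ours, for illustration):

```python
# Compute density quoted by Nyfix: aggregate CPU clock per rack unit (U).
blade_ghz_per_u = 10.0    # blades: up to 10 GHz per rack unit
legacy_ghz_per_u = 0.5    # legacy Sun Solaris boxes: 0.5 GHz per rack unit

# Improvement is the ratio of the two densities.
improvement = blade_ghz_per_u / legacy_ghz_per_u
print(improvement)  # 20.0 -- the twentyfold gain Knuff cites
```

The same ratio is why the payback shows up as real estate: delivering a fixed amount of compute on blades needs roughly one-twentieth of the rack space, by these figures.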
But early adopters also point out that blades aren’t for everyone.
“I know there are people out there who say you’ve got to get them and we’re going to replace every server we have with blades,” Knuff says. “For us, we use it where it makes the most sense. If you’re trying to get higher utilization of your space, you should look at blade servers.”