New business models call for new servers

Reliability, scalability and manageability are the mantra of the server customer of today and the future, say vendors and analysts alike.

Those customer cries are evidently being heard, as several vendor releases over the last couple of months seem to be answering the call. In February, Mississauga, Ont.-based Hewlett-Packard Canada Ltd. announced a suite of new products and servers that, according to the company, enhance its IA-32-based server and storage product lines.

HP touted its latest servers as the “manageability” option. Parag Suri, the category business manager for netservers at HP Canada, explained that each box offers “tool-less” entry for easy upgrades and servicing. Matched with the company’s customer services, this feature speaks to the nature of the IT industry, he said.

“The feedback we’ve been getting from customers worldwide is, a lot of them have limited IT resources,” Suri said. “Even if the resources are available, people don’t have large IT budgets. They want to be able to deploy more technology, but at the same time they want to be able to manage it easily.”

Likewise, Armonk, N.Y.-based IBM Corp. followed close on HP's heels with a new server announcement. The company says the IBM eServer x440 with Enterprise X-Architecture technology is a less expensive, high-performance option for corporate data centre computing. IBM's server boasts up to 16-way SMP and 64GB of memory.

Frank Morasutti, manager of Intel high-performance servers, eServer xSeries, at IBM, said the announcement is part of a goal to bring the company's mainframe skills into the enterprise space.

“There are three main features. (The first) is a pay-as-you-grow strategy…you build a box as you need it, it’s the building block approach,” Morasutti said of the IBM offering.

Evolution of industry

Brad Keates, director of marketing for Sun Microsystems Inc. Canada, said the reliability and availability of the server in enterprises is increasingly important and is actually changing the way customers talk about servers.

“People don’t talk specifically about client server computing anymore because the terminology has changed to something like network computing and that has become much more of a proxy for a lot of things,” he said.

He added that while reliability and availability have always been important, the evolution of the business model, from PC-centric to server-centric, has pushed servers to a new level of prominence.

“In the next couple of years, customers are looking at total cost of ownership and return on investment,” he said. “That leads quickly to questions they have around server consolidation. What do I do about server sprawl and how do I make sure I don’t have a weak link in the chain of delivery?”

And with new business models come new issues of operability and manageability.

“The issues related to tuning and how easy it is to deliver scalability, those are the main issues related to ease of use,” he said. “The key thing that has happened is that there has been tremendous standardization on the interoperability. We have eliminated a whole level of confusion.”

HP went as far as to develop a new naming scheme for its latest offering, in a move to help potential customers more easily identify the right server for their needs. The first two letters of the name denote the shape (“t” for “tower,” “r” for “rack optimized”) and the numbers ramp upwards according to capacity. For example, HP’s tc7100 is a tower configuration and the rc7100 is a rack-mountable version of the tc7100.
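The scheme described above is regular enough to express programmatically. The following is an illustrative sketch only, not an HP tool: it assumes a hypothetical helper that splits a model name into the two-letter form-factor prefix and the numeric capacity tier, based solely on the convention the article describes.

```python
# Illustrative sketch of HP's naming convention as described in the
# article: a "tc" prefix means tower, "rc" means rack optimized, and
# the trailing digits rank capacity. decode_hp_name is a hypothetical
# helper, not part of any real HP software.

def decode_hp_name(name: str) -> dict:
    """Split a model name like 'tc7100' into form factor and capacity tier."""
    prefixes = {"tc": "tower", "rc": "rack optimized"}
    prefix, digits = name[:2], name[2:]
    return {
        "model": name,
        "form_factor": prefixes.get(prefix, "unknown"),
        "capacity_tier": int(digits) if digits.isdigit() else None,
    }

print(decode_hp_name("tc7100"))
print(decode_hp_name("rc7100"))
```

Under this convention, a customer comparing the tc7100 and rc7100 can see at a glance that the two machines sit in the same capacity tier and differ only in form factor.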

It makes sense, said Alan Freedman, an industry analyst with IDC Canada in Toronto. In the past it was hard to know which HP server to buy.

“Before, it was all over the board, a real jumble in the way they named things. Now there’s some consistency and customers can see where they’re slotted for their applications, their workloads and where the next option is.”

But Freedman adds further customer demands to the lists the vendors have developed.

“I think (servers) will be smaller and smaller,” he said. “I think what people are trying to do is reduce the environmental footprint and concerns. They are miniaturizing in terms of blades and they are reducing the overall size of the data centre by consolidating a number of smaller, mid-range servers onto a larger, more cost-effective high-end server.”

And, much to the chagrin of some vendors, he said Itanium has a little way to go yet before it completely proves itself as the right choice for customers.

“The next iteration has to come out where there are true performance gains over IA-32 and there has to be more independent software vendor (ISV) support, where they are porting 64-bit applications to the Itanium chip or developing new applications specifically for Itanium,” he said. “The more applications there are out there, the more implementations we will see.”

Meanwhile, Freedman said it isn’t accurate to compare Itanium to the less costly Celeron.

“Celeron is really in the low-end,” he said. “It’s the cost-conscious processor. In the mainstream, you will have Pentium III and Pentium 4 chips, and Xeon chips.”

Freedman sees Itanium making a more prominent move into the marketplace later in the year.

“It is going to gain momentum as more applications are available and more people are familiar with it,” he said. “People need to improve their comfort level in putting high-end applications and mission-critical applications on Intel servers.”

Larry Sherman, director of technical marketing at Maynard, Mass.-based fault-tolerant server company Stratus Technologies Inc., said he sees a real move toward network-based, real-time applications, a move driven by the Internet and intranets.

Sherman agrees that customers are looking for simplicity, not only for efficiency, but also because of a more sombre recent trend of cutting staff and costs.

“They need solutions that are simple to install and simple to operate,” he said.