Despite its shining promise for establishing elegant, integrated and cost-effective distributed environments, grid computing has yet to win over skeptical enterprise IT executives and reach mainstream adoption.
Already established as a building block that supports high-performance computing environments in universities and scientific research institutes, grids are now the focus of industry giants. Hewlett-Packard Co., IBM Corp., Oracle Corp., and Sun Microsystems Inc. are betting they can transplant the technology into the hearts of their largest enterprise customers. But concerns over immature technology, a sketchy financial track record, and even the technology's unclear definition are keeping many corporate users from taking the first steps toward deployment.
“The technical people are somewhat confused (by grids) because vendors are all using a different language despite the fact they are just talking about the same concept — the next wave of distributed computing,” says Dan Kusnetzky, vice-president of system software at IDC.
Larry Sikon, CIO at investment bank Thomas Weisel Partners LLC, personifies the cautious view of grids held by many corporate users. “I’m content with the applications I have in place at the moment,” Sikon says. “But (grids) are a neat concept as far as being able to tap spare CPUs, and when I have an application that might apply, such as number crunching, I would consider it.”
In much the same way as an electrical grid does for electricity users, grid computing promises to more efficiently link and provision resources across enterprise platforms. But it may take several years before its value is appreciated, says Mary Johnston Turner, vice-president and practice director at research firm Summit Strategies. “The jury is still out when it comes to making broad architectural commitments to it,” Turner says. “Grid purchasing decisions will be driven by CIOs and architects looking to save money, improve their service levels, and increase IT flexibility.”
The top suppliers of grid technologies, predictably, believe grids are the best way forward, whether as a short-term strategy such as low-level integration of departmental servers or as a step toward creating a full-blown utility computing environment using platforms such as HP’s Adaptive Enterprise, IBM’s On Demand, Oracle’s flagship product Oracle 10G, and Sun’s N1.
IBM’s companywide On Demand program is particularly grid-centric, with the company using its WebSphere and Tivoli products to conduct policy-based management. Sun sees a grid fundamentally as a set of services providing policy-based management via its N1 platform. HP will use its OpenView platform as the basis of its grid strategy, integrating its Talking Blocks Web services management technology into that platform. Oracle will build its grid hopes around its Oracle 10G database to offer infrastructure provisioning and workload management.
One hurdle must be overcome before corporate users will move to grid technology: the lack of mature systems management and security products that can be smoothly melded with existing enterprise infrastructure. Executives with leading grid suppliers know it is critical to address this need before meaningful grid projects can go forward.
However, a flood of low-cost but fairly robust Intel-based hardware and Linux-based software platforms has been arriving during the past year, and these can serve as good entry points for IT shops looking to do their first grid implementations, analysts say.
Early adopters are finding grids useful for exploiting the idle resources of existing servers, thereby saving money on new purchases and driving greater productivity.
“The economic aspects of grid computing are just beginning to be studied, like the way you might price some compute capacity available to you, or the impact of network speed and latency and the value of that compute capacity,” says Shahin Kahn, vice-president of the high performance and technical computing business unit at Sun.
IBM also believes tapping idle servers alone is justification for establishing a grid. “Most studies tell you that Windows servers are typically utilized between five per cent and 10 per cent, with Unix and Linux servers averaging between 15 per cent and 20 per cent,” says Dan Powers, vice-president of IBM’s grid computing strategy. “But look at the mainframe. It has a much higher utilization rate because it has a superior operating system that can virtualize every application below it. This needs to happen on these lower-end platforms.”
IBM continues to expand the technical aspects of its grid strategy through a number of its business partners. One such partner, Avaki, has come up with the “data grid,” in which a single service can fetch data from multiple sources and deliver it customized to a particular developer or user.
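The “data grid” idea boils down to a single service fronting several back-end sources and merging their answers for the caller. As a rough illustration only (the function, the toy in-memory “sources,” and the sample records below are invented for this sketch; Avaki’s actual product worked differently in detail), the core of such a service might look like:

```python
# Illustrative sketch of a "data grid": one service queries several data
# sources for the same key and merges whatever each one knows into a
# single customized record for the caller.

def data_grid_fetch(sources, key):
    """Query every source for `key` and merge the fields each returns."""
    merged = {}
    for source in sources:
        record = source.get(key)
        if record:
            merged.update(record)
    return merged

# Two toy "sources" standing in for, say, a customer database and a CRM feed.
customer_db = {"cust-42": {"name": "Acme Corp", "region": "West"}}
crm_feed = {"cust-42": {"last_order": "2003-09-14"}}

print(data_grid_fetch([customer_db, crm_feed], "cust-42"))
# {'name': 'Acme Corp', 'region': 'West', 'last_order': '2003-09-14'}
```

The caller sees one combined record and never needs to know how many sources were consulted or where each field came from.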
Powers believes there are three areas in which corporate users can first dip their toes in the grid computing waters:
• using grids to schedule jobs within the network more efficiently;
• orchestrated provisioning of important server-based tasks; and
• information virtualization, through which grids make all the servers flung across an enterprise appear to be one big server.
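The first of those entry points, job scheduling, amounts to routing work toward spare capacity, picking up on the low utilization figures Powers cites. As a rough illustration only (the server names, utilization figures, and `pick_least_loaded` helper are invented for this sketch and are not any vendor’s API), a scheduler’s core decision can be as simple as:

```python
# Hypothetical sketch: dispatch a job to whichever server in a small grid
# currently has the most idle capacity.

from dataclasses import dataclass

@dataclass
class Server:
    name: str
    utilization: float  # fraction of capacity in use, 0.0 - 1.0

def pick_least_loaded(servers):
    """Return the server with the most spare capacity."""
    return min(servers, key=lambda s: s.utilization)

grid = [
    Server("win-dept-01", 0.08),   # a typical Windows box: 5-10% utilized
    Server("unix-dept-02", 0.18),  # a typical Unix/Linux box: 15-20% utilized
    Server("unix-dept-03", 0.55),
]

target = pick_least_loaded(grid)
print(f"dispatch job to {target.name}")  # dispatch job to win-dept-01
```

Real grid schedulers weigh far more than a single utilization number (queue depth, data locality, policy), but the payoff is the same: idle cycles get used before new hardware gets bought.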
A more general area that has piqued the interest of corporate users is integration. By weaving together an assortment of grid technologies with Web services, some are finding it to be an efficient and cost-effective way of implementing a grid architecture. It is also a way of addressing the thorny problem of integrating geographically disparate data stores, which would better enable an on-demand e-business.
“If the grid vendors do their jobs properly, users should be able to take applications running in silos and integrate them on grids,” says Dana Gardner, a senior analyst at The Yankee Group. “You should not have to do a major reconfiguration or recompile of your apps.”