
Next-generation distributed computing

If your large enterprise is like most, only about 10 per cent of your available CPU cycles actually get used – a fact that is driving enterprises to deploy the next generation of cost-reducing distributed computing architectures.

Distributed computing is coming, and its benefits will be significant. Grid computing, utility computing, P-to-P (peer-to-peer) architectures … toss in Web services, and you have a recipe for radically transforming the enterprise IT infrastructure, lowering costs, and making distributed and failure-tolerant systems more viable.

Some draw analogies between today’s data centre and the days when every company had its own power generator manned by workers shoveling coal – not a scalable solution (nor for that matter a highly available one) until the modern utility emerged. “It’s not like one day we’ll wake up and computing flows as easily as the water,” says Bill Martorelli, vice-president of Enterprise Services at Hurwitz Group Inc. of Framingham, Mass., adding that current initiatives “are really the culmination of years and years of distributed computing concepts.”

And these concepts are becoming reality. Major players, including IBM Corp., Sun Microsystems Inc., and Hewlett-Packard Co., are introducing utility computing offerings featuring dynamic capacity scaling, aligning their internal grid and Web services initiatives, and supporting open-source standards efforts for linking computing grids cross-enterprise and globally. Most importantly, enterprises are starting to deploy computing and data grids on a cluster- and campus-wide scale.

The idea is to take advantage of idle capacity, whether across the data centre, around the world, or at different times of the day. Grid computing has primarily focused on solving the systems-management challenges of distributed computing, such as security, authentication, and policy management across heterogeneous platforms and organizations. Utility computing has focused on developing provisioning technology and the business model for on-demand, pay-as-you-go infrastructure. And p-to-p efforts have drawn attention to the potential for leveraging idle resources to handle huge computing tasks.

Now these worlds are converging because of customer demand. As companies push for better resource utilization, lower management costs, and more distributed, failure-tolerant infrastructures, a combination of grid and utility computing is a no-brainer. “Everything is still a big soup of new technologies … it’s like a puzzle,” says Wolfgang Gentzsch, director of grid computing at Sun Microsystems. “But everyone wants to have it.”

Laying the grid

Grid technology, originally developed to deliver supercomputing power to large scientific projects, comes in two flavours: compute grids (using distributed CPUs) and data grids (using distributed data sets). Although grids promise dramatic ROI from shifting enterprise computing workloads around the globe or to external partners, most enterprises are starting with smaller, local deployments known as cluster (single data centre) or campus grids – in line with the trends toward clustering and virtualizing resources in the data centre, and using shared file systems and the cheapest available hardware and software.
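As a rough illustration of the compute-grid idea, the sketch below farms independent work units out to a pool of local workers and gathers the results. The task and function names are invented for illustration; this is not Avaki's, Globus's, or any other vendor's actual software, and a real grid layers authentication, cross-platform scheduling, policy enforcement, and failure recovery on top of this basic scatter-gather pattern.

```python
# Minimal sketch of the compute-grid idea: hand independent work units to
# whatever workers are free, then gather the results. All names here are
# illustrative, not any vendor's API.
from concurrent.futures import ProcessPoolExecutor


def work_unit(chunk):
    """Stand-in for a CPU-bound job step, e.g. scoring one slice of a data set."""
    return sum(x * x for x in chunk)


def run_on_pool(chunks, max_workers=4):
    """Distribute independent chunks across a worker pool and collect results.

    A real grid scheduler adds what a local pool cannot: authentication,
    job placement across heterogeneous machines, policy management and
    failure recovery.
    """
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(work_unit, chunks))


if __name__ == "__main__":
    data = [list(range(i, i + 1000)) for i in range(0, 10000, 1000)]
    print(run_on_pool(data))
```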

Gaithersburg, Md.-based Gene Logic Inc., for example, recently deployed a cluster computing grid using software from Cambridge, Mass.-based Avaki Corp. to help run processing-intensive DNA sequencing routines. The company wanted to retire an older Unix system with high maintenance costs, according to Joe Parlanti, Gene Logic’s manager of computing infrastructure support. But buying a new system meant “you either have to buy it big enough and grow into it or buy something [smaller] and be stuck with the size.”

The company had some smaller Unix machines it used infrequently for a different application. By linking them into a grid, it was able to handle the sequencing work with no scheduling conflicts and eliminate the older machine. “The grid’s so much quicker, about 15 per cent of the amount of time [needed] to run the total sequence,” Parlanti says. “It improves our efficiency, and lowers our cost basis for support, and I can add and delete systems from the grid at will … it’s very versatile.”

Multi-site and inter-enterprise deployments await the development of software to handle the systems administration issues associated with heterogeneous-environment grids. Authentication and naming, job scheduling and policy management, fault tolerance and failure recovery, site autonomy, and QoS (quality of service) are among key issues being addressed by startups including Avaki Corp. and Entropia Inc. and larger players such as Sun and IBM.

A major step forward was the recent introduction, with support from IBM, Microsoft Corp., and others, of a standards proposal called OGSA (Open Grid Services Architecture), building on an existing open-source grid middleware solution from the academic Globus Project (www.globus.org) plus Web services standards including XML, WSDL (Web Services Description Language), and SOAP (Simple Object Access Protocol). “We’ll all be better off if there’s a common set of protocols that allow these services to interoperate,” says Dr. Carl Kesselman, a co-leader of the Globus Project.
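To make the Web-services underpinnings concrete, the sketch below builds a SOAP-style envelope for a hypothetical grid job-submission call. The service name, namespace, and fields are invented for illustration and are not drawn from the OGSA or Globus specifications; the point is only to show the kind of XML messaging that WSDL-described services exchange.

```python
# Purely illustrative sketch of a SOAP-style job submission to a grid service.
# The namespace and element names below are hypothetical, not part of OGSA.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
GRID_NS = "urn:example:grid-service"  # hypothetical service namespace


def build_submit_request(executable, cpu_count):
    """Wrap a job request in a SOAP envelope, the way a WSDL-described
    service would expect to receive it."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    job = ET.SubElement(body, f"{{{GRID_NS}}}SubmitJob")
    ET.SubElement(job, f"{{{GRID_NS}}}Executable").text = executable
    ET.SubElement(job, f"{{{GRID_NS}}}CpuCount").text = str(cpu_count)
    return ET.tostring(envelope, encoding="unicode")


if __name__ == "__main__":
    print(build_submit_request("sequence_run.sh", 16))
```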

Peer-to-peer architectures are also converging with the grid model. Although p-to-p has failed to gain enterprise momentum due to the greater perceived manageability and security of tightly coupled server-oriented systems, some p-to-p players, such as Groove Networks Inc., are focusing on collaborative computing and communication (more akin to a data grid). Others, such as Sun with its JXTA project, are trying to create building blocks to solve some of the classic grid problems, such as authentication. “P-to-P is going to be absorbed or amalgamated into other activities like grid computing,” speculates Andrew Grimshaw, Avaki’s CTO and founder.

Computing on tap

As impending grid standards bring cycles-on-tap closer to reality, IT departments are getting a taste of utility computing – a new model of a la carte options where you pay only for what you eat, whether buying from an external vendor or a managed services provider. The idea is to extend the cost benefits of outsourcing into the data centre, while still providing centralized control and high availability.

The utility model can be as simple as buying a 64-CPU server but paying for only 32 initially, or paying based on average utilization throughout the day and scaling up or down dynamically as needed. Or it can be as complex as dynamically configuring and provisioning server farms, storage systems, and other resources on demand, outsourcing peak computing demand to external data centres, or, in the extreme, outsourcing your whole IT infrastructure to another company (as American Express Co. recently announced it would do in a $4 billion deal with IBM). Several vendors have announced utility-like, dynamically reconfigurable infrastructure initiatives – for example, HP’s Utility Data Center, Sun’s N1 project, Compaq Computer Corp.’s Adaptive Infrastructure, and IBM’s eLiza.
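A back-of-the-envelope comparison shows why metered capacity appeals to buyers. The figures below are invented purely for illustration – a hypothetical per-CPU-hour rate and a made-up daily utilization curve – and are not any vendor's actual pricing; they simply contrast paying for installed capacity with paying for capacity actually used.

```python
# Back-of-the-envelope sketch of utility-style billing: pay only for the
# capacity actually used, sampled over the day. Rate and samples are invented.

HOURLY_RATE_PER_CPU = 0.50   # hypothetical dollars per CPU-hour
INSTALLED_CPUS = 64

# Hourly snapshots of CPUs actually busy (24 samples, one per hour).
hourly_usage = [8, 6, 6, 5, 5, 7, 12, 20, 34, 40, 44, 46,
                45, 42, 40, 38, 30, 22, 16, 12, 10, 9, 8, 8]

flat_cost = INSTALLED_CPUS * 24 * HOURLY_RATE_PER_CPU      # own the whole box
metered_cost = sum(hourly_usage) * HOURLY_RATE_PER_CPU     # pay per use

print(f"own-the-box cost for the day: ${flat_cost:.2f}")
print(f"pay-per-use cost for the day: ${metered_cost:.2f}")
print(f"average utilization: {sum(hourly_usage) / (INSTALLED_CPUS * 24):.0%}")
```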

The goal of the new model is “driving asset utilization by taking static islands of computing scattered throughout the enterprise and making them dynamic,” explains Pat Tickle, vice-president of product management at Terraspring, one of several startups including eJasent and Jareva that are developing software to orchestrate the dynamic management, provisioning, and scaling of data centre infrastructure. Such software enables IT managers to build a library of resource configurations, map and schedule applications onto servers, and track and bill cycles consumed to the appropriate user.
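The sketch below illustrates that orchestration idea in miniature: a library of named server configurations, a routine that maps an application onto a free server satisfying its requirements, and a usage log for charge-back. The data structures are hypothetical and greatly simplified; they are not the actual software from Terraspring, eJasent, or Jareva.

```python
# Minimal sketch of the provisioning idea: named resource configurations,
# placement of applications onto matching free servers, and a usage log
# for charge-back. Hypothetical structures, not any vendor's product.
from dataclasses import dataclass, field


@dataclass
class ServerConfig:
    name: str
    cpus: int
    memory_gb: int


@dataclass
class DataCentre:
    free_servers: list
    usage_log: list = field(default_factory=list)

    def provision(self, app: str, needed: ServerConfig) -> ServerConfig:
        """Assign the first free server that satisfies the requested configuration."""
        for server in self.free_servers:
            if server.cpus >= needed.cpus and server.memory_gb >= needed.memory_gb:
                self.free_servers.remove(server)
                self.usage_log.append((app, server.name))  # record for billing
                return server
        raise RuntimeError(f"no capacity available for {app}")


if __name__ == "__main__":
    dc = DataCentre(free_servers=[ServerConfig("web-01", 4, 8),
                                  ServerConfig("db-01", 16, 64)])
    print(dc.provision("dna-sequencer", ServerConfig("request", 8, 32)).name)
    print(dc.usage_log)
```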

In theory, such offerings will enable IT departments to manage internal resources more efficiently, and outsourcers to build true low-cost computing utilities.

The utility model would also be a win for business continuity, making it cheaper and easier to purchase standby capacity, and to replicate and re-allocate capacity in case of a failure. But before utility computing can happen on a large scale, cultural issues, in addition to security and performance concerns, have to be resolved. “IT organizations are used to running their own infrastructure, they like to control the machines,” points out Michael R. Nelson, IBM’s director of Internet technology and strategy. But, he adds, “we know we can’t continue on the present trend … companies are overwhelmed by the cost of managing their systems.”

The Bottom Line

Grid, peer-to-peer, and utility computing

Executive Summary: The convergence of grid, peer-to-peer, and utility computing efforts – along with Web services – promises to revolutionize the data centre by lowering costs, eliminating server underutilization, reducing management complexity, and providing additional business continuity benefits.

Test Center Perspective: Enterprises are just starting to deploy grid and utility computing products and services. Widespread usage of external computing utilities, and of multi-site and inter-enterprise distributed computing, is likely to require the further development of platform-independent standards.
