IT World Canada

N+I – IBM guru touts grid computing

LAS VEGAS – Grid computing can deliver the equivalent of an operating system that makes the Internet into a massive virtual computing machine, letting users harness the power of many host systems without worrying about the complexity of the technology, an IBM Corp. executive said here Wednesday.

The technology, currently used mostly by groups of research institutions, can also allow enterprises to make better use of their total computing resources and to improve the quality and reliability of IT services they deliver to their users, said Irving Wladawsky-Berger, vice-president of technology and strategy in IBM’s server group. He presented a keynote on the subject mid-day Wednesday and described IBM’s Web services vision in an interview earlier in the day.

Much as computing migrated from mainframes to PCs, grids will evolve into systems in which users don’t have to know about the complex decisions and processes involved, he said.

“You leave it up to the grid to determine which system has the necessary capacity to get the job done,” Wladawsky-Berger said.
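For illustration, a minimal Python sketch of the idea behind that quote follows: the grid’s scheduler, not the user, picks whichever host has enough spare capacity for a job. The Host class, its fields and the pick_host() helper are hypothetical stand-ins, not any real grid API.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    total_cpus: int
    busy_cpus: int

    @property
    def free_cpus(self) -> int:
        # Spare capacity this host could offer a new job
        return self.total_cpus - self.busy_cpus

def pick_host(hosts, cpus_needed):
    """Return the host with the most spare capacity that can fit the job, or None."""
    candidates = [h for h in hosts if h.free_cpus >= cpus_needed]
    return max(candidates, key=lambda h: h.free_cpus) if candidates else None

grid = [Host("mainframe-1", 64, 60), Host("cluster-a", 128, 40), Host("cluster-b", 32, 10)]
chosen = pick_host(grid, cpus_needed=16)
print("Job dispatched to:", chosen.name if chosen else "no host available")

The user simply submits the job; which machine actually runs it is a detail the scheduler resolves.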

Enterprises will embrace grid computing slowly, much as they did the Internet in the mid-1990s, probably beginning by implementing “intragrids” among their own systems, he said. Their next step may be to bring partners into the grid, and later they may take advantage of grids provided as network services by third parties. Eventually, when integrated with Web services standards, grid computing will enable e-business on demand, he said.

“It’s going to take a little while for businesses to be confident that grid computing works for them,” Wladawsky-Berger said.

The best place for commercial enterprises to begin using grid computing is with number-crunching applications similar to the work research institutions have been doing on grids, he said. An example might be financial risk analysis, he added. Protocols already exist to share these kinds of jobs across multiple systems. Additional open protocols will be needed for more specialized applications, Wladawsky-Berger said.
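As a rough illustration of the kind of divisible, number-crunching workload he describes, the Python sketch below splits a toy Monte Carlo loss estimate into independent chunks and combines the partial results. A local process pool stands in for remote grid hosts; the loss model and all names are illustrative assumptions, not an actual grid protocol or a real risk calculation.

import random
from concurrent.futures import ProcessPoolExecutor

def simulate_losses(n_trials, seed):
    """One chunk of work: total simulated loss over n_trials (toy model)."""
    rng = random.Random(seed)
    return sum(max(0.0, rng.gauss(0.0, 1.0)) for _ in range(n_trials))

if __name__ == "__main__":
    chunks = 4                     # pretend each chunk runs on a different grid host
    trials_per_chunk = 100_000
    with ProcessPoolExecutor(max_workers=chunks) as pool:
        partials = pool.map(simulate_losses, [trials_per_chunk] * chunks, range(chunks))
        total = sum(partials)
    print(f"Estimated expected loss per trial: {total / (chunks * trials_per_chunk):.4f}")

Because each chunk is independent, the same pattern scales from one machine to many: only the layer that farms out the chunks needs to know where the hosts are.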

Grid computing can improve IT services in several ways, he said. First, using computing resources throughout a grid flexibly requires systems that constantly check the “heartbeat” and workload of each host in order to allocate tasks, much like the monitoring systems mainframes have had for years, he said. These monitoring systems, in turn, can bring enterprises a new level of ongoing system reliability, or quality of service, Wladawsky-Berger said. In addition, tapping into resources on many different hosts gives companies a way to replicate all their resources in multiple locations for quick recovery from failures, a function that is increasingly feasible and important, he added. As IT resources such as storage decline in cost, more companies will replicate their data.

“The cost of replicating is not that much, and the cost of an outage is getting higher,” Wladawsky-Berger said.
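A small Python sketch of the “heartbeat” bookkeeping described above: hosts check in periodically, and only hosts with a recent heartbeat are offered new work. The timeout value and function names are illustrative assumptions, not any vendor’s design.

import time

HEARTBEAT_TIMEOUT = 15.0   # seconds without a heartbeat before a host is treated as down
last_heartbeat = {}        # host name -> time of its last check-in

def record_heartbeat(host):
    """Called each time a host reports in."""
    last_heartbeat[host] = time.monotonic()

def schedulable_hosts():
    """Hosts eligible for new work: heartbeat seen within the timeout window."""
    now = time.monotonic()
    return [h for h, t in last_heartbeat.items() if now - t <= HEARTBEAT_TIMEOUT]

record_heartbeat("cluster-a")
record_heartbeat("cluster-b")
last_heartbeat["cluster-b"] -= 60        # simulate a host that stopped reporting
print("Schedulable hosts:", schedulable_hosts())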

The Globus Project, an international grid research effort, is now working on wrapping grid protocols around Web services protocols in order to make the technology serve enterprises’ emerging demands, he said. The Open Grid Services Architecture (OGSA), a proposed evolution of the Globus Toolkit, is intended to bring the two together and will be developed through this year and into 2003 with IBM’s help.

Companies can already implement some grid functions with IBM middleware. IBM offers grid capabilities in middleware products including WebSphere, as well as in its DB2 database software. Those capabilities are not based on open standards today, but the company will move them over to open standards in the next year or two, he said.

More information on Globus is available at http://www.globus.org.
