When will grid resource allocation grow up?

While grid computing has received a lot of press over the past couple of years, organizations deploying grids still find it difficult to get them to dynamically and automatically allocate CPU power to applications.

This scenario exists simply because the technology is still too immature, according to industry insiders.

Jonathan Eunice, president and principal analyst at research firm Illuminata Inc., said grid technology will advance significantly over the next five years, but for now users will have to make do with current allocation tools.

Additionally, there is no guidebook on how to design a grid that automatically and dynamically allocates CPU power to the most crucial processes or applications, he said.

“There aren’t a lot of best practices because the user experience running these types of jobs is limited, especially in a commercial context,” Eunice explained. “Typically, the early instances have been about building a structure to run a job faster.”

Speeding up portfolio analysis is one example, he said. Users can already scavenge spare compute cycles available on their computers to accelerate portfolio analysis. But the next level, where numerous applications compete for those compute cycles, has not been widely attempted.

Aisling McRunnels, senior director of utility marketing at Sun Microsystems in Santa Clara, Calif., agreed. She said organizations have not reached the level of sophistication with their grids where they are running many applications.

She said that right now, apart from educational institutions, financial institutions are the big users of grid, and they are generally running only simple applications and calculations.

Research firm Gartner Inc. distinguishes between a traditional grid and a “real-time infrastructure,” according to Carl Claunch, research vice-president at Gartner. A traditional grid runs only a few large applications, sharing numerous machines with multiple owners. A real-time infrastructure, by contrast, is owned by a single organization, runs numerous applications simultaneously, and allocates CPU power dynamically and automatically.

The University of Calgary’s grid is a traditional one, with all the headaches that come with it: security problems and conflicts in priority. The university’s grid has 15 sites across the country, about 2,000 users, and 2,000 to 3,000 programs and possibly as many databases. In Calgary, the project runs a 28,000-processor machine called the GeneMatcher 2, which analyzes genome sequence data.

Dr. Christoph Sensen, professor at the University of Calgary in the department of biochemistry and molecular biology in the faculty of medicine, and principal investigator for the genome bioinformatics platform, said Sun’s Grid Engine allows them to set up rules about how CPU power is prioritized. He said Sun’s Grid Engine is great for configuring a grid that is “in your own basement,” but there are limitations when you want to extend the infrastructure to a different location.
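The article does not show Grid Engine’s actual configuration syntax, but the idea of rule-based CPU prioritization can be sketched in a few lines of Python. Everything here is hypothetical and for illustration only: a fixed pool of CPU slots is handed out to jobs in priority order, so the most crucial analysis gets first pick.

```python
def allocate_slots(jobs, total_slots):
    """Hand out CPU slots to jobs in strict priority order.

    jobs: list of (name, priority, slots_requested); higher priority wins.
    Returns a dict mapping each job name to the slots it was granted.
    """
    granted = {}
    remaining = total_slots
    for name, priority, requested in sorted(jobs, key=lambda j: -j[1]):
        take = min(requested, remaining)   # never grant more than is left
        granted[name] = take
        remaining -= take
    return granted

# Hypothetical workload on a 28,000-processor pool: the genome analysis
# is marked crucial, so lower-priority jobs get whatever is left over.
jobs = [
    ("genome_batch", 10, 20000),
    ("student_sim", 5, 10000),
    ("backfill", 1, 5000),
]
print(allocate_slots(jobs, 28000))
```

A real scheduler like Grid Engine layers far more on top of this (queues, fair-share policies, preemption), but the core trade-off is the same: someone has to write the rules that decide who gets starved when demand exceeds supply.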

“We want to make it seamless,” Dr. Sensen explained. “If someone sits in Fredericton and they want to do an analysis, we don’t want them to have to ship the data to Vancouver. We want them to be able to spawn an analysis as if the computer was in their own building.”

It’s difficult to run a grid spanning multiple domains because of authentication difficulties, Dr. Sensen said. Right now, a user in Fredericton would need access codes for the domains in Vancouver, would have to log on, send the data there and then run the job. Because no tools are available, the University of Calgary is building the functionality itself.

With a real-time infrastructure, Claunch said, there are multiple ways to move the work around, such as partitioning tools like VMware, load-balancing tools, distributed job schedulers and workload managers. Organizations haven’t accomplished real-time infrastructures yet, but they are getting closer, he said.
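The distributed-job-scheduler idea Claunch mentions can be illustrated with a minimal sketch: each incoming job is dispatched to whichever node currently has the most spare CPU capacity. The node names and numbers below are invented for the example; a production workload manager would also handle queuing, retries and preemption.

```python
def dispatch(jobs, capacity):
    """Assign each job (name, cpus_needed) to the least-loaded node.

    capacity: dict of node name -> total CPUs on that node.
    Returns a dict mapping job name to the chosen node, or None if no
    node currently has room for it.
    """
    load = {node: 0 for node in capacity}
    placement = {}
    for name, cpus in jobs:
        # Pick the node with the most spare capacity right now.
        node = max(capacity, key=lambda n: capacity[n] - load[n])
        if capacity[node] - load[node] < cpus:
            placement[name] = None   # nowhere to run it at the moment
        else:
            placement[name] = node
            load[node] += cpus
    return placement

capacity = {"calgary": 16, "vancouver": 8}
jobs = [("analysis", 6), ("render", 6), ("batch", 10)]
print(dispatch(jobs, capacity))
```

Even this toy version shows why the “real-time” part is hard: the third job cannot run anywhere until something finishes, so a real infrastructure needs policies for waiting, preempting or migrating work, not just placement.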

The first steps are to develop better IT processes, including building in more repeatability, more automation and more standardization. Although it might take an organization 10 years to accomplish this, he said it’s not too early to start now.

Illuminata’s Eunice agreed that there are skills users can leverage to build their grids or “real-time infrastructures.” Unlike Gartner, Illuminata refers to both as grid computing. “We have a lot of experience, for example, with workload management,” he said.

Over the past five to eight years, users have done many server consolidation projects, which come close to grid, he added: users take one big server and allocate, for example, 12 per cent of it to one process, 39 per cent to another, 15 per cent to a third and 34 per cent to a fourth.

Server consolidation tools include Aurema’s ARMtech and Hewlett-Packard Co.’s Process Resource Manager (PRM).
