
HPC firms need energy efficiency, too

High-performance computing centres are the Indy cars of the technology industry: built for speed and little else. Let the other guys build the Corollas and Civics for economy.

Today, however, even some HPC centres are looking for ways to control energy use.

“Our appetite for additional compute cycles is pretty much insatiable. But we do look at power and cooling,” says Tommy Minyard, assistant director of advanced computing systems at the Texas Advanced Computing Center (TACC) at the University of Texas at Austin.

TACC is building a Sun supercomputer expected to deliver 400 teraflops of processing power from 13,000 Advanced Micro Devices quad-core processors. Power and cooling issues remain secondary to performance, but are becoming more important as faster processors and smaller-form-factor blade servers increase heating density, Minyard says.
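Some back-of-the-envelope arithmetic shows what those figures imply per chip. A minimal sketch in Python, using only the article's numbers; the even split of peak flops across chips is an illustrative assumption, not a published spec:

    # Rough per-chip throughput implied by the article's figures.
    # Assumes peak performance is spread evenly across all chips;
    # real machines vary with clock speed and instruction mix.
    total_tflops = 400        # expected system peak, from the article
    chips = 13_000            # AMD quad-core processors, from the article

    gflops_per_chip = total_tflops * 1_000 / chips
    gflops_per_core = gflops_per_chip / 4   # four cores per chip

    print(f"{gflops_per_chip:.1f} GFLOPS per chip")   # ~30.8
    print(f"{gflops_per_core:.1f} GFLOPS per core")   # ~7.7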

“The standard, old, conventional under-floor cooling with central air-conditioning units just isn’t adequate for the heating densities we’re seeing down the road here,” he says.

The Sun servers TACC is deploying will consume about 30 kilowatts of power per rack, vs. the 12 kW consumption of a typical server rack. “That’s extremely high. You’ve got all that heat in a small space, and you have to cool it,” Minyard says.
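For scale, nearly every watt a rack draws ends up as heat the room must remove, so the cooling load tracks the electrical load almost one for one. A minimal sketch of what the article's figures mean for cooling; the kilowatt numbers come from the article, the unit conversions are standard, and the one-for-one power-to-heat assumption is an approximation:

    # Cooling load implied by rack power draw. Assumes essentially
    # all electrical power is dissipated as heat in the rack.
    BTU_PER_KW = 3412.14      # 1 kW of heat = 3,412.14 BTU/hr
    TON_BTU = 12_000          # 1 ton of cooling = 12,000 BTU/hr

    for label, kw in [("typical rack", 12), ("TACC rack", 30)]:
        btu = kw * BTU_PER_KW
        print(f"{label}: {kw} kW -> {btu:,.0f} BTU/hr "
              f"(~{btu / TON_BTU:.1f} tons of cooling)")
    # typical rack: ~41,000 BTU/hr (~3.4 tons)
    # TACC rack:   ~102,000 BTU/hr (~8.5 tons)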

TACC also is deploying in-row cooling units to control server temperatures better, but Minyard is intrigued by other possibilities, including spraying a fine mist of droplets right in front of the chip. “If you can move the cooling directly to the chip, that is much more efficient than trying to cool the air,” he says.

No doubt the lessons TACC and other HPC centres learn will one day come in handy for conventional data centres as they continue to grow in size and complexity, Minyard says.
