20-degree data centres becoming a thing of the past, APC says

Cooling a data centre to 20 degrees Celsius may be going out of style, APC power and cooling expert Jim Simonelli says.

Servers, storage and networking gear are often certified to run in temperatures exceeding 37 degrees, and with that in mind many IT pros are becoming less stringent in setting temperature limits.

Servers and other equipment “can run much hotter than people allow,” Simonelli, the chief technical officer at the Schneider Electric-owned APC, said in a recent interview. “Many big data centre operators are experienced with running data centres at close to 32 degrees [and with more humidity than is typically allowed]. That’s a big difference from 20.”

Simonelli’s point isn’t exactly new. Google, which runs some of the world’s largest data centres, published research two years ago that found temperatures exceeding 37 degrees may not harm disk drives.


But new economic pressures are pushing data centre professionals to recognize the benefits of turning up the thermostat, Simonelli says. They are starting to realize they could save up to 50 per cent of their energy budget just by raising the set point from 20 to 26 degrees, he says.
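
As a rough back-of-the-envelope illustration (not a figure from APC), the sketch below assumes each degree of set-point increase trims chiller energy by a fixed percentage and compounds that over the 20-to-26-degree change; the 8-per-cent-per-degree factor is an assumption chosen purely for illustration.

```python
# Rough back-of-the-envelope sketch of cooling savings from a higher set point.
# The per-degree factor is an assumption for illustration only; real savings
# depend on climate, chiller design and whether economizers are available.

def cooling_savings(old_setpoint_c: float, new_setpoint_c: float,
                    saving_per_degree: float = 0.08) -> float:
    """Return the estimated fractional reduction in cooling energy."""
    degrees_raised = new_setpoint_c - old_setpoint_c
    # Compound the per-degree saving over each degree of increase.
    remaining = (1.0 - saving_per_degree) ** degrees_raised
    return 1.0 - remaining

if __name__ == "__main__":
    saved = cooling_savings(20, 26)   # the 20-to-26 degree change in the article
    print(f"Estimated cooling energy saved: {saved:.0%}")
    # If cooling is roughly half the total energy bill, as the article notes,
    # the overall budget impact is about half of that figure.
    print(f"Estimated share of total energy budget: {saved * 0.5:.0%}")
```

Facilities that can also lean on outside-air economizers at the higher set point may do considerably better than this simple compounding suggests.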

Going forward, “I think the words ‘precision cooling’ are going to take on a different meaning,” Simonelli says. “You’re going to see hotter data centres than you’ve ever seen before. You’re going to see more humid data centres than you’ve ever seen before.”

With technologies such as virtualization increasingly placing redundancy in the software layer, hardware resiliency is becoming less critical, which lowers the stakes when equipment runs hot.

Server virtualization also imposes new power and cooling challenges, however, because hypervisors allow each server to use a much greater share of its CPU capacity. Virtualization lets IT shops consolidate onto fewer servers, but the remaining machines end up doing more work and need more cold air delivered to a smaller physical area.
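
A quick, hypothetical calculation shows why consolidation concentrates the cooling problem; the server counts and wattages below are illustrative assumptions, not figures from the interview.

```python
# Hypothetical illustration of how virtualization concentrates heat load.
# All counts and wattages are assumptions chosen for round numbers.

racks_before, servers_per_rack = 10, 20
watts_per_light_server = 250          # lightly loaded physical servers
total_before = racks_before * servers_per_rack * watts_per_light_server

racks_after = 3                       # consolidated onto fewer, busier hosts
servers_per_rack_after = 20
watts_per_busy_server = 450           # higher CPU utilization draws more power
total_after = racks_after * servers_per_rack_after * watts_per_busy_server

print(f"Before: {total_before/racks_before/1000:.1f} kW per rack "
      f"({total_before/1000:.0f} kW total)")
print(f"After:  {total_after/racks_after/1000:.1f} kW per rack "
      f"({total_after/1000:.0f} kW total)")
# Total load drops, but each remaining rack now needs nearly twice the
# cold air delivered to it, which is Simonelli's point about density.
```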

If lots of servers are being shut off, the data centre has to be reconfigured so that cooling isn’t directed at empty space, Simonelli notes.

“The need to consider power and cooling alongside virtualization is becoming more and more important,” he says. “If you just virtualize, but don’t alter your infrastructure, you tend to be less efficient than you could be.”

Enterprises need monitoring tools to understand how power needs change as virtual servers move from one physical host to another. Before virtualization, a critical application might sit on a certain server in a certain rack, with two dedicated power feeds, Simonelli notes. With live migration tools, a VM could move from a server with fully redundant power and cooling supplies to a server with something less than that, so visibility into power and cooling is more important than ever.
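
To make the visibility point concrete, here is a minimal, hypothetical sketch of the kind of pre-migration check a monitoring tool might run; the data model and the redundant_feeds field are invented for illustration and do not reflect any particular APC or hypervisor API.

```python
# Minimal, hypothetical sketch of a pre-migration power/cooling check.
# The data model is invented for illustration; real tools would pull this
# information from DCIM and hypervisor APIs.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    redundant_feeds: bool       # two independent power paths to the rack?
    rack_cooling_kw: float      # cooling capacity available at the rack
    rack_load_kw: float         # current heat load in the rack

@dataclass
class VM:
    name: str
    critical: bool
    est_power_kw: float

def safe_to_migrate(vm: VM, target: Host) -> bool:
    """Flag moves that would strand a critical VM on a weaker rack."""
    if vm.critical and not target.redundant_feeds:
        return False
    return target.rack_load_kw + vm.est_power_kw <= target.rack_cooling_kw

if __name__ == "__main__":
    erp = VM("erp-db", critical=True, est_power_kw=0.4)
    spare = Host("esx-07", redundant_feeds=False,
                 rack_cooling_kw=8.0, rack_load_kw=6.5)
    print(safe_to_migrate(erp, spare))   # False: single feed, critical workload
```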

The ability to move virtual machines at will means “that technology is becoming disconnected from where you have appropriate power and cooling capacity,” Simonelli says.

To support the high densities introduced by virtualization and other technologies such as blade servers, cooling must be brought close to the rack and server, Simonelli says. As it stands, cooling is already the biggest energy hog in the data centre, with power wasted because of over-sized AC systems and temperatures set too low, he says.

While every data centre has different needs, Simonelli says enterprises can learn something from the giant SuperNAP co-location data centre in Las Vegas, a 407,000-square-foot building that relies heavily on APC equipment, such as NetShelter SX racks, thousands of rack-mounted power distribution units and uninterruptible power supplies.

While the site can support 60 megawatts, it’s being built out in 20-megawatt chunks. “That means they can maximize the energy consumption, the efficiency of the data centre as they scale. They’re not powering the 60-megawatt site right away,” Simonelli says.

One of the biggest mistakes is to over-size power capacity, in anticipation of future growth that may never come. Companies have to plan for where they think they will be a few years from now, but build out in smaller increments, he says.

“You have to have the floor space, and you have to have the capability of getting power from the utility,” Simonelli says. “But if you’re going to build out a one- or a five-megawatt data centre, and you know that your first year of deployment is only going to be 100 kilowatts, get the space and make sure you have power from the utility for five megawatts but just build it out in 250 or 500 kilowatt chunks.”
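
The phased build-out Simonelli describes amounts to simple arithmetic; the growth rate and chunk size in the sketch below are assumptions used only to show the shape of the plan.

```python
# Sketch of a phased build-out along the lines Simonelli describes.
# Utility capacity is secured up front; power and cooling modules are added
# in chunks only as projected load requires. Growth figures are assumptions.

utility_capacity_kw = 5000        # space and utility feed sized for 5 MW
chunk_kw = 500                    # deploy power/cooling in 500 kW modules
load_kw = 100                     # first-year IT load from the article
growth_per_year = 1.6             # assumed 60% annual growth, for illustration

installed_kw = 0
for year in range(1, 6):
    # Add modules only when projected load exceeds what is installed.
    while installed_kw < load_kw and installed_kw < utility_capacity_kw:
        installed_kw += chunk_kw
    print(f"Year {year}: load ~{load_kw:,.0f} kW, installed {installed_kw:,} kW")
    load_kw *= growth_per_year
```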
