Many heat output factors fly under the radar, analyst says

When evaluating power requirements during a data centre refresh, many organizations are too caught up with the big picture and miss the smaller factors that contribute to data centre heat output, according to an Info-Tech Research Group Ltd. report.

“It’s amazing how many little devices and things that are plugged in the data centre that don’t make it onto their cooling radar, but they’re really big heat sinks,” said Darin Stahl, a lead analyst with the London, Ont.-based consultancy. “It’s easy for server folks to look at the big issues, like the 15 racks full of servers they have, but you really have to measure everything you plug into the room.”

Companies that are refreshing their data centres, or in the middle of a cooling crisis, tend to focus immediately on the larger cooling issues rather than going back to the base power requirements. Devices such as routers often fall under the radar for data centre administrators, despite the significant impact they can have on heat levels, he added.


“Servers kind of turn themselves on and off a little bit, they accelerate and decelerate, so their BTU (British thermal unit) footprint and cooling needs actually change depending on the load,” Stahl said. “A router, on the other hand, just runs and runs, and a lot of folks will just forget about that rack of telecom. They’ll stick a big Cisco 6500 in the corner and pretty soon that thing’s generating a ton of BTUs.”

Stahl recommended that administrators calculate cooling requirements from the power side first: track down everything in the data centre and assign a value to each heat output factor. Other sources that fly under the radar include backup power units, power distribution units, outdoor-facing windows, data centre personnel, and lighting, whose wattage can be estimated by doubling the floor space in square feet.
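The power-side approach Stahl describes can be sketched as a simple inventory calculation. This is an illustrative example, not a sizing tool: the device names and wattages below are hypothetical. It uses the standard conversion of roughly 3.412 BTU/hr of heat per watt of electrical load, along with the rule of thumb above of about 2 W per square foot for lighting.

```python
# Illustrative heat-load inventory (hypothetical devices and wattages).
# Conversion: 1 watt of electrical load produces ~3.412 BTU/hr of heat.
WATTS_TO_BTU_HR = 3.412

# Everything plugged into the room, not just the server racks.
equipment_watts = {
    "server racks": 45000,
    "core router": 3500,    # runs flat out regardless of load
    "telecom rack": 2000,
    "backup power (UPS) losses": 4000,
    "power distribution units": 1500,
}

floor_space_sq_ft = 2000
lighting_watts = 2 * floor_space_sq_ft  # rule of thumb: double the floor space

total_watts = sum(equipment_watts.values()) + lighting_watts
total_btu_hr = total_watts * WATTS_TO_BTU_HR

print(f"Total load: {total_watts} W")
print(f"Cooling required: {total_btu_hr:.0f} BTU/hr")
```

Running this with the numbers above yields a 60,000 W load, or roughly 205,000 BTU/hr of cooling; the point of the exercise is that the router, UPS, PDUs and lights together add a non-trivial slice on top of the server racks.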

For companies that have strongly embraced server consolidation through virtualization, Stahl warned that the reduction in square footage and server footprint does not always match the reduction in cooling needs. One client, he said, has about 85 per cent of its environment virtualized, cutting its footprint to one third of what it was before the virtualization project.

“Interestingly, what they’re reporting to us is that they’re only cutting their BTUs by about 50 per cent,” Stahl said. “This is still substantial, but they’re not getting down to a third of their BTUs.”
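The gap Stahl's client is reporting follows from the arithmetic of consolidation: the hosts that remain run at higher utilization and therefore draw more power each, so heat output falls more slowly than floor footprint. The numbers below are hypothetical, chosen only to mirror the roughly one-third footprint and 50 per cent BTU reduction the article cites.

```python
# Hypothetical before/after numbers for a virtualization project.
WATTS_TO_BTU_HR = 3.412

servers_before, watts_each_before = 90, 250   # many lightly loaded boxes
servers_after, watts_each_after = 30, 375     # fewer hosts, each working harder

btu_before = servers_before * watts_each_before * WATTS_TO_BTU_HR
btu_after = servers_after * watts_each_after * WATTS_TO_BTU_HR

footprint_ratio = servers_after / servers_before
btu_ratio = btu_after / btu_before

print(f"Footprint: {footprint_ratio:.0%} of original")   # one third
print(f"Heat output: {btu_ratio:.0%} of original")       # only half
```

With these assumed figures, footprint drops to 33 per cent of the original while heat output only drops to 50 per cent, which is why cooling capacity cannot simply be scaled down in proportion to rack count.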

Along with all of these factors, organizations should also try to estimate future cooling requirements, such as those from additional servers or other heat-generating equipment, Stahl added.

But like other areas of data centre operations, this ultimately boils down to a risk exercise for data centre administrators, who need to determine their company’s actual tolerance for downtime and avoid over-engineering their cooling devices.

“I’ve been in data centres where you walk in and say ‘wow, I really need a sweater now,’” Stahl said. “That’s because they’re flooding the whole room, and in general we need to see more precision cooling, in the way that we’re actually cooling at the racks.”

According to American Power Conversion Corp. cooling expert Jim Simonelli, cooling a data centre to 68 degrees Fahrenheit is going out of style, as most servers, storage and networking gear are certified to run in temperatures exceeding 100 degrees.

Servers and other equipment “can run much hotter than people allow,” Simonelli, the chief technical officer at APC, said. “Many big data centre operators are experienced with running data centers at close to 90 degrees (and with more humidity than is typically allowed). That’s a big difference from 68.”

People are starting to realize they could save up to 50 per cent of their energy budget just by changing the set point from 68 to 80 degrees, he added.

Going forward, “I think the words ‘precision cooling’ are going to take on a different meaning,” Simonelli said. “You’re going to see hotter data centers than you’ve ever seen before. You’re going to see more humid data centers than you’ve ever seen before.”

This is a factor that should also be considered when determining the size and type of your cooling devices, Stahl added.

– With files from IDG News Wire, Jon Brodkin, Network World (US)
