“There are others out there that do it generally,” said Jerry Diakow, country sales manager for Raritan Canada, “but nobody’s doing it at the rack level.”
“The ability to start monitoring temperature inside the rack and get a much more granular look at just how hot different parts of your enterprise are,” is new to Raritan’s Dominion PX line, said Nik Simpson, senior analyst with The Burton Group.
“At the moment, Raritan is the only one I’ve heard of doing this,” he said. “I suspect they won’t be the last. I suspect you’ll also see this sort of technology being integrated directly into racks. What we’re going to need is some sort of standard interface for monitoring and collecting the information.”
Temperature monitoring in the typical data centre isn’t much more sophisticated than it is in your house, Simpson said. “It’ll tell you that the whole house is at roughly 75 degrees, but that might mean the upstairs room is at 95 and the downstairs is like a meat locker.”
Because they don’t know where the heat is coming from, enterprises tend to err on the side of colder, Simpson said. “If you go into an enterprise data centre at the moment, most of them, you’ll find you need a long-sleeved shirt at the very least,” he said. “And it sounds like a jumbo jet on the taxiway, basically because if you don’t know enough about where the heat is, you have to cool everything as much as you can, just to make sure the hot spots don’t get too hot.
“Bringing that temperature monitoring down much closer to the rack and giving a much more granular picture of where things are hot and where things are cold will allow you to target your cooling solutions much more effectively and probably raise the overall temperature of your data centre without impacting any hardware adversely.”
Dynamically changing the cooling – responding to temperature increases that vary according to the use of the rack equipment – also saves power, by applying cooling only when necessary, said Simpson.
“You might have an area that gets particularly heavily used in terms of application usage or storage or something at a particular time of the day, and during that time in the day it gets a lot warmer than it is in normal conditions,” Simpson said. “If you know that and you can see that trend happening, then you can start to cool it just while it’s building up, pushing out a lot of heat. You can then reduce the cooling around it during other times during the day.”
Cooling systems eat up about a third of a data centre’s power – 21 per cent for the actual cooling units, and eight per cent for the HVAC fans, according to Diakow. About 50 per cent is drawn by the IT equipment itself, and just as they tend to overcool, enterprises can overprovision power for that equipment.
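The budget Diakow describes can be checked with simple arithmetic; the category names in this sketch are mine, and the unaccounted remainder (distribution losses, lighting and so on) is inferred rather than stated in the article:

```python
# Rough data-centre power budget from the figures Diakow cites
# (illustrative arithmetic only; category labels are assumptions).
budget = {
    "cooling_units": 0.21,  # "21 per cent for the actual cooling units"
    "hvac_fans": 0.08,      # "eight per cent for the HVAC fans"
    "it_equipment": 0.50,   # "about 50 per cent is drawn by the IT equipment"
}

cooling_total = budget["cooling_units"] + budget["hvac_fans"]
remainder = 1.0 - sum(budget.values())  # everything else: losses, lighting, etc.

print(f"cooling share: {cooling_total:.0%}")   # 29% -- "about a third"
print(f"unaccounted:   {remainder:.0%}")
```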
“What we find that typically happens is that a data centre will gear up for the power that the servers are rated for,” Diakow said. “So they’ll go to the plate on the server and say, ‘this server draws this much power, we need to make sure we have enough power in the data centre for all these servers.’
“What the server manufacturers list is really the maximum power that that product is going to draw. Typically, we also find that these servers are rated at least 25 per cent, and sometimes 50 per cent, over what they are running at. So there’s an opportunity there to save power.”
Intelligent PDUs that monitor power on an outlet-by-outlet basis allow data centre managers to adjust power up or down according to each server’s actual draw, Diakow said.
Simpson said deep visibility into electricity use is a prime consideration in choosing a PDU. Another is its efficiency in converting higher-voltage feeds and breaking them out for use on the individual boxes.
“That’s one of the areas of power loss, and that power loss goes away as heat as well, so not only do you lose power, you also have to spend additional money extracting the heat that (power) loss created.”