Data centres are continuing to grow in size and complexity, and with energy costs on the rise there’s increased interest in finding the right gear and using it properly. There is also, of course, an environmental argument, though there’s no doubt a certain amount of “greenwashing” going on.
On the hardware side, big players HP, IBM, Sun, and more recently Intel have embarked on data centre consolidation initiatives. The savings have been impressive. This is an indication both of how inefficient big iron was in the past and of some very real advances.
To get it right requires organizational change, a new take on design – of which cooling is a large component – a cost-benefit analysis on the hardware front, and a willingness to consider new technologies such as virtualization.
The organizational challenge comes directly out of the new reality of skyrocketing energy costs. Usually the IT people don’t get the energy bill. It goes to facilities – at one time it could have been addressed almost as a fixed cost. Not anymore. The bills are piling up, and IT and facilities have to get together to address their shared concerns.
Where they do this, of course, is in design, which leads into a discussion around gear and cooling. Design elements also cover off location, flooring, and schematics.
On the location front we can expect some dramatic solutions. There is already a mini data centre boom in Quincy, Wash., due to access to cheap hydro power. As well, with the cost of real estate, and global access to bandwidth, there is serious talk of putting “free air cooling” data centres in northern climes to take advantage of external air sources.
Flooring, while not sounding like a radical innovation, does require advanced thinking in terms of a holistic design for wiring, cooling, and placement. This is not easy to fix in older data centres, but efficient flooring is being built into new ones. A good example is Toronto Hydro Telecom's new co-location facility and its use of interstitial raised floors. The idea is to separate cabling from air flow, with the resulting efficiency reducing the requirement for air conditioning units.
Ian Collins, vice-president of operations for Toronto Hydro Telecom, sees this as reflecting a big change in how we look at data centres.
“A few years ago the biggest cost was floor space, now 70% is energy,” says Collins. “The interstitial floor is like a balloon, the floor itself is pressurized, with the air released in specific locations. There are temperature and humidity monitors throughout the floor space. This is 30 inches with a 15/15 split, half for cables and half for air space.”
Many organizations are reluctant to go with built-in liquid cooling as it can reduce the flexibility required for co-location with multiple clients. However, some data centres are using liquid-cooling solutions from companies such as SprayCool and Liebert. SprayCool has designed a closed-loop system for liquid cooling within servers, and Liebert can install pumped-refrigerant or water-based systems on ceilings, walls, or floors.
Henry Van Pypen, general manager of the technical environmental solutions division of TAB Canada, says that liquid cooling is not widespread because many companies perceive it as risky. There is also plenty of room for improvement with air cooling.
“The power draw for cooling is pretty significant,” says Van Pypen. “CRACs (computer room air conditioning units) fill the floor with all kinds of cool air. It’s important to put it in front of the enclosure, to draw it through laterally, and then out the back.”
Van Pypen points out that, with the rapid deployment of blade servers, we are now seeing racks with 10 or 15 kilowatts of draw, and cooling requirements reflecting the increase. At times the hydro demand is so great that utilities simply can’t supply the power to the building.
The Green Grid, a consortium of companies dedicated to advancing energy efficiency in data centres, has a sober view of the challenges ahead. The consortium has some heavyweights behind it, and their willingness to co-operate deserves a nod: something must be going on when all the major chip makers and hardware vendors, as well as software firms like Microsoft and VMware, and niche players like SprayCool, are on the same page.
“We are concerned about the trend toward people overstating both their commitments and what they think is achievable,” says Larry Lamers, member of technical staff, office of the CTO at VMware and director of The Green Grid. “This is one of the reasons that the Green Grid has stated over and over again that our focus is on energy efficiency.”
Larry Vertal, senior strategist at AMD and a representative from the Board of Directors for the Green Grid, concurs, and makes clear that even with the best design, and the best technology, it is expected that data centres worldwide will be drawing more and more power.
“AMD sponsored a study suggesting that power consumption in data centres had doubled from 2000 to 2005, and that without best practices and standards, it would double again from 2005 to 2010.”
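The study’s doubling claims are easy to sanity-check. A minimal Python sketch (assuming steady exponential growth, which the study’s quoted figures imply but do not state outright):

```python
# The study says consumption doubled over the five years 2000-2005.
# Under steady exponential growth, (1 + r)**5 = 2, so the implied
# annual growth rate is r = 2**(1/5) - 1.
annual_growth = 2 ** (1 / 5) - 1
print(f"Implied annual growth: {annual_growth:.1%}")  # about 14.9% per year

# A second doubling by 2010 at the same rate means 2000-2010 is a
# factor of 2 * 2 = 4 overall.
factor_2000_to_2010 = (1 + annual_growth) ** 10
print(f"2000-2010 growth factor: {factor_2000_to_2010:.1f}x")  # 4.0x
```

In other words, the projection amounts to roughly 15 per cent compound growth per year, quadrupling data centre power draw over the decade if nothing changes.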
This is why the bulk of the Green Grid’s efforts go into its technical committees, with a long-term goal of being able to measure energy use in data centres at a granular level, which will then inform optimization strategies.
“This is doable,” says Lamers. “We need to show respectable ROI on energy use reduction per cubic foot of rack space. And reducing server power consumption will reduce cooling – they go hand in glove.”
Brocade, another member of the Green Grid, has a strong push toward greener data centres that focuses on consolidation and virtualization.
Tom Buiocchi, vice-president of marketing for Brocade, which provides networked storage solutions, claims the company can get energy use and cooling requirements 70 per cent below those of its competitors.
“We have been able to do this by integrating the technology into ASICs. Our leading product can have 384 ports at 4 gigabits per second. This is a four-fold increase in performance, and yet with a reduction in energy consumption.”
Overall, some of the efficiency numbers that are thrown around need to be taken with a grain of salt. For example, consolidators don’t deserve bragging rights for having run massive, inefficient data centres that hogged cheap power. Sun has modernized its data centres, but the real credit comes from the technological efficiencies built into its Sun Fire servers: they draw a third less energy, put out less heat, and are half the size of previous generations.
Still, there is no shortage of demand, and there is only so much technology can do.
“A few years ago YouTube didn’t even exist,” says Mark Munroe, director of sustainable computing with Sun. “Next year there will be 300 million net new handsets. Our real goal should be to get power consumption to rise more slowly than the compute demand curve.”
A noble pursuit, and an admission that no amount of greening of the data centre is going to reduce power demand outright, although it should slow its growth. That is, until optical networks get us off silicon altogether, and Moore’s Law takes us near the speed of light.