The cool new look in data centre design

Data centre design is undergoing a significant transformation. The fundamentals of the data centre — servers, cooling systems, UPSes — remain the same, but their implementations are rapidly changing, thanks in large part to the one variable cost in the server room: energy.

Still in its infancy, though growing up fast, server virtualization is increasingly being relied on as a power-saving outlet for enterprises rolling out cost-effective data centres or retrofitting existing data centres to cut power costs considerably. What may come as a surprise, however, is that hidden energy costs await those who do not plan the layout of their virtualized data centre wisely. And the chief culprit is heat.

Consolidating the workload of a dozen 1kW servers onto one 2kW machine cuts total power draw, but it also means that most virtualization hardware platforms produce more heat per rack unit than individual servers do. Moreover, collecting several virtualization hosts into a single, high-density rack can create a data centre hotspot, causing that rack and its neighbours to run at significantly higher temperatures than the rest of the room, even when the room is centrally cooled to 68 degrees Fahrenheit. Blade servers are notorious for this because they carry very large power supplies and move an enormous amount of air through the chassis. Virtualization will indeed significantly reduce data centre energy costs, but it won’t provide a complete solution for reining in your data centre’s energy needs. For that you have to retrofit your thinking about cooling.
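A rough back-of-envelope sketch makes the density argument concrete; the wattages and rack-unit counts below are illustrative assumptions, not measurements from any particular hardware.

```python
# Hypothetical comparison of heat density before and after consolidation.
# Figures are illustrative only (1 kW of power ~= 1 kW of heat to remove).

def kw_per_rack_unit(power_kw: float, rack_units: int) -> float:
    """Average heat load per rack unit."""
    return power_kw / rack_units

# A legacy 1U server drawing 1 kW: 1 kW of heat per rack unit.
legacy = kw_per_rack_unit(power_kw=1.0, rack_units=1)

# A dozen such workloads consolidated onto one 2 kW, 1U virtualization host:
# total power drops from 12 kW to 2 kW, but density per rack unit doubles.
consolidated = kw_per_rack_unit(power_kw=2.0, rack_units=1)

# Packing twenty such hosts into one cabinet concentrates 40 kW of heat
# in a single rack, the classic data centre hotspot.
dense_rack_kw = 20 * 2.0

print(f"Legacy server:     {legacy:.1f} kW/U")
print(f"Consolidated host: {consolidated:.1f} kW/U")
print(f"Dense rack total:  {dense_rack_kw:.0f} kW in one rack")
```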

Cooling on demand

For the most part, big beefy air conditioning units that push air through drop ceilings or raised floors remain regular fixtures in the data centre, but for enterprises building out for energy efficiency or seeking to retrofit for added energy relief, localized cooling — mainly in the form of in-row cooling systems — is making a splash.

“We originally designed our in-row cooling solutions to address hotspots in the data centre, specifically for blade servers, but it’s grown far beyond that,” says Robert Bunger, director of business development for North America at American Power Conversion (APC). “They’ve turned out to be very efficient, due to their proximity to the heat loads.”

Bucking the “big air conditioner” paradigm, in-row cooling systems such as APC’s are finding their place between racks, pumping out cold air through the front and pulling in hot air from the back. Because cooling is performed by units just inches away from the source rather than indiscriminately through the floor or ceiling, data centre hotspots run less hot. What’s more, rather than relying on a central thermostat, these units function autonomously, tapping temperature-monitoring leads placed directly in front of a heat source to ensure that the air remains within a specified temperature range. If a blade chassis starts running hot due to increased load, the in-row unit ramps up its airflow, dropping the air temperature to compensate.

Moreover, the unit ratchets down its cooling activities during idle times, saving even more money. All told, the cost-cutting benefits of localized cooling are quickly proving convincing, so much so that Gartner predicts in-rack and in-row cooling will become the predominant cooling method for the data centre by 2011.
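Taken together, that behaviour amounts to a simple closed control loop: measure the air in front of the rack, ramp airflow when it runs hot, ease off when the load falls away. The sketch below is purely illustrative; the setpoints and simulated readings are invented for the example and the code is not drawn from any vendor’s firmware.

```python
# Hypothetical sketch of an in-row unit's temperature-driven fan control.
TARGET_C = 20.0              # desired air temperature at the rack inlet
DEADBAND_C = 1.5             # tolerated swing before the unit reacts
MIN_FAN, MAX_FAN = 20, 100   # fan speed range, per cent

def next_fan_speed(inlet_temp_c: float, fan_pct: float) -> float:
    """Proportional response: more airflow when the rack runs hot, less when it cools."""
    error = inlet_temp_c - TARGET_C
    if abs(error) <= DEADBAND_C:
        return fan_pct                                   # within range: hold steady
    return max(MIN_FAN, min(MAX_FAN, fan_pct + 10 * error))

# Simulated readings from a temperature lead in front of a blade chassis
# as its load rises and then drops off.
readings = [20.0, 23.5, 26.0, 24.0, 21.0, 18.0, 17.5]

fan = MIN_FAN
for temp in readings:
    fan = next_fan_speed(temp, fan)
    print(f"inlet {temp:4.1f} C -> fan {fan:5.1f}%")
```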

Modular air conditioning

For enterprises considering localized cooling, APC’s in-row units are available in both air- and water-cooled models that provide from 8kW to 80kW of cooling output. The smaller APC units, the ACRC100 and the ACSC100, are the same height and depth as a standard 42U rack, but half the width. The company’s larger ACRP series retains the full 42U-rack form factor but pushes out far more air than the smaller units do.

Power and cooling giant Liebert is another vendor offering localized cooling solutions. Its XD series in-row and spot-cooling systems are similar in form and function to their APC counterparts. Liebert also offers units that mount on top of server racks, drawing hot air up and out. Both APC and Liebert have rear-mounted rack ventilation and cooling units that exhaust hot air into the plenum or cool the air before passing it back into the room.

The modularity of these systems translates to significant startup savings. Whereas whole-room solutions must be sized for anticipated growth, localized cooling units can be deployed as needed. A large room that starts out only 30 per cent utilized will require only 30 per cent of projected full-room cooling hardware upon initial deployment.
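The sizing arithmetic behind that pay-as-you-grow approach is straightforward. The sketch below assumes a 30kW per-unit cooling capacity and a hypothetical 600kW full-build heat load; neither figure comes from APC or Liebert.

```python
import math

def in_row_units_needed(heat_load_kw: float, unit_capacity_kw: float = 30.0) -> int:
    """Number of localized cooling units required for a given heat load, rounded up."""
    return math.ceil(heat_load_kw / unit_capacity_kw)

projected_full_room_kw = 600.0   # design heat load at full build-out (assumed)
utilization = 0.30               # room is 30 per cent populated today

today = in_row_units_needed(projected_full_room_kw * utilization)
full = in_row_units_needed(projected_full_room_kw)

print(f"Deploy now: {today} units; add the remaining {full - today} as the room fills.")
```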

There are downsides to these units, to be sure. The water-cooled systems require much more piping than centralized units do, and the water lines must be run through the ceiling or floor of the room. The air-cooled units can dump large heat loads into the plenum above the data centre, creating airflow and heat-exhaust problems of their own. Moreover, because these solutions are built to provide just enough just-in-time cooling, the failure of a single unit can be taxing. In any case, whether you’re rolling out a new energy-efficient data centre or retrofitting one already in place, a comprehensive understanding of your building’s environmental systems and the expected heat load of the data centre itself is required before implementing any localized cooling solution.

Cool to the core

For some enterprises, individual high-load servers bring the kind of heat worthy of a more granular approach to cooling. For such instances, several vendors are making waves with solutions that bring a chill even closer than nearby racks: in-chassis cooling.

SprayCool’s M-Cool is a water-cooling solution that captures heat directly from the CPUs and directs it through a cooling system built into the rack. The heat is then carried away through a water loop, removing it from both rack and room entirely. Cooligy is another vendor offering a similar in-chassis water-cooling solution. SprayCool’s G Series takes the direct approach a step further, functioning like a car wash for blade chassis and spraying nonconductive cooling liquid through the server to reduce heat load.

Enterprises intrigued by in-chassis cooling should keep in mind that these solutions are necessarily more involved than whole-room or in-row cooling units and have very specific server compatibility guidelines.

The high-voltage switch

Virtualization and improved cooling efficiency are not the only ways to bring down the energy bill. One of the latest trends in data centre power reduction — at least here in the States — is to use 208-volt power rather than the traditional 120-volt power source.

When the United States rolled out the first electrical grid, light bulb filaments were quite fragile and burned out fast on 220-volt lines. Dropping the voltage to 110/120 volts increased filament life — thus, the U.S. standard of 120 volts. By the time Europe and the rest of the world built out their power grids, advances in filament design had largely eliminated the high-voltage problem, hence the 208/220-volt power systems across most of the rest of the globe.

What’s important to note is that each time voltage is stepped down, a transformer is used, and power is lost. The loss may be as little as one or two per cent per transformer, but across a large data centre the penalty for every extra transformer adds up. By switching to a 208-volt system, you need one fewer transformer in the chain, thereby reducing wasted energy. Moreover, 208/220-volt systems are safer and more efficient: delivering the same wattage at 120 volts requires more current than at 208/220 volts, which increases the risk of injury and loses additional power as heat in the wiring.
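A back-of-envelope comparison makes the point. The 5kW load and the wiring resistance below are assumptions chosen purely for illustration.

```python
# Why higher distribution voltage wastes less power in the wiring.
# The load and conductor resistance are illustrative assumptions.

def branch_current_a(load_w: float, voltage_v: float) -> float:
    """Current drawn by the load at a given supply voltage (I = P / V)."""
    return load_w / voltage_v

def conductor_loss_w(current_a: float, resistance_ohm: float = 0.05) -> float:
    """Resistive loss in the supply conductors (P = I^2 * R)."""
    return current_a ** 2 * resistance_ohm

for volts in (120, 208):
    amps = branch_current_a(5000, volts)
    print(f"{volts} V: {amps:5.1f} A drawn, {conductor_loss_w(amps):5.1f} W lost in the wiring")

# 120 V:  41.7 A drawn,  86.8 W lost in the wiring
# 208 V:  24.0 A drawn,  28.9 W lost in the wiring
```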

For those considering capitalizing on the switch, rest assured that nearly all server, router, and switch power supplies can handle 120- or 208-volt power and most are auto-switching, meaning no modifications are necessary to transfer that gear to 208 volts. Of course, the benefits of 208-volt power in the data centre are not the kind to cause a sea change. But as energy costs continue to rise, the switch to 208 volts will become increasingly attractive.

Retrofit for advantage

When it comes to budgeting for the data centre, most line items can be forecast. Determining the cost of hardware to build and maintain the room is relatively easy. The cost of providing power to all those systems tends to sway in the breeze, however, and even a small jump in the unit price of power can put a big mark on an otherwise pristine balance sheet.

And where there is variable cost, there is the potential for competitive advantage.

Virtualized servers, localized cooling solutions, and cost-conscious means of delivering power to the server room are changing the underlying principle of data centre design to a search for greater energy efficiency. The killing-flies-with-a-shotgun approach to cooling and powering the data centre has been banished to the history books along with the 85-cent gallon of gas. Retrofitting existing data centres is never easy or inexpensive, but in this case, the benefits are immediate.
