— Refresh your servers. Each new generation of servers delivers more processing power per square foot — and per unit of power consumed. For every new BladeCenter rack Industrial Light & Magic installs, it has been able to retire seven racks of older blade servers. Total power savings: 140 kW.

— Charge users for power, not just space. “You can be more efficient if you’re getting a power consumption model along with square-footage cost,” says Ian Patterson, CIO at Scottrade.
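A chargeback model along those lines might combine floor-space cost with metered power. The sketch below is a minimal illustration; the rates and rack figures are assumptions for the example, not Scottrade's numbers.

```python
def monthly_charge(rack_sq_ft, avg_kw, hours=730,
                   rate_per_sq_ft=30.0, rate_per_kwh=0.12):
    """Bill a tenant for footprint plus metered energy.

    rate_per_sq_ft and rate_per_kwh are illustrative assumptions;
    substitute your facility's actual cost-recovery rates.
    """
    space_cost = rack_sq_ft * rate_per_sq_ft
    energy_cost = avg_kw * hours * rate_per_kwh
    return space_cost + energy_cost

# Two racks with identical footprints but different draw now see
# very different bills, which rewards the more efficient tenant.
low_draw = monthly_charge(rack_sq_ft=10, avg_kw=2.0)
high_draw = monthly_charge(rack_sq_ft=10, avg_kw=8.0)
```

Under a square-footage-only model, both racks above would be billed identically; adding the power term is what creates the efficiency incentive.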

— Use hot aisle/cold aisle designs. Good designs, including careful placement of perforated tiles to focus airflows, can help data centers keep cabinets cooler and turn the thermostat up.

— Look for the most efficiently designed servers. Hardware that meets the EPA’s Energy Star specification offers features such as power management, energy-saving power supplies and variable-speed cooling fans. The upfront price may be slightly higher but is typically offset by lower operating costs over the product’s life cycle.
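That lifecycle trade-off is easy to sketch. The prices, wattages, electricity rate, and cooling overhead below are illustrative assumptions, not Energy Star figures.

```python
def lifecycle_cost(price, avg_watts, years=4,
                   kwh_rate=0.12, cooling_overhead=0.5):
    """Purchase price plus lifetime energy cost, where
    cooling_overhead adds the power spent removing the heat
    the server produces. All rates are illustrative assumptions."""
    kwh = avg_watts / 1000 * 8760 * years  # 8760 hours per year
    return price + kwh * (1 + cooling_overhead) * kwh_rate

# Hypothetical comparison: a standard server vs. a slightly
# pricier but more efficient model.
standard = lifecycle_cost(price=5000, avg_watts=400)
efficient = lifecycle_cost(price=5300, avg_watts=320)
# The efficient server's higher sticker price is recovered
# through lower power and cooling bills over four years.
```

With these assumed numbers, the efficient server comes out a few hundred dollars cheaper over its life despite costing more up front.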

— Consider cold-aisle containment. Once you have a hot aisle/cold aisle design, the next step for cabinets exceeding about 4 kW is to use cold-aisle containment techniques to keep high-density server cabinets cool. This may involve closing off the ends of aisles with doors, using ducting to target cold air and installing barriers atop rows to prevent hot air from circulating over the tops of racks.

— Use variable-speed fans. Computer room air conditioning systems rely on fans, or air handlers, to push cold air into the space and pull hot air out. Because fan power varies with roughly the cube of fan speed, even a 12.5 per cent reduction in fan speed cuts fan power use by about a third.
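The fan affinity laws make the payoff easy to quantify: shaft power scales with roughly the cube of rotational speed, so modest slowdowns yield outsized savings. A quick sketch:

```python
def fan_power_fraction(speed_fraction):
    """Fan affinity laws: power varies with roughly the cube
    of speed, so running a fan at a fraction of full speed
    draws approximately that fraction cubed of full power."""
    return speed_fraction ** 3

# Slowing fans to 87.5% of full speed (a 12.5% reduction):
remaining = fan_power_fraction(0.875)  # 0.875**3 = 0.669921875
savings = 1 - remaining                # roughly a third of fan power
```

The relationship is approximate in practice (motor and drive losses do not scale perfectly), but the cubic trend is why variable-speed drives pay back quickly.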

— Turn on power management. Most servers ship with energy-saving technologies that control cooling-fan speeds and step down CPU power during idle times, but these features are often disabled by default — and many data centers never turn them on. Consider enabling power management by default, except in environments where high availability and fast response times are mission-critical.

— Create zones. Break the data center floor into autonomous zones, where each block of racks has its own dedicated power and cooling resources. Zoning involves careful separation of hot and cold air but usually doesn’t require that an area be physically partitioned off.

— Douse hot spots with closely coupled cooling. A series of high power-density racks can create a hot spot that the room air conditioning system can’t handle, or that forces IT to overcool the entire room to address a few cabinets. In those cases, consider supplemental spot-cooling systems. These require piping chilled liquid — either cold water or glycol — to a heat exchanger that’s either attached or adjacent to a high-density cabinet.

— Retrofit for efficiency. While new data center designs are optimized for cooling efficiency, many older ones still have issues. If you haven’t done the basics, optimizing perforated-tile placements in the cold aisle or putting blankets over cabling in the floor space are good places to start.

— Install temperature monitors. It’s not enough to monitor the room temperature. Adding more sensors allows better control in the row or rack.
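A minimal sketch of what row- and rack-level monitoring buys you: flagging individual hot spots rather than reacting to a single room average. The sensor names and readings below are made up for illustration; real deployments would poll intake probes rather than a hard-coded dictionary.

```python
# Upper end of ASHRAE's recommended server intake range (27 C).
ASHRAE_MAX_INTAKE_F = 80.6

# Hypothetical per-rack intake readings in degrees Fahrenheit.
readings = {
    "row1-rackA-intake": 72.5,
    "row1-rackB-intake": 83.1,  # a hot spot the room average would hide
    "row2-rackA-intake": 75.0,
}

# Room average looks fine, but one rack is over the limit.
room_average = sum(readings.values()) / len(readings)
hot_spots = {name: temp for name, temp in readings.items()
             if temp > ASHRAE_MAX_INTAKE_F}
```

Here the room average is under 77 degrees, yet rack B is running hot; only per-rack sensors surface that.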

— Turn up the heat. The key to raising efficiency is the air intake temperature at the cabinets: the higher the intake temperature, the more energy-efficient the data center. While you probably can't keep an entire cabinet cool with intake air at 81 degrees Fahrenheit, you probably don't need to set the thermostat as low as 65, either.
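A commonly cited rule of thumb is that each degree Fahrenheit of setpoint increase saves roughly 4 per cent of cooling energy; actual savings vary widely by facility. Treating that figure as an assumption, the potential upside of raising a 65-degree setpoint can be sketched like this:

```python
def cooling_energy_fraction(setpoint_f, baseline_f=65.0,
                            savings_per_degree=0.04):
    """Estimate cooling energy relative to a baseline setpoint.
    The ~4% per degree F figure is a rough rule of thumb, not a
    guarantee; measure before and after in your own facility."""
    degrees_raised = setpoint_f - baseline_f
    return max(0.0, 1 - degrees_raised * savings_per_degree)

# Raising the setpoint from 65 F to 75 F under this assumption
# leaves roughly 60% of the baseline cooling energy bill.
fraction = cooling_energy_fraction(75.0)
```

The linear model breaks down at the extremes, which is why the tip above still caps intake temperatures around the low 80s.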
