Monday, June 27, 2022

The Green Data Centre: Cards and switches

Savings in the cards

Just as running fewer servers means lower power consumption, so does reducing the number of switches they connect through. Jerome Wendt, president and lead analyst with Data Centre Infrastructure Group, an independent consultancy based in Omaha, Neb., points to a couple of new technologies that can reduce the number of cards and switches in the data centre.

InfiniBand isn’t actually brand new; it emerged from the Future I/O versus Next Generation I/O battle of the late 1990s. For a high-end data centre, there are advantages, says Wendt: rather than installing six or seven Ethernet or Fibre Channel cards in a server, the job can be done with a pair of InfiniBand cards.
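As a rough illustration of the consolidation Wendt describes, the sketch below compares the card count and power draw of a server before and after the swap. The per-card wattages are hypothetical assumptions for the sake of the arithmetic, not figures from the article.

```python
# Rough sketch: power saved by replacing six or seven Ethernet/Fibre
# Channel cards with a pair of InfiniBand cards, per Wendt's example.
# The wattage figures are illustrative assumptions, not measured values.

ETH_FC_CARDS = 7          # "six or seven" cards per server (article's figure)
WATTS_PER_ETH_FC = 10.0   # assumed draw per Ethernet/FC card (hypothetical)
IB_CARDS = 2              # a pair of InfiniBand cards (article's figure)
WATTS_PER_IB = 12.0       # assumed draw per InfiniBand card (hypothetical)

def card_power(cards: int, watts_each: float) -> float:
    """Total card power draw for one server, in watts."""
    return cards * watts_each

before = card_power(ETH_FC_CARDS, WATTS_PER_ETH_FC)   # 70.0 W
after = card_power(IB_CARDS, WATTS_PER_IB)            # 24.0 W
saved_per_server = before - after                     # 46.0 W

# Small per-server savings add up across a farm:
print(f"{saved_per_server:.0f} W per server; "
      f"{saved_per_server * 500 / 1000:.0f} kW across 500 servers")
```

Under these assumed wattages, the saving per server is modest, but it scales linearly with the size of the farm — which is the article's broader point about incremental savings.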

On the downside, an InfiniBand director is required to segment out Ethernet and Fibre Channel traffic. And, says Wendt, “companies have to get up to speed on InfiniBand.”


But the cost of InfiniBand is coming down to the point that it’s comparable to Fibre Channel. “It’s not prohibitively expensive,” says Wendt: about $600 gets you 20 Gbps of throughput, down from $800 to $1,000 a year ago, and about a quarter of the cost of the required Fibre Channel cards. Driving prices down is demand from the Linux-based high-performance computing market, which uses InfiniBand to connect clusters and supercomputers, says Wendt.
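Working through the article's own figures makes the price comparison concrete; the Fibre Channel cost below is simply implied by "about a quarter of the cost," not quoted directly.

```python
# Cost-per-gigabit arithmetic using the article's figures:
# ~$600 buys 20 Gbps of InfiniBand throughput, and that is roughly
# a quarter of what the required Fibre Channel cards would cost.

IB_COST = 600.0            # USD, article's figure
IB_THROUGHPUT_GBPS = 20.0  # article's figure
FC_COST = IB_COST * 4      # implied by "about a quarter of the cost"

ib_cost_per_gbps = IB_COST / IB_THROUGHPUT_GBPS  # $30 per Gbps
print(f"InfiniBand: ${ib_cost_per_gbps:.0f}/Gbps; "
      f"equivalent Fibre Channel cards: ~${FC_COST:.0f}")
```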

Wendt says new network interface cards (NICs) from Intel and Next-Gen can create V-NICs, or virtual NICs. These allow every virtual server on a physical host to have its own MAC and IP address instead of sharing with the other servers in the box. They’re more expensive than the previous generation of NICs, and Wendt doesn’t see the market taking off tomorrow; they’ll likely see more play as more companies adopt 10 Gbps Ethernet. “It’ll be another six to 12 months before we see any adoption,” Wendt says.
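The point of a V-NIC is that each virtual server presents its own MAC (and hence IP) identity on the wire. Below is a minimal sketch of how a hypervisor might mint a distinct, locally-administered MAC address for each virtual NIC; it is an illustrative scheme, not any vendor's actual implementation.

```python
import random

def make_vnic_mac(rng: random.Random) -> str:
    """Mint a random MAC with the locally-administered bit set and the
    multicast bit clear, so it can't collide with burned-in vendor MACs."""
    octets = [rng.randrange(256) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & 0xFE  # set local bit, clear multicast bit
    return ":".join(f"{o:02x}" for o in octets)

# One MAC per virtual server on the physical host:
rng = random.Random(42)  # seeded only so this sketch is repeatable
vnic_macs = {f"vm{i}": make_vnic_mac(rng) for i in range(4)}
for vm, mac in vnic_macs.items():
    print(vm, mac)
```

Setting the locally-administered bit is the standard IEEE 802 convention for software-assigned MAC addresses, which is why each virtual server can safely carry its own address alongside the host's burned-in one.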

And Next I/O has developed a PCI Express card that extends the backplane of the physical server to an appliance with segmented Ethernet and Fibre Channel connections. “You can connect all your different protocols” without buying cards for each, Wendt says. “It’s an interesting way to play server farms.”

In general, Wendt says, higher throughput means fewer switches and directors drawing power. It may not be an immense saving, but the incremental savings add up.



Dave Webb
Dave Webb is a freelance editor and writer. A veteran journalist of more than 20 years' experience (15 of them in technology), he has held senior editorial positions with a number of technology publications. He was honoured with an Andersen Consulting Award for Excellence in Business Journalism in 2000, and several Canadian Online Publishing Awards as part of the ComputerWorld Canada team.
