Ethernet could be a lot greener and more "fabric-friendly," according to users at the Ethernet Technology Summit here this week. Their complaints suggest that recent efforts by vendors and standards organizations, such as the IEEE's Data Center Bridging work and Cisco Systems Inc.'s next-generation Nexus platforms, to ruggedize Ethernet switches and routers for data center applications and reduce their power consumption are still incomplete.
Facebook is the fourth-largest Web site in the world, after Google, Microsoft and Yahoo, Facebook's Lee said. And it is still growing: the social networking site jumped from 50 million users to 400 million users in the past two years, he said.
But that growth has not come without some Ethernet pain points, Lee said. Facebook has warehouse-sized data centers running rack after rack of commodity PCs, between 20 and 40 per rack, all running quad-core processors.
And with each PC equipped with a 1Gbps Ethernet NIC, there could be as much as 40Gbps coming out of each server cabinet.
"We have to use 10G and a lot of 10G" to connect to aggregation switches and then backhaul traffic into the core of a single 700,000-square-foot (65,000 sq. m) data center, Lee said. That's one of the reasons Facebook said it needs 100G and even Terabit Ethernet now.
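The arithmetic behind those pain points can be sketched from the article's own numbers. This is a back-of-the-envelope illustration, not anything Facebook runs; the rack size and uplink speed are taken from the figures quoted above:

```python
# Back-of-the-envelope sketch using the article's figures:
# 20-40 commodity PCs per rack, each with a 1Gbps NIC, uplinked at 10G.
SERVERS_PER_RACK = 40          # upper end of the 20-40 range quoted
NIC_GBPS = 1                   # one 1Gbps Ethernet NIC per PC
UPLINK_GBPS = 10               # 10G links to the aggregation layer

peak_rack_gbps = SERVERS_PER_RACK * NIC_GBPS
uplinks_for_line_rate = -(-peak_rack_gbps // UPLINK_GBPS)  # ceiling division

print(peak_rack_gbps)          # 40 Gbps leaving a full cabinet
print(uplinks_for_line_rate)   # 4 x 10G uplinks per cabinet for non-blocking access
```

Multiply those four 10G uplinks by thousands of cabinets and the appeal of a single 100G or Terabit pipe into the core becomes obvious.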
But 100G Ethernet and Terabit Ethernet won’t be around for a while. So Facebook has to settle for 10G as its data center fabric.
“It’s a pain point we see with Ethernet,” Lee said. “How do we build an Ethernet network to support tens of thousands of servers and thousands of 40G [cabinets]? It’s not easy because the switches aren’t big enough.”
Some of the bigger Ethernet switches on the market support 500-plus 10G ports, but only 25% or fewer of those ports run at line rate. That still doesn't meet Facebook's needs.
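A quick calculation shows why that faceplate count is misleading. This sketch assumes the round numbers quoted in the article (500 ports, 25% at line rate); it is an illustration of the capacity gap, not a measurement of any particular switch:

```python
# Why a "500-port" 10G switch falls short: if only 25% of ports can run
# at line rate, the usable non-blocking capacity shrinks accordingly.
PORTS = 500
PORT_GBPS = 10
LINE_RATE_FRACTION = 0.25          # article: "only 25% or fewer... at line rate"
CABINET_GBPS = 40                  # the 40G cabinets Lee describes

nominal_tbps = PORTS * PORT_GBPS / 1000
effective_tbps = nominal_tbps * LINE_RATE_FRACTION
cabinets_supported = int(effective_tbps * 1000 // CABINET_GBPS)

print(nominal_tbps)        # 5.0 Tbps on the faceplate
print(effective_tbps)      # 1.25 Tbps actually non-blocking
print(cabinets_supported)  # 31 full-rate 40G cabinets, versus the "thousands" needed
```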
“With the current technology, each one of these switches thinks of itself as an island — no switch has any idea that the other switch exists,” Lee said. “That’s the problem. At this scale, if one of these links is taking errors, go find it. Good luck.”
The problem is exacerbated when hundreds of millions of users are “watching” your software — or logged onto and using Facebook — and you have to resolve the fault within minutes, Lee said. Instrumentation is a key requirement.
“If you have lots of paths, you can’t isolate which path is broken,” Lee said. “SNMP won’t help you. You have no counters on that link. That whole exercise is very painful.”
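What Lee is asking for boils down to something like the following: per-link error counters that exist and can be compared over time. This is a hypothetical sketch with made-up link names and counter values; his complaint is precisely that on today's multipath Ethernet fabrics these counters are often missing for the path that matters:

```python
# Sketch (hypothetical data): with per-link error counters collected at two
# points in time, a link "taking errors" shows up as a rising delta.
def rising_error_links(before, after, threshold=0):
    """Return links whose error counter grew by more than `threshold`."""
    return [link for link in after
            if after[link] - before.get(link, 0) > threshold]

# Counter snapshots a few minutes apart (invented values).
before = {"agg1:eth1": 10, "agg1:eth2": 3, "core2:eth7": 0}
after  = {"agg1:eth1": 10, "agg1:eth2": 3, "core2:eth7": 5120}

print(rising_error_links(before, after))  # ['core2:eth7']
```

Trivial when the instrumentation exists; impossible, as Lee notes, when no counter is maintained on the failing link in the first place.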
So is acquiring a switch green enough to meet state regulations for energy efficiency. That's what the University of California, San Francisco (UCSF) is facing right now, after Gov. Arnold Schwarzenegger less than a month ago mandated that California state IT departments reduce energy usage by 30 percent within two years.
“The network that has to be green is not just in the data center; it’s all over the enterprise,” said Jeffrey Fritz, director of enterprise network services at UCSF. “It’s in those 700 wiring rooms all throughout the [UCSF] facility. The greening of IT equipment has to start now; it should have started five years ago.”
UCSF's wiring rooms are hot, Fritz said. One he walked into recently was 93 degrees (33.9C) on a day when San Francisco was struggling to reach 60 degrees (15.5C). The school's wiring closets are overcrowded and overheated, and it's not because the HVAC systems are deficient, Fritz suggested.
“There’s coming a time where we won’t buy your equipment if it doesn’t meet certain environmental requirements because we won’t be allowed to,” he told a room of system and component suppliers.
The problem is not with the servers; they have made consistent energy-efficiency strides over the past few years. But the network equipment?
“Same stuff we’ve been putting in for the past 10 years,” Fritz said. “We dropped the ball. If we build a green data center and forget to green the network, what have we accomplished? Where is the logic with this approach?”
UCSF issued a network equipment RFP that includes incentives for vendors to respond with energy efficient proposals. The school is also receiving rebates from utility PG&E for demonstrated reduction in power consumption.
But when additional ports are needed, UCSF has to absorb US$90,000 to US$150,000 to upgrade wiring rooms to accommodate the power and cooling requirements of the network expansion, said Felicia Silva, associate director of network operational services at UCSF.
“I can’t continue to approve purchases for network equipment when I’ve got wiring rooms running at 98 degrees (36.6C),” Silva said. “Give us something to buy.”