Startup Teak Technologies has debuted a 10G Ethernet switch for blade servers. The hook is a new traffic-management and congestion-detection technology built into the hardware, which, the company says, cuts latency so sharply it makes standard 10G connections look slow.
Teak’s I3000 is a 20-port 10G Ethernet switch that fits into an IBM BladeCenter chassis, providing 16 internal 10G Ethernet interfaces for blade servers and up to four uplinks that connect the chassis to a data-centre network.
The vendor says its “Congestion Free Ethernet” technology lets the switch communicate with other Teak switch blades in a data-centre network and co-ordinate traffic flows to avoid dropped packets and delays. The company claims the technology is cheaper and provides more reliable links than standard 10G Ethernet switches attached to a blade server chassis, or than other in-chassis blade server switch modules.
“Data centre switches have behaved in isolated silos. Traffic is just thrown at them from the endpoints,” says Sanjay Dua, chief marketing officer for Teak. “If the switch can handle it, it does. If not, it starts to drop packets. That’s when congestion occurs, and application availability suffers due to latency.”
Teak’s switches work with special NICs in blade servers, developed with the startup’s congestion-management and protocol-offloading technology. Neterion, a maker of 10G Ethernet NICs, is the first partner to announce it is working with Teak. (Its Teak-infused NICs work with Microsoft Windows and Linux servers.)
The Neterion/Teak NICs and the blade server switch module exchange information about upstream network congestion using Layer 2 signaling mechanisms.
If the switch module detects congestion from other Teak-enabled server blades, or from switches outside the chassis, it can signal the NIC to “back off” the rate at which it is pumping frames into the network. Teak switches connected to each other can also exchange this message to throttle traffic rates during periods of high congestion. This split-second throttling can save a machine from having to re-transmit Ethernet frames that a congested switch would otherwise drop. The end result is faster-reacting applications and data streams that don’t lose frames, an essential trait for Ethernet-based storage and clustered computing systems.
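Teak hasn’t published the details of its signaling protocol, but the feedback loop the company describes can be sketched in a few lines of code. The following Python simulation is purely illustrative: the message format, queue thresholds, and rate-adjustment rules here are assumptions for the sake of the example, not Teak’s actual mechanism.

from dataclasses import dataclass, field
from collections import deque

CONGESTION_THRESHOLD = 80   # queue depth (frames) that triggers a back-off signal
DRAIN_RATE = 50             # frames the switch can forward per tick

@dataclass
class BackOffSignal:
    """Hypothetical Layer 2 control message asking a sender to throttle."""
    severity: float         # 0.0 (no congestion) .. 1.0 (halt transmission)

@dataclass
class Nic:
    line_rate: int          # maximum frames per tick
    current_rate: int = 0

    def __post_init__(self):
        self.current_rate = self.line_rate

    def receive_signal(self, signal: BackOffSignal) -> None:
        # Back off in proportion to how congested the upstream switch is.
        self.current_rate = max(1, int(self.line_rate * (1.0 - signal.severity)))

    def recover(self) -> None:
        # Ramp back toward line rate once back-off signals stop arriving.
        self.current_rate = min(self.line_rate, self.current_rate * 2)

@dataclass
class Switch:
    queue: deque = field(default_factory=deque)

    def enqueue(self, frames: int) -> BackOffSignal | None:
        self.queue.extend(range(frames))
        depth = len(self.queue)
        if depth > CONGESTION_THRESHOLD:
            # Signal before the queue overflows, so no frame is dropped.
            excess = depth - CONGESTION_THRESHOLD
            return BackOffSignal(min(1.0, excess / CONGESTION_THRESHOLD))
        return None

    def drain(self) -> None:
        for _ in range(min(DRAIN_RATE, len(self.queue))):
            self.queue.popleft()

# One simulated tick: the NIC transmits, the switch reacts, the NIC throttles.
nic, switch = Nic(line_rate=100), Switch()
for tick in range(5):
    signal = switch.enqueue(nic.current_rate)
    if signal:
        nic.receive_signal(signal)
    else:
        nic.recover()
    switch.drain()
    print(f"tick {tick}: queue={len(switch.queue)}, nic_rate={nic.current_rate}")

Run for a few ticks, the simulated queue stabilises below its overflow point instead of dropping frames: the NIC throttles while back-off signals arrive and ramps back toward line rate when they stop. Real hardware would do this per flow at microsecond timescales in silicon, but the feedback loop is the behaviour Teak attributes to its congestion-free zone.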
“We have reactive mechanisms [in the switches and NICs] that are able to detect the likelihood of congestion, and can react by communicating to endpoint I/Os to back off momentarily,” Dua says. “If you can establish co-ordination between switches and endpoints, and between switches and next-hop switches, you can create this collaborative environment, which leads to what we call a congestion-free zone in a data centre.”