The industry-wide support for Infiniband was on display last week as a wave of Infiniband component vendors showed their wares at the Intel Developer Forum (IDF) in San Francisco.

The next-generation switched-fabric I/O technology will begin to appear as an option on servers, storage systems, and routers later this year. But many experts and users agree that the challenge for Infiniband will be how quickly it can mature into a mainstream I/O technology while keeping ahead of the competing I/O technologies such as iSCSI and Ethernet.

Over twenty Infiniband product vendors, including Adaptec Inc., Banderacom Inc., Infinicon Systems Inc., Mellanox, and Voltaire, were on hand at IDF demonstrating Infiniband products that will find their way into systems from companies such as IBM Corp., EMC Corp., and Dell Computer Corp. Capable of I/O speeds of 2.5Gbps and 10Gbps, with room in the specification for 30Gbps, Infiniband also supports multiple data protocols.
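The three speeds fall out of the specification's link widths: wider links aggregate 2.5Gbps lanes. A minimal sketch of that arithmetic (the 1X/4X/12X width names come from the Infiniband specification, not this article):

```python
# Infiniband link rates implied by the spec's link widths: a link
# aggregates 1, 4, or 12 lanes, each signaling at 2.5Gbps.
BASE_LANE_GBPS = 2.5  # single-lane (1X) signaling rate

def link_rate_gbps(width: int) -> float:
    """Raw signaling rate for a link of the given lane width."""
    return BASE_LANE_GBPS * width

for width in (1, 4, 12):
    print(f"{width}X link: {link_rate_gbps(width)} Gbps")
# 1X -> 2.5 Gbps, 4X -> 10.0 Gbps, 12X -> 30.0 Gbps
```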

Supporting the rollout of Infiniband, Intel Corp. last week announced the availability of DTAs (Device Test Agents) for Infiniband-enabled computing systems. The DTA implementation specifications let companies designing Infiniband-ready systems test their products in real-world environments, running actual applications in conjunction with Infiniband-enabled systems from other vendors, said Jim Pappas, director of initiative marketing for Intel's enterprise platforms group in Hillsboro, Ore.

But experts such as Nathan Brookwood, principal analyst at Saratoga, Calif.-based Insight 64, said that for Infiniband to deliver on its promise, the I/O technology must remain attractive to IT professionals who can choose simpler, existing I/O technologies instead.

“Infiniband represents a major overhaul of people’s software infrastructures. The software to deal with Infiniband on every [system] is dramatically different from what we have today,” Brookwood said.

Alternative I/O technologies such as iSCSI and Gigabit Ethernet are rapidly gaining in performance and are simpler to manage than Infiniband, Brookwood said. But IP-based I/O technologies such as iSCSI and Gigabit Ethernet present their own problems, often overtaxing CPU capacity and requiring TCP/IP offload engines as a fix. Infiniband could capitalize on that weakness, but customers will decide which I/O technology takes the greater market share.
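The CPU-overhead argument can be made concrete with the era's common rule of thumb of roughly 1Hz of CPU per bit per second of TCP traffic (a heuristic, not a figure from this article):

```python
# Rough illustration of why IP-based I/O taxes the CPU, using the
# "1 GHz of CPU per 1 Gbps of TCP traffic" rule of thumb. This is an
# industry heuristic of the period, not a measurement from the article.
HZ_PER_BPS = 1.0  # assumed CPU cycles consumed per bit/s of TCP traffic

def cpu_fraction(throughput_gbps: float, cpu_ghz: float) -> float:
    """Fraction of one CPU consumed by TCP processing at a given rate."""
    return (throughput_gbps * HZ_PER_BPS) / cpu_ghz

# A saturated Gigabit Ethernet link on a 2GHz processor:
print(f"{cpu_fraction(1.0, 2.0):.0%} of the CPU")  # prints "50% of the CPU"
```

On those assumptions, a single saturated Gigabit Ethernet link eats half of a 2GHz processor, which is the gap TCP/IP offload engines are meant to close.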

“The battle over the next two years will be whether or not TCP/IP and iSCSI can do enough of what Infiniband is promising to do so that the rationale for doing Infiniband never pans out,” said Brookwood.

Many potential Infiniband customers would like to see the technology develop faster than it currently is. Nathan McQueen, media systems architect at the University of Washington in Seattle, said, “I’d like to see the timelines to delivering Infiniband directly onto the motherboards decrease” from the current four- to five-year wait. McQueen sees Infiniband dramatically improving architectures such as server blades, and likes the idea that Infiniband was built from the ground up as a datacenter I/O solution. But waiting for Infiniband’s evolution limits how it can be deployed, he said.

From Here to Infiniband

Switched-fabric I/O technology could make strides into enterprise networks due to several factors.