To InfiniBand and beyond

Widely deployed interconnect technologies such as Ethernet and iSCSI may soon face stiff competition from a new, faster and heftier networking protocol that has begun to make waves in the industry.

Called InfiniBand, the architecture is a next-generation short-distance input/output technology that uses a networked approach to connecting servers and network devices; it is ideally suited for use in data centres. Today InfiniBand operates at speeds of 2.5Gbps to 10Gbps and is expected to reach speeds of up to 30Gbps as it evolves.

Designed to overcome several limitations of shared-memory and bus I/O architectures, InfiniBand is “well-positioned to quickly become the transport of choice for server networks that support inter-processor communications and server clustering,” according to a white paper from Boston-based Aberdeen Group, InfiniBand Architecture: Planning the Next Generation Data Centre.

Although relatively new to the industry, InfiniBand has caught the eye of more than 150 companies worldwide, including the likes of IBM Corp., Intel Corp., Microsoft Corp., Sun Microsystems Inc. and Dell Computer Corp. These companies make up the InfiniBand Trade Association (IBTA), an organization dedicated to advancing the development of the two-year-old technology.

According to Don Kerr, director of the Canadian Advanced Systems Group with Dell Canada in Markham, Ont., InfiniBand is one of the most exciting technologies Dell has seen. He said the company believes the technology will be absolutely critical in enabling Dell to deliver more scalable systems that not only provide better performance, but do so in a standardized manner.

Kerr explained that inside most servers and computers today there are buses into which memory, processors and disk controllers are plugged. The problem with these buses, he said, is that they have a finite capacity for additional components. The more processors and memory placed on the buses, the faster they become saturated.

“What this has meant from a computing perspective is that the amount of raw computing power that you can put together and apply to a specific problem has been limited to the capacity of those buses,” he said.

To address this, many companies turned to clustering – linking multiple servers together to handle variable workloads. The problem, however, is the number of different protocols the clusters must support.

“Some would run on Ethernet, some on fibre channel, some on good old SCSI technology,” Kerr continued. “You had this huge mishmash of technologies. What InfiniBand tries to address is ‘how do I break the barriers of scalability that are associated with traditional bus architectures, and how do I eliminate the need for all these different protocols that one would use in terms of doing interconnects?’ What InfiniBand promises is the ability to, very quickly and on the fly, reprovision computing resources against the task that you want them to do. It is that whole flexibility issue.”

Aberdeen Group is convinced that InfiniBand will see tremendous success as products begin shipping early next year.

“In many instances, [InfiniBand] will replace TCP/IP as the high-speed, server-to-server interconnect technology,” said Peter Kastner, executive vice-president and chief research officer for Aberdeen. “It is a new technology, which will be widely embraced by server manufacturers, which has important implications for both data centre networking and storage networking. InfiniBand works at very high speeds…with very low latency and is a very efficient and transparent protocol.”

Companies running large multi-tiered architecture sites, such as ISPs, application service providers and major Web sites, will be initial targets for InfiniBand’s functionality, Kastner added. He said that, by implication, these companies have to move messages through the tiers of the architecture, and InfiniBand can do this very efficiently.

“Because each InfiniBand connection can run in parallel with others, and because it uses relatively little CPU time, you are able to create huge amounts of I/O bandwidth,” Kastner explained. “Because of the low latency time, you are able to send messages from, say, blade to blade in a blade server environment or rack to rack, therefore making it easier and more efficient to build clusters.

“On the communications side, there are companies that will take InfiniBand and offload IP processing onto communications engines…turn IP packets into messages, so that there is less overhead within the data centre itself.”

While InfiniBand has become a well-known term amongst the vendor and manufacturer community, it has yet to become familiar ground within the enterprise. With the help of the InfiniBand Trade Association, founding members like Dell are trying to get the InfiniBand message out to the technology community.

For the Ontario Ministry of Transportation, the name InfiniBand is unfamiliar, yet the functionality of the technology makes perfect sense.

“Right now we are using a token ring environment, which is only 60Mbps using TCP,” said Elizabeth Fiorito, the Ministry’s network administrator in Thunder Bay, Ont. “We are switching to Ethernet and we will definitely get the (higher) throughput speeds. But something like (InfiniBand) definitely sounds like something we would be interested in. It certainly would be ideal for local area networks.”

Dell said it expects to be shipping InfiniBand servers sometime in the first half of next year.