Standard looks to break the Gigabit Ethernet bottleneck

For network managers who’ve outfitted their servers with Gigabit Ethernet network interface cards in recent years, the results definitely have been of the “glass half-full” variety. While Gigabit Ethernet NICs have allowed servers to deliver more throughput than their Fast Ethernet predecessors, other system bottlenecks usually have combined to keep overall throughput at half, or less, of Gigabit Ethernet’s potential maximum. Will Remote Direct Memory Access-based NICs change all this?

Vendor efforts to push the benefits of Gigabit Ethernet notwithstanding, the fact remains that while it is no big deal for Layer 2/3 infrastructure to run at wire speed, actual end-to-end communication is another matter. In short, going up the stack almost always slows you down.

The Tolly Group has studied this issue for years as part of its ITclarity hands-on research program. Let me summarize what we’ve found to give you an idea of what opportunities and challenges RDMA vendors face.

In our initial study, “Gigabit Ethernet to the Desktop – A Reality Check on the Benefits and Burdens of Gigabit Ethernet over Copper”, published in 2002, we focused on determining the maximum throughput achievable between a pair of high-end machines running IxChariot – a standard network benchmarking test tool.

Even using machines outfitted with high-performance, 64-bit, 66-MHz PCI bus architecture, throughput for the most highly optimized bidirectional “file transfer” application topped out at around 750Mbps – out of a possible 2Gbps (1Gbps each way).

So while this is far better than what Fast Ethernet could offer, it is less than 50 per cent of the theoretical maximum – and, worse, the test application deliberately simulates only file transfer, in order to isolate network performance. Because no data actually is read from or written to disk, performance with real applications likely would be worse. And it is.
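
Spelling out the arithmetic (the figures below are simply the ones quoted above; the snippet just does the division):

    # Quick check of the IxChariot result cited above; figures are from the article.
    best_observed_mbps = 750          # best bidirectional "file transfer" throughput
    link_capacity_mbps = 2 * 1000     # 1Gbps each way on a full-duplex link

    utilization = best_observed_mbps / link_capacity_mbps
    print(f"Link utilization: {utilization:.1%}")   # -> Link utilization: 37.5%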

Last year, we extended our study to benchmark the effective throughput of actual applications. Given that our focus was Gigabit to the desktop (rather than back-end, server-to-server traffic) and that streaming applications tended to provide the best throughput, we sought out desktop back-up applications that would, in effect, upload data from client to server.

The results, published in “Gigabit Ethernet to the Desktop: An Evaluation of Back-up Utilities over GbE and Fast Ethernet Networks”, were sobering – which is a nice way of saying they were terrible.

We ran tests using multiple products from Dantz and from Veritas Software (which owned its product when the tests began but has since sold it). With Fast Ethernet as the transport, the effective throughput (data delivered divided by elapsed time) was 60Mbps to 70Mbps. Not bad, given that packet overhead is always present.

When run again using Gigabit Ethernet, the results always went up – but only marginally. The best result observed never broke 115Mbps, or 11.5 per cent of the theoretical maximum for unidirectional traffic.

Analyses of the traces show massive inefficiencies in how the back-up application moves data. Implemented “transparently” on top of the Common Internet File System protocol (aka Server Message Block), the application lets the Gigabit Ethernet link spend most of its time simply waiting for something to do. The lower layers – precisely where RDMA is focused on helping – were never the bottleneck in our tests.
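
Why does a tenfold-faster link barely help such an application? A rough stop-and-wait model makes the point; the block size and per-block wait below are hypothetical illustration values, not figures taken from our traces.

    # Rough stop-and-wait model of a CIFS-style transfer: send one block,
    # wait for the server's reply, send the next. Effective throughput is
    # block_size / (time on the wire + time spent waiting).
    # The 60KB block and 4ms wait are hypothetical values for illustration.

    def effective_mbps(link_mbps, block_bytes=60_000, wait_s=0.004):
        wire_s = (block_bytes * 8) / (link_mbps * 1_000_000)
        return (block_bytes * 8) / (wire_s + wait_s) / 1_000_000

    for link_mbps in (100, 1000):   # Fast Ethernet vs. Gigabit Ethernet
        print(f"{link_mbps:>4}Mbps link -> ~{effective_mbps(link_mbps):.0f}Mbps effective")

The precise numbers depend entirely on the assumed block size and wait, but the shape of the result is the point: a link ten times faster delivers well under twice the effective throughput, because it spends most of its time idle.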

Granted, things might be different in “server-to-server” applications, but my experience has shown that too many programmers seem to be blissfully unaware of how to write code that can take advantage of the underlying transport.

RDMA optimizes the process at the bottom of the stack. It is more efficient, reduces latency and offloads the server CPU. But will the presence of massive bottlenecks higher up in the stack make all of that irrelevant? The answer to that question might make all the difference in the world to vendors of RDMA products.
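
One way to frame that question is with a simple Amdahl’s-law bound: if only a fraction of the end-to-end transfer time is spent in the layers RDMA accelerates, the overall gain is capped no matter how good the NIC is. The fractions below are hypothetical, purely for illustration.

    # Amdahl's-law bound: if only a fraction of end-to-end transfer time is
    # spent in the layers RDMA accelerates, overall speedup is capped even if
    # those layers become infinitely fast. The fractions are hypothetical.

    def max_overall_speedup(accelerated_fraction):
        return 1 / (1 - accelerated_fraction)

    for fraction in (0.2, 0.5, 0.9):
        print(f"time in lower layers: {fraction:.0%} -> "
              f"best possible speedup ~{max_overall_speedup(fraction):.2f}x")

If an application spends only 20 per cent of its time in the lower layers, even an infinitely fast RDMA path buys at most a 1.25x improvement overall.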

Tolly is president of The Tolly Group, a strategic consulting and independent testing company in Boca Raton, Fla. He can be reached at [email protected].
