A few years back, when I was buying tests instead of running my own benchmarks, I asked a prospective supplier what tool it used to measure Ethernet switch performance. Its answer: Microsoft Word. I kid you not.
In one respect, this makes sense: a key issue with network devices is how well they enhance productivity. So why not measure performance enhancement with an application end users actually run over the network?
I have two responses to that, one of which is unfit for a family publication. The other underlines how difficult – and how important – it is to correlate network and application performance.
This approach would have given me a number, but I’d have had no idea how much of that number was actually attributable to the switch we tested. Some unknown share of it would have been due to extraneous factors such as network interface card hardware and drivers, disk I/O, the IP stack, the operating system and, of course, Word itself.
This entire approach would violate a cardinal rule of testing, which is to isolate one variable at a time.
And then there would be the issue of scale. Some Ethernet switches support many thousands of sessions. Scaling tests to that level would have meant using hundreds or thousands of PCs, with the attendant configuration and synchronization nightmares.
None of the foregoing should imply that I’m against application-layer testing. On the contrary, it’s critically important. But when measuring network devices, it’s even more important to correlate application performance with lower-layer events in the network.
Consider the effects of TCP behavior. It’s been estimated that 90 per cent of TCP-based Internet traffic arrives out of sequence, leading to retransmissions and lost connections, which degrade application performance. But by how much? The TCP state machine is complex, and most available tools don’t correlate events at the transport and application layers.
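To see why out-of-sequence arrival matters, consider how a TCP receiver reacts to it. Here’s a minimal sketch (not from any tool mentioned in this column, and using a simplified segment-level model with made-up arrival orders) of how segments arriving out of order generate duplicate ACKs, and how three duplicate ACKs trigger the fast-retransmit behavior that shows up at the application layer as delay:

```python
# Minimal sketch of a TCP receiver's cumulative-ACK behavior.
# Segments are modeled as whole units (0, 1, 2, ...) rather than byte ranges;
# the arrival order below is illustrative, not measured data.

def process_arrivals(arrival_order):
    """Simulate cumulative ACKs for segments arriving in the given order.

    Returns (acks, fast_retransmits): the ACK value sent after each arrival,
    and how many times three duplicate ACKs accumulated for the same value,
    which is the standard trigger for a fast retransmit at the sender.
    """
    received = set()
    next_expected = 0
    acks = []
    for seg in arrival_order:
        received.add(seg)
        # A cumulative ACK names the next in-order segment still missing.
        while next_expected in received:
            next_expected += 1
        acks.append(next_expected)

    # Count fast-retransmit triggers: three duplicate ACKs in a row.
    fast_retransmits = 0
    dup_run = 0
    for prev, cur in zip(acks, acks[1:]):
        if cur == prev:
            dup_run += 1
            if dup_run == 3:
                fast_retransmits += 1
        else:
            dup_run = 0
    return acks, fast_retransmits

# Segment 1 is delayed until the end; segments 2-4 each produce a
# duplicate ACK for segment 1, enough to trigger one fast retransmit.
acks, fast_retransmits = process_arrivals([0, 2, 3, 4, 1])
```

With the arrival order above, the receiver ACKs segment 1 four times before the straggler fills the gap, so the sender would retransmit segment 1 even though it was never lost. That one transport-layer event stalls the application’s data stream, which is precisely the kind of cause-and-effect a seven-layer tool needs to expose.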
This isn’t just a problem for us testing types. The dearth of scalable seven-layer test tools makes it difficult for vendors and end users to get valid assessments of all kinds of devices with Layer 7 capabilities – boxes such as server load balancers, proxy caches and VPN gateways.
Fortunately, test equipment vendors are beginning to release seven-layer products. Spirent Communications is developing a tool called WebSuite. The first module tests firewalls by trying to open 10,000 TCP connections per second while simultaneously launching a denial-of-service attack, and it reports on events at Layers 2 through 7.
Another tool is WebAvalanche from Caw Networks. The vendor says this Web traffic generator can issue 15,000 Web requests per second and sustain 1 million concurrent TCP connections. I’ve worked with WebAvalanche, and while I can’t go into details I can say Caw’s claims, if anything, are understated. Better still, the Caw tool directly correlates behavior at Layer 3 through Layer 7.
It’s good to see companies like Spirent and Caw taking up the seven-layer challenge, but they need company. What the industry really needs is more tools that measure the network at all layers, not just up where Word lives.
Newman is president of Network Test, an independent benchmarking and network design consultancy in Westlake Village, Calif. He can be reached at [email protected]