Matrix E1

Network World

Attention network executives with maxed-out backbones: Enterasys Networks Inc. has a switch for you. The Matrix E1, to be announced next week, offers a 10 Gigabit/sec (Gbps) interface that blazed through our laboratory tests with line-rate throughput and impressively low latency.

Enterasys is marketing the US$25,000 Layer 2 switch as an aggregation device that moves traffic between a single 10Gbps interface and 12 Gigabit Ethernet interfaces, a useful configuration for data centers and server farms.

On the downside, the 10Gbps interface on the Matrix E1 is not 10G Ethernet, but a proprietary variation. The 10Gbps offering uses optics the IEEE considered but rejected for 802.3ae, the 10G Ethernet standard. As a result, this interface won’t talk with 10Gbps interfaces from other vendors.

Note that all 10Gbps interfaces are proprietary now and will remain so until the IEEE ratifies the 802.3ae specification, probably between March and June 2002.

Enterasys will ship two standards-ready versions of 10G Ethernet interfaces before the IEEE ratifies 802.3ae, says John Pappas, director of technical marketing. Those include the 10GBase-LR and 10GBase-LX4 interfaces.

Upgrading to the standards-compliant versions will be simple because the modular 10Gbps interface on the Matrix E1 is field upgradable.

Setting up and managing the Matrix E1 is a snap. The device offers command-line and graphical user interface options for configuration, with the former bearing more than a passing resemblance to Cisco's IOS. While the command-line interface isn't a complete IOS clone, it's close enough that a newcomer to Enterasys products could navigate it within minutes.

For network management, the Matrix E1 offers an integrated remote monitoring (RMON) agent, a feature not always present in Layer 2 devices. The Matrix also supports the virtual LAN management information base (MIB).

Enterasys says it doesn’t plan to add Layer 3 IP routing support to the Matrix E1.

Unlike most early adopter products, the Matrix E1 10Gbps interface delivers line-rate throughput regardless of frame length or number of hosts.

To see how well the Enterasys boxes aggregated bandwidth, we set up a test bed with two Matrix E1 switches linked over a 10Gbps backbone. Then we pounded the backbone with line-rate traffic from a Smartbits traffic generator/analyzer from Spirent Communications PLC.

In the worst case, we offered 64-byte frames at line rate representing 5,000 hosts. That’s the equivalent of taking an entire mid-sized corporation’s traffic, cranking it up to 100 per cent utilization and then feeding it all into one interface. Average usage levels on corporate production nets are typically in the 10 per cent to 30 per cent range, with only occasional higher spikes. Even in this scenario, the Matrix E1 didn’t come close to crumpling.

We ran tests with 64-byte and 1,518-byte frames and emulated one host per interface. We offered traffic to ten Gigabit Ethernet interfaces on each Matrix E1, with all traffic destined across the 10Gbps backbone for ten matching interfaces on the other switch. Because we offered an aggregate of 10Gbps of traffic, the Matrix E1 should have been able to forward all traffic without loss. That's exactly what we saw. The Matrix E1 moved traffic at line rate in all cases. Even when we reconfigured the Smartbits analyzer to represent 250 logical hosts per interface, the result again was line-rate throughput.
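As a sanity check on those offered loads, line rate on Gigabit Ethernet can be computed from standard framing overhead. A quick sketch (the constants are Ethernet's standard 8-byte preamble and 12-byte inter-frame gap, not figures from our tests):

```python
# Line-rate frames per second on Gigabit Ethernet.
# Each frame on the wire occupies its own length plus
# an 8-byte preamble and a 12-byte inter-frame gap.
LINE_RATE_BPS = 1_000_000_000
OVERHEAD_BYTES = 8 + 12  # preamble + inter-frame gap

def max_fps(frame_bytes: int) -> int:
    """Maximum frames per second for a given Ethernet frame size."""
    bits_per_frame = (frame_bytes + OVERHEAD_BYTES) * 8
    return LINE_RATE_BPS // bits_per_frame

print(max_fps(64))    # 1488095 frames/sec per Gigabit interface
print(max_fps(1518))  # 81274 frames/sec
```

At 64 bytes, ten such interfaces per switch put roughly 14.9 million frames per second onto the 10Gbps backbone, which is why the short-frame test is the worst case.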

The results with large numbers of hosts represent a big improvement for Enterasys. Earlier switches such as the Cabletron SSR suffered from a poor hashing algorithm that caused massive frame loss whenever those switches were hit with a large number of entities – be they media-access control addresses, IP addresses or port numbers. With the Matrix E1 switches, Enterasys no longer has that problem, at least not at Layer 2.

For some applications, latency is an even more important metric than throughput. For delay-sensitive applications such as voice over IP and streaming media, even small amounts of delay can seriously degrade session quality.

Our measurements suggest the Matrix E1 won’t add much delay for any type of traffic. Per-frame latency was remarkably low regardless of the number of logical hosts we used.

The two Matrix switches introduced an average of about 6.5 microseconds of delay with 64-byte frames and an average of about 47 microseconds with 1,518-byte frames. Because we used two switches in our test bed, these measurements should be divided by two to describe single-box delay.
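Expressed as arithmetic, the single-box figures are simply half the two-switch measurements:

```python
# Average two-switch latency from our tests, in microseconds, keyed by frame size.
measured_us = {64: 6.5, 1518: 47.0}

# Two switches sat in the test path, so per-device delay is half the measured value.
single_box_us = {size: us / 2 for size, us in measured_us.items()}

print(single_box_us)  # {64: 3.25, 1518: 23.5}
```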

We also noted only negligible variation in average latency between tests with one logical host per interface and tests with 250 logical hosts. Ergo, it’s possible to add hosts to the network at will without fear of performance degradation.

For network managers looking to break bandwidth bottlenecks right now, Enterasys has a full 10Gbps pipe shipping next month.

Newman is president of Network Test, an independent benchmarking and network design consultancy in Westlake Village, Calif. He can be reached at dnewman@network

Switch jitters

While low latency is critical for delay-sensitive voice or video, it’s not the only requirement. Delay also should remain constant from frame to frame. Some voice and video applications are even more sensitive to variations in latency than latency itself.

To see how well Enterasys Networks’ Matrix E1 handles latency variation, we measured the delay added for every single frame we offered (more than two billion frames in some tests) and noted the difference between minimum and maximum measurements per stream.

In nearly all cases, latency variation was about the same as or greater than the latency measurement itself.

Latency for 1,518-byte frames with 250 logical hosts per interface averaged around 47 microseconds. However, maximum latency averaged about 93 microseconds per stream, and minimum latency averaged about 28 microseconds per stream. The result is an average latency range of about 65 microseconds, roughly 38 per cent higher than the average latency itself.
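The per-stream numbers reduce to simple arithmetic (values taken from our measurements above):

```python
# Per-stream averages for 1,518-byte frames, 250 logical hosts per interface.
avg_us = 47.0  # average latency, microseconds
max_us = 93.0  # average per-stream maximum latency
min_us = 28.0  # average per-stream minimum latency

# Latency variation is the spread between the extremes.
jitter_range_us = max_us - min_us
# The 65-microsecond range exceeds the 47-microsecond average latency itself.
print(jitter_range_us)  # 65.0
```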

The variations aren’t enough to upset most applications by themselves; performance degradation really begins to show up in the millisecond range. However, two factors are worth bearing in mind. First, we ran our tests on a congestion-free network. In a congested network queuing certainly will add to latency, and latency variation will only make matters worse.

Second, latency and latency variation are cumulative, so a network composed of many Enterasys devices might begin to show significant latency variation. Given the low-microsecond delay numbers posted by the Matrix E1, this second factor probably won't be a concern for any but the largest enterprise networks.

Prices listed are in US currency.