Jim Quinn: The life and times of resilient Ethernet

In the history of data networks and the high-tech market, few technologies have managed to remain a dominant force for decades. Ethernet is one of them. It has proven resilient in a number of ways.

I had the good fortune in the early ’80s to be at Digital Equipment Corp. when the experimental Ethernet systems from Robert Metcalfe and David Boggs of Xerox PARC were being commercialized. This was an early sign of Ethernet’s ability to grow from its starting rate of 2.94Mbps to 10Mbps. Ethernet won the battle over token-ring architectures through lower-cost components, pragmatic media choices and the virtues of Carrier Sense Multiple Access/Collision Detection (CSMA/CD). Then came a period of change, with Ethernet continually adapting itself to new definitions and responding to new requirements.
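For readers who never lived with shared-media Ethernet, the heart of CSMA/CD’s collision recovery is truncated binary exponential backoff. A minimal sketch in Python (an illustration of the algorithm, not Ethernet silicon; the function name is mine):

```python
import random

def backoff_slots(collision_count, max_exponent=10):
    """Truncated binary exponential backoff as used by classic CSMA/CD:
    after the n-th collision on a frame, a station waits a random number
    of slot times drawn uniformly from 0 .. 2^min(n, 10) - 1 before
    retransmitting, so colliding stations spread out over time."""
    k = min(collision_count, max_exponent)
    return random.randrange(2 ** k)

# After the 1st collision a station waits 0 or 1 slot times;
# after the 3rd, anywhere from 0 to 7; the window is capped at 1023 slots.
```

The randomized, widening window is what let many stations share one cable with cheap hardware and no central arbiter.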

The latter part of the ’90s saw in rapid succession: the refinement of the Ethernet specifications to support 100Base-T (100Mbps on Category 5 wiring), the movement to full duplex (to remove the need for collision detection and thereby allow for the increase in distance) and then the development of gigabit Ethernet. The 21st century shows the continuation of this heritage to adapt to changing requirements and still maintain the essence of the protocol with developments in the 10 gigabit arena.

The questions I pose to anyone:

– Why does Ethernet survive the challenges?

– Why is there still a long-range expansion of Ethernet, despite the fact that it is the senior citizen in the high-technology evolution game?

As I puzzled over the longevity of Ethernet, the answer came to me at SuperComm when I talked to the people from the Ethernet in the First Mile Alliance. Two major realizations came from my discussions with them.

The first is that the general view of most of the communications world is that the customer is at the “last” mile, not the first. The last-mile point of view has the central office as the major point of focus, with the customer on the periphery.

For Ethernet, there has always been the strong view of the consumer as the focus – even as far back as the early days at Xerox PARC of having an easy way to share printers. This is clearly manifested in the ongoing theme to reduce development costs by assuring that the same driver design will work regardless of the underlying media type or its speed.
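The idea that one driver design works regardless of media or speed can be sketched as a simple layering: the driver talks to an abstract interface, and only the media-specific layer underneath changes. A hypothetical sketch (all class names are mine, not from any real driver stack):

```python
class EthernetPHY:
    """Hypothetical media-specific layer; only speed and media differ."""
    def __init__(self, name, speed_mbps):
        self.name = name
        self.speed_mbps = speed_mbps

    def transmit(self, frame):
        # A real PHY would encode and signal; here we just report the send.
        return f"{self.name} sent {len(frame)} bytes at {self.speed_mbps}Mbps"


class EthernetDriver:
    """The driver logic is identical regardless of which PHY is plugged in."""
    def __init__(self, phy):
        self.phy = phy

    def send(self, frame):
        return self.phy.transmit(frame)


# The same driver code runs unchanged over 10Base-T or gigabit media:
for phy in (EthernetPHY("10Base-T", 10), EthernetPHY("1000Base-SX", 1000)):
    print(EthernetDriver(phy).send(b"payload"))
```

Because only the PHY object varies, development cost for a new media type is confined to the bottom layer, which is the economic point the column is making.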

The second was the open acceptance of new and different media types, making delivery of Ethernet to consumers fit their needs and the needs of the service provider. Some of the new approaches include the use of a single fibre with different lambdas for the upstream and downstream, and the use of passive optical networks, where the downstream is a shared broadcast and the upstream is managed via TDM by controlling the transmitting lasers. There is also Ethernet over DSL and the changes necessary to support Ethernet over voice-grade copper.
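The PON upstream discipline described above amounts to the head end granting each subscriber unit an exclusive transmit window so the lasers never overlap on the shared fibre. A toy round-robin grant schedule, assuming hypothetical names (real PONs use dynamic bandwidth allocation, not fixed round-robin):

```python
def schedule_upstream(onus, num_cycles):
    """Toy round-robin TDM grant schedule for a PON upstream: in each
    cycle the head end grants one transmit slot to each subscriber unit
    (ONU), so only one laser is on at a time on the shared fibre."""
    grants = []
    for cycle in range(num_cycles):
        for slot, onu in enumerate(onus):
            grants.append((cycle, slot, onu))
    return grants

# Each (cycle, slot, onu) tuple is an exclusive transmit window.
grants = schedule_upstream(["ONU-1", "ONU-2", "ONU-3"], num_cycles=2)
```

The downstream needs no such scheduling: it is a broadcast that every unit hears, with each unit filtering for its own frames.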

What has kept Ethernet strong and growing is listening to the user community and being open to change in response to the operating environment without compromising the essentials. Maybe this is a model we all should look at the next time we are arguing that our technology is the “right” one.

Quinn is vice-president of technology at The Tolly Group. He can be reached at jquinn@tolly.com. Kevin Tolly is on vacation.