
Explosive power of 100G Ethernet


The IEEE’s latest project could significantly ramp up the speed of traffic delivery across the Internet.

In November, the IEEE’s Higher Speed Study Group announced it was working to create a 100G Ethernet standard, and it has since laid out a tentative timeline for benchmarks along the way, including formation of a formal task force by the middle of this year and delivery of the final spec by 2010.

While these goals seem attainable, the study group members are in a race against time to accommodate the increasing demands of content creators and consumers around the world. I recently spoke with John D’Ambrosia, chairman of the study group and scientist of components technology at Force10 Networks, about the impact 100G Ethernet will have on the network industry.

What is driving the need for 100G Ethernet? There are many applications where you’re seeing the need for 100G emerging. Some examples are Internet exchanges, carriers and high-performance computing. You’re also seeing a need when you look at what’s happening with personalized content, which includes video delivery such as YouTube, IPTV and HDTV. There’s also video on demand. All of this together is driving the need for 100G Ethernet.

Consumers are also contributing to this. For instance, people have digital cameras that churn out large files that they want to share across the Internet. Content-generation capabilities are increasing rapidly at both the professional and consumer level. This is creating a basic ecosystem problem — people are sharing content at a higher level, and all of that has to feed into today’s pipes.

Is there enough bandwidth today to meet the needs of businesses, content providers and consumers? You do have 10G Ethernet already, and if you use link aggregation, which lets you pool your 10G links to create a bigger pipe, you can go higher. But bandwidth needs are quickly surpassing those limits.

When we did an analysis to check the viability of a 100G Ethernet standard, we found that the top supercomputers could already use that much bandwidth today.

However, these standards are not something you whip out in 18 months. Right now we’re trying to define what will be in the 100G project. That’s a time-consuming process — you have to create baseline proposals, develop the spec and get comments. We have to go through the document and make sure we got everything right.

But, yes, we are hearing people say we need it now, even though a final spec is at least three to four years away.

Do you foresee a lot of pre-standard technology on the market? Some companies are already talking about 100G. I think the reality is that there are a lot of different technologies that are going to be needed to fully deploy 100G Ethernet. You’ll need new optics, backplanes and chip technologies. 10G backplanes won’t be sufficient, so you need to make a leap there.

As for pre-standard 100G Ethernet, [IT managers] are very nervous about going that route. They’ll do it if they have to, but hesitantly. And they will surely keep their eyes on what’s happening in the standards bodies.

You mentioned that companies are using link aggregation to get to higher speeds today. Why is that a problem? Link aggregation scales up to a limit and then it becomes an issue. Depending on who you talk to, you’ll hear that two, four or eight links can be aggregated before you run into management and troubleshooting issues. Those cables also take up precious real estate, and you have power and cooling considerations. Tying up ports for link aggregation creates lost revenue opportunities, too, because any port that’s tied up is not bringing in revenue. And it’s not easy to predict costs as you scale link aggregation.
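To put that trade-off in perspective, here is a rough back-of-the-envelope sketch in Python. Only the two-, four- and eight-link counts come from the interview; the framing of the comparison is illustrative, not a vendor calculation.

```python
# Back-of-envelope comparison of aggregated 10G links vs. a single 100G link.
# The 2-, 4- and 8-link counts come from the interview; the rest is illustrative.

LINK_SPEED_GBPS = 10  # capacity of one 10G Ethernet link

for links in (2, 4, 8):
    aggregate_gbps = links * LINK_SPEED_GBPS
    print(f"{links} x 10G aggregated: {aggregate_gbps} Gbit/sec over {links} ports")

# A single 100G link would exceed even the 8-link aggregate while occupying
# one port, one cable run and one link to manage and troubleshoot.
print("1 x 100G: 100 Gbit/sec over 1 port")
```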

There is a lot of talk that the YouTube phenomenon is among the key drivers for a 100G Ethernet standard. Are there other issues out there that a 100G Ethernet spec will solve? YouTube is interesting — it’s experiencing 20 percent traffic growth per month and is constantly adding 10G links to support this growth. However, YouTube is not the only reason for 100G.
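For a sense of what 20 percent monthly growth compounds to, here is a quick sketch; the growth figure is the one cited in the interview, while the 12-month horizon is purely illustrative.

```python
# Compound 20 percent month-over-month traffic growth (the figure cited for
# YouTube in the interview) over 12 months; the horizon is illustrative.

monthly_growth = 0.20
months = 12
factor = (1 + monthly_growth) ** months
print(f"Traffic after {months} months: about {factor:.1f}x today's level")  # ~8.9x
```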

The study group has had to prove that there is a need by addressing five criteria: broad market potential, compatibility, technical feasibility, economic feasibility and distinct identity. This has to be a unique and necessary solution.

A major part of this is broad market potential. You don’t generate a spec for one customer that’s out there. While YouTube is one of the content providers I talked about earlier in terms of applications and exploding bandwidth requirements, it is not the only one.

We are also considering the move to HDTV in many households. Comcast puts standard-definition traffic at about 3.5M bit/sec. versus 19M bit/sec. for high definition. If you look at the number of HDTVs being sold, supporting that higher rate becomes critical.
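Using those per-stream rates, a quick sketch shows how many simultaneous streams fit in a 10G link versus a 100G link; the link sizes are chosen for illustration, the rates are the Comcast figures quoted above.

```python
# Simultaneous video streams per link, using the per-stream rates quoted from
# Comcast (3.5 Mbit/sec standard definition, 19 Mbit/sec high definition).
# The link sizes are illustrative.

SD_MBPS = 3.5
HD_MBPS = 19.0

for link_gbps in (10, 100):
    link_mbps = link_gbps * 1000
    sd_streams = int(link_mbps // SD_MBPS)
    hd_streams = int(link_mbps // HD_MBPS)
    print(f"{link_gbps}G link: ~{sd_streams} SD streams or ~{hd_streams} HD streams")
```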

Will the typical IT or data centre manager be affected by the move to 100G Ethernet? People with large data centres will start to feel it if they don’t feel it already. Applications will start driving the bandwidth requirements of both aggregated and individual links. One IT manager I know works in construction, and he told me he could already use 100G today because of the reports his vertical application generates. Each report uses about 30M bit/sec. or 40M bit/sec. of bandwidth. He’s got a 60G pipe handling the load, but he worries that new platforms such as Vista might change his requirements. He’s already looking for workstations with 10G Ethernet links.
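A rough sketch of the headroom in that example, using only the figures quoted (30M to 40M bit/sec. per report on a 60G pipe); the arithmetic is an illustration, not the manager’s own capacity planning.

```python
# Headroom in the construction example: reports at 30M to 40M bit/sec each on
# a 60G pipe. The figures come from the interview; the arithmetic is only a
# rough illustration.

PIPE_GBPS = 60

for report_mbps in (30, 40):
    concurrent_reports = (PIPE_GBPS * 1000) // report_mbps
    print(f"At {report_mbps} Mbit/sec per report: ~{concurrent_reports} concurrent reports")
```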

The medical industry is another example. The folks working on mapping the human genome could use 100G to share information among university research groups; they already generate reams and reams of data. There are also MRIs: the bandwidth requirements for these imaging machines are phenomenal. They can generate 500M bytes of data an hour. And in some cases the diagnostics for those images are now handled offshore. That’s a lot of data to send back and forth.

And finally, there’s disaster recovery and backup, which companies have to deal with. All the data we’re creating and consuming, personally and professionally, has to be stored and protected.

Is there enough cooperation in the study group to make this happen on time? I have to commend the study group for all the hard work they are doing to move this process forward. The IEEE requires 75 percent or greater for consensus building, and we’ve been able to achieve that. I just can’t speak highly enough about the members of this group and their commitment levels.

Sandra Gittlen is a freelance technology editor near Boston. Former events editor and writer at Network World, she developed and hosted the magazine’s technology road shows. She is also the former managing editor of Network World’s popular networking site, Fusion. She has won several industry awards for her reporting, including the American Society of Business Publication Editors’ prestigious Gold Award. She can be reached at sgittlen@charter.net.

