How Google’s OpenFlow backbone works

FRAMINGHAM, Mass. — Google, an early backer of software-defined networking and OpenFlow, shared some details at the recent Open Networking Summit about how the company is using the technology to link 12 worldwide data centers over 10G links. I caught up with Google principal engineer Amin Vahdat to learn more.

Why did you guys set out down the OpenFlow path? What problem were you trying to solve?

We have a substantial investment in our wide-area network and we continuously want to run it more efficiently. Efficiency here also means improved availability and fault tolerance. The biggest advantage is being able to get better utilization of our existing lines. The state of the art in the industry is to run your lines at 30 to 40 per cent utilization, and we're able to run our wide-area lines at close to 100 per cent utilization, just through careful traffic engineering and prioritization. In other words, in the case of failures we can protect the high-priority traffic by letting the impact fall on elastic traffic that doesn't have any strict deadline for delivery. We can also route around failed links using non-shortest-path forwarding, again with a global view of network topology and dynamically changing communication characteristics.

Standard network protocols try to approximate an understanding of global network conditions based on local communication. In other words, everybody broadcasts their view of the local network state to everybody else. This means if you want to implement any global policy using standard protocols you're essentially out of luck; there is no central control plane that you can tap into. What OpenFlow gives us is a logically centralized control plane that has a global view of the entire network fabric and can make calculations and determinations based on that global state.
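To make this concrete, here is a minimal sketch, in Python, of what a logically centralized control plane with a global view buys you: the full topology and every demand's priority sit in one process, failed links are dropped from the graph before paths are computed, and lower-priority elastic traffic is what gets squeezed when capacity runs short. Everything below (the class names, the hop-count paths, the greedy allocation) is an illustrative assumption, not Google's actual software.

```python
# Illustrative sketch only: a toy "global view" controller that routes
# around failed links and allocates capacity by priority. None of these
# names come from Google's system; they are hypothetical.
import heapq
from collections import defaultdict

class GlobalView:
    def __init__(self):
        self.capacity = {}   # (site_a, site_b) -> Gbps, stored in both directions
        self.failed = set()  # links currently reported down

    def add_link(self, u, v, gbps):
        self.capacity[(u, v)] = gbps
        self.capacity[(v, u)] = gbps

    def neighbors(self, u):
        for (a, b) in self.capacity:
            if a == u and (a, b) not in self.failed and (b, a) not in self.failed:
                yield b

    def path(self, src, dst):
        # Hop-count Dijkstra over the *current* topology; because failed links
        # are excluded, post-failure detours (non-shortest paths on the full
        # topology) fall out naturally.
        dist, prev, pq = {src: 0}, {}, [(0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v in self.neighbors(u):
                if d + 1 < dist.get(v, float("inf")):
                    dist[v], prev[v] = d + 1, u
                    heapq.heappush(pq, (d + 1, v))
        if dst not in dist:
            return None
        hops, node = [dst], dst
        while node != src:
            node = prev[node]
            hops.append(node)
        return hops[::-1]

def allocate(view, demands):
    """Greedy stand-in for traffic engineering: admit demands highest
    priority first, so elastic traffic absorbs any shortfall."""
    used = defaultdict(float)
    placements = []
    for d in sorted(demands, key=lambda x: x["priority"], reverse=True):
        p = view.path(d["src"], d["dst"])
        if not p or len(p) < 2:
            continue
        links = list(zip(p, p[1:]))
        headroom = min(view.capacity[l] - used[l] for l in links)
        granted = min(d["gbps"], max(headroom, 0.0))
        for l in links:
            used[l] += granted
        placements.append((d["name"], granted, p))
    return placements
```

A real traffic-engineering server would solve a global optimization rather than this greedy pass, but the shape is the same: one place that sees the whole fabric and every priority, instead of each router guessing from local broadcasts.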

One hundred percent utilization is incredible. And you can do that without fear of catastrophe?

Right, because we can differentiate traffic. In other words, we are very careful to make sure that, in the face of catastrophe, the traffic that is impacted is the relatively less important traffic.

Is control of the network completely removed from the routing hardware and shifted to servers?

You used an interesting word — completely. There are going to be some vestiges of control left on the device itself, but for simplicity's sake let's say it's completely removed. We've shifted it from running on an embedded processor in individual switches — and that embedded processor is usually two or three generations old; if you open up a brand new switch today it wouldn't surprise me if you found an 8-year-old PowerPC processor — to a server, which could have the latest-generation multicore processor. So getting 10X performance improvements is easy, and even more than that isn't hard.

I understand you built your own gear for this network?

We built our own networking gear because when we started the project two years ago there was no gear that could support OpenFlow.

Will you continue to use your homegrown gear or make the shift as companies come out with OpenFlow tools?

Our position is that, once there are switches out there that deliver the functionality we need with a nice OpenFlow agent, we would be very open to them.

Is there a hell of a lot of difference between a switch and a server these days anyway, besides the interfaces, obviously?

Great question. I think that there is a fair amount of difference in terms of instruction set and flexibility, but there certainly is an increasing amount of similarity. One thing I think the switching world would benefit from is an increasing amount of programmability. Having more flexibility in being able to do different things with different bits in your packets would be useful. There are some startups looking in this direction.

I understand another key benefit of SDN/OpenFlow is being able to play with a lot of “what if” scenarios to enable you to fine-tune the network before going live.

Exactly. So one of the key benefits we have is a very nice emulation and simulation environment where the exact same control software that would be running on servers might be controlling a combination of real and emulated switching devices. And then we can inject a number of failure scenarios under controlled circumstances to really accelerate our test work.
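A toy version of such a harness, assuming nothing about Google's actual test framework: the control logic is pointed at emulated switches, and a campaign of injected link failures checks an invariant after each one and then repairs the link. All of the names below are hypothetical.

```python
# Illustrative sketch only: drive the same control logic against emulated
# switches and inject link failures under controlled conditions.
# Every name here is hypothetical, not Google's test framework.
import random

class EmulatedLink:
    def __init__(self, a, b):
        self.ends = (a, b)
        self.up = True

class ControlPlaneUnderTest:
    """Stand-in for the production controller; in the setup described above,
    the unmodified control software would sit behind this interface."""
    def __init__(self, links):
        self.links = links

    def on_link_down(self, link):
        link.up = False   # real code would recompute paths here

    def on_link_up(self, link):
        link.up = True

    def usable_links(self):
        return [l for l in self.links if l.up]

def failure_campaign(control, nodes, trials=50, seed=7):
    """Fail one link at a time, check an invariant, then repair it."""
    rng = random.Random(seed)
    for _ in range(trials):
        victim = rng.choice(control.links)
        control.on_link_down(victim)
        # Stand-in for the real checks: high-priority traffic protected,
        # no blackholed flows, convergence time within budget.
        assert len(control.usable_links()) >= len(nodes) - 1, "possible partition"
        control.on_link_up(victim)

nodes = ["dc1", "dc2", "dc3"]
links = [EmulatedLink("dc1", "dc2"), EmulatedLink("dc2", "dc3"),
         EmulatedLink("dc1", "dc3")]
failure_campaign(ControlPlaneUnderTest(links), nodes)
```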

Are you actually pumping in fake traffic?

Yes, there’s some amount of fake traffic. Obviously, we’re not necessarily able to reproduce the complete scale. The nice thing is, if you think in terms of the total amount of traffic we might have in a data center, it’s going to be substantially larger than total WAN traffic, so while our WAN traffic is substantial, LAN traffic is substantially more.

So you cut over this new network but didn't take down the old one. What do you estimate the new network is accounting for in terms of your total inter-data-center load?

Over a two-year period we have been shifting, incrementally, over to the new network, and it’s fair to say that a substantial majority of the traffic is now on the new network.

Was OpenFlow fully baked as you were implementing it, or did you have to improvise a lot?

We had to improvise. OpenFlow is standardizing the interface, and I think this is very important for the community. So what OpenFlow and software-defined networking really enables us to do is separate the evolution path for hardware and software. In other words, you can get the hardware that meets your needs and separate that from the software that meets your needs for a particular deployment. Historically, those two things have been wedded together.

So from an OpenFlow standardization perspective, it’s very, very important to have hardware that can interoperate with a variety of software controllers. From our side, since we’re building our own hardware, that was less important. But we definitely had to improvise, and certainly the OpenFlow standard has evolved and we’ve had to be nimble with that internally.

Did you have any setbacks of note?

I think Urs Hölzle [senior vice president of technical infrastructure and Google Fellow] said it best when he said it actually has gone more smoothly than he expected, with less downtime. The main issue we ran into from an OpenFlow perspective is that the first version doesn't fully allow you to take advantage of all of the hardware capabilities in modern switch silicon in an easy way. That's not to say it's not possible, it's just not easy. So we had to do some work to get around some of those mismatches and, if you will, interface to them. But this is now substantially improved from the OpenFlow standard perspective.

How far away is OpenFlow from being fully baked?

I think it’s going to be a multi-year process, but the message we want to send is it’s at a point now where it’s incredibly useful and can deliver substantial benefits in a variety of settings.

Given the advantages, do you expect to see service providers move to OpenFlow?

Well, we certainly hope so. What we’re hearing from the large service providers is that it is difficult to scale and make money. With OpenFlow I think we’ve demonstrated how to make your network much, much more efficient.

What are the next steps?

The whole industry is just getting going. I think five years from now we're going to be able to look back with some sense of accomplishment. But we have the baseline for introducing new functionality and now we can do that much more rapidly than we could have otherwise. For example, we have taken the first steps in our optimization algorithms for managing traffic, but now we can deploy a whole range of new, more advanced optimization techniques. At a technical level, we need to tighten the control loop. Today the time to measure, react and reprogram is a big challenge in software-defined networking overall, because many of these software and hardware components weren't designed with a tight control loop in mind. So we have to address that.
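As a rough illustration of what "tightening the control loop" means, here is the skeleton of such a loop in Python. The three callables are placeholders, not real APIs; the point is simply that measurement, recomputation and switch reprogramming each add latency, and that latency bounds how quickly the network can react.

```python
# Illustrative sketch only: the measure -> react -> reprogram cycle that a
# centralized controller runs. The three callables are hypothetical hooks.
import time

def control_loop(collect_stats, compute_allocation, program_switches,
                 period_s=1.0, budget_s=0.5):
    """Run one iteration per period and flag iterations that blow the budget."""
    while True:
        start = time.monotonic()
        stats = collect_stats()                  # measure: link state, utilization, demands
        allocation = compute_allocation(stats)   # react: re-run traffic engineering
        program_switches(allocation)             # reprogram: push flow-table updates
        elapsed = time.monotonic() - start
        if elapsed > budget_s:
            print(f"control loop overran its budget: {elapsed:.3f}s")
        time.sleep(max(0.0, period_s - elapsed))
```

Shrinking each of those three stages, plus the hardware's own programming latency underneath them, is what tightening the loop amounts to.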

Is your network controlled from a single NOC?

No, it is replicated and distributed for fault tolerance. And again, from a community perspective and certainly from our own perspective, coming up with the right software architectures for replicated, distributed control in an SDN paradigm is fundamental. Getting that right in a repeatable way will be a very important challenge for the community in the next few years.
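For a sense of what replicated control can look like, here is a deliberately simplified sketch of one common pattern: a few controller replicas share a consistent store and take turns holding a leader lease, so when the leader dies its lease expires and another replica takes over. This is an assumption about the general pattern, not a description of Google's architecture.

```python
# Illustrative sketch only: leader-lease failover among controller replicas.
# The shared dict stands in for a replicated, consistent store; all names
# are hypothetical.
import time

class ControllerReplica:
    LEASE_S = 3.0

    def __init__(self, name, store):
        self.name = name
        self.store = store   # would be a quorum-backed store in practice

    def try_lead(self, now=None):
        """Grab or renew the leader lease; only the leader programs switches."""
        now = time.monotonic() if now is None else now
        leader, expires = self.store.get("leader", (None, 0.0))
        if leader in (None, self.name) or now > expires:
            self.store["leader"] = (self.name, now + self.LEASE_S)
            return True
        return False

store = {}
a, b = ControllerReplica("ctl-a", store), ControllerReplica("ctl-b", store)
assert a.try_lead() and not b.try_lead()   # exactly one leader at a time
```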

Anything that we didn't touch on that you think is important to get out?

One of the key points here is that the Internet has been remarkably successful and really couldn’t have gotten to the point that it’s gotten to without fully decentralized control and operation. But for it to get to the next level it does require logically centralized control. In other words, logical centralization can be fundamentally more efficient. We have built these amazing protocols over a 40-year period and now we’re essentially transitioning to maybe the next step in the Internet’s growth.

(John Dix is editor in chief of Network World U.S.)
