OpenFlow demystified: A vendor’s primer

FRAMINGHAM — This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
By Omar Baldonado, head of product management, Big Switch Networks.

OpenFlow, the new networking technology that recently burst out of academia and into industry, has generated considerable buzz since Interop Las Vegas 2011. The protocol is simple, but its implications for network architectures and the overall US$16 billion switching market are far-reaching.

I’ll review OpenFlow’s origins and the variety of problems it is solving, describe its current common architecture, and then explain why OpenFlow is such a disruptive technology, one that will revolutionize how network functionality is delivered and how networks are designed and operated.

Origins
OpenFlow began at a consortium of universities, led by Stanford and Berkeley, as a way for researchers to use enterprise-grade Ethernet switches as customizable building blocks for academic networking experiments. They wanted their server software to have direct programmatic access to a switch’s forwarding tables, and so they created the OpenFlow protocol. The protocol itself is quite minimal — a 27-page spec that is an extremely low-level, yet powerful, set of primitives for modifying, forwarding, queuing and dropping matched packets. OpenFlow is like an x86 instruction set for the network, upon which layers of software can be built.
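Those primitives boil down to a simple shape: a flow-table entry matches on packet header fields and carries a list of actions (forward, modify, queue, drop). The sketch below is a hypothetical, simplified data model written for illustration; it is not the wire format or any real OpenFlow library.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, simplified model of an OpenFlow flow-table entry:
# match fields select packets; actions say what to do with them.

@dataclass(frozen=True)
class Match:
    in_port: Optional[int] = None    # None acts as a wildcard
    eth_dst: Optional[str] = None
    eth_type: Optional[int] = None

@dataclass
class FlowEntry:
    match: Match
    actions: list                    # e.g. ["output:2"], ["drop"]
    priority: int = 0

def lookup(table, pkt):
    """Return the actions of the highest-priority matching entry, or drop."""
    for entry in sorted(table, key=lambda e: -e.priority):
        m = entry.match
        if ((m.in_port is None or m.in_port == pkt["in_port"]) and
                (m.eth_dst is None or m.eth_dst == pkt["eth_dst"]) and
                (m.eth_type is None or m.eth_type == pkt["eth_type"])):
            return entry.actions
    return ["drop"]

table = [
    FlowEntry(Match(eth_dst="aa:bb:cc:dd:ee:ff"), ["output:2"], priority=10),
    FlowEntry(Match(), ["controller"], priority=0),  # table-miss: ask the controller
]
print(lookup(table, {"in_port": 1, "eth_dst": "aa:bb:cc:dd:ee:ff",
                     "eth_type": 0x0800}))  # ['output:2']
```

The table-miss entry at priority 0 is what hands unmatched packets up to the server software, which is where the "instruction set" analogy pays off: everything above this primitive is programmable.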

In an OpenFlow network, the various control plane functions of an L2 switch — Spanning Tree Protocol, MAC address learning, etc. — are determined by server software rather than switch firmware. The early researchers went even further when defining the protocol, allowing an OpenFlow controller and switch to perform many other traditional control functions (such as routing, firewalling and load balancing).
Today, the OpenFlow protocol has moved out of academia and is driven by the Open Networking Foundation, a nonprofit industry organization whose members include many major networking equipment vendors and chip technology providers, and whose board comprises some of the largest network operators in the world: Google Inc. [Nasdaq: GOOG], Microsoft Corp. [Nasdaq: MSFT], Yahoo, Facebook, Deutsche Telekom and Verizon Communications Inc. [Nasdaq: VZ].

Evolving use
Three years ago, OpenFlow was driven entirely by a small number of universities and (gracious) switch vendors who believed in supporting research. OpenFlow allowed those researchers to test brand-new protocol designs and ideas safely on slices of production networks and traffic.

Two years ago, the programmability of OpenFlow started to attract interest from hyper-scale data center networking teams looking for a way to support massive map-reduce/Hadoop clusters. These clusters have a very specific network requirement: Every server needs equal networking bandwidth to every other server, a requirement commonly known as “full cross-sectional bandwidth.” (Note this is not the norm in large data centers today; over-subscription of 8x-32x is often seen to control costs.)
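That 8x-32x figure is simply the ratio of server-facing capacity to uplink capacity on a switch. A quick back-of-the-envelope in Python (the port counts below are illustrative, not from the article):

```python
def oversubscription(downlinks, downlink_gbps, uplinks, uplink_gbps):
    """Ratio of server-facing bandwidth to uplink bandwidth on one switch."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# Example: a top-of-rack switch with 48 x 10G server ports and 4 x 40G uplinks.
print(oversubscription(48, 10, 4, 40))   # 3.0, i.e. 3:1 oversubscribed

# "Full cross-sectional bandwidth" means a 1:1 ratio: no oversubscription.
print(oversubscription(48, 10, 12, 40))  # 1.0
```

Driving that ratio to 1:1 across an entire data center is what pushes operators toward multipath fabric designs, which is where fine-grained, programmatic control of forwarding becomes attractive.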

Shortly thereafter, public cloud (IaaS) operators began investigating OpenFlow architectures as they saw the same full cross-sectional bandwidth requirements from tenants with large numbers of VMs widely dispersed across many racks and rows throughout their data centers. These IaaS providers were equally driven by the need for strong multi-tenancy support, a requirement that outpaces the capabilities of traditional scripts and VLANs due to scale and speed restrictions. This spawned a new class of OpenFlow applications and research into network virtualization.

Today, it is this multi-tenant networking use of OpenFlow that is leading the way as OpenFlow moves from hyper-scale data centers to IaaS providers and the enterprise data center.

Architecture in practice
Most current OpenFlow solutions use a three-layer architecture, where the first layer consists of the all-important OpenFlow-enabled Ethernet switches. Typically, these are physical Ethernet switches that have the OpenFlow feature enabled. We’ve also seen OpenFlow-enabled hypervisor/software switches and OpenFlow-enabled routers. More devices are certainly coming.

There are two layers of server-side software: an OpenFlow Controller and OpenFlow software applications built on top of the Controller.

The Controller is a platform that speaks southbound directly with the switches using the OpenFlow protocol. Northbound, the Controller provides a number of functions for the OpenFlow software applications — these include marshalling the switch resources into a unified view of the network and providing coordination and common libraries to the applications.

At the top layer, the OpenFlow software applications implement the actual control functions for the network, such as switching and routing. The applications are simply software written on top of the unified network view and common libraries provided by the Controller. Thus, those applications can focus on implementing a particular control algorithm and then leverage the OpenFlow layers below them to instantiate that algorithm in the network.
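As a concrete illustration, here is what one of those control applications might look like: the skeleton of a MAC-learning L2 switch written against a hypothetical controller callback API. The `Controller` methods used below (`install_flow`, `send_packet`, `flood`) are invented for this sketch; real controllers expose their own, richer interfaces.

```python
# Sketch of an L2 MAC-learning application on top of a hypothetical
# OpenFlow controller API. The controller delivers packets that missed
# the switch's flow table; the application learns source MACs and
# installs flow entries so later traffic is forwarded in hardware.

class LearningSwitch:
    def __init__(self, controller):
        self.controller = controller   # hypothetical controller handle
        self.mac_to_port = {}          # (switch_id, MAC) -> switch port

    def on_packet_in(self, switch_id, in_port, eth_src, eth_dst):
        # Learn which port the sender lives behind.
        self.mac_to_port[(switch_id, eth_src)] = in_port

        out_port = self.mac_to_port.get((switch_id, eth_dst))
        if out_port is not None:
            # Known destination: push a flow entry so future packets
            # are forwarded by the switch without involving the controller.
            self.controller.install_flow(switch_id,
                                         match={"eth_dst": eth_dst},
                                         actions=[f"output:{out_port}"])
            self.controller.send_packet(switch_id, out_port)
        else:
            # Unknown destination: flood, as a traditional switch would.
            self.controller.flood(switch_id, in_port)
```

Note how little of this code concerns OpenFlow itself: the application expresses its algorithm (learn, then forward or flood) and relies on the Controller layer to translate it into flow-table modifications on the switches.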

This three-layer OpenFlow Architecture should feel very familiar to software architects. For example, consider the Web application server architecture: applications sitting on top of a Web application server sitting on top of a database layer. Each of the lower layers presents an abstraction/API upward that simplifies the design of the layers above it.

Today, the term “OpenFlow” is used in two senses — it can either refer to the “OpenFlow Protocol,” the tightly defined “x86 instruction set for the network,” or to an “OpenFlow Architecture,” with its layers of switches, controllers and applications.

Disruption
OpenFlow has been a controversial topic in the networking industry in part because of early claims that the goal was to commoditize switching hardware. Obviously, given that the protocol requires cooperation between switching hardware and controller software, this goal was something of a non-starter for the switching partners that needed to get involved in the effort. While this debate still goes on in some corners of the industry, most of the companies sitting close to OpenFlow have already seen that it is a way to accelerate innovation and actually differentiate their hardware and overall solutions.

The big picture is that OpenFlow and the larger movement in the networking industry called “Software-Defined Networking” promise true disruption because they enable rapid innovation — new networking functionality implemented as a combination of software applications and programmable devices, effectively bypassing the multi-year approval/implementation stages of traditional networking protocols. This acceleration is possible because of the layered design of the software/hardware architecture.

The most recent example of an industry being transformed by such architectures is the mobile industry and its app stores. Phones prior to this were monoliths: a single company made the phone, its OS and its “apps” — an address book and maybe a couple of games. Today, we’re in the era of “Software-Defined Mobile Devices.” The new architecture has mobile applications on top of cleanly defined mobile development frameworks, and those frameworks provide an abstraction of the underlying hardware. This has spawned whole ecosystems of companies, with new apps coming out constantly. Many of us have customized our devices with dozens of apps that make the device do exactly what we need it to do.

Now come back to networking, an industry ripe for this same sort of transformation. Today, OpenFlow architectures are starting to be deployed with a few targeted applications being rolled out. Over the next few months and years, we should see a steady progression of new networking software applications coming to market from new ecosystems of companies, delivering to customers exactly what they need from the network. That’s the grand vision, and that’s what’s causing all the buzz.

(From Network World U.S.)
