Aiming to keep the Internet as innovative as possible, researchers from both the University of Toronto and the University of British Columbia have joined a global group working to improve the efficiency of the Internet.
The research activity, conducted by PlanetLab – a global distributed testbed for developing, deploying and accessing planetary-scale network services – allows researchers to pioneer and investigate novel Internet services that span much of the Web, said David Culler of Intel Research Berkeley and the University of California at Berkeley.
Members are collaborating to test the implementation of an overlay network as a way to achieve greater Internet efficiency.
While many Internet-related projects will be conducted at PlanetLab, the key ingredient is the architectural principle that allows other research endeavours to transpire: the overlay network, in which nodes run at the end points of the Internet and use the existing Internet for transmission.
Researchers are touting the overlay as an opportunity to grow the Internet and innovate at the edge of the network, while leaving what Culler refers to as the “opaque” core intact.
“Today’s Internet is built around an elegant and important architectural principle. It’s a simple architecture based around simple end-to-end transparency,” Culler said. “With that design principle comes limitations. In particular, the Internet is opaque, making it difficult to adapt to current network conditions and difficult to distribute applications widely.”
The overlay makes it possible for an application to spread itself over many machines on the Internet, and also forms an application-level network that is overlaid onto the existing Internet.
Overlay networks integrate their own intelligent routers and servers on top of the Internet to enable new capabilities without affecting the performance of the Internet today. Applications are decentralized, with pieces running on many machines, and can self-organize to form their own networks and include some form of application processing inside the network, instead of at the edges of the network. One example of an overlay network enabling a new kind of Internet application is robust video multicasting.
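The idea described above – end hosts forming their own application-level network and routing over it, while the underlying Internet only carries ordinary point-to-point traffic – can be sketched in a few lines. This is a toy illustration, not PlanetLab's actual software; the node names, topology, and breadth-first routing strategy are all assumptions made for the example.

```python
# Toy sketch of an application-level overlay (not PlanetLab code).
# Overlay nodes are ordinary end hosts; an overlay "link" stands in for
# a connection (e.g. TCP) across the existing Internet.
from collections import deque

class OverlayNode:
    def __init__(self, name):
        self.name = name
        self.neighbors = {}   # overlay links: name -> OverlayNode
        self.received = []    # payloads delivered to this node

    def connect(self, other):
        # An overlay link; in a real system, a connection over the Internet.
        self.neighbors[other.name] = other
        other.neighbors[self.name] = self

    def route(self, dest, payload):
        # Application-level routing: find a path over overlay links (BFS
        # here, for simplicity), then forward hop by hop. The network core
        # never sees anything but ordinary traffic between neighbors.
        path = self._find_path(dest)
        if path is None:
            raise ValueError(f"no overlay path from {self.name} to {dest}")
        node = self
        for hop in path[1:]:
            node = node.neighbors[hop]
        node.received.append(payload)
        return path

    def _find_path(self, dest):
        # Breadth-first search; 'seen' maps each name to its predecessor.
        seen = {self.name: None}
        queue = deque([self])
        while queue:
            node = queue.popleft()
            if node.name == dest:
                path = []
                name = dest
                while name is not None:
                    path.append(name)
                    name = seen[name]
                return list(reversed(path))
            for n in node.neighbors.values():
                if n.name not in seen:
                    seen[n.name] = node.name
                    queue.append(n)
        return None

# Hypothetical three-node overlay: a and c are not directly linked, so a
# message from a to c travels via the intermediate overlay node b.
a, b, c = OverlayNode("a"), OverlayNode("b"), OverlayNode("c")
a.connect(b)
b.connect(c)
print(a.route("c", "hello"))  # overlay path taken: ['a', 'b', 'c']
```

The point of the sketch is the last line: the forwarding decision is made entirely by application code at the edges, which is what lets new capabilities be deployed without touching the "opaque" core.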
Culler said the Internet was originally created as an overlay on the telephone system.
“As its use becomes more common, it has a tendency to work its way down into the underlying infrastructure,” Culler said. “So what PlanetLab is doing is building up the capability of novel services to create these overlays over a large number of machines, and that way address these challenges of how we can continue to innovate and deploy.”
Larry Peterson, of Intel Research at Princeton University, says overlays are an attractive way to introduce a disruptive technology into the Internet.
“It’s a way of putting the technology on top of the existing infrastructure while still sending packets over the Internet,” he said.
Several researchers at PlanetLab have raised concerns about how to deploy disruptive technologies into what Berkeley’s Culler called the important infrastructure known as the Internet.
“With the success of the Internet, there is a cost to making changes, which is sometimes referred to as ossification,” he said. “The Internet has become far more rigid and far more brittle. Like our bones, we depend on it, but that structure also limits how much you can morph and change what it is.”
PlanetLab currently has 170 computers distributed at 65 research centres around the world. Within the next few years, researchers would like to have more than 1,000 computers in the network, Peterson said.
The concept of PlanetLab, which is physically hosted at Princeton University in Princeton, N.J., started in March 2002 as a grassroots effort. Intel Corp. researchers hosted a meeting for top network systems researchers who were frustrated by a lack of infrastructure, said Rick McGeer, Center for Information Technology Research in the Interest of Society (CITRIS) scientific liaison at Hewlett-Packard labs. The researchers discussed the implications of implementing an overlay network.
Intel provided seed funding for PlanetLab, donating 100 computers and technical support to the endeavour. HP has also joined the team of researchers along with several other universities around the world.
Exact details on how corporations will enter the fray of PlanetLab still need to be formulated, but membership might eventually be modelled on the World Wide Web Consortium, hosted by the Massachusetts Institute of Technology (MIT) in Cambridge, Mass., where membership fees and a membership agreement might be required. So far, UBC and U of T are the only Canadian participants.
Beyond involving hardware distributed over the world, Peterson said the research project is also about the software that runs on the machines.
Each node in the network runs a Linux kernel with extensions to support isolation, allowing many different services to run simultaneously, Peterson said.
“[We selected] Linux because it’s easy and convenient,” he added. There are also bootstrapping and software distribution mechanisms, as well as management and monitoring services on the Linux kernel.
“Collectively, software gives us the ability to do distributed virtualization and allows us to run many overlays simultaneously,” Peterson said.
Peterson added that another technical idea to surface is the notion of a slice. A slice, he explained, is a vertical cut across all of the machines in the overlay network; a service (overlay) runs inside that slice and has access to some fraction of the resources on every machine. Each service runs in a slice of PlanetLab’s global resources.
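The slice idea described above can be sketched as a simple resource-accounting model. This is an illustration of the concept, not the PlanetLab API; the machine names, slice names, and the single "CPU share" resource are all hypothetical simplifications.

```python
# Toy model of a PlanetLab-style "slice": a vertical cut across all
# machines, giving one service a fraction of each machine's resources.
# (Illustrative only; the real system isolates CPU, memory, disk, etc.)
class Machine:
    def __init__(self, name, cpu_share=1.0):
        self.name = name
        self.free = cpu_share   # fraction of capacity still unallocated
        self.slivers = {}       # slice name -> fraction held on this machine

    def allocate(self, slice_name, fraction):
        # Isolation: a slice can only claim capacity that is still free.
        if fraction > self.free:
            raise ValueError(f"{self.name}: not enough capacity")
        self.free -= fraction
        self.slivers[slice_name] = fraction

def create_slice(slice_name, machines, fraction):
    # The "vertical cut": reserve a fraction on every machine at once, so
    # the service sees one distributed virtual platform spanning them all.
    for m in machines:
        m.allocate(slice_name, fraction)
    return {m.name: fraction for m in machines}

# Two hypothetical services sharing the same three physical machines,
# each inside its own slice -- distributed virtualization in miniature.
machines = [Machine("node1"), Machine("node2"), Machine("node3")]
create_slice("video-multicast", machines, 0.25)
create_slice("content-cache", machines, 0.25)
```

Because each slice holds only its own fraction on each machine, many overlays can run simultaneously on the same hardware without interfering, which is the "distributed virtualization" Peterson describes.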
Researchers say PlanetLab is not just a place for experimenting, but also a deployment platform.
“When we look back ten years from now on what the Internet has become, we fully expect it will provide much more than the translation of computer names and the transport of data that it does today,” Culler said.
PlanetLab is on the Web at www.planet-lab.org.