Syndicated

Rapidly ramping network traffic is a big issue for many enterprise organizations and data centres, but for social networking site Facebook, which has to handle more than 1.2 billion monthly active users, the problem looms larger.

The most pressing challenge for Facebook is not traffic in and out of its data centres but rather data moving between servers inside the facilities, said Najam Ahmad, director of technical operations for Facebook.

Each time a user logs onto the site, hundreds or thousands of servers are activated to compute different aspects of the user’s News Feed on the fly, he said during a talk this week at the annual Optical Fibre Communications (OFC) conference in San Francisco.
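That fan-out can be pictured as a scatter-gather pattern: one front-end request triggers many parallel back-end lookups whose results are merged into a single feed. The sketch below is purely illustrative, assuming hypothetical names (`fetch_stories`, `build_feed`) rather than any actual Facebook API:

```python
# Hypothetical scatter-gather sketch: one request fans out to many
# back-end lookups, then gathers and merges the results. In a real
# deployment each call would cross the data-centre fabric, generating
# the east-west traffic described in the article.
from concurrent.futures import ThreadPoolExecutor

def fetch_stories(friend_id):
    # Stand-in for a network call to a storage/ranking server.
    return [(friend_id, f"story-{friend_id}-{n}") for n in range(2)]

def build_feed(friend_ids, max_workers=32):
    # Fan out one lookup per friend in parallel, then gather.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = pool.map(fetch_stories, friend_ids)
    # Flatten and sort; a production system would rank by relevance.
    return sorted(story for stories in results for story in stories)

feed = build_feed(range(5))
print(len(feed))  # 5 friends x 2 stories each -> 10 items
```

Even this toy version shows why intra-facility bandwidth matters: the request volume between servers is a multiple of the single request that arrives from outside.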

This “east-west” data traffic grows ever faster as more complex Facebook apps are introduced, according to Ahmad.

To address the issue, Facebook has to keep abreast of the latest networking technologies.

While many enterprise companies are still adopting 10-Gigabit Ethernet, the technology is considered a bare minimum at Facebook: the firm is rolling out its own optical fibre, deploying 100Gbps links in its data centres and evaluating silicon photonics technology.

Ahmad said Facebook has not deployed 10-Gigabit Ethernet in two years. The company now uses 40-Gigabit Ethernet and some 100-Gigabit links.

In order to link four data centres spread across a campus of about 10 to 20 acres, he said he would like a fibre technology that spans one to two kilometres and carries 100Gbps – for starters.

Ahmad said speeds could be raised to 400Gbps as traffic grows.

Such a connection would need single-mode fibre rather than the shorter-range multimode fibre found in many data centres today.

The system will likely use silicon photonics. Ahmad favours the technology because Facebook wants to move to rack-level computing.

In rack-level computing, storage and memory are concentrated in separate racks and connected at high speed to form the equivalent of many servers.

For the long-distance links between data centres, Facebook is starting to build up its “dark fibre” capacity rather than lease connections from a carrier’s network.

Buying fibre capacity is cheaper than leasing, and by lighting up its own fibre Facebook also gains more control over the network, enabling it to respond rapidly to traffic fluctuations.

Read the whole story here
