Software-defined architecture will deliver network, storage and server resources on demand to applications, says Diane Bryant, as Intel launches its new line of Atom processors

Diane Bryant. Photo courtesy of Intel Corp.

Networks, servers and storage in the data centre must evolve into new architectures to meet the demands of a mobile workforce, cloud computing and big data analytics, an Intel Corp. executive told a press and analyst conference in San Francisco on Monday.
 
Diane Bryant, senior vice-president and general manager of Intel’s data centre and connected systems group, spoke to reporters and analysts at the launch of the chipmaker’s Atom C2000 processor line.

While networking companies are plugging software-defined networking, Bryant aired the notion of a software-defined infrastructure, in which network, storage and server resources are allocated according to the demand of applications.

Networks will evolve from fixed-purpose boxes to applications running on standard Intel architecture, Bryant said. The network’s backplane can run on Intel’s Xeon processor. In a traditional IT environment, when a new service request is filed, IT scopes the project, balances user demands, and manually configures the network, a process that takes two to three weeks. Bryant said Intel is provisioning new software-defined network (SDN) services in minutes internally.
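The contrast Bryant draws can be sketched in code. This is a hypothetical illustration, not a real controller API: the `SdnController` class and its methods are invented for the example, but they capture the idea that a software-defined service is provisioned by a single programmatic call instead of weeks of manual, device-by-device configuration.

```python
# Hypothetical sketch of SDN-style provisioning; the controller class and
# its parameters are illustrative, not a real product API.
class SdnController:
    def __init__(self):
        self.services = {}

    def provision(self, name, vlan, bandwidth_mbps):
        """Push a service definition to the network programmatically."""
        self.services[name] = {"vlan": vlan, "bandwidth_mbps": bandwidth_mbps}
        return self.services[name]

ctrl = SdnController()
# One call stands in for the scoping, balancing and manual configuration
# that Bryant says takes IT two to three weeks.
svc = ctrl.provision("dev-test-net", vlan=42, bandwidth_mbps=500)
print(svc)
```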

SDN also allows compute resources to be moved to the edge of the network. Today’s base stations for mobile communications provide limited programmability, which causes latency in service delivery. Future base stations will have intelligence and computing power pushed out to them, reducing latency and improving localized services, Bryant said.

That edge intelligence will be important as voice and gesture become the primary interface to mobile devices, said Jason Waxman, general manager of Intel’s cloud infrastructure group.

Sean Brown, senior manager of innovation at speech technology company Nuance Communications Inc., said the company's Xeon-powered data centres are working to balance the compute load between the data centre and devices, as speech technology moves from reactive to more proactive natural language processing.

On the storage side, Bryant said today's storage area network (SAN) model needs a makeover. SANs deliver high-performance storage and strong data protection, but are delivered as shared capacity. When developers specify storage demands for an application, they don't want storage to be a performance bottleneck, so there is a "huge exaggeration" of storage needs, Bryant said. That leads to underutilization of storage resources.

The new storage architecture will see storage resources delivered as a service, according to the demands of the application, Bryant said. Tiering of storage into “hot” and “cold” – performance and capacity – tiers will be automated. And as solid state drives replace hard disks, performance will increase. Bryant said a recent Intel project cut the sort time for 1TB of data from four hours to seven minutes, mostly because of the shift to SSDs. That’s critical for big data projects, she said.
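The automated tiering Bryant describes can be sketched as a simple policy: objects accessed frequently land on the "hot" (performance, SSD-backed) tier, the rest on the "cold" (capacity) tier. The threshold and object names below are illustrative assumptions, not anything Intel disclosed.

```python
# Illustrative sketch of automated hot/cold tiering; the threshold and
# data are hypothetical, chosen only to show the policy shape.
from dataclasses import dataclass

HOT_THRESHOLD = 100  # accesses per day; hypothetical policy knob


@dataclass
class StorageObject:
    name: str
    accesses_per_day: int
    tier: str = "cold"


def retier(objects):
    """Assign each object to the performance ('hot') or capacity ('cold') tier."""
    for obj in objects:
        obj.tier = "hot" if obj.accesses_per_day >= HOT_THRESHOLD else "cold"
    return objects


data = [StorageObject("logs-archive", 2), StorageObject("orders-db", 5000)]
for obj in retier(data):
    print(obj.name, "->", obj.tier)
```

The point is that placement follows measured demand rather than a developer's up-front (and, per Bryant, exaggerated) estimate.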

Servers also need a new architecture, Bryant said. Racks of servers with their own I/O, memory and compute resources end up underutilized. An application can be constrained by I/O demands, for example, while compute and memory resources are underutilized. The future server will break down that “artificial structure,” Bryant said, and pool resources, responding to the demands of the application. That leads to high utilization and lower capital and operating costs.

Waxman refers to that as "composable resources." It's one of the three fundamental elements of the data centre of the future, he said, along with orchestration of resources by software-defined networking, and "workload-optimized technologies."

The latter can take the form of custom silicon for customers – he said Intel has worked with online auction house eBay and social networking site Facebook on processors with 50 per cent frequency variations to cope with fluctuating demand – and hardware accelerators for particular workloads, like video or cryptographic services, Waxman said.

There are two processors in the new Atom C2000 line. Avoton is aimed at low-energy, high-density microservers and storage, while Rangeley targets network devices. The system-on-a-chip processors are based on Intel's Silvermont microarchitecture and 22-nanometre technology. They will feature up to eight cores, with integrated Ethernet and support for up to 64GB of memory.

“Power continues to be a constraint at the data centre,” Bryant said. The C2000 line draws as little as 13 watts; comparatively, the 2011-era Sandy Bridge processor drew as little as 20. Bryant also previewed the 2014 Broadwell line of 14-nanometre processors, which she hinted will lower current draw even further.
