LAS VEGAS – NetApp is going all in on hyperconverged infrastructure (HCI).

During its NetApp Insight conference, the Sunnyvale, Calif.-based storage and data management firm revealed that NetApp HCI, a computing and storage platform expressly designed for enterprise-scale applications, would be released before the end of October.


“For us, it’s all about workload consolidation,” Aaron Delp, NetApp’s director of emerging technologies, told IT World Canada. “We don’t want folks to run one application on this environment – we want folks to run hundreds of applications on this environment.”

NetApp, of course, is not the first data storage provider to enter the growing hyperconvergence market, which currently includes Nutanix and HP Inc., among others. The term refers to the increasing prevalence of converged systems – hardware appliances that contain multiple components in a single box – in enterprise IT.

When the company first announced its own version of the technology back in June, it emphasized four advantages NetApp HCI would have over its “first-generation” rivals: guaranteed rather than unpredictable performance; flexible scaling; automated infrastructure; and consolidation.

Delp, too, said that while HCI represents both NetApp’s new “mindset” and “the next evolution” in the data storage industry, one drawback of first-generation technology was the limited number of workloads it could carry: though easier to use than previous converged systems, HCI 1.0 forced IT workers to store their data on “multiple islands” rather than consolidating it in one location.

“You had, ‘Here’s my database.’ ‘Here’s my VDI environment,'” he explained. “The idea way back when was, ‘I want to simplify my operations, but I want to consolidate them as well.’ HCI solved one of the two – it solved simplicity. We’ve solved consolidation.”

The solution, it turned out, involved building what Delp called a “dynamic pool” of resources: chassis of varying size and function broadly divided into computing nodes and storage nodes which, true to their names, compute and store data. Each comes in three sizes: small, medium, and large.

“We took three key processes – consolidation, scaling, and automation – and brought them forward from storage to compute,” he said. “Those are the core tenets of our HCI platform now. It was a really nice architecture fit for where the market is going.”
