
NAS or SAN? Flash or spinning disk? Data storage choices used to be simpler for enterprises — but in the era of cloud, edge networks and big data, they are grappling with more options for storing their data and with making sure it gets moved to the media best suited to the needs of their applications.

As Primary Data found in a study it released last month, organizations are managing a growing number of storage systems from multiple vendors, a situation the company set out to simplify. This week at EMC World, it announced that its storage-agnostic DataSphere platform can unite EMC systems including ScaleIO software-defined scale-out block storage and Isilon scale-out NAS, as well as Amazon S3 cloud storage.

Primary Data CEO Lance Smith said this data virtualization allows enterprises to integrate the capabilities of their diverse storage resources across file, block, and object access protocols so they can scale their EMC Data Lake across different storage platforms from that vendor.

Smith said DataSphere is able to move live data based on service level objectives across different storage systems using different protocols, including ScaleIO software-defined scale-out block storage, Isilon scale-out file-based storage, and object-based cloud storage such as Amazon S3.

The DataSphere architecture separates the data path from the control path so that applications are pointed to the storage asset that makes the most sense based on whether the data is hot or cold. IT is able to define application performance, protection and price requirements. As data becomes hot or cold, said Smith, DataSphere makes sure it is placed automatically on the storage platform that best meets evolving objectives without application disruption.
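In simplified form, the control-path logic described above amounts to a policy engine that matches IT-defined objectives against the characteristics of each storage tier. The sketch below illustrates the idea only; the class names, fields, and numbers are hypothetical and do not reflect DataSphere's actual API or internals.

```python
# Illustrative sketch only: all names and figures are assumptions,
# not Primary Data's actual implementation.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    latency_ms: float       # typical read latency of the tier
    cost_per_gb: float      # monthly cost per GB

@dataclass
class Objective:
    max_latency_ms: float   # IT-defined performance requirement
    max_cost_per_gb: float  # IT-defined price ceiling

def place(objective: Objective, tiers: list[Tier]) -> Tier:
    """Pick the cheapest tier that still meets the stated objective."""
    candidates = [t for t in tiers
                  if t.latency_ms <= objective.max_latency_ms
                  and t.cost_per_gb <= objective.max_cost_per_gb]
    if not candidates:
        raise ValueError("no tier satisfies the objective")
    return min(candidates, key=lambda t: t.cost_per_gb)

tiers = [
    Tier("local-flash", latency_ms=0.1, cost_per_gb=0.50),
    Tier("scale-out-nas", latency_ms=5.0, cost_per_gb=0.10),
    Tier("cloud-object", latency_ms=100.0, cost_per_gb=0.02),
]

hot = Objective(max_latency_ms=1.0, max_cost_per_gb=1.00)
cold = Objective(max_latency_ms=500.0, max_cost_per_gb=0.05)

print(place(hot, tiers).name)   # hot data lands on flash
print(place(cold, tiers).name)  # cold data lands on cheap object storage
```

Because placement is driven by the objective rather than by a hard-coded destination, the same policy keeps working as tiers are added or as data cools over time.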

Hot data would go on local flash storage, for example, while a file that hasn’t been used in some time, now cold enough to be deemed frozen, said Smith, could be moved to cheap cloud storage. “We send it out to any object store that anyone wants,” he said. “If someone pushes their entire infrastructure into the cloud, we are still applicable in that scenario.”

Smith said the challenge is getting enough metadata to make some automated decisions so that IT can manage the organization’s data without throwing a lot of resources at it.
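The automated decisions Smith refers to depend on metadata such as when a file was last accessed. A minimal sketch of that kind of temperature classification might look like the following; the thresholds and tier names are assumptions for illustration, not Primary Data's actual heuristics.

```python
# Illustrative sketch only: thresholds are assumed, not Primary Data's.
from datetime import datetime, timedelta

def classify(last_access: datetime, now: datetime) -> str:
    """Classify data temperature from access-time metadata."""
    age = now - last_access
    if age < timedelta(days=1):
        return "hot"        # keep on fast local flash
    if age < timedelta(days=90):
        return "cold"       # suitable for slower, cheaper storage
    return "frozen"         # candidate for demotion to cloud object storage

now = datetime(2016, 5, 4)
print(classify(datetime(2016, 5, 3, 12, 0), now))  # accessed 12 hours ago
print(classify(datetime(2016, 3, 1), now))         # accessed two months ago
print(classify(datetime(2015, 1, 1), now))         # untouched for over a year
```

The point of gathering rich metadata is precisely that rules like these can run without an administrator inspecting each file by hand.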

Henry Baltazar, research director for the Storage Channel at 451 Research, said Primary Data’s value proposition is not simply the ability to differentiate data, but that once it has that information it can move data around, a capability that used to be available only within a single storage array.

It’s a problem that’s been around for more than a decade, he said. “A decade ago, we had disk and faster disk. The performance differential wasn’t a big deal.” But with the emergence of flash as a faster tier and the availability of cheap, slower cloud storage, there is more impetus to move data around. Even so, moving data remains expensive.

Baltazar said Primary Data’s ability to be storage system agnostic is significant, given that many organizations end up with multiple vendors in their environments, in part due to merger and acquisition activity. Just because one company buys another, he said, it doesn’t mean they will rebuild their storage infrastructure.

And while some organizations do buy all of their storage from one vendor, Baltazar said most companies will have multiple types due to innovation over time, and generally multiple vendors can be found in most enterprise data centres. “Object storage is very new. You may not buy EMC even if you already have EMC.”

It could give enterprises more power when making purchase decisions because they would be able to play one vendor off another, Baltazar added. “Primary Data could be the referee.”


