Going virtual raises storage-management issues

If you’re an IT executive, chances are you’re already thinking about storage virtualization. Nearly one-quarter of companies with at least 500 employees have deployed storage virtualization products already, and another 55% plan to do so within two years, a recent Gartner survey found.

Storage virtualization is an abstraction layer that presents servers and applications with a view of storage different from the underlying physical devices, typically by aggregating multiple storage devices into a pool that can be managed from a single administrative console.
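The aggregation idea can be sketched in a few lines of Python. This is a hypothetical illustration of pooling, not any vendor's API; the class and device names are invented for the example:

```python
# Illustrative sketch of storage pooling: several physical devices
# presented as one logical pool. Names and numbers are hypothetical.

class Device:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb

class StoragePool:
    def __init__(self, devices):
        self.devices = devices
        self.allocated_gb = 0

    @property
    def total_gb(self):
        # The pool's capacity is the sum of its members' capacities.
        return sum(d.capacity_gb for d in self.devices)

    def allocate(self, size_gb):
        # Carve a logical volume out of the pool; the caller never
        # sees which physical device actually backs it.
        if self.allocated_gb + size_gb > self.total_gb:
            raise ValueError("pool exhausted")
        self.allocated_gb += size_gb
        return size_gb

# Two 500GB arrays appear to servers as one 1,000GB pool.
pool = StoragePool([Device("array-1", 500), Device("array-2", 500)])
pool.allocate(300)
```

The point of the abstraction is the last two lines: the administrator allocates against the pool's combined capacity and never has to decide which box holds the data.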

The technology is emerging fast onto the enterprise scene for good reason: In many cases it can reduce the management burden associated with storage and offer better models for data-center migrations, backup and disaster recovery.

Enterasys Networks reaped these benefits recently when it moved a data center from Boston into its headquarters in Andover, Mass.

“In days gone by, before storage virtualization, that might have been an all-day, if not an all-week kind of process,” says Enterasys vice president of marketing Trent Waterhouse. “Because of the storage virtualization technologies, the entire move happened in less than 30 minutes.”

There are still common pitfalls that storage administrators should consider, and questions they should ask, before rolling out a storage-virtualization project. Here’s a look at some of the top issues.

Managing capacity

With storage virtualization, allocating storage is easy — perhaps too easy. “You have the ability to affect more systems in the whole forest if you do something,” says Jonathan Smith, CEO of ITonCommand in Denver, Colo., who cautions fellow IT shops to pay close attention to both the storage and performance needs of each application. “You just didn’t have that power before. Now all of a sudden you can do whatever you want.”

Smith, who is using LeftHand Networks virtualization on HP storage, says an IT pro might see a lot of empty space in a given storage volume and be tempted to fill it up. Overusing a resource, however, can decrease performance if the storage is allocated to a database or some other I/O-intensive application.

“Make sure you size it correctly and really understand how much horsepower [your applications need],” Smith says.

These concerns are especially true when it comes to thin provisioning, a component of virtualization technology that lets an IT administrator present an application with more storage capacity than is physically allocated to it. This eliminates the problem of storage over-provisioning, in which storage capacity is pre-allocated to applications but never used.

With thin provisioning, more than 100% of storage capacity can be allocated to applications, but capacity remains available because it won’t be consumed all at once.

You can play it safe by allocating small volumes that never exceed the physical storage, or you can allocate as much as you want to each application and monitor your systems closely, says Themis Tokkaris, systems engineer at Truly Nolen Pest Control in Tucson, Ariz. It’s best if you can find a happy balance between those two extremes.

“You have to monitor your pool so you don’t run out of space, because that would really crash everything,” Tokkaris says.
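Tokkaris’s advice — overcommit, but watch the pool — can be illustrated with a small hypothetical sketch. The volume sizes, 80% alert threshold and variable names below are assumptions for the example, not settings from any product:

```python
# Hypothetical thin-provisioning monitor: logical allocations may
# exceed physical capacity, but actual consumption is checked
# against an alert threshold. All numbers are illustrative.

PHYSICAL_GB = 1000
ALERT_THRESHOLD = 0.80  # assumed: warn at 80% of physical space used

volumes = {"db": 600, "mail": 400, "files": 500}  # logical sizes promised
used = {"db": 350, "mail": 150, "files": 200}     # blocks actually written

logical_total = sum(volumes.values())   # capacity promised to apps
physical_used = sum(used.values())      # capacity really consumed

overcommit = logical_total / PHYSICAL_GB
utilization = physical_used / PHYSICAL_GB

print(f"promised {overcommit:.0%} of physical capacity")
if utilization >= ALERT_THRESHOLD:
    print("ALERT: pool nearly full; add disks or migrate volumes")
```

Here 1,500GB has been promised against 1,000GB of disk, which is safe only as long as the 700GB actually written stays under the threshold — hence the need for the monitoring Tokkaris describes.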

How server virtualization fits in

A common question is whether it makes sense to virtualize storage if you’re not also using server virtualization. The short answer is yes — though it’s true you won’t get as much flexibility as IT shops that virtualize both servers and storage.

“If you virtualize both, then you have the maximum flexibility when deploying new applications,” says Chris Saul, IBM’s storage-virtualization marketing manager.

Nevertheless, there are benefits to just virtualizing storage.

Improved disaster recovery, availability and data migrations can all be gained without having virtual servers, says product marketing manager Augie Gonzalez of storage virtualization vendor DataCore Software. In addition, storage virtualization by itself can provide thin provisioning, as well as the simplified management structure that comes with pooling storage devices and managing them from a central console.

On the flip side, virtualizing servers without virtualizing storage is problematic. It doesn’t make sense to have multiple virtual servers on a physical machine that aren’t able to share data, says Enterprise Strategy Group (ESG) analyst Mark Peters.

“You can gain tremendous benefits from storage virtualization, even without server virtualization. It’s harder the other way around,” Peters says.

Virtualization in a heterogeneous environment

Given that virtualization is designed to combine multiple storage devices, it’s not immediately obvious why it makes sense to virtualize your storage if it all comes from a single vendor.

There are compelling reasons, however, says storage analyst Arun Taneja. “A lot of people think storage virtualization has a prerequisite of heterogeneity, that it only comes into play when storage from three companies is involved,” he says. “I say, forget it, it has value even if you are stuck with a single vendor.”

The storage market is more proprietary than just about any other IT space, and this creates problems even if you have just one storage vendor, Taneja says.

Say you’re an EMC customer with two Symmetrix DMX boxes, and “you just want to combine the power of those two boxes and manage it as one,” Taneja says. “[Without storage virtualization] you can’t do it. That’s how ridiculous the world of storage is.”

This “ridiculous” level of exclusivity in the storage market obviously takes on a new dimension when you’re managing storage from multiple vendors. That leads to the next issue.

Choosing a vendor

Enterprises’ primary procurement dilemma is whether to purchase storage-virtualization products from a storage vendor or a third party.

If your true objective is flexibility, especially if you’re planning major data migrations, a third party is the way to go, Taneja says. Vendors such as FalconStor Software and DataCore can manage storage from multiple suppliers simultaneously, whether EMC, HP, IBM or Hitachi.

Truly Nolen chose a third party, DataCore, even though the company uses only HP storage. It evaluated virtualization offerings from HP, EMC and Dell EqualLogic but settled on DataCore because it was less expensive and gives the company the flexibility to use whichever hardware vendor it likes, Tokkaris says.

The major storage vendors promise to be able to manage a heterogeneous environment. Examples include IBM’s SAN Volume Controller, NetApp’s V-Series, and EMC’s Invista. As a general rule, though, vendors support their own storage products first and others second, if at all.

“They always support their own systems first,” Taneja says. “That means EMC’s Invista supports DMXs and Clariions, and they might support some other foreign devices; but the support for foreign devices always lags, and support for foreign devices is always incomplete. The whole idea is don’t support your enemies’ boxes.”

Peters predicts that as storage virtualization becomes more common, market pressure will force vendors to do a better job supporting their rivals’ technology.

If you get storage from just one vendor, however, the solution is simple.

“I say to the IT people I talk to, if you’re a Hitachi Data Systems customer and you like working with them and you’re stuck with them, just buy their virtualization to make life more manageable within Hitachi product,” Taneja says.

Sifting through the hype

By most accounts, storage virtualization is a no-brainer. Who wouldn’t want to manage multiple storage devices from a single console, and gain data mobility that makes disaster recovery a breeze?

Storage virtualization will be about as common as automatic transmissions in automobiles within a couple of years, ESG’s Peters thinks. “There are certain technologies that are just smarter and better than people doing it manually,” he says.

Even storage virtualization vendors, however, admit there are instances when the technology isn’t a fit.

Storage virtualization is not for everyone, says Kyle Fitze, an HP director of storage marketing. Virtualization actually adds a layer of complexity: you have to manage the individual storage devices as well as the virtualization layer itself, and even with virtualization you still have to perform tasks such as reconfiguring devices after adding physical disks to storage arrays, he says.

As a general rule of thumb, the more complicated your storage environment, the more benefit virtualization brings.

“There’s a complexity/benefit tradeoff,” Fitze says. “If their current environment is difficult to manage and complex . . . adding a virtualization layer can simplify that complexity. If it’s a small, efficiently managed environment without data-protection challenges, then virtualization just for virtualization’s sake is probably not a good idea.”
