SAN FRANCISCO -- As IT contemplates the rapidly expanding universe of storage options, at least one detail has become clear: In most infrastructures, the bulk of the data just sits around, feeling lonely, while a small percentage is more or less constantly in use. Addressing this imbalance in an elegant and cost-saving way paves the road to lower capital expenditures for storage, as well as reduced power and cooling costs, with a side order of performance gains. What's not to love?
Several storage tiering solutions are available today, but they tend to occupy the upper end of the market. In most shops, you choose SAS disks, perhaps alongside an older SATA-based unit that's already in place, and you might equip another array with solid-state disks for extra juice. Without any smarts to bind these together, you wind up with manual tiering: Old data sits on the SATA/SAS boxes, while high-turnover data lives on the SSDs. It's a workable arrangement, but it requires care and feeding to keep each type of data in its proper place.
Dell's EqualLogic iSCSI SANs now offer automated tiering across arrays, even across arrays of disparate types. In the lab, I ran a Dell EqualLogic PS4100E with 12 SAS drives and a PS6100XVS with a hybrid disk set -- eight SSDs and 16 SAS drives. Each unit was equipped with redundant controllers and two 10GbE interfaces per array.
Multiple arrays, one system
The PS4100E and PS6100XVS were placed in the same storage group and managed as a single entity. The Dell EqualLogic management software allows the use of groups to maintain volumes that can be spread across multiple individual arrays. In the days of yore, it was important to maintain consistency between the arrays so that volumes wouldn't be spread across faster disks in one unit and slower disks in another, but it's no longer a requirement.
Because both arrays are members of a group with a single IP address and iSCSI gateway, hosts that bind to the various iSCSI LUNs perceive only a single storage host on the other side. iSCSI traffic is load balanced between the active interfaces on the controllers and the arrays themselves.
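That single-portal behavior is easy to picture in code. The toy model below (all class names and IP addresses are hypothetical, not any actual EqualLogic API) sketches a group that presents one portal IP to initiators while spreading sessions round-robin across the arrays' active interfaces:

```python
from collections import Counter
from itertools import cycle


class StorageGroup:
    """Toy model of an iSCSI group: hosts see one portal IP, and
    sessions are redirected round-robin across the member arrays'
    active controller interfaces (addresses here are made up)."""

    def __init__(self, group_ip, interfaces):
        self.group_ip = group_ip
        self._next = cycle(interfaces)

    def connect(self, initiator):
        # The initiator only ever targets the group IP; the group
        # hands the session off to the next active interface.
        return {"initiator": initiator,
                "portal": self.group_ip,
                "redirected_to": next(self._next)}


group = StorageGroup("10.0.0.10",
                     ["10.0.0.11", "10.0.0.12",    # first array's ports
                      "10.0.0.13", "10.0.0.14"])   # second array's ports

sessions = [group.connect(f"host{n}") for n in range(8)]

# Every session targets the same portal address...
assert all(s["portal"] == "10.0.0.10" for s in sessions)
# ...but the connections land evenly on all four interfaces.
counts = Counter(s["redirected_to"] for s in sessions)
assert set(counts.values()) == {2}
```

The real arrays make smarter placement decisions than a round-robin, of course, but the key point survives the simplification: the host configuration references exactly one address, and the group handles the balancing behind it.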
Further, working in concert with the automated storage tiering features, the controllers understand which storage blocks are experiencing the most turnover. The controllers move these "hot" blocks to and from the fastest storage, ensuring that the data needing faster access will not wind up on a slower array, but will be prioritized on the set of SSDs, should they be available. This capability is also available with traditional disks, but the inclusion of the SSDs -- specifically, the hybrid PS6100XVS coupled with the lower-cost PS4100E -- really shows off the benefits of these features in production workloads.
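The hot-block idea itself is simple enough to sketch. The following is a minimal simulation of access-count-based promotion -- the class, block names, and policy are illustrative assumptions, not the EqualLogic firmware's actual algorithm:

```python
from collections import Counter


class TieredPool:
    """Toy model of automated tiering: the blocks with the highest
    recent I/O counts get promoted into a fixed-size SSD tier;
    everything else stays on spinning disk."""

    def __init__(self, ssd_slots):
        self.ssd_slots = ssd_slots    # how many blocks fit on SSD
        self.io_counts = Counter()    # per-block I/O activity
        self.ssd_tier = set()         # blocks currently on SSD

    def record_io(self, block):
        self.io_counts[block] += 1

    def rebalance(self):
        # Promote the hottest blocks; anything that falls out of the
        # top N is implicitly demoted back to the slower tier.
        hottest = self.io_counts.most_common(self.ssd_slots)
        self.ssd_tier = {block for block, _ in hottest}


pool = TieredPool(ssd_slots=2)
for block, ios in [("db-index", 50), ("vm-swap", 30), ("archive", 2)]:
    for _ in range(ios):
        pool.record_io(block)
pool.rebalance()

assert pool.ssd_tier == {"db-index", "vm-swap"}   # hot blocks on SSD
assert "archive" not in pool.ssd_tier             # cold data stays on SAS
```

The production feature works at the block level beneath whole volumes and factors in far more than raw access counts, but the net effect is the same: busy blocks migrate toward the SSDs without an administrator shuffling LUNs by hand.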
Let's envision a fairly traditional storage workload for a medium-size infrastructure. We have a bunch of hypervisors driving several hundred VMs, along with general-purpose file sharing, and a passel of databases that drive a Web application tier to provide critical line-of-business applications.