Getting it together

From business-critical decision support information to endless amounts of possibly useful customer data, companies are storing more information than many can handle. Often, they are doing so without a solid network architecture, leaving them with the equivalent of a filing cabinet with unlabelled folders and drawers that often stick shut.

To deal with this, companies need to rethink storage approaches, according to Bill Russell, executive vice-president and CEO of Hewlett-Packard Enterprise Computing.

“Companies that treat data like a strategic asset — that know how to manage and analyse data to gain new insights into their business — will be in the best position to capitalize on the new business models, the new market opportunities, and the new ways to attract and develop customers,” he said during a presentation in New York City.

“Storage capacity requirements are growing at a rate of over 50 per cent per year…and, of course, you don’t solve this storage problem by throwing more capacity at it. It requires a fundamental shift in how you design and implement your storage infrastructure,” Russell said.
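Back-of-the-envelope arithmetic shows why that growth rate is hard to outrun with raw capacity alone. A quick sketch (in Python, purely for illustration) of what 50 per cent annual growth implies:

```python
import math

# Back-of-the-envelope illustration of 50 per cent annual growth in
# storage demand (the figure Russell cites).
annual_growth = 0.50

# How long before capacity requirements double?
doubling_time_years = math.log(2) / math.log(1 + annual_growth)
print(f"Capacity needs double roughly every {doubling_time_years:.1f} years "
      f"(about {doubling_time_years * 12:.0f} months)")

# Starting from 1 TB today, projected requirement after five years.
capacity_tb = 1.0 * (1 + annual_growth) ** 5
print(f"1 TB of data today grows to about {capacity_tb:.1f} TB in five years")
```

At that rate, an installation's capacity requirements roughly double every 21 months, which is the context for Russell's point that simply adding more capacity is not a solution.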

By far, the most popular emerging storage management option is the Storage Area Network (SAN). SANs come in a variety of topologies, but the essence of a SAN is to bring all of the storage devices together on one network to better distribute server access to those devices.

SANs are usually based on high-bandwidth Fibre Channel links and include components such as RAID controllers, disk drives, tape libraries, network switches or hubs, and host adapters.

Commonly touted benefits of SANs include: greater efficiency, since servers can utilize various storage devices instead of all waiting for one; better overall network performance as storage and back-up activities are moved off of the LAN and onto the SAN subnetwork; and disk mirroring for redundant protection.

Vendors confidently state that the benefits of a SAN include a lower cost of ownership, but as with any technology, that depends on many other factors, such as the cost of the hardware, the security of the design, and the big kicker in storage management: interoperability.

Security

In terms of secure design, it’s important to ensure that granting servers access to multiple storage devices doesn’t leave individual servers free to make a wild grab at any device they can reach.

Michael Casey, a research director with Gartner Group in San Jose, Calif., said this is a problem particularly with Windows NT.

“Many of the UNIX variants aren’t as bad…you have to explicitly tell them these (storage devices) exist before the server tries to take control of them,” Casey said.

Furthermore, as Bruce Gordon, director of strategic planning with Clariion in Southboro, Mass., explained, allowing every server to openly see every device presents a security threat.

“If one of those machines is hooked up to the Internet,” Gordon said, “a malicious person could get in there and reconfigure the storage software to start writing over another machine’s storage.”

Management

Casey said the disks need to be masked so only the servers with permission to access particular devices can see those devices.

This can be done through topological design, software, or by virtue of Fibre Channel host adapters all having unique worldwide names, similar to IP addresses, Casey said.

“You can use that unique host adapter name to map the host adapter to a specific set of disk drive volumes that the subsystem presents, so that host adapter and the host that it’s in can only see the disk drives that are assigned to it. That way you can share a pool of a hundred drives among multiple servers by assigning particular logical volumes to particular hosts, even though the hosts are sharing the same connection.”
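In effect, Casey is describing an access-control table keyed by the adapter’s worldwide name. The enforcement actually lives in the storage subsystem’s configuration rather than in application code, but a minimal sketch of the idea, with hypothetical WWNs and volume names, looks something like this:

```python
# Minimal sketch of WWN-based LUN masking as Casey describes it.
# The WWNs and volume names below are hypothetical; in practice the
# mapping lives in the storage subsystem's configuration, not in Python.

# Each host adapter's worldwide name maps to the logical volumes it may see.
lun_masking_table = {
    "50:06:01:60:10:20:30:40": {"LUN0", "LUN1"},   # database server
    "50:06:01:60:10:20:30:41": {"LUN2"},           # mail server
    "50:06:01:60:10:20:30:42": {"LUN3", "LUN4"},   # backup server
}

def visible_luns(wwn: str) -> set[str]:
    """Return only the volumes assigned to this adapter; everything else stays hidden."""
    return lun_masking_table.get(wwn, set())

def can_access(wwn: str, lun: str) -> bool:
    """The subsystem rejects I/O from adapters not mapped to the requested volume."""
    return lun in visible_luns(wwn)

# The database server sees its own volumes but cannot grab the mail server's.
assert can_access("50:06:01:60:10:20:30:40", "LUN1")
assert not can_access("50:06:01:60:10:20:30:40", "LUN2")
```

Each host sees only its assigned volumes and requests for anything else are refused, which addresses both the Windows NT “wild grab” problem and the reconfiguration scenario Gordon warns about.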

Gordon and Casey both said management could be done in a switch, but agreed that such a method is slow and expensive.

“[This] requires that the switch crack the packets open and see where they’re going, which isn’t normally something a switch has to do. Either it slows it down or the switch has to be a more powerful and more expensive switch to handle the same amount of traffic. So it’s the wrong place architecturally to do it,” Casey said.

Both agree that the right place to perform management is in the software with an array-based topology. Gordon explained this arrangement allows the switch to manage the ports but doesn’t require it to look at packets and make decisions. That is handled by the software.

Many vendors claim that Network Attached Storage (NAS) is in competition with SANs, but Casey said the two are located in different parts of the network and don’t even operate on the same concept. He said NAS involves a LAN-attached file server responding to file requests over some type of file transfer protocol.

“SANs are a back-end network…that tie the servers together with storage. They’re not moving files, they’re moving SCSI blocks,” Casey said.

So a NAS system could have clients attached on one end and still be connected to a SAN or other back-up system on the other end, just like any other server, he said.
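The split Casey describes is essentially file-level versus block-level access. The classes below are invented purely for illustration, but they capture the difference: a NAS filer answers requests for named files over the LAN, while a SAN volume just moves addressed blocks and leaves the file system to the server.

```python
# Rough illustration of the file-level vs. block-level split Casey describes.
# Both classes are invented stand-ins: one for a NAS filer's file protocol,
# one for a SAN disk's block interface.

class NasFiler:
    """NAS: clients ask for whole files by name over the LAN."""
    def __init__(self):
        self._files = {"/exports/report.txt": b"quarterly numbers"}

    def read_file(self, path: str) -> bytes:
        return self._files[path]

class SanVolume:
    """SAN: servers read and write fixed-size blocks by address; any file
    system on top is the server's problem, not the storage network's."""
    BLOCK_SIZE = 512

    def __init__(self, num_blocks: int):
        self._blocks = bytearray(num_blocks * self.BLOCK_SIZE)

    def read_blocks(self, lba: int, count: int) -> bytes:
        start = lba * self.BLOCK_SIZE
        return bytes(self._blocks[start:start + count * self.BLOCK_SIZE])

    def write_blocks(self, lba: int, data: bytes) -> None:
        start = lba * self.BLOCK_SIZE
        self._blocks[start:start + len(data)] = data

# A NAS box answers file requests; a SAN volume just moves raw blocks.
filer = NasFiler()
print(filer.read_file("/exports/report.txt"))

volume = SanVolume(num_blocks=8)
volume.write_blocks(0, b"raw bytes, no file names here")
print(volume.read_blocks(0, 1)[:29])
```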

Fibre Channel versus SCSI

In general, vendors are encouraging customers to buy Fibre Channel products instead of SCSI, even if they are not going to implement a SAN right away.

Hewlett-Packard, Clariion and StorageTek, among others, have all pointed out that it’s better to buy the Fibre Channel product now (usually at a higher cost than SCSI) and be SAN-ready for the future.

But some disagree. “That’s baloney,” said Alea Fairchild, managing director with the consulting firm Greiner International in Boechout, Belgium. “If you need SCSI, go with SCSI. There are benefits to SANs, but sometimes the organization just needs simple storage” such as the direct-attached model where a server simply has a drive connected to it.

Gartner’s Casey said Fibre Channel is very beneficial from the server to the storage subsystem, but the link from the storage subsystem controller to the disk drives need not be Fibre.

“The back-end connection to the disk drives can continue to be SCSI for all intents and purposes for another couple of years and it won’t make much practical difference,” Casey said.

The major benefits of Fibre Channel over SCSI are higher throughput, the ability to cable over longer distances and the ability to connect more devices.

“Our recommendation in general is people who need those benefits should go to Fibre Channel host connections,” Casey said. “They don’t necessarily have to go with Fibre on the back end right away. They can go with Ultra SCSI or the next generation SCSI which is even faster, and many of the vendors will continue to support back-end SCSI disks for another couple of years.”

Interoperability and standards

The big headache in the SAN arena right now is the lack of standards and associated interoperability problems.

“You can’t just go plug a host adapter from here and a switch from there and a subsystem from a third place and expect them to work together,” Casey said.

He recommends buying only from vendors who have done integration testing on the specific configurations for that customer and who are willing to certify that the parts will all work together. He said the concept of a heterogeneous SAN is about five years away.

The Storage Networking Industry Association (SNIA) is working towards standards in this area.

Roger Reich, vice-chairman of SNIA in Colorado Springs, Colo., said the organization is focused on four key areas of standardization: GUI-based management interfaces into network storage devices; third-party copy, the ability to run back-up between storage elements without the data having to pass through and bog down the server; specifications for file systems that run over top of all network storage elements; and the actual definitions of storage terminology.

Reich said customers cannot confidently implement a multi-vendor storage network right now, but he said that’s no reason to avoid purchasing SAN technology.

“Everyone knows that the real Holy Grail of implementing the SAN is this heterogeneous or multi-vendor interoperability. We want to get there as fast as we can, but there’s absolutely no reason to delay an investment in SAN technology today as long as it’s cost-justified. SANs are just too powerful a technology to wait for in that regard,” Reich said.

“If you’ve got a back-up application today that needs a SAN, if you’ve got an on-line storage application that needs the sharing and connectivity that a SAN can provide from an individual vendor, you should go and buy it.

“It is undoubtedly true that some of the hardware that is shipping today will not be completely compatible with the truly heterogeneous solutions tomorrow, but it is SNIA’s objective to limit that pain to the end user,” Reich said, adding that most incompatible equipment could probably be bridged to the future network.

Advice from the trenches

Wayne Chemy, senior architect and principal with Vespera Logic, has been installing a SAN for Newcourt Financial in Toronto. He said the most important lesson he’s learned is the value of building an isolation layer into the network that hides the storage elements from other services.

Chemy also found that the design had to take into account the specific needs of various applications, and that there was no need to go with full Fibre Channel.

“It didn’t make sense for us to say one suit fits all. We have some applications where it makes sense to have the higher bandwidths and the extra costs associated with Fibre, but as well we do some Ethernet trunking just to facilitate some of the smaller applications.”

Chemy said he did have to muddle through interoperability problems, but the isolation layer and varying transport kept those problems out of the business-critical level. He advised that anyone considering building a SAN start with a strong set of services that are transport-independent to protect against shifts in the market.

“If you’re storage independent, you can take advantage of your old storage as well as net-new storage. For instance, in this framework we’ve got StorageTek with Clariion kinds of storage in the back. We’ve got some older stuff that’s running off of straight SCSI and we’ve got newer stuff in this implementation that’s got Fibre all the way down to the disk. That allows you to get the framework ready and react to change but not get stuck and boxed into a certain path.

“There’s a lot to be said for that without going leading or bleeding,” Chemy said.
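Chemy’s transport-independent services amount to a thin abstraction layer between applications and whatever storage sits behind it, so SCSI-attached and Fibre-attached arrays become interchangeable configuration details. A minimal sketch of that idea, with hypothetical class and backend names, might look like this:

```python
# Minimal sketch of the transport-independent layer Chemy describes:
# applications code against one interface, and whether the backing store is
# direct-attached SCSI or Fibre Channel on a SAN is a configuration detail.
# Class names and backends here are hypothetical illustrations.

from abc import ABC, abstractmethod

class StorageBackend(ABC):
    @abstractmethod
    def read(self, volume: str, offset: int, length: int) -> bytes: ...
    @abstractmethod
    def write(self, volume: str, offset: int, data: bytes) -> None: ...

class ScsiBackend(StorageBackend):
    """Stands in for an older direct-attached SCSI array."""
    def __init__(self):
        self._store: dict[str, bytearray] = {}
    def read(self, volume, offset, length):
        return bytes(self._store.get(volume, bytearray())[offset:offset + length])
    def write(self, volume, offset, data):
        buf = self._store.setdefault(volume, bytearray())
        buf.extend(b"\x00" * max(0, offset + len(data) - len(buf)))
        buf[offset:offset + len(data)] = data

class FibreChannelBackend(ScsiBackend):
    """Stands in for newer Fibre-attached storage; same interface, different transport."""

class StorageService:
    """The isolation layer: applications see this, never the transport."""
    def __init__(self, backend: StorageBackend):
        self._backend = backend
    def save(self, volume: str, data: bytes) -> None:
        self._backend.write(volume, 0, data)
    def load(self, volume: str, length: int) -> bytes:
        return self._backend.read(volume, 0, length)

# Swapping the backend does not change application code.
for backend in (ScsiBackend(), FibreChannelBackend()):
    service = StorageService(backend)
    service.save("vol1", b"same application, either transport")
    print(service.load("vol1", 34))
```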
