BellSouth Corp. has seen its disk-based storage grow 130 percent, from 239TB to 550TB, during the past three years, even as its tape-based storage rocketed from 350TB to 2 petabytes — mainly due to business growth from new electronic channels and business continuity planning efforts, including creation of redundant systems.
To get a handle on that growth, the Atlanta-based telecommunications company consolidated its resources onto storage-area networks (SAN), moving from seven data centers to two. Now, however, CIO Fran Dramis believes his company must redistribute storage assets to the edges of its networks for efficiency’s sake, while keeping management of those systems under a single umbrella. He spoke with Computerworld at this week’s Storage Networking World about the project, which is expected to take up to five years.
Why did you centralize storage in the first place if you planned on decentralizing it again?
We centralized really as a way of gaining control. We actually saved money in the process of centralizing, but that’s not why we did it. We centralized in order to put in the right kind of structure. Then we’re going to physically decentralize the information again, but … under a common management framework [that offers the] ability to share physical storage capabilities in a distributed fashion. I’m a proponent of information being closer to those using it. But I’m also a proponent of having businesses in day-to-day control of their technology.
What advantages do you find in having the storage closer to the end user?
There are a lot of applications that have latency requirements. There’s also a reason to have local storage that really doesn’t have any cross-business needs. It really goes back to why distributed computing started: It’s a more effective way for businesses to get quick access to information.
Does IP storage fit into your IT plans?
As we build the future public-switched network, we’re trying to combine IP technology with some of the characteristics of SANs. We believe that the network is going to have to have the responsibility for storage management in addition to just data transport management. In fact, the network will also have responsibility through grid computing for delivering compute management. So we’re building our future networks with the capability of having storage, transport and compute management as part of the network fabric. I just used IP as a surrogate for a flexible data architecture.
Future networks are going to have to understand the content of transport and actually manage it differently — really managing objects and putting intelligence on the transport of those objects to different places.
Where does IP fit in the meantime?
A lot of our networks on our data side, such as our VPNs and others, are going to be based on IP technology. That’s a given. If it happened by itself without storage as an element, that would be trouble. That’s why we believe in multiprotocol switching: a combination of some of the storage management capabilities and the IP capabilities with the quality of service elements that we get in service contracts.
Are standards important to this effort?
We need standards in storage management, and we need them to be comprehensive. I hesitate to comment on one particular standard [CIM/WBEM]. But by comprehensive, I do mean adopted across the storage, compute and transport industry framework. Having standards that aren’t mindful of [those other things] doesn’t get us where we need to go. We can’t be thinking of these as separate things.