Optimizing IT resources aids bottom line

As vice president of MIS at Circuit City Stores Inc., William E. McCorey Jr. expects to spend a lot of time this year kicking the tires on his company’s technology infrastructure.

Circuit City is very close to achieving a formal Six Sigma quality certification. Key to obtaining the certification is ensuring that Circuit City’s hardware and software infrastructure is optimized for maximum system availability and service levels, McCorey says.

The Richmond, Va.-based electronics retailer isn’t alone. “Enterprises are being asked to do more with less. Their budgets are decreasing [and] their skill sets are decreasing, while what they’re being asked to do is getting increasingly complex,” says Theresa Lanowitz, an analyst at Gartner Inc.

As a result, expect to see IT efforts targeted increasingly at improving the use of existing technology through projects such as hardware consolidation, performance monitoring and application tuning, she says.

One of the big projects Circuit City is considering for 2003 is a Unix server consolidation. The company is exploring the possibility of consolidating its 240 Hewlett-Packard Co. Unix servers onto a much smaller number of HP Superdome and rp-class servers. It has similar plans to consolidate its 13 IBM AS/400 midrange systems into a significantly smaller number of large AS/400 boxes.

Circuit City is also using VMware Inc.’s virtualization software to boost usage rates on its Wintel servers. So far, it has deployed VMware on about 80 of its more than 230 Intel boxes.

The goal of these efforts is to reduce infrastructure and administrative costs by up to 15 per cent, says McCorey. The use of Palo Alto, Calif.-based VMware’s virtualization technology on Wintel servers has already pushed utilization rates from 20 per cent to nearly 60 per cent. At the same time, Unix server utilization rates, which now hover around 60 per cent, are expected to be pushed to more than 80 per cent as a result of the planned consolidation.
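The underlying arithmetic is straightforward: the same aggregate workload spread across fewer, busier hosts. The sketch below illustrates that math with hypothetical numbers; it is not based on Circuit City’s actual workload data.

```python
import math

# Illustrative consolidation math (hypothetical figures, not Circuit City's data).
# The total compute demand stays constant; fewer hosts each run "hotter".

def hosts_needed(current_hosts: int, current_util: float, target_util: float) -> int:
    """Estimate how many hosts can serve the same aggregate load at a higher
    average utilization."""
    aggregate_load = current_hosts * current_util   # demand in "host equivalents"
    return math.ceil(aggregate_load / target_util)

# 80 Wintel boxes averaging 20% utilization could, in principle, be served by
# far fewer virtualization hosts running near 60% utilization.
print(hosts_needed(80, 0.20, 0.60))   # -> 27
```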

“We look to be a service utility,” McCorey says. “Our goal is to keep the systems up and running as much as possible and to improve our performance.”

Monitor performance

Performance monitoring is a key component of infrastructure optimization, says Suzanne Gordon, vice president of IT at SAS Institute Inc. in Cary, N.C.

SAS monitors network performance and key server performance statistics using resource measurements gathered by Simple Network Management Protocol (SNMP) and Windows Management Instrumentation (WMI) collectors, Gordon says.
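Gordon doesn’t describe the collectors themselves, but an SNMP poll of a device counter is typically a one-line query. The sketch below shows that idea using the standard net-snmp command-line tools; the host name, community string and OID are placeholders, not details of SAS’s environment.

```python
import subprocess

def snmp_get(host: str, oid: str, community: str = "public") -> str:
    """Fetch a single SNMP value with net-snmp's snmpget utility (assumed installed)."""
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", community, "-Ovq", host, oid],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

# e.g. inbound octet counter on interface 1 of a hypothetical router
print(snmp_get("router1.example.com", "IF-MIB::ifInOctets.1"))
```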

Over the past year, SAS has also been using a home-grown system to consolidate all of its change management and resource-monitoring data. The system features automated alerts that notify administrators when a performance metric “falls outside of normal boundaries,” Gordon says.
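Gordon doesn’t say how the home-grown system defines “normal boundaries,” but a common approach is to baseline each metric and flag statistical outliers. The following is a minimal sketch of that idea, not SAS’s implementation; the threshold and sample data are assumptions.

```python
from statistics import mean, stdev

def is_out_of_bounds(history: list[float], latest: float, n_sigma: float = 3.0) -> bool:
    """Return True when the latest sample falls outside the baseline band
    (more than n_sigma standard deviations from the historical mean)."""
    if len(history) < 2:
        return False              # not enough history to establish a baseline
    mu, sigma = mean(history), stdev(history)
    return abs(latest - mu) > n_sigma * max(sigma, 1e-9)

# Example: CPU utilization samples for one server, followed by a spike.
baseline = [22.0, 25.0, 19.0, 24.0, 21.0, 23.0]
if is_out_of_bounds(baseline, 91.0):
    print("ALERT: metric outside normal boundaries")  # would notify an administrator
```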

The technology has allowed SAS to identify underused systems “to see if we can retire or reallocate them,” she explains.

For instance, a special conference-room scheduling system is being replaced by Microsoft Outlook’s scheduler, and several related specialized reporting systems are being retired. Users will then be asked to find the information those systems provided in the company’s customer relationship management system, Gordon says.

As part of its technology optimization, SAS also recently switched from a frame-relay network to an Internet-based virtual private network (VPN) to capitalize on the VPN’s lower cost and greater speed, Gordon says. Server and storage consolidation projects are also in the works. To reduce maintenance, SAS is consolidating 45 Unix servers that host some of its internal applications onto 10 to 15 servers.

Centralize IT resources

Integrating and centralizing the operations and management of IT infrastructure is also a good way to optimize it, says Mario C. Carlos, head of IT at Manila Electric Co. in Pasig, Philippines.

Centralization maximizes the sharing of resources, makes it easier to develop and implement a disaster recovery plan, minimizes spare capacity, and provides for a less complicated data communications infrastructure, Carlos says. It also makes it easier to secure the infrastructure and provides for easier backup, recovery and disk storage management, he adds.

The utility company has moved from a highly distributed, decentralized environment to a “centralized and partially meshed architecture,” Carlos says. Technologies key to the transition were storage arrays from EMC Corp. for data sharing across mainframe and open systems, Storage Technology Corp. applications for remote backup and synchronization of local and remote data, and IBM’s Parallel Sysplex system to share mainframe resources across the organization.

Understanding business and service-level requirements is key to right-sizing the infrastructure, McCorey says. “You can’t design a system unless you really understand what the application is going to do for the business and what the impact of either a downtime or poor performance is going to be,” he says. McCorey uses usage histories and information gleaned from running similar applications to forecast needs for a new application.
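McCorey doesn’t detail the forecasting method beyond pointing to usage histories, but the simplest version of the idea is to fit a trend to past utilization and project it forward. The sketch below shows a least-squares linear projection with hypothetical monthly figures, purely as an illustration of that approach.

```python
# Illustrative capacity forecast from a usage history (hypothetical figures;
# a straight-line trend fitted to monthly peak utilization, extrapolated forward).

def linear_forecast(history: list[float], months_ahead: int) -> float:
    """Least-squares linear trend of a series, extrapolated months_ahead."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + months_ahead)

peak_util = [48, 51, 53, 55, 58, 60]          # last six months, per cent
print(f"Projected peak in 6 months: {linear_forecast(peak_util, 6):.0f}%")
```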
