Glass house faces rocky issues

As far back as the mid-1990s, companies began bucking the IT decentralization trend by moving Unix and Windows servers into their data centres. But many data centre managers are still struggling to coordinate systems management processes between their mainframes and the newer servers.

The big roadblock is getting business managers and various IT groups to agree that common, mainframe-like systems operation procedures are needed in areas such as software change management and the scheduling of batch processing jobs, said a half-dozen attendees at last week’s spring conference of the American Federation of Computer Operations Management (AFCOM).

“We still have to get our hands around open systems,” said Pete Lillo, manager of data centre operations at NCCI Holdings Inc., a Boca Raton, Fla.-based company that collects and processes data for firms in the workers’ compensation industry.

NCCI’s data centre currently houses a mainframe, 20 Unix servers and about 130 Windows-based systems. Early last year, Lillo said, a new CIO persuaded managers at the company to adopt a unified change-management process.

But there’s more to be done, Lillo said. NCCI keeps its application source code in four separate repositories that were set up for the different types of systems. A proposal to develop a single repository or provide a common view of the existing data has received preliminary approval but is still in the evaluation stage, Lillo said.

Lillo is also seeking funding for enterprise job-scheduling software. He said there’s no common scheduling tool now, which means that data centre workers at NCCI “really don’t manage [the process] the way I’d like it to be managed.”
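Lillo didn’t describe the tools in detail, but the essential job of an enterprise scheduler is enforcing run-order dependencies between batch jobs that span platforms. The minimal Python sketch below shows the kind of cross-platform dependency graph such a tool manages; the job names and dependencies are hypothetical, invented for illustration, and are not NCCI’s actual job stream.

    from graphlib import TopologicalSorter

    # Hypothetical nightly batch jobs and their cross-platform dependencies.
    # These names are placeholders, not NCCI's real workload.
    jobs = {
        "extract_claims_on_mainframe": set(),
        "load_warehouse_on_unix": {"extract_claims_on_mainframe"},
        "generate_reports_on_windows": {"load_warehouse_on_unix"},
        "archive_logs_on_unix": {"generate_reports_on_windows"},
    }

    # A common scheduler dispatches jobs in dependency order no matter
    # which platform each one runs on; without it, operators on each
    # platform coordinate the handoffs by hand.
    for job in TopologicalSorter(jobs).static_order():
        print("dispatch:", job)

A commercial scheduler would submit each step to an agent on the target system rather than print it, but the ordering problem it automates is the one sketched here.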

Other IT managers at the semiannual conference run by AFCOM, an Orange, Calif.-based association for data centre professionals, said they face similar challenges.

Coordinating operations support procedures on Unix and Windows servers “is the biggest problem we have,” said David Sandberg, chief of the business management division at the Defense Information Systems Agency’s data centre in Ogden, Utah. The facility controls 140 servers and a pair of mainframes. Sandberg said he’s looking to create a team that would take charge of tracking the configurations of all of the systems and potentially set up a common change-management process. But nothing is definite yet, he added.

Gary Yeck, an analyst at Meta Group Inc. in Stamford, Conn., said much of the resistance to adopting more rigid systems operation procedures boils down to a distrust of mainframe methods.

“Walk into any data centre and talk to the distributed [systems] people, and they think mainframe is a dirty word,” Yeck said. “They don’t want to carry forward any of the disciplines we’ve learned from the mainframe world.”

But some IT managers are finding ways around that resistance. For example, Royal Bank of Canada is due to finish converting the last of 43 server-based applications shortly, as part of an 18-month project designed to let its data centre personnel use mainframe tools for multiplatform job scheduling and workload balancing.

Business units didn’t have to sign up, said Patrick Morassutti, manager of enterprise scheduling and batch automation at the Toronto-based bank. But they were told that if they didn’t, they would have to monitor overnight batch jobs themselves. “They get scared when you say that,” he said. “No one’s opted out yet.”

The need to install tools that automate IT operations across multiple systems may be self-evident to data centre managers. But AFCOM attendees said the senior executives who have to sign off on such projects want to see something more tangible: returns on investment.

Lillo, for one, is working to cost-justify NCCI’s proposed investment in enterprise job-scheduling software. The tools he’s considering would probably cost between US$200,000 and $400,000, he estimated, adding, “And I have to justify in business terms why it’s necessary.”

Lillo is due to present the proposal within the next couple of weeks. If all goes well, he said, an enterprise scheduler could be in place by year’s end.

Mark Levin, an analyst at Meta Group, said many data centre managers aren’t well versed in doing ROI calculations. And the cost of new data centre tools may be compounded by the need to hire contractors to handle the actual installation work, he added.
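To illustrate the arithmetic Levin is describing, here is a back-of-the-envelope payback calculation in Python. Only the tool cost is drawn from the article (the midpoint of Lillo’s US$200,000-to-$400,000 range); the contractor and savings figures are assumptions made up for the example.

    # Hedged payback sketch. All figures except tool_cost are assumptions.
    tool_cost = 300_000                # midpoint of Lillo's estimate
    contractor_install = 75_000        # assumed one-time installation labour
    annual_ops_savings = 150_000       # assumed: fewer manual reruns, less overtime
    annual_downtime_avoided = 100_000  # assumed cost of scheduling errors avoided

    total_cost = tool_cost + contractor_install
    annual_benefit = annual_ops_savings + annual_downtime_avoided

    print("Payback: %.1f years" % (total_cost / annual_benefit))  # prints 1.5 years

A justification “in business terms,” as Lillo puts it, is essentially an argument that this payback period is short enough to be worth the disruption.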

One potential benefit of multiplatform tools is a reduced need for system operators. Royal Bank’s Morassutti said an installation of systems management middleware from Alpharetta, Ga.-based Stonebranch Inc. enabled the bank to cut 18 operations jobs.

But all of the affected data centre workers were reassigned to other positions, and Morassutti said the main justification for the software was that uncoordinated job scheduling had been causing system errors. “Downtime costs us more money than people do,” he said.
