Mainframe code presents problems

By some estimates, the total value of the applications residing on mainframes today exceeds US$1 trillion. Most of that code was written over the past 40 years in Cobol, with some assembler, PL/I and 4GL thrown into the mix.

Unfortunately, those programs don’t play well with today’s distributed systems, and the amount of legacy code at companies such as Sabre Holdings Corp. in Southlake, Tex., makes a rewrite a huge undertaking.

“We’re bound by our software and its lack of portability,” Sabre vice-president Alan Walker said of the 40,000 programs still running on IBM Transaction Processing Facility (TPF) and other mainframe systems.

With a shortage of Cobol programming talent looming in the next decade and a clear need for greater software agility and lower operating costs, IT organizations have begun to make transition plans for mainframe applications. The trick lies in figuring out which applications to modernize, how to do it and where they should reside.

Applications fall into one of three groups based on scale, said Dale Vecchio, an analyst at Gartner Inc. Applications under 500 MIPS are migrating to distributed systems. “These guys, they want off,” Vecchio said. As organizations begin peeling away smaller applications, they may move to a packaged application; port the application to Unix, Linux or Windows; or, in some cases, rewrite the applications to run in a .Net or Java environment, he said.

In the 1,000-MIPS-and-up arena, the mainframe is still the preferred platform. Applications between 500 and 1,000 MIPS fall into a grey area where the best alternative is less clear. An increasingly common strategy for these applications is to leave the Cobol in place while using a service-oriented architecture (SOA) to expose key interfaces that insulate developers from the code.

“If you expose those applications as a Web service, it’s irrelevant what that application was written in,” said Ian Archbell, vice-president of product management at tool vendor Micro Focus International PLC in Rockville, Md. “SOA is just a set of interfaces, an abstraction.”

“SOA at least allows you to break the dependency bonds,” said Ron Schmelzer, an analyst at ZapThink LLC in Waltham, Mass.
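
The abstraction Archbell and Schmelzer describe can be pictured as a thin service facade: client code is written against an interface, and whether a Cobol transaction or a rewritten service sits behind it is invisible to callers. A minimal sketch, with all class and method names invented for illustration:

```python
# Sketch of an SOA-style facade: callers depend only on the interface,
# never on the legacy implementation behind it. All names hypothetical.
from abc import ABC, abstractmethod


class FareSearchService(ABC):
    """The exposed contract -- 'just a set of interfaces, an abstraction'."""

    @abstractmethod
    def search(self, origin: str, destination: str) -> list[str]:
        ...


class LegacyCobolFareSearch(FareSearchService):
    """Wraps a mainframe transaction; callers never see the Cobol."""

    def search(self, origin, destination):
        # A real adapter would invoke a CICS/TPF transaction through a
        # connector; here we simulate a single legacy result.
        return [f"{origin}-{destination} via legacy TPF"]


class RewrittenFareSearch(FareSearchService):
    """A replatformed implementation honouring the same contract."""

    def search(self, origin, destination):
        return [f"{origin}-{destination} option {i}" for i in range(3)]


def book_cheapest(service: FareSearchService, origin: str, destination: str) -> str:
    # Client code is written once against the interface; swapping the
    # implementation ('breaking the dependency bonds') needs no change here.
    return service.search(origin, destination)[0]
```

Because `book_cheapest` sees only the `FareSearchService` interface, the Cobol behind `LegacyCobolFareSearch` can later be replaced by `RewrittenFareSearch` without touching any client code.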


Cobol isn’t going away, but it’s also not moving forward. While the Cobol code base on mainframes is projected to increase by three to five per cent a year, that’s mostly a byproduct of maintenance, said Gary Barnett, an analyst at Ovum Ltd. in London. “No one is learning [Cobol] in school anymore, and new applications aren’t being built in Cobol anymore,” said Schmelzer. “Cobol is like Latin.”

Vendors such as Micro Focus have abandoned the idea of evolving the Cobol language for distributed application development. “Micro Focus is not about a better Cobol compiler,” said Archbell.

Instead, its approach is to “embrace and extend,” he said. “We expose things like aggregated CICS transactions as JavaBeans, Web services or .Net or C# code. It’s wrappering.” But with so much legacy code, that process won’t take place overnight. “It could take 20 years,” Archbell said.
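
In practice, “wrappering” largely means translating between the fixed-width records a Cobol program exchanges and the structured payloads a Web service expects. A hedged sketch of that translation layer, with a made-up copybook layout:

```python
# Sketch of the data-translation half of 'wrappering': mapping a
# fixed-width Cobol-style record to a dict a Web service could return
# as JSON. The copybook layout below is invented for illustration,
# loosely mirroring:
#   01 FARE-REC.
#      05 ORIGIN      PIC X(3).
#      05 DEST        PIC X(3).
#      05 PRICE-CENTS PIC 9(7).

# (field name, offset, length)
FARE_REC_LAYOUT = [
    ("origin", 0, 3),
    ("dest", 3, 3),
    ("price_cents", 6, 7),
]


def unwrap_fare_record(record: str) -> dict:
    """Turn a fixed-width legacy record into a service-friendly dict."""
    out = {}
    for field, offset, length in FARE_REC_LAYOUT:
        out[field] = record[offset:offset + length].strip()
    # Numeric PIC 9 fields arrive zero-padded; convert to an integer.
    out["price_cents"] = int(out["price_cents"])
    return out


# A record as the legacy program might emit it: "YYZ" + "LHR" + "0045900"
raw = "YYZLHR0045900"
```

Calling `unwrap_fare_record(raw)` yields `{"origin": "YYZ", "dest": "LHR", "price_cents": 45900}`, which any modern stack can serialize; the reverse mapping wraps inbound requests back into the record the Cobol program expects.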

Sabre still has more than 10,000 MIPS of applications on mainframes, and Walker plans to migrate everything off over the next few years. The company’s TPF-based fare-searching application, used by travel agents, has been rewritten to run as a 64-bit Linux program on four-way Opteron servers.

Sabre migrated the back-end data to 45 servers running MySQL that each contain fully replicated data. The new system is more flexible and “pretty cheap” compared with the mainframe, Walker said.
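
With every server holding a full copy of the data, any replica can answer any read, so spreading search load can be as simple as cycling through the pool. A sketch under that assumption (hostnames invented, and a real router would of course open database connections):

```python
# Sketch of spreading read-only queries across fully replicated
# database servers, in the spirit of the setup described above.
# Each replica holds the complete data set, so any one can answer.
from itertools import cycle

# The article describes 45 replicas; three invented hostnames stand in.
REPLICAS = ["mysql-01", "mysql-02", "mysql-03"]
_next_replica = cycle(REPLICAS)


def route_read_query(query: str) -> str:
    """Pick the next replica round-robin; return which server was chosen."""
    server = next(_next_replica)
    # A real implementation would execute `query` against `server` here.
    return server
```

Successive calls rotate through `mysql-01`, `mysql-02`, `mysql-03` and back, so read capacity scales roughly linearly with the number of replicas, at the cost of duplicating storage on every node.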

He questions the conventional wisdom that all high-end applications need to stay on mainframes, noting that the search application was in the thousands of MIPS. “It’s pretty obvious that you don’t need mainframes to do large-scale transactions,” he said, pointing to the successes of companies such as eBay Inc.

Barnett points out that very few of his clients have been successful at completely rewriting large-scale applications. In Sabre’s case, it’s worth noting that the application was CPU- and memory-intensive and that competitive pressures would have forced a rewrite anyway. “We solved a larger problem,” which was the need to generate hundreds of results instead of the 10 to 20 the TPF system could deliver per search, Walker said.

Simply rewriting millions of lines of code to deliver the same features not only wouldn’t cut it financially at The Bank of New York Co., but also would require a lifetime of work, said Edward Mulligan, executive vice-president of the technology services division. A gradual transition to packaged applications might help such businesses, said Ovum’s Barnett.

“Eighty per cent of core business processes in banks are the same. In 10 years, it will make little sense to have your own, unique homegrown savings program,” he said.

Mulligan has been migrating some smaller applications, freeing up expensive mainframe capacity. The big reason is cost. When the vendor of his problem management software refused to bring licensing in line with equivalent packages in the Windows arena, he migrated to a cheaper Windows version.

The total operating costs of running applications on the mainframe can be “easily” 10 times that of a Unix or Windows architecture, said Sabre’s Walker.

While IBM has begun offering sub-capacity, usage-based pricing, few third-party vendors of mainframe software have followed suit. “Vendors who don’t embrace flexible pricing are accelerating the decline in their business,” said Barnett.

At Sabre, Walker plans to continue to migrate off the mainframe, which he said is simply too expensive.

