
The curse of IT infrastructure

In July, the General Accounting Office published what I consider a rare insight into IT spending. The agency broke down the US$26 billion Department of Defense IT budget into the following categories: business systems, $5.2 billion; business infrastructure, $12.8 billion; mission support (including its own separate infrastructure), $8 billion.

Less than half of the total goes to spending that directly and visibly supports users: business systems account for only $5.2 billion, and part of the $8 billion in mission support is itself infrastructure. The lion’s share goes toward the “infrastructure” – the hole from which bugs, disruptions and mysterious failures come.

Here we have an audit confirming what I have seen creeping up on IT for more than 20 years: It isn’t the applications but the need to support a costly infrastructure that has been dampening the funding for technological innovation.

You can always get votes for adding another attractive application. But hardly anybody will sign up to support an infrastructure that may be serving customers who aren’t paying their way. Selling tickets for seats in fancy rail cars was always easy. Finding money to pay for the track, switches, signal equipment and the fuel depot was always much harder.

The root cause of IT failures and excessive IT costs in large organizations lies in rickety infrastructures put in place one project at a time. What you usually have in large organizations is not a secure, low-cost and reliable infrastructure but a patchwork of connections cobbled together without sufficient funding and rushed to completion without sufficient safeguards.

The currently fashionable approach is to impose centrally dictated “architectures” to cure the pains from incompatible and redundant systems. Such architectures are just another way of achieving order through centralization and consolidation. Unfortunately, under rapidly changing conditions, such a cure may be worse than the original disease.

Invariably, centralization involves the awarding of a huge outsourcing contract to a vendor for whom a critical piece of the infrastructure is carved out, such as the management of desktops. Associated servers, switches and data centers may also be included in the IT territory ceded to the outsourcer, while the resident IT bureaucracy always keeps tight control of a few critical components in order to retain its absolute power.

This approach to remedying infrastructure deficiencies is flawed because it gets the sequence backward. Contracting for an infrastructure should be the last – not the first – step in putting improved systems in place.

First, IT managers should focus on determining which applications must be delivered immediately. The reliability, affordability and timing of application services will dictate which one of the many conceivable infrastructures would work best to solve high-priority problems.

Second, the organization’s management structure and business goals must be settled. I don’t see how one can get funding for overhauling infrastructure as a separate investment; such investments are notoriously sterile ground for a credible business case. Infrastructures must therefore be designed so that each step can be financed with incremental funding. Such economics make outsourcing of infrastructure services to a computing “utility” the preferred solution. The recent huge wins by a computer services firm offering “on-demand” usage pricing are a good sign that customers are ready to buy computing “by the quart” instead of owning a farm.

Third, a feasible transition plan for legacy applications must be developed and tested before technical choices are committed to, so that the least risky options can be selected.

Only after the completion of this sequence would it be safe to proceed with outsourcing. Precipitous contracting for infrastructure services is only for the hasty and the impatient (who will be long gone when the auditors finally show up).
