In his landmark 2003 Harvard Business Review article IT Doesn’t Matter, Nicholas Carr set off a flurry of international debate with his claim that the ubiquity of IT has made it a commodity, devoid of the strategic advantages promised in the days of record IT spending. Three years later, his article still sparks controversy, with readers interpreting his claims as the end of innovation in IT and the obsolescence of those of us who have chosen IT as our profession. If that’s not enough, there’s mounting evidence that his predictions are materializing.
Gartner recently predicted that by 2011, 10 percent of today’s IT departments will no longer exist, 10 percent will reach commodity status, and 75 percent will fundamentally change in role and function. In the March issue of CIO Canada, all four CIOs interviewed on the findings (and on Carr, for that matter) felt the predictions were somewhat aggressive and premature, but likely a reality in the longer term. Decades of investing in IT in order to innovate and gain competitive advantage are giving way to a commoditized IT environment.
What we’re seeing today is IT’s function shifting from innovation to utility, much like the evolution of the electricity and railway industries used as analogies in Carr’s article.
It’s easy to interpret Carr’s and Gartner’s predictions as doomsday warnings, spelling the end of IT as we know it. The reality is that commoditization does not mean the end of IT’s strategic value. Utility and innovation are not mutually exclusive, but they must be balanced for the purposes of IT’s long-term sustainability.
What’s required to achieve this balance isn’t a reinvention of the profession, but rather a shift in cultural and organizational mindsets, and the application of common-sense best practices honed from other commoditized industries. Much of this shift, and indeed the emerging true competitive advantage of IT, can be summed up in one term: operational excellence.
A cultural shift to operational excellence
In order to make the successful transition to a commoditized IT environment, shifts need to take place at both the executive level and the front lines of IT. At the executive level, this shift is all about operational excellence. Admittedly an overused and hardly a new term, operational excellence is perhaps best defined by Treacy & Wiersema in their best-selling book, The Discipline of Market Leaders. In it, they state that an organization must choose one of three value disciplines as its underlying operating model:
• Product Leadership — a focus on the core processes of invention, product development and market exploitation.
• Customer Intimacy — an obsession with core processes of solutions development, results management and relationship management.
• Operational Excellence — processes for end-to-end services that are optimized and streamlined to minimize costs and hassles.
While a capability in all three is important, one must be the primary value discipline to drive decisions, resolve conflicts, and set priorities. And while CIOs the world over want their departments to drive innovation, their companies need to rely on IT as a source of operational excellence in order to survive, let alone compete. It’s the classic paradox of the CIO — utility vs. innovation — but one that’s very real and reinforced by expectations increasingly imposed on IT by business users and executives.
For CxOs, this shift to operational excellence essentially means maintaining their expectations of IT — enabling, streamlining, cost-cutting — with the possible addition of looking at IT also as a source of risk and responsibility.
For CIOs, the shift is somewhat more profound. The role of a CIO in an operational excellence environment is two-fold: add strategic value to the company through new technology and application development; and as quickly as possible, shift all applications from the high-cost, high-risk development environment to the low-cost, utility-grade environment of support and maintenance, and ultimately to decommissioning when they cease to add value.
Separating IT into disciplines
At the IT level, a similar cultural shift needs to take place, in this case around the separation of application development (AD) and application support and maintenance (ASM), a.k.a. application management, application optimization, etc. Traditionally grouped together under the wider application management umbrella, AD and ASM have evolved to the point where they are as distinct from one another as civil engineering is from electrical, or obstetrics from pathology. Consider how AD and ASM differ in terms of (respectively):
• Organization: project management vs. continuous improvement
• Culture: visionary builders vs. problem solvers
• Governance: aristocracy vs. democracy
The distinction between these two disciplines cannot be overstated, and it is imperative that they be separate entities within IT (along with other disciplines such as help desk, infrastructure, etc.). Analogies abound to illustrate how this common-sense approach is applied in virtually every other mature, commoditized industry: buildings: construction vs. property management; electricity: generation vs. distribution; railways: construction vs. maintenance.
The cultural shifts described above are not difficult to achieve, but they are definitely the hard part of adapting to a commoditized IT environment. With the cultural shifts underway, the next (and by comparison, far easier) step is to apply common-sense principles to IT that have been proven and honed in other commoditized industries. These principles aren’t meant to replace methodologies such as ITIL, CMM, CoBit or others; if anything they are common-sense steps that will help IT organizations get even more value from adopting such frameworks.
Need for universal understanding
Without everyone in a commoditized or utility-grade industry speaking the same universally understood language, confusion and miscommunication lead to chaos. In electricity, a kWh on the grid in Montreal is the same as a kWh fed into the grid by a wind turbine in California. In rail transport, standard Railroad Performance Metrics (RPM) are the same whether you’re on the Canadian Pacific or the B&O railway lines.
Similarly, IT organizations need to adopt a universally understood language — an application taxonomy — to classify things like application types, support functions, metrics and more. Yes there will be local interpretations in specific business units, but the organization as a whole needs to speak the same language.
In the same vein, it’s also important that IT reports to the business regularly, with similarly understood metrics. Just as a business must report to its shareholders using commonly accepted financial terminology and according to generally accepted accounting principles, so too should IT in its regular reporting to shareholders (the business). Specifically, IT should report at least on an annual basis to the business in three key areas, all according to the application taxonomy:
• Application Inventory — what do we have, what does it do, what is its status?
• Activities — developments, support & maintenance activities, enhancements.
• Key Success Metrics — up-time, time to respond, and time to resolve (for bugs or requests).
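To make the three reporting areas concrete, here is a minimal sketch of what a taxonomy-based annual report might compute. The record fields, class names and the sample figures are purely illustrative assumptions, not part of any standard taxonomy:

```python
from dataclasses import dataclass
from datetime import timedelta

# Hypothetical sketch: one application-inventory record plus the three
# key success metrics (up-time, time to respond, time to resolve).
@dataclass
class Application:
    name: str
    taxonomy_class: str   # e.g. "mission-critical" (illustrative label)
    status: str           # e.g. "development", "support", "decommissioned"

def uptime_percent(scheduled_hours: float, outage_hours: float) -> float:
    """Up-time as a percentage of scheduled hours."""
    return 100.0 * (scheduled_hours - outage_hours) / scheduled_hours

def mean_hours(durations: list) -> float:
    """Average time to respond (or resolve), from per-ticket timedeltas."""
    return sum(d.total_seconds() for d in durations) / len(durations) / 3600.0

# Example annual figures for one application (numbers are invented)
payroll = Application("Payroll", "mission-critical", "support")
print(payroll.name, round(uptime_percent(8760, 4.4), 2), "% up-time")
print("mean time to resolve:",
      round(mean_hours([timedelta(hours=2), timedelta(hours=6)]), 1), "h")
```

The point of the sketch is that once every application carries the same taxonomy fields, these metrics can be rolled up identically across business units, which is exactly the "same language" the article calls for.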
Accountability and discipline
Software, if not modified, will run the same way every time; it never wears out. Where problems (glitches, system outages, etc.) almost always occur is when the wrong change is made to an application at the wrong time and/or by the wrong person. Hence, security and integrity around change control are among the most important principles in a commoditized IT environment.
Analogous to cheque-signing authority or secure access to physical facilities, change control needs to adhere to certain very established, rigorous rules around such issues as: backup and recovery; separation of duties and environments; testing and back-out; data and source code integrity; change control process, and more.
The reality is, even if all such rules are followed, no system is ever 100 percent guaranteed to operate as planned 100 percent of the time. However, adhering to strict change-control procedures will go a long way towards increasing the reliability, predictability and quality (i.e. utility-grade qualities) of IT.
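One of the rules listed above, separation of duties, lends itself to a simple automated check. The following is a hypothetical sketch, with invented field names, of how a change request could be validated before deployment:

```python
# Hypothetical sketch of the separation-of-duties rule: the person who
# authors a change may not also approve or deploy it. The dict fields
# ("author", "approver", "deployer") are illustrative, not a real API.
def violates_separation_of_duties(change: dict) -> bool:
    """Return True if any two of author/approver/deployer are the same person."""
    roles = [change["author"], change["approver"], change["deployer"]]
    return len(set(roles)) != len(roles)

change = {"author": "dev1", "approver": "dev1", "deployer": "ops1"}
print(violates_separation_of_duties(change))  # True: dev1 approved own change
```

In practice such a check would sit inside the change-control process itself, so that a non-compliant change is rejected before it ever reaches the production environment.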
Related to change control, and a constant top-of-mind issue for CIOs, is the notion of governance. Different models exist to govern internal IT organizations and to ensure IT is represented in the overall corporate governance framework, but an increasingly common successful governance mechanism is the concept of an application steering committee. Formed for each business group of applications, accountable to the business owner and ultimately the executive, the steering committee should meet regularly to assess applications and set goals, and should be composed not only of functional stakeholders from IT, but also the application owner (business unit leader).
A final component of accountability and discipline, and to some degree universal understanding, is documentation; not documentation in the traditional sense of the word — manuals, code-embedded comments, etc. — but in the form of tickets. It is essential that every activity performed on any application be logged in a permanently stored ticket. Effectively serving as an application’s medical record, tickets ensure that no matter who owns what application in what state, an accurate activity and status log follows that application throughout its lifecycle.
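The "medical record" idea reduces to an append-only log: every activity is recorded, and entries are never edited or removed. A minimal sketch, with illustrative field names of my own choosing, might look like this:

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch: a permanently stored, append-only ticket log that
# follows an application through its lifecycle. Field names are invented.
def log_ticket(log: list, application: str, activity: str, performed_by: str) -> None:
    """Append one ticket; entries are only ever added, never modified."""
    log.append({
        "application": application,
        "activity": activity,
        "performed_by": performed_by,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

history = []
log_ticket(history, "Payroll", "patched date-handling bug", "asmith")
log_ticket(history, "Payroll", "upgraded database driver", "bjones")
print(json.dumps(history, indent=2))
```

Because the log travels with the application, whoever inherits it in whatever state can reconstruct exactly what was done, when, and by whom.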
An inherent part of the shifts in mindset described earlier is instilling within IT organizations a culture of problem solving and continuous improvement. With AD and ASM disciplines (again) separated, it’s important that the utility IT organization embrace cultural values such as: intolerance of software bugs; keeping the lights on at all costs; if a problem arises, it must be fixed; if it isn’t broken, increase its reliability, usability and productivity.
Continuous improvement is also a process, and this is where innovation in IT still thrives. The imperative of electricity companies is to keep the lights on, but the industry is constantly innovating to develop new, cleaner, alternative sources of energy. Likewise, the railway industry’s mandate is to ensure the trains arrive on time, but that doesn’t stop innovation in rail-car design. So it is in IT — yes, the first role is to keep the systems up, but regular, continuous improvements to existing systems is the channel for IT to innovate.
Utility computing vs. utility-grade computing
Three years ago, Nicholas Carr made headlines with his still controversial prediction about the commoditization of IT. More recently, not only have his predictions begun to take some shape, but he’s also sparked further (although nowhere near as pronounced) debate with his 2005 MIT Sloan Management Review article, The End of Corporate Computing. In it, Carr again articulated the end of IT as we know it, this time predicting that the near future will see computing take on a utility model, where companies tap into computing power and data the same way they tap into electrical outlets today.
As with his first article, evidence is mounting that Carr’s corporate computing predictions are starting to materialize. With Salesforce.com, NetSuite, MySAP and even Google as clear examples, one need only look at the growth of ASP, SaaS or virtual appliance delivery models to see how.
As much as we like to debate his findings, we have to give Carr credit for making sense. But we also have to answer the question: If everything we’ve done up to this point is now a commodity, and everything we’re developing now is en route to obsolescence, what do we do now?
The answer: achieve operational excellence. Balance utility and innovation. In short, before we get too caught up in the utility computing model, let’s make sure our current computing runs like a utility.
–Dr. Peter Thompson is president and CEO of RIS, an IT services firm specializing in applications support and maintenance, and author of Maximizing IT Value through Operational Excellence, which includes a reprint of Nicholas Carr’s original IT Doesn’t Matter article.