IT managers have to be prepared for a constantly changing environment, one that can adapt as new applications are added from the cloud, mobility and the Internet of Things.
Can you imagine a world where “in production” is just the breath you take between changes (unlike today, where change is an event that disturbs production)?
That’s where we’re headed, and faster than most of us would like. It’s being driven by an ever-growing number of parts in play (what the cloud, mobile, and Internet of Things revolutions are creating), by vendor churn in mature markets, and by a year-over-year need to increase enterprise agility.
Few of us — today — have a portfolio that’s up to the task of incessant, daily (or more than daily) change. But that’s where we’re headed, in a world where almost every product is IT-enabled (or, as the financial sector well knows, the product is nothing more than yet another application).
Taking two to three years to implement a major, far-reaching application system made sense in years gone by (even though the month-after-month whinging about “why isn’t it done yet” was a pain in the neck). The result would then remain a stable platform, changing little, for a decade.
Indeed, doesn’t our normal depreciation treatment for such things — 10 years — reflect that languid pace of long periods of stability punctuated by a major change cycle?
Agility, on the other hand, requires frequent change. That means the piece parts of our portfolios have to interact well, so that the changes can be isolated (to keep the test cycle in line with the scope of each change).
Modify the packages you buy to make up your portfolio, and you multiply the testing and time needed to deliver any new change. No wonder the backlog never shrinks!
With process control and sensing vendors on their own change cycles, and with your enterprise’s products (increasingly) interacting with your systems after they’ve been sold, a lot of different forces are upending the stability of your portfolio.
Given that, in the next few years, your infrastructure and middleware are likely to be a hybrid of resources drawn from various cloud-based providers alongside dedicated facilities, that you’ll be asked to integrate more (not less) business-bought technology, and that your firm is likely to go through at least one merger or acquisition, it becomes clear: every investment we can make in making our portfolios change-friendly (rather than change-resistant) is a good thing.
So here are some ideas, from early adopters of the process of virtualizing their portfolios.
- Data Abstraction: Adopted by a national service provider whose systems must operate continuously, data abstraction saw the IT group in question virtualize the data connections between systems: they defined a master data model for the business and built insulating transform routines to convert data from sensing stations, and from applications in the portfolio, into and out of the abstracted form. Changing any piece of the puzzle now only requires that the abstractor in question be tested (and changed if needed); no other part of the portfolio “sees” anything different happening elsewhere. (As a side benefit, this organization is now creating multiple data products for sale that “hang off” real-time operating data, since the abstraction also allows for load balancing in the data store.)
- Process via Message: Moving from monolithic systems to small-scale instantiations that do a task well and quickly means that, as transaction flows increase, additional instances of a message handler can be brought to bear to handle the load. This is an excellent way to have “just in time” infrastructure to meet peaks (a key reason to head to a hybrid cloud approach). These processing engines, in turn, are small, making change much more rapid and testing (especially if coupled with the data abstractor model) quick and comprehensive.
- Unmodified Packages Only: When modified function is required but a package is the base, the modified functions are handled by outboard routines that are separate components, rather than by modifying the package itself. This ensures that the package in question (a) can be easily updated when its vendor issues a new release, (b) is ready for use from a cloud provider of that service, who will have their own change cycle, and (c) leaves local differentiation more responsive (since it lives in a separate routine).
- Rescaling the Finances: This sort of component-driven, fragmented portfolio allows for a rescaling of the financial overhangs that impede change. More projects are small enough to be treated as operational expenses, requiring no capitalization (and hence no depreciation). Shorter depreciation cycles (the two-year Canada Revenue Agency class for routine software, rather than the 10-year one for fundamental software) can be used, shrinking the financial burden.
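The data-abstraction idea above can be sketched in a few lines. This is a minimal illustration, not the provider's actual design: the `Reading` model and the field names (`stn`, `type`, `val`) are invented for the example. The point is that each insulating transform is the only code that knows a given system's native shape.

```python
from dataclasses import dataclass

# Canonical "master" record that every system reads and writes.
# Field names here are illustrative assumptions, not a real provider's model.
@dataclass
class Reading:
    station_id: str
    metric: str
    value: float

def from_sensor_feed(raw: dict) -> Reading:
    # Insulating transform: only this routine knows the sensor's field names.
    return Reading(station_id=raw["stn"], metric=raw["type"], value=float(raw["val"]))

def to_billing_format(r: Reading) -> dict:
    # Outbound transform for one consuming application.
    return {"StationId": r.station_id, "Metric": r.metric.upper(), "Amount": r.value}

# If the sensor vendor renames "stn", only from_sensor_feed needs retesting;
# the billing side never "sees" the change.
record = from_sensor_feed({"stn": "A7", "type": "flow", "val": "12.5"})
print(to_billing_format(record))
```

The payoff is exactly what the bullet describes: a vendor change touches one transform, not the whole portfolio.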
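The process-via-message pattern can likewise be sketched with a work queue and small interchangeable workers. Threads and a doubling task stand in for real message brokers and real processing engines; the shape is what matters: to absorb a peak, you start more instances of the same small handler.

```python
import queue
import threading

# Each "processing engine" is a small worker that does one task well;
# adding workers (threads here, cloud instances in practice) absorbs peak load.
tasks: "queue.Queue" = queue.Queue()
results: "queue.Queue" = queue.Queue()

def worker() -> None:
    while True:
        n = tasks.get()
        if n is None:          # sentinel: shut this instance down
            break
        results.put(n * 2)     # stand-in for the real unit of work
        tasks.task_done()

# Three identical instances; scaling up is just starting more of them.
workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()
for n in range(10):            # a burst of incoming messages
    tasks.put(n)
tasks.join()                   # wait until the burst is fully processed
for _ in workers:
    tasks.put(None)
for w in workers:
    w.join()
print(sorted(results.queue))
```

Because each handler is tiny and stateless, testing one covers them all, which is what keeps the change cycle short.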
We’ve spent a decade or so trying to reduce our portfolios to fewer but larger components. On the hardware side, we look at tens of servers and try to ignore the fact that hundreds or thousands of system images are actually virtualized there. It’s time to rework our software likewise: a few major connector approaches, and many components that add up to agility.