It’s too easy for all of us to get lost in the latest announcements of new services, new products, and new “starts” on architecture (from single machines through client/server to clouds and an Internet of Things) and fail to see the big picture.
Yet, as IT professionals — especially those of us in daily contact with the business — we had better see that picture.
At the core of that big picture, when facing business clients, sits one question: what's IT for, anyway? It's a question we'd better have a firm answer to.
For 50 years that answer has been “automation,” in one form or another. It started life as a replacement for accounting machines and grew to a host of diverse applications all offered online — but think back over business cases for projects past. How many of them, even today, are rooted in “the system will be better, faster, less error-prone, cheaper overall”?
There'd be less wrong with that if we hadn't made a fatal turn in the mad rush to fix Y2K as the 1990s ended. Back then, for the one and only time in our professional careers when everyone had a hard stop date (you were done by December 31, 1999, or you had failed), most IT organizations turned to packaged solutions in a desperate attempt to beat the clock.
The problem wasn’t the mad dash to packages. The problem was that at that point the reason we have IT organizations fell by the wayside.
Anyone can implement “the same architecture, the same packages, the same components” repeatedly. Systems integrators and outsourcers of all types have been demonstrating that ever since.
The problem, as Nicholas Carr pointed out a decade ago when this trend was in its early days, is that when everyone has the same advantage, no one has an advantage.
Flash forward to 2014. Every organization’s challenge today is to simultaneously differentiate itself in a crowded market and radically reduce its cost profile while doing it. (That includes government agencies, where overlapping responsibilities and the inevitable cutting of the civil service are forcing a slow but steady rethink of what governments do, how they should do it, and at what cost, regardless of who’s elected to office.)
If you're a "me, too" enterprise, just another fashion outlet in the mall, another bank with the same product mix, another grocery chain, the squeeze is on. Internet-delivered services, driven by smart device apps, are pressing you on one side; rapidly shifting distribution costs and accelerated product-turn requirements are battering you on the other, all while tapped-out consumers aren't spending as much.
Anyone can be “Amazoned”, in other words, suddenly finding their category under attack.
If IT is for anything at the moment, it has to be for turning "me, too" into something unique. A retailer's Web presence can't be tacked onto the side of its store systems, barely talking to them; a bank has to be able to invent new products daily to escape the "me, too" of interest rate changes; health care providers have to presume that fees will be capped, then shifted to block grants, then cut, all while delivery metrics are added, and get ahead of that.
So three thoughts to start you off with:
First, packages cannot be the be-all and end-all of differentiated enterprises. Packages are commodities. A true life-cycle costing of modifications will show you that it doesn't take long before a custom solution is cheaper than a modified package. So, yes, we're going back to designing some parts of our portfolio and building them ourselves.
Second, the shift is on from “automation” as the primary directive to “information” as the core. That means that we’re going to have a lot more analysis capability, preferably closer to real-time — and the systems had better be flexible enough in use to take advantage of the signals given in the information. (If all the use cases force a single repetitive process to emerge, the information can’t be used effectively.)
Third, what makes up our systems as we go forward will be a collection of disparate parts. We'll have some in the public cloud, some in private clouds, some older pieces, some components from packages still in use, and lots of bridges, tunnels, and connectors. We'll want that resulting mashup to run continuously, regardless of what happens, and to accept changes as easily as Amazon's and Facebook's systems do (both, like many other Web properties, update on the fly while their systems are under load, and can switch from one facility to another without missing a beat).
That's an IT base that lets the business differentiate and seize moments of opportunity. One where a project waits years to come to the top of the queue, then takes another year or two to deliver, isn't. Neither is one locked into "the answer is the package," because what was right then probably isn't as good a fit now.
Enterprises have always run marathons. It's just that, in the years ahead, the entire marathon will be run at 100-yard-dash speed. IT is for not just keeping up with that pace, but setting it.
If your current portfolio of applications and infrastructure can’t move that quickly, smoothly, seamlessly — well, that’s the work you have to do. Now.