Service-level agreements are no longer enough

Longtime readers know I’m a fanatic about service-level agreements. I regularly advise clients about SLA best practices, negotiation and enforcement strategies. And we talk often about how to develop service-level management and monitoring infrastructure that ensures that carriers live up to their promises.

But all that’s old school — built for a world where service providers are essentially just bandwidth providers. What happens as providers move from bandwidth providers to application providers? That is, when the service provider isn’t just generating bits on a wire, but is delivering applications, storage and computing services from the cloud? A couple of things change in this new scenario. First, SLAs necessarily evolve from simple infrastructure metrics (latency, jitter, packet loss) to application-level metrics (application availability, response time). Second, monitoring and measurement need to be far more comprehensive.

Let’s say you’re relying on a hosting provider to deliver a key application. You should be able to track server availability, application performance, storage availability and network performance — not just router uptime. And if the application is hosted on a virtualized server, this can be mighty complex.
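The shift from infrastructure metrics to application metrics is easy to sketch in code. Here is a minimal, hypothetical Python probe — the `fetch` callable, sample count and SLA thresholds are all invented for illustration — that measures application availability and response time directly, rather than inferring health from router uptime:

```python
import time

# Hypothetical sketch: measuring an application-level SLA means sampling the
# application itself, not the network gear. `fetch` stands in for whatever
# call exercises the hosted app; the thresholds below are invented examples.

def probe(fetch, samples=5):
    """Run `fetch` repeatedly, recording success and response time."""
    results = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            fetch()
            ok = True
        except Exception:
            ok = False
        results.append((ok, time.perf_counter() - start))
    up = sum(1 for ok, _ in results if ok)
    availability = up / samples
    avg_response = sum(t for ok, t in results if ok) / max(up, 1)
    return availability, avg_response

def sla_met(availability, avg_response, min_avail=0.999, max_resp_s=0.5):
    """Compare measured values against application-level SLA targets."""
    return availability >= min_avail and avg_response <= max_resp_s
```

A real monitoring stack would sample per tier (server, storage, network, virtual machine), but the principle is the same: the SLA verdict comes from what the user-facing application actually does.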

Finally, there aren’t the same built-in upper limits with application services as with network services. If you purchase T1 access to MPLS, you’ll never use more than a T1’s worth of bandwidth. But if you purchase access to an application, users may consume more CPU cycles than anticipated — and service consumption (and costs) will skyrocket.
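Because there is no built-in ceiling, metering becomes part of SLA management. A hypothetical sketch — the CPU-hour budget, billing rate and warning threshold are invented numbers — of flagging runaway consumption before the bill arrives:

```python
# Hypothetical sketch: unlike a T1, an application service has no hard upper
# limit, so usage must be metered against a forecast budget. All figures
# (budget, rate, warning threshold) are illustrative, not real pricing.

def meter(cpu_hours_used, budget_cpu_hours, rate_per_hour, warn_at=0.8):
    """Return (projected cost, status) where status is ok/warn/over."""
    cost = cpu_hours_used * rate_per_hour
    ratio = cpu_hours_used / budget_cpu_hours
    if ratio > 1.0:
        status = "over"       # consumption has blown past the budget
    elif ratio >= warn_at:
        status = "warn"       # approaching the budget; time to investigate
    else:
        status = "ok"
    return cost, status
```

The point of the `warn` state is that with application services, cost control has to happen continuously during the billing period, not after it.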

The upshot? As services evolve, SLA best practices need to change, too. A key component that emerges as part of SLA management is the notion of policy management and orchestration. Providers and their customers need to be able to manage and monitor a broad range of physical infrastructure, and seamlessly integrate that into a provisioning and billing system. They also need the ability to perform trend analysis and predictive modeling, to anticipate surges (or decreases) in demand.
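The trend-analysis piece can be as simple as fitting a line to recent demand and projecting it forward. This is a stdlib-only sketch of that idea — a provider would use a far richer forecasting model, and the sample data here is invented:

```python
# Hypothetical sketch of the trend-analysis step: a least-squares linear fit
# over equally spaced demand samples, extrapolated to anticipate a surge
# (or decrease). Real capacity planning would use seasonal models.

def linear_forecast(samples, steps_ahead=1):
    """Fit y = a + b*x to samples at x = 0..n-1; project steps_ahead forward."""
    n = len(samples)
    xs = range(n)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var if var else 0.0
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + steps_ahead)
```

Feed it a window of per-day CPU-hour totals and the projection tells the orchestration layer whether to provision ahead of demand.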

I’m intrigued by a service offering being rolled out by BT Innovate (the arm of the British telco that includes the research labs, among other things). Called Total ICT Orchestration, the management solution provides dynamic allocation of end-to-end network and IT resources based on SLAs (and ultimately, business priorities). It will also include a policy manager with a master control system that connects all the resource objects and provides an operational umbrella over the top. The service works in conjunction with BT’s managed network, storage and virtualized computing offerings — essentially enabling the carrier to provision, manage and deliver an application end-to-end.

Not all of this is new — plenty of providers are moving to a cloud computing model (storage in the cloud, computing in the cloud, applications in the cloud). What’s unique about BT’s approach — so far as I can tell — is that it focuses on a part of the problem that most services don’t: the provisioning, management and policy. As IT departments move increasingly towards software-as-a-service, this is something they’ll have to keep in mind.
