
A cloudbursting primer

The term “cloudbursting” was coined by Amazon Web Services evangelist Jeff Barr to describe the use of cloud computing to deal with overflow requests, such as those that occur during seasonal rushes to online retail sites.

Rather than investing in additional hardware, software and personnel to scale and manage the myriad pieces of infrastructure needed to increase capacity for Web applications, cloudbursting lets you tap the cloud for that extra capacity on demand.

Cloudbursting addresses two basic problems. First, companies periodically need additional capacity, but the payback period on infrastructure sized for peak loads is exceedingly long because the extra capacity is used only occasionally.

Second, companies are hesitant to move all infrastructure to a cloud computing provider due to security and stability concerns.

Cloudbursting doesn’t eliminate that exposure, but because the cloud carries only the overflow, a problem with the cloud isn’t the disaster it would be if the cloud handled everything.

Cloudbursting effectively enables organizations to treat the cloud like a secondary data center. They maintain and control their infrastructure and applications while leveraging the ability of clouds to expand and contract dynamically, making it financially feasible to use additional resources periodically without a large investment.

What’s the catch? The actual network and application delivery infrastructure requirements are fairly straightforward and based on existing, well-understood methods for implementing global load balancing. This makes cloudbursting appear simple, but as is usually the case, application issues such as data replication and duplication make the entire process more difficult, if not impossible, for some applications.

While databases can be replicated in real time over the Internet, this is only feasible if you have a high-speed, low-latency link between the data center and the cloud provider. That means most organizations won’t be able to rely on real-time replication, or mirroring, to address replication and data duplication issues. A more likely scenario is keeping the cloud copy of the data as current as possible by replicating it on a regular schedule.

Once the application instance in the cloud is no longer necessary, the data will need to be merged with the local database, through import or replay of transaction logs. Some developers have solved this by implementing replication applications of their own that trigger on database activity and use Web services to replicate the data back to the local data center. These solutions are not perfect and may require manual intervention to clean the data when it is reintroduced.
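
As a rough illustration of that pattern, the sketch below (in Python, purely for illustration) shows a home-grown merge-back job that reads records written in the cloud since the burst began and posts them to a Web service in the local data center. The orders table, the last_modified column, the burst timestamp and the https://dc.example.com/api/replicate endpoint are all hypothetical stand-ins for whatever your own application and data center expose.

    import sqlite3   # stand-in for the cloud-side application database
    import requests  # used to call the local data center's Web service

    # Hypothetical values: when the burst began and where the local DC accepts updates.
    BURST_STARTED_AT = "2009-06-01T00:00:00"
    REPLICATE_URL = "https://dc.example.com/api/replicate"

    def merge_back_cloud_data(db_path="cloud_orders.db"):
        """Push rows written in the cloud since the burst began back to the local data center."""
        conn = sqlite3.connect(db_path)
        conn.row_factory = sqlite3.Row
        rows = conn.execute(
            "SELECT * FROM orders WHERE last_modified >= ?", (BURST_STARTED_AT,)
        ).fetchall()
        for row in rows:
            # Each record is posted to the local data center's replication service.
            # Failures are flagged for the manual clean-up the article warns about.
            resp = requests.post(REPLICATE_URL, json=dict(row), timeout=10)
            if resp.status_code != 200:
                print("manual review needed for record", row["id"])
        conn.close()

    if __name__ == "__main__":
        merge_back_cloud_data()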

Integration with other applications, too, is fraught with difficulty. A rule of thumb: the more integrated an application is, the less suitable a candidate it is for cloudbursting. The applications best suited to cloudbursting are those with very little integration with other applications and whose data is not transactional.

How does it work?

Provided you have an application that fits the bill, cloudbursting works like a global load balancer, distributing requests across multiple data center installations. The load balancer monitors the local data center, determines when it is close to peak utilization, and then shifts requests to a secondary data center, which in this case is a cloud computing provider.

The cloud computing instance of the Web application is then brought online and begins serving visitors. How the cloud accomplishes this is highly dependent on the deployment model used by the provider, but it is assumed for the purposes of this discussion that the application is deployed and available at the cloud computing provider’s site.

The load balancer continues to monitor the local data center and redirects requests as long as volume is high enough to push the local application over capacity. When traffic abates, the load balancer stops forwarding visitors, and the application in the cloud goes idle and is eventually taken offline.
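
A minimal sketch of that decision loop is shown below, in Python for illustration only. The utilization metric, polling interval and thresholds are assumptions; a real global load balancer or application delivery controller implements this logic through its own configuration and health monitoring, not a script. Using a lower "recovery" threshold than the "burst" threshold is one way to avoid flapping between the two sites.

    import time
    import random  # stands in for real telemetry from the local data center

    CAPACITY_THRESHOLD = 0.85  # hypothetical: start bursting at 85% local utilization
    RECOVERY_THRESHOLD = 0.60  # hypothetical: stop bursting once load falls below 60%

    def local_utilization():
        """Placeholder for the capacity metric reported by the local data center."""
        return random.uniform(0.4, 1.0)

    def burst_control_loop():
        bursting = False
        while True:
            load = local_utilization()
            if not bursting and load >= CAPACITY_THRESHOLD:
                # Signal the global load balancer to direct overflow to the cloud site.
                bursting = True
                print("near capacity (%.0f%%): directing overflow to the cloud" % (load * 100))
            elif bursting and load <= RECOVERY_THRESHOLD:
                # Traffic has abated; the cloud instance can go idle and be taken offline.
                bursting = False
                print("traffic abated (%.0f%%): cloud instance can go idle" % (load * 100))
            time.sleep(30)  # poll the local data center every 30 seconds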

How do you do it?

While this sounds fairly simple, several pieces of infrastructure need to be in place to implement a cloudbursting strategy successfully.

1. You must have the application deployed and available inside the cloud. It may be possible to deploy applications on demand to the cloud computing provider, but most providers will likely require the application to be deployed before it is needed.

2. You must have a global load balancer capable of deciding when to direct requests to a secondary site.

3. You must have a way to determine when your application infrastructure is near capacity. An application delivery controller (intelligent load balancer) is the most efficient mechanism for making this determination.

“At or near capacity” for your organization could be a single metric such as application response time, concurrent connections or aggregate server load, or it could be a combination of factors. Basically, you are determining the threshold at which you want visitors and customers to access the cloud instead of the local instance of your application.

This information is necessary to properly configure your application delivery controller so it can communicate with the global load balancer in a timely fashion and start redirecting traffic before capacity becomes critical.
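
To make the idea concrete, here is a sketch of what a composite "at or near capacity" check might look like, with hypothetical limits for response time, concurrent connections and aggregate server load. In practice these thresholds live in the application delivery controller's configuration rather than in application code; the Python below simply expresses the logic.

    from dataclasses import dataclass

    @dataclass
    class CapacityThresholds:
        """Hypothetical per-organization limits; tune these to your own application."""
        max_response_time_ms: float = 500.0
        max_concurrent_connections: int = 10_000
        max_server_load: float = 0.80  # aggregate utilization, 0.0 to 1.0

    def at_or_near_capacity(response_time_ms, concurrent_connections, server_load,
                            limits=CapacityThresholds()):
        """Return True when any configured metric crosses its threshold."""
        return (response_time_ms >= limits.max_response_time_ms
                or concurrent_connections >= limits.max_concurrent_connections
                or server_load >= limits.max_server_load)

    # Example: a 620 ms response time alone is enough to trip the burst signal.
    print(at_or_near_capacity(620.0, 4200, 0.55))  # True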

Cloudbursting is a new twist on a fairly well-understood architecture. The difference between cloudbursting and traditional global load balancing across multiple data centers is in the use of the cloud and in the savings realized by organizations that take advantage of cloudbursting instead of building their own infrastructure.

Cloudbursting can also be an efficient way to scale rapidly growing sites where traffic growth is outpacing IT’s ability to obtain, prepare and deploy infrastructure. It can even be extended into a disaster recovery plan, reducing the costs associated with building out and maintaining a secondary, idle data center.

MacVittie is a technical marketing manager at F5 Networks. You can contact her at L.MacVittie@F5.com

Network World (US)
