Cloud computing is the latest buzzword, but is it really a paradigm shift? Ask 10 different people what cloud is and you’ll get 10 different answers – which is why there’s so much cloud confusion.
Cloud is about how we create and consume applications, allowing them to run anywhere, on any device, at any time, thanks to a collective cloud of resources on the Internet – from hardware to software. In theory, there are no system boundaries and you’re not tied to any physical location.
Clearly, we’re not there yet, but Gartner Group predicts we’ll see mainstream adoption in the next two to five years. But it’s not quite as simple as it appears.
“Cloud is a word that we’re all using in a different context,” said John Sloan, senior research analyst with Info-Tech Research Group. But many vendors – including Microsoft, IBM, HP, VMware and Citrix – want to be part of the game, even if the game isn’t defined yet. Then there are pioneers that see cloud for what it will become and are willing to take risks – often companies that can’t build out their business in a traditional way.
Cloud computing involves the provisioning of applications from abstracted compute resources, derived from aggregated and virtualized commodity hardware, according to Sloan. Provisioning is typically metered and elastic (as needs grow, provisioning grows).
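Sloan’s two defining properties – metered and elastic provisioning – can be illustrated with a toy model. This is a minimal sketch of the billing logic only; the class and names (`ElasticPool`, the $0.10 unit price) are invented for illustration and do not correspond to any vendor’s API.

```python
class ElasticPool:
    """Toy model of metered, elastic provisioning: capacity is drawn
    from an abstracted pool and billed per unit-hour. All names here
    are illustrative assumptions, not any real provider's API."""

    def __init__(self, unit_price_per_hour=0.10):
        self.capacity = 0            # provisioned units (e.g. VM instances)
        self.unit_price = unit_price_per_hour
        self.metered_hours = 0.0     # usage accumulated for billing

    def provision_for(self, demand):
        # Elastic: capacity grows (or shrinks) to match demand.
        self.capacity = demand

    def run(self, hours):
        # Metered: you pay only for what is provisioned, while it runs.
        self.metered_hours += self.capacity * hours

    @property
    def bill(self):
        return self.metered_hours * self.unit_price


pool = ElasticPool()
pool.provision_for(4)   # demand grows: provision 4 units
pool.run(2)             # 2 hours -> 8 unit-hours
pool.provision_for(1)   # demand shrinks: scale back down
pool.run(3)             # 3 hours -> 3 more unit-hours
print(round(pool.bill, 2))   # 11 unit-hours at $0.10
```

The point of the sketch is the contrast with traditional provisioning: capacity is a variable adjusted to demand, and cost follows usage rather than purchased hardware.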
“SaaS is not the cloud,” he said. “It just lives there.” With software-as-a-service, the customer is simply contracting the use of an application that is hosted and provisioned from a compute cloud. The same applies to platforms in the cloud (PaaS) and infrastructure in the cloud (IaaS). But, to add to the confusion, many vendors are changing their branding to “private clouds,” since they own their own clouds.
But is cloud computing ready for prime time? Data and application mobility is where clouds really fall short, said Sloan. The cloud metaphor suggests something that is ubiquitous, like the Internet – which cloud is not. We’ll see real value when we get past proprietary silos of clouds and integrate internal or private and external or public clouds. “It should be the cloud, not multiple clouds,” he said.
At this point, availability and reliability may not be good enough for critical application loads, and though cloud computing is supposed to be free from physicality, the location of processing and data remains a consideration.
Some SaaS providers own and manage their own clouds, so they have higher assurances of security, but some contract to another provider. If they’re storing data on U.S. soil, they’re subject to the Patriot Act. “Clouds are supposed to have no physicality, but there is, because they’re based on infrastructure somewhere,” said Sloan, “so that can become an issue.” The FBI can access records held in the U.S. by applying for an order under the Foreign Intelligence Surveillance Act.
I Love Rewards, a start-up that creates online corporate-branded reward programs, is one of the few Canadian companies using cloud computing, through Amazon Web Services (AWS). Amazon isn’t the default cloud provider, but it has the biggest name, said Farhan Thawar, chief software architect of I Love Rewards. And it tends to be expensive (he pays about $70 a server).
It also means the company’s data lives in the U.S. “[Some customers] may not choose us because we do host in the U.S.,” he said. Customers include Rogers, Marriott and KPMG; it has 100,000 active members hosted in the cloud.
The company was an early beta tester for AWS and I Love Rewards Express was designed with EC2/S3 in mind. Using a virtualized infrastructure, it’s able to handle changes in volume, such as a spike in demand over Christmas – it can scale from 32-bit to 64-bit instances and back down again, for example.
“We keep hot servers ready if we need another instance,” said Thawar. “We’re not building new pieces. It’s one system that’s virtualized through that one piece.” And he never thinks about buying hardware. “We’re transitioning a couple of legacy clients, at which point we will have no servers in the office.” But it can be expensive, depending on your requirements, and in some cases it could be more cost-effective to build your own cloud (he hosts his own server on Slicehost).
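The “hot servers” pattern Thawar describes – pre-booted instances waiting so a spike is absorbed instantly while replacements boot in the background – can be sketched as follows. This is a simplified illustration under assumed names (`HotSparePool`, `warm_target`); it models the scheduling idea only, not the AWS API.

```python
from collections import deque

class HotSparePool:
    """Sketch of a warm-spare pool: a traffic spike is served by
    promoting an already-booted instance, and the pool is topped
    back up in the background. Names are illustrative assumptions."""

    def __init__(self, warm_target=2):
        self.warm_target = warm_target
        self.warm = deque(f"spare-{i}" for i in range(warm_target))
        self.active = []     # instances serving traffic
        self.booting = 0     # replacements still cold-booting

    def handle_spike(self):
        if self.warm:
            # Promote a hot spare instantly instead of cold-booting.
            self.active.append(self.warm.popleft())
        else:
            self.booting += 1   # pool exhausted: fall back to a slow boot
        self._replenish()

    def _replenish(self):
        # Keep warm + in-flight capacity at the target for the next spike.
        while len(self.warm) + self.booting < self.warm_target:
            self.booting += 1


pool = HotSparePool(warm_target=2)
pool.handle_spike()
print(len(pool.active), len(pool.warm), pool.booting)   # 1 1 1
```

The design trade-off is the one Thawar hints at: warm spares cost money while idle, so the approach pays off only when spikes are frequent enough that cold-boot latency would hurt.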
This is why we’re seeing task-specific clouds. Netfirms is a Web hosting company that provides an enterprise grid-hosting platform, which allows customers to handle traffic spikes in a secure, automated way. This works for customers who want to harness multiple machines, but with an unmodified infrastructure and no administration. “In other words, a free lunch,” said Darius Antia, chief technology officer with Netfirms, which has created task-specific clouds for CPUs, bandwidth and storage.
One of its customers is Pixlr, an online photo editor that was collapsing under the load of customer requests and needed a place to scale. “With a level of abstraction of individual clouds, [customers] inherently get the benefit of multiple CPUs, bandwidth and storage,” he said.
But today’s clouds solve the needs of only one per cent of users, he said, and common computing frustrations remain – but he’s hopeful standards will come about that allow arbitrary software to run in the cloud.
Standardization remains an issue. You can take a virtualized machine and move it to Amazon EC2, but if you then want to move to another provider, that provider will have its own way of doing things, said Reuven Cohen, founder and chief technologist of Enomaly. He’s also the creator of the Cloud Interoperability Forum, an advocacy group that promotes interoperability (which includes Cisco, IBM and Sun, among others) and the IEEE program chair on cloud computing.
The idea behind cloud computing is to be free of system boundaries, geographical limitations and internal ownership. But enterprise IT wants control, while business units want flexibility. There’s also the challenge of data portability and interoperability, trust and security. “Why should I trust a book seller with my infrastructure or some startup with my data?” he said, adding this is a perception-based challenge.
He also says there are no true cloud providers in Canada. “It just isn’t happening,” he said, so there’s a need to create an infrastructure offering geared toward Canadian customers. The opportunity is in defining a uniform cloud interface – an API for all APIs.
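An “API for all APIs” of the kind Cohen proposes would, in practice, look like a uniform interface that per-provider adapters implement, so workloads can move between clouds without rewriting deployment code. The sketch below is purely hypothetical: the interface, adapter class and method names are invented for illustration, and `FakeProvider` stands in for any real vendor backend.

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Hypothetical uniform cloud interface. Each provider would
    supply an adapter; callers never touch a vendor API directly."""

    @abstractmethod
    def launch(self, image_id: str) -> str: ...

    @abstractmethod
    def terminate(self, instance_id: str) -> None: ...


class FakeProvider(CloudProvider):
    """In-memory stand-in for a real vendor adapter."""

    def __init__(self):
        self.instances = {}
        self._next = 0

    def launch(self, image_id):
        self._next += 1
        instance_id = f"i-{self._next:04d}"
        self.instances[instance_id] = image_id
        return instance_id

    def terminate(self, instance_id):
        del self.instances[instance_id]


def migrate(instance_id, image_id, src: CloudProvider, dst: CloudProvider):
    # Provider-agnostic: works with any adapter honouring the interface,
    # which is exactly the portability today's proprietary silos lack.
    new_id = dst.launch(image_id)
    src.terminate(instance_id)
    return new_id


a, b = FakeProvider(), FakeProvider()
iid = a.launch("ami-demo")
new_iid = migrate(iid, "ami-demo", a, b)
print(len(a.instances), new_iid in b.instances)   # 0 True
```

This is the interoperability Sloan and Cohen both point to: once every provider speaks one interface, “multiple clouds” start behaving like “the cloud.”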
This could be a good time to do that, since the marketplace is under increased financial pressure and budget constraints. Companies are looking for different ways of meeting their business targets, and multiple pressures will come down on IT departments, said Gordon Kerr, distinguished engineer at IBM Canada. And, with fluctuations in the business cycle, they’d like to be able to control their computing capability. They want better proofs of concept and they want to see results much more quickly.
“The question now gets into what applications are suitable to the cloud environment,” he said. For larger enterprises, it may come down to analytics or research. For SMBs, it might be about building capabilities in areas where they don’t have in-house skills or resources.
But perhaps most important will be the ability to provide rural communities with the same compute power available in major urban centres around the world. “With commodity bandwidth,” he said, “the ability for rural communities to participate in cloud environments is going to dramatically improve over the next two or three years.”