FEATURE: The causes behind two common problems plaguing virtualization deployments and how IT departments can craft their strategies from the outset to avoid these hurdles
For any IT department, the first 20, 30 or even 40 per cent of physical servers are the easiest to virtualize. But after that, many virtualization strategies grind to a surprising halt in the face of unexpected hurdles such as server sprawl and server stall.
“They don’t have enough tools to manage this and that’s where virtualization projects fall,” said Timashev.
A recent study commissioned by the Columbus, Ohio-based vendor of virtualization management tools found that 49 per cent of IT leaders at medium-sized companies worldwide have trouble resolving IT issues because of a lack of visibility into virtual environments. Moreover, 45 per cent reported that poor visibility was slowing their adoption of virtualization.
Timashev said traditional management tools are built for visibility into physical environments. But virtualization is a curve ball that presents a new set of challenges that IT departments are often not prepared for.
“Virtual stall is just typical of big projects in IT,” said Mann, formerly an IT analyst. “They can get out of control a little bit.”
The issue, according to Mann, is “barriers of scale.” ROI comes easily when virtualizing non-disruptive systems, such as test and development servers. The real challenge arrives when IT departments want to extend that reach and start virtualizing mission-critical applications without the resources and talent to support them.
It is harder to devise a strategy that will scale to the entire enterprise, said Mann, who has seen many strategies fail to account for potential hurdles down the road. Instead, IT departments often get caught up in the glamour of virtualization and the golden promise of a massive ROI.
Mann said a solid strategy laid out at the outset should rest on certain cornerstones: business leadership buy-in; a recognition that a virtualized environment is complex and will incur costs; and a plan to develop the appropriate skills in-house, or bring them in from outside, to support going virtual.
But for those enterprises that are already along the path of virtualization and find themselves stalling, Mann said a reorientation of efforts will mean a shift in mentality. Move away from focusing solely on server consolidation and start looking at optimization and automating the IT environment.
Proper virtualization management is an equation of technology, people and process. Mann said to scrutinize processes that are eating up valuable staff time, such as provisioning or tracking inventory, and automate those mundane tasks.
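The inventory-tracking chore Mann describes can be automated with a simple reconciliation job. The sketch below is purely illustrative: the data structures stand in for a real hypervisor API and a configuration database, and all names and fields are assumptions, not any vendor's interface.

```python
# Illustrative sketch: reconcile running VMs against an approved inventory.
# Both data sources are hypothetical stand-ins for a hypervisor API and a
# CMDB export; field names are assumptions for illustration only.

def find_untracked_vms(running_vms, approved_inventory):
    """Return VMs that are running but absent from the approved inventory."""
    approved_names = {vm["name"] for vm in approved_inventory}
    return [vm for vm in running_vms if vm["name"] not in approved_names]

running = [
    {"name": "web-01", "owner": "ops"},
    {"name": "test-scratch-7", "owner": "unknown"},
]
approved = [{"name": "web-01", "owner": "ops"}]

for vm in find_untracked_vms(running, approved):
    print(f"Untracked VM: {vm['name']} (owner: {vm['owner']})")
```

Run on a schedule, a job like this turns a manual audit into a routine report, freeing staff time in the way Mann suggests.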
Getting the right skill set is necessary to manage the virtual environment. While some enterprises may not be able to afford to hire virtualization skills, which usually come at a premium, Mann said consultants are a viable option while staff get up to speed. “Skills are definitely an issue,” said Mann.
“One issue with virtualization is it’s just so darn easy to create a server,” said John Sloan.
Deploying a physical server entails requirements gathering, hardware procurement, configuration and deployment. But Sloan said that the traditional lengthy process of defined steps doesn’t happen with virtual machine deployment.
As a result, IT departments run into accountability problems and lose control of virtual machines. Sloan said some server instances may continue to run when they are no longer needed, which drags down the efficiency of the entire environment.
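Spotting the forgotten instances Sloan mentions usually starts with utilization data. The sketch below is a hypothetical illustration: in practice the CPU figures would come from a monitoring system, and the threshold and sampling window are assumptions, not recommendations.

```python
# Illustrative sketch: flag virtual machines that look idle and may be
# candidates for reclamation. Utilization samples would come from a
# monitoring system; the threshold here is an assumed example value.

IDLE_CPU_THRESHOLD = 5.0  # average CPU % below which a VM looks idle

def idle_candidates(vm_stats, threshold=IDLE_CPU_THRESHOLD):
    """vm_stats maps VM name -> list of hourly average CPU percentages.

    A VM is a candidate only if every sample sits below the threshold.
    """
    candidates = []
    for name, samples in vm_stats.items():
        if samples and max(samples) < threshold:
            candidates.append(name)
    return sorted(candidates)

stats = {
    "payroll-db": [40.2, 55.1, 38.7],
    "old-demo-vm": [0.3, 0.1, 0.4],
}
print(idle_candidates(stats))  # ['old-demo-vm']
```

A flagged VM still needs a human decision; the point is simply to surface candidates rather than delete anything automatically.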
Dealing with server sprawl, said Sloan, requires going back to the basics and remembering that virtualization doesn’t add capacity; it allows for flexible allocation of existing resources. And, while the goal is to share a pool of compute and storage capacity, Sloan said, whether virtualizing or not, the infrastructure should be segmented into service tiers for better control.
For instance, the mission-critical tier means high availability, good processing capacity, and a lockdown on the ability to add virtual machines. Lower tiers would mean less concern about availability and performance, but more room to add virtual machines.
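Sloan's tiering idea can be expressed as a simple policy table: each tier carries an availability target and a cap on new virtual machines. The tier names, targets and caps below are invented for illustration, not drawn from any particular product or the article's sources.

```python
# Illustrative sketch of service tiers: the mission-critical tier is locked
# down (small VM cap), lower tiers trade availability targets for room to
# grow. All names and numbers are hypothetical.

TIERS = {
    "mission-critical": {"availability": "99.99%", "vm_cap": 10},
    "business":         {"availability": "99.9%",  "vm_cap": 40},
    "test-dev":         {"availability": "best effort", "vm_cap": 200},
}

def can_add_vm(tier, current_vm_count):
    """Allow a new VM only while the tier's cap has not been reached."""
    return current_vm_count < TIERS[tier]["vm_cap"]

print(can_add_vm("mission-critical", 10))  # False: cap reached
print(can_add_vm("test-dev", 37))          # True: room to grow
```

Encoding the policy this way gives the provisioning process a concrete gate, which is what keeps the locked-down tier locked down.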
“We took baby steps along the way,” said Stewart.
Going virtual, for Stewart, wasn’t just about saving data centre rack space and electricity. The level of service the IT department would be able to provide to users was an important element of the strategy.
A cautious and well-planned approach was key, especially given budget and support limitations. Among the first steps for Prairie South was to ensure IT staff was trained to maintain the virtualized environment. Virtualization was applied to non-disruptive operations before shifting to more mission-critical ones.
After successfully virtualizing the low-hanging fruit, Stewart said the next step, this summer, is to move the Exchange server to a virtualized environment. More mission-critical systems, such as payroll, human resources and student information, will follow.
But Stewart does acknowledge how easy, in the absence of a solid strategy, it can be for any IT department to fall prey to server sprawl and stall. “You can run into the situation where everybody wants a server or service out there and you eventually will end up eating all your resources,” he said.
At the core, Sloan said, the problem comes down to capacity planning and management, which is as important as ever considering organizations are starting to build internal clouds.
In short, management is about reducing risk and reaping the most value from existing capacity, while planning is about laying the foundation for the capacity the organization will need three to five years from now.
Virtualization, while wonderful, will only magnify the absence of capacity planning and management, said Sloan. “It doesn’t magically create capacity,” he said.
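The three-to-five-year planning horizon Sloan points to reduces to a compound-growth estimate: project current demand forward and compare it with the pool's capacity. The sketch below is hypothetical; the growth rate and capacity figures are invented examples, not data from the article.

```python
# Illustrative capacity-projection sketch: compound current compute demand
# forward and compare it with the shared pool's capacity. The 15 per cent
# growth rate and the GHz figures are assumed example values.

def projected_demand(current_ghz, annual_growth, years):
    """Compound current compute demand forward by `years`."""
    return current_ghz * (1 + annual_growth) ** years

pool_capacity_ghz = 400.0
current_demand_ghz = 220.0

for years in (3, 5):
    demand = projected_demand(current_demand_ghz, 0.15, years)
    status = "OK" if demand <= pool_capacity_ghz else "SHORTFALL"
    print(f"Year {years}: {demand:.0f} GHz needed "
          f"of {pool_capacity_ghz:.0f} available ({status})")
```

Even a rough projection like this makes Sloan's point concrete: virtualization spreads the existing pool around, but the pool itself must still grow ahead of demand.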
Follow Kathleen Lau on Twitter: @KathleenLau