Despite rumors to the contrary, virtualization is not just for the datacenter. From the most complex workstation applications to the simplest DLLs, virtualization is leaving an indelible mark on client computing. A good example of this is application virtualization, a label applied to products that insulate running programs from the underlying desktop. The idea behind application virtualization is to eliminate many of the support-draining configuration problems that plague conventional desktop implementations. These products virtualize the interaction between a given program and supporting OS resources, like the file system and, in the case of Windows, the system registry database. All these products isolate applications from the OS image, but the approaches are quite varied.
At one end of the spectrum are products like Altiris Software Virtualization Solution (SVS). Tools like SVS employ what might be called the “brute force” method: A simple filter driver is installed in the Windows file system driver stack to intercept and redirect I/O calls from SVS-managed applications. When enabled in their respective “layers,” SVS-managed applications appear to integrate seamlessly with the OS. In reality, every aspect of an application’s OS interaction, from loading a DLL to accessing a registry key, is being redirected on the fly to a local cache file managed by SVS.
The advantage to this approach is that it fully isolates the OS from the application: Any changes made by the application — to the Registry, to its own files, to Windows — are in fact occurring solely within the SVS-managed cache file. Since no real changes are occurring, the underlying OS image remains pristine and the application can be “disabled” by simply clicking a button or by remotely disabling it from a supported management console. The downside to this approach is that it has trouble managing multiple versions of the same application; for example, Microsoft Office can sometimes trip up SVS by invoking the wrong version of a component when multiple versions are installed in parallel layers.
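The redirect-to-cache mechanism described above can be sketched in a few lines. This is a toy model, not SVS code: the class name, methods, and path-mapping scheme are all illustrative assumptions. Writes from a managed application land in a per-layer cache directory, reads check the cache before falling through to the real file system, and “disabling” the layer simply discards the cache, leaving the base image untouched.

```python
from pathlib import Path
import shutil

class LayerCache:
    """Toy model of SVS-style redirection (illustrative, not any real API):
    writes go to a per-layer cache; reads prefer the cache over the base
    file system; disabling the layer discards the cache."""

    def __init__(self, cache_root: str):
        self.root = Path(cache_root)
        self.root.mkdir(parents=True, exist_ok=True)

    def _redirect(self, real_path: str) -> Path:
        # Map a real path into the cache, e.g. /app/config.ini -> <cache>/app/config.ini
        rel = Path(real_path).as_posix().lstrip("/").replace(":", "")
        return self.root / rel

    def write(self, real_path: str, data: str) -> None:
        target = self._redirect(real_path)
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(data)          # the real file is never touched

    def read(self, real_path: str) -> str:
        cached = self._redirect(real_path)
        if cached.exists():              # cache wins over the base image
            return cached.read_text()
        return Path(real_path).read_text()

    def disable(self) -> None:
        # "Disabling" the layer just discards the cache; the OS stays pristine.
        shutil.rmtree(self.root)
```

Note that the multiversion problem falls out of this design: if two layers each cache their own copy of a shared component, nothing in the redirection logic itself decides which copy a cross-layer request should see.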
At the other extreme you have solutions like Softricity’s SoftGrid (recently acquired by Microsoft and soon to be integrated with the base Windows Server platform). SoftGrid provides a complete virtualization environment: Applications are streamed to the client from a server share and then executed within a customized “sandbox” that completely isolates the code from the OS. The advantage to this approach is that it avoids many of the multiversion issues that plague SVS. However, the trade-off is a more complicated deployment process that requires administrators to create a custom installation image to optimize the code base for streaming.
Of course, no market segment is complete without an interloper to shake things up. Thinstall combines the simplicity of SVS with the fully padded box approach of SoftGrid. By embedding both the virtualized environment and the application image into a single executable file, Thinstall eliminates the need for supporting infrastructure: Just copy or stream the file to the client and execute. No agent is required and the image can be deployed using virtually any traditional management suite, including Active Directory and Microsoft Systems Management Server. The downside is the need to customize the application using Thinstall’s Virtualization Suite toolset.
Classic virtual machines
In some client situations, a more comprehensive virtualization solution is required, such as hosting a legacy application on a new operating system. In that case, it may be best to isolate an application within a complete, virtualized OS environment — the classic “virtual machine” approach. This enables you to run an application within the OS image of your choice while still supporting migration to, and integration with, newer or otherwise incompatible OS platforms.
VMware and Microsoft dominate the classic VM market, with VMware the more visible of the two. Efforts like VDI (Virtual Desktop Infrastructure), a consortium of vendors promoting virtualization as a desktop and application management solution, are being driven primarily by VMware.
VMware has also been quick to embrace new CPU and hardware technologies, such as 64-bit processing and expanded memory for next-generation applications. VMware exclusives, such as the ability to take snapshots of a VM’s running state and “roll back” to a saved image, have earned affection from the developer community. But in the end, VMware’s willingness to expose its underlying virtualization technology to the masses may pay the biggest dividends.
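The snapshot/roll-back semantics that developers prize can be illustrated with a minimal sketch. This is purely conceptual (real hypervisors use copy-on-write disks and memory checkpoints, not dictionary copies), and the class and method names are hypothetical: a snapshot captures the machine state by value, and rolling back restores that captured copy regardless of what has happened since.

```python
import copy

class ToyVM:
    """Conceptual sketch of snapshot/roll-back semantics, not VMware's
    implementation: state is captured by value at snapshot time, and
    rollback restores the captured copy."""

    def __init__(self):
        self.state = {"disk": {}, "memory": {}}
        self._snapshots = []

    def snapshot(self) -> int:
        # Deep-copy so later mutations can't leak into the saved image.
        self._snapshots.append(copy.deepcopy(self.state))
        return len(self._snapshots) - 1   # snapshot id

    def rollback(self, snap_id: int) -> None:
        # Restore a copy, so the same snapshot can be rolled back to again.
        self.state = copy.deepcopy(self._snapshots[snap_id])
```

A typical developer workflow maps directly onto this: snapshot a clean test machine, install and break things freely, then roll back to the clean image in seconds instead of rebuilding it.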
Projects like the VMware Player, a stand-alone tool for hosting a VMware-created VM on any Windows desktop system, seek to position the VMware file image as a de facto standard for delivering appliance-like application functionality. Already, a large selection of prebuilt VM images is available through the VMware Web site, most containing open source OSes and applications that can be freely redistributed.
Microsoft, by contrast, has allowed its offerings to languish. Virtual PC, once a strong competitor to VMware when it was still a Connectix product, has only recently been updated. Virtual PC 2007 adds support for Windows Vista as a host operating system but not much else. It still can’t run 64-bit guest operating systems and continues to lag behind VMware Workstation in areas like USB device integration.
One wild card in the VM equation is Citrix Systems. Long the dominant player in server-based computing, Citrix now portrays itself as the true pioneer of application virtualization. Cut through the hype, however, and you’ll find an amalgam of repositioned products punctuated by the addition of an application virtualization and streaming solution similar to SoftGrid. The success of the Citrix strategy will hinge on how well it can integrate this functionality, known as Project Tarpon, with the myriad protocols and presentation layers that make up the Citrix stack. Project Tarpon becomes part of Presentation Server in March.
Interestingly, VMware could learn a thing or two from the Citrix experience. Many of the same pressure points that held back server-based computing — poor local hardware support, limited client mobility, massive back-end hardware requirements — are present, and in some cases exacerbated, in VDI deployments. Instead of hosting multiple user sessions on a single Terminal Server image, you’re now hosting the equivalent of multiple Terminal Servers, each with a single connected RDP (Remote Desktop Protocol) user. The scalability implications are frightening: easily 10 times the hardware required to support an equivalent server-based computing load.
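The back-of-the-envelope arithmetic behind that claim is easy to reproduce. The sketch below compares memory footprints only, and every number in it is an illustrative assumption (not a benchmark): server-based computing amortizes one OS image across many sessions, while a VDI host pays for a full OS per user.

```python
def hardware_multiplier(users: int,
                        session_mb: int = 64,       # assumed per-session cost on a Terminal Server
                        ts_base_mb: int = 2048,     # assumed shared Terminal Server OS footprint
                        vm_mb: int = 1024,          # assumed full desktop VM per VDI user
                        vdi_host_base_mb: int = 1024) -> float:
    """Illustrative memory-only comparison of VDI vs. server-based
    computing. All defaults are assumptions chosen for the sketch."""
    sbc_total = ts_base_mb + users * session_mb       # one shared OS, light sessions
    vdi_total = vdi_host_base_mb + users * vm_mb      # one full OS image per user
    return vdi_total / sbc_total
```

With these assumed figures, 100 users cost roughly an order of magnitude more memory under VDI than under server-based computing, which is the shape of the gap the article describes; real-world ratios depend entirely on workload and image sizing.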
Just as Citrix has reinvented itself as a virtualization trailblazer, VDI players such as Wyse and Neoware and protocols such as RDP and ICA (Independent Computing Architecture) are looking for a second life. They may find, however, that the grass is no greener on the VDI side of the fence.
The virtual road ahead
You can tell that a product category has matured when it spawns an ecosystem of complementary products. In the case of desktop and application virtualization, the emergence of supporting solutions, such as Kidaro’s Managed Workspace product, demonstrates that the segment is gaining traction. Kidaro’s offering acts as a platform-agnostic wrapper (it works with both VMware and Microsoft Virtual PC) for classic VM-hosted applications, providing an additional layer of integration.