
Virtual networking practices up for debate

Virtual server configuration and management is still a developing art. But a set of best practices for laying out a virtual network for the best performance, redundancy, and security is even more up for grabs. Judging by the frequency with which questions about virtual networks appear on the VMware Communities Forums, no two companies use the same approach.

Some companies are limited by hardware availability or security requirements; others are held back by a misunderstanding of what the virtual network is all about.

Complicating matters, network administrators are generally not involved in decisions about how to configure networks for virtual servers, either because they don’t wish to be or because they don’t realize they should be. Even when they are involved, network administrators generally lack the basic virtualization education that would help them make good decisions based on accepted best practices.

The virtual network begins where the physical network ends: at the virtualization host. The network adapters in the physical host are bridged to the virtualization layer. What happens next depends on the virtualization host in use.

For VMware Server, VMware Workstation, Citrix XenServer, and Microsoft Hyper-V, the network bridge terminates at the virtualization layer; the virtualization software then makes a virtual network interface available to the virtual machines. That virtual network interface can talk to the bridge, to a host-only network, or through a Network Address Translation (NAT) device. Either way, everything goes through the physical host, which raises some security concerns.

VMware ESX and VMware ESXi require the bridge to terminate at specific virtual switches, which are simple layer-2 devices. The virtualization layer makes the virtual switches available to make it easier for administrators to create and secure virtual networks; essentially, each virtual switch connects to a physical switch through normal uplinks. VMware ESX and ESXi can also support a large number of virtual switches.

Each physical network interface on the physical server can uplink to a single virtual switch (to which all the VMs could connect), or each physical NIC can connect to a different virtual switch. It is even possible to have virtual switches with no uplink to a physical switch; these are considered host-only virtual switches.
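To make that concrete, here is a minimal sketch using VMware’s pyVmomi Python SDK, which the article itself doesn’t mention; the host name, credentials, NIC names, and switch names below are placeholders. It creates one virtual switch bridged to a physical NIC and one host-only switch with no uplink at all.

```python
# Minimal pyVmomi sketch: one uplinked vSwitch and one host-only vSwitch.
# Host name, credentials, and device names are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only: skips certificate checks
si = SmartConnect(host="esx01.example.com", user="root", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
net = host.configManager.networkSystem  # HostNetworkSystem for this ESX host

# Virtual switch bridged to physical NIC vmnic0 (a normal uplink)
net.AddVirtualSwitch(
    vswitchName="vSwitch1",
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic0"])))

# Host-only virtual switch: no bridge, so its traffic never leaves the host
net.AddVirtualSwitch(
    vswitchName="vSwitch-internal",
    spec=vim.host.VirtualSwitch.Specification(numPorts=64))
```

The later snippets in this piece reuse the same `net` handle.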

So what are the best practices?

The first is to configure each physical server with uplinks from at least two different physical switches to one or more virtual switches.

Not only does that give the virtual-switch layer a way to keep functioning if one physical NIC goes down, it also allows the virtual switch to load-balance VMs across both NICs when both are working.
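A hedged sketch of that guideline, continuing the hypothetical pyVmomi session above: two physical NICs, ideally cabled to two different physical switches, are teamed under one virtual switch, with the port-ID load-balancing policy spelled out. The field names come from the vSphere API; in practice you would normally read the existing switch spec and modify it rather than set every field by hand.

```python
# Hypothetical NIC team: vmnic0 and vmnic1 should be cabled to two different
# physical switches so the vSwitch survives a NIC, cable, or switch failure.
net.AddVirtualSwitch(
    vswitchName="vSwitch-VM",
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic0", "vmnic1"]),
        policy=vim.host.NetworkPolicy(
            nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(
                policy="loadbalance_srcid",   # balance VMs by virtual port ID
                notifySwitches=True,          # tell physical switches on failover
                rollingOrder=False,
                nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                    activeNic=["vmnic0", "vmnic1"])))))
```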

Other than that one guideline, best-practice recommendations on the forum vary widely.

I find it’s also effective to dedicate a separate virtual switch to the uplink that connects the physical server to storage. That keeps VMs from fighting over the same bandwidth for access to both network and storage resources.
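As a sketch of that layout, again continuing the hypothetical pyVmomi session and treating vmnic2 and the IP address as placeholders, storage gets its own virtual switch, its own uplink, and a VMkernel port for IP storage such as iSCSI or NFS:

```python
# Hypothetical storage network: a dedicated vSwitch and uplink keep storage
# traffic off the links the VMs use for general networking.
net.AddVirtualSwitch(
    vswitchName="vSwitch-storage",
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=64,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2"])))

# Port group for storage traffic (vlanId=0 means no VLAN tagging)
net.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
    name="Storage", vlanId=0, vswitchName="vSwitch-storage",
    policy=vim.host.NetworkPolicy()))

# VMkernel port on that port group for IP storage (placeholder address)
net.AddVirtualNic(portgroup="Storage", nic=vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False, ipAddress="10.10.10.21",
                         subnetMask="255.255.255.0")))
```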

The common wisdom on security is that VLANs on a vSwitch are currently secure (in some cases more secure than many physical switches), but this may not always be the case.
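For illustration, VLAN-tagged port groups look like this in the same hypothetical pyVmomi session; the port group names and VLAN IDs are made up, and the physical switch ports uplinking the vSwitch would need to trunk those VLANs.

```python
# Hypothetical VLAN segregation: one port group per class of VM, each tagged
# with its own VLAN ID on the teamed vSwitch created earlier.
for name, vlan in [("Production", 10), ("DMZ", 20), ("Management", 30)]:
    net.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name=name, vlanId=vlan, vswitchName="vSwitch-VM",
        policy=vim.host.NetworkPolicy()))
```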

Splitting traffic among the available physical NICs gives the best redundancy, performance, and security overall, but how to accomplish this split is far from clear.

Virtualization expert Edward L. Haletky is the author of “VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers,” Pearson Education (2008). He recently left Hewlett-Packard, where he worked in the Virtualization, Linux, and High-Performance Technical Computing teams. Haletky owns AstroArch Consulting, providing virtualization, security, and network consulting and development. He is also a champion and moderator for the VMware discussion forums, providing answers to security and configuration questions.
