Eric Siebert’s recent post on optimizing the host environment addresses an important concern that is frequently set aside in the interest of reducing implementation time for virtual environments. In this blog, I would like to chime in with a few of my own tips related to the host environment. These strategies apply to many virtualization platforms and will transcend products as virtualization advances.
DNS configuration for the hosts
Having a correct DNS environment is important for all systems, not just virtual environments. Pay particular attention to the suffix search order, as the first result for queries should be consistent and timely across hosts. Also, consider host entries for fixed systems, with an entry for the host itself, all other hosts, the management system and any other relevant systems with which the host would need to communicate. One specific example is VMware’s DRS functionality, which can misbehave when DNS is configured incorrectly.
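For hosts with fixed addresses, those entries can be kept in a local hosts file so name resolution keeps working even if DNS is briefly unavailable. A minimal sketch, where every name and address is a hypothetical example:

```
# /etc/hosts on host esx01 (Linux-based hosts; on Windows the file is
# %SystemRoot%\system32\drivers\etc\hosts)
127.0.0.1       localhost
192.168.10.11   esx01.example.com    esx01    # this host
192.168.10.12   esx02.example.com    esx02    # peer host
192.168.10.20   vcenter.example.com  vcenter  # management system
```

Keeping these files consistent across hosts is the trade-off; a configuration procedure or script helps there.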
Time configuration for the hosts
For platforms that are Windows based and members of an Active Directory domain, this concern is somewhat eased. But for Linux systems, you want to have an automated mechanism in place to manage accurate time across hosts. For ESX and VirtualCenter, Eric again has covered this well over on SearchVMware.com with a tip.
Also decide whether you want guest virtual machines to sync time with the host via the driver software (VMware Tools, Guest Additions, etc.). This can relieve issues that come with multiple-time-zone support, as well as separate issues in time synchronization.
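For ESX hosts, a minimal sketch of the NTP client setup from this era looks like the following; the time server name is a hypothetical placeholder, and Eric’s tip covers the full procedure:

```
# Open the ESX 3.x service console firewall for outbound NTP
esxcfg-firewall --enableService ntpClient

# Point /etc/ntp.conf at your time source
echo "server ntp.example.com" >> /etc/ntp.conf

# Restart the NTP daemon and have it start at boot
service ntpd restart
chkconfig ntpd on
```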
Get environment agent notifications right
For virtualization hosts at the server level, all hardware failure notifications should be configured to the fullest extent possible. This can mean device alerts (Dell DRAC/HP iLO), SNMP alerts, agent configurations or even blade server management software. Given the scope of the virtual environment, consider using multiple notification mechanisms.
Single hypervisor per platform
This is more relevant in desktop environments, but it should go without saying that you should not install two virtualization products on a single system. Even though it may be tempting to have the functionality of multiple platforms, it complicates the host environment. Take VMware Server and Sun xVM VirtualBox as an example: they could theoretically coexist on the same system, because the VMware Bridge protocol binding and VirtualBox’s explicit host adapters can each have their own configuration. This is one of those just-because-you-can-does-not-mean-you-should scenarios.
Host configuration is an area ripe for configuration procedures and policy enforcement to ensure consistent behavior among host systems. The procedural investment can usually help present the virtualization solution with more credibility as well.
Sun has released VirtualBox 1.6.4, and the upgrade process requires some forward planning. Version 1.6.4 is a collection of fixes to the previous release that mostly revolve around shared folders and the VRDP (VirtualBox Remote Desktop Protocol) implementation. Here is what you need to know if you are upgrading:
During the upgrade installation, you are presented with the familiar message about installing a device that has not passed Windows logo testing. These messages are common across virtualization platforms, as these drivers and devices enable the hypervisor to present the virtual machines.
After these messages are accepted, the installation will continue and allow you to access any existing VMs from the previous version.
The one unfortunate point of the upgrade process is that any host interfaces created on an existing installation of 1.6.2 or earlier will be removed by the upgrade. Overall, I think VirtualBox’s networking implementation falls a little short of the VMware Bridge protocol used by VMware Workstation and VMware Server. Before you embark on the upgrade, I recommend you enumerate any host interfaces that you have created. Then make a quick script in the following fashion to recreate them with the same names you already have:
VBoxManage createhostif "VM-Bridge1"
VBoxManage createhostif "VM-Bridge2"
VBoxManage createhostif "VM-Bridge3"
Any VMs with a bridged interface will be configured to an invalid network interface after the upgrade to 1.6.4. I have an earlier blog posting about bridged networking on VirtualBox, and the commands and planning points are unchanged from 1.6.2 to 1.6.4.
The VMs will not need to be upgraded directly, but it would not hurt to install the 1.6.4 version of Guest Additions to pick up the corrected functionality between the two versions. Once the new version is installed, both the systray icon and the VBoxControl getversion command will report the 1.6.4 release.
Version 1.6.4 is still lean at only 23 MB. It remains a ready-to-go virtualization platform and is still freely available from the Sun website.
By eliminating wasteful resource use on your host servers, you can make more resources available for additional virtual machines.
Most operating systems today have been developed to run on physical servers in non-virtual environments. Because all the virtual machines are competing for the same resources on the host server, you want to limit the guest operating system so it only consumes resources that it needs to perform whatever function that it has been designated to do.
Microsoft Windows is notorious for wasting server resources in its typical default configuration. Many unnecessary services are loaded that most servers do not need: for example, when’s the last time you needed the Windows Audio, Print Spooler and Wireless Configuration services on your SQL Server? Windows also constantly reads and writes to disk for things like swap and log files, and Windows networking tends to be very chatty, often generating excessive network traffic.
All of these additional services generate excessive and often unnecessary network, CPU, memory and disk resource usage. It may not be all that much on any one individual server, but add that up across 12 virtual machines on a host and it makes a difference.
Windows Server 2008 takes a step in the right direction with its Server Core installation which strips out many of the unneeded components including the GUI. Many Linux distributions are already optimized to perform specific functions as well. Additionally, there are many virtual appliances available that have very small footprints and make for good alternatives to full-blown operating systems.
Here are some tips for reducing the amount of resources that your servers consume:
- Keep event and audit logging to a minimum
- Disable unnecessary Windows services
- Disable unneeded network protocols
- Disable screen savers and visual effects
- Remove any unneeded applications
- Remove all unneeded hardware from the virtual machine configuration
- If the server was a physical-to-virtual (P2V) conversion, delete any non-present hardware
- Optimize anti-virus configurations to exclude specific directories or disable real-time scanning
- Disable NTFS last accessed time stamp
- For Linux systems, disable unneeded daemons, services and background tasks and do not run X-Windows if possible.
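Several of the Windows items above can be handled from the command line. A hedged sketch for a Windows guest; service names vary by role, so treat these strictly as examples:

```
rem Disable services that most virtualized servers do not need
sc config Spooler start= disabled
sc config AudioSrv start= disabled
net stop Spooler

rem Turn off the NTFS last-accessed time stamp update
fsutil behavior set disablelastaccess 1
```

Scripting these changes also makes it easy to apply the same baseline to every new VM.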
In the future, operating systems will evolve to become specifically optimized to run on virtual servers. Until then you should take steps to ensure that your guest servers are optimized to run on virtual hosts.
An event like a complete data center power failure is something you never want to experience. Having recently gone through one I thought I would share some lessons learned from it.
This particular data center had a full UPS (uninterruptible power supply) system and a backup diesel generator, but routine battery maintenance performed on the UPS shorted some circuits, causing power loss to the entire data center. This event made me realize that a little preparedness can go a long way in getting servers and virtual machines (VMs) back online after a power failure.
First and foremost, the DNS (Domain Name System) is probably the most important service in your data center. Most servers and workstations use DNS names instead of IP addresses to communicate with each other. Without DNS, servers can’t get to anything by hostname and will effectively be isolated from each other. Most administrators are used to using DNS names, so when DNS is not available they usually do not know the IP addresses of the servers and subsequently can’t connect to them. So it is a good idea to have a hard copy of all your servers and their IP addresses somewhere in your data center for you to reference when DNS is not available.
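One way to produce that hard copy is to script the inventory while DNS is still healthy. A minimal sketch for a Linux system; the server list is a hypothetical example, so substitute your own names:

```shell
#!/bin/sh
# Build a hosts-style inventory ("IP  name") you can print and keep on hand.
# servers.txt holds one server name per line; localhost stands in here.
cat > servers.txt <<'EOF'
localhost
EOF

while read -r name; do
  # getent queries the system resolver (DNS and /etc/hosts)
  ip=$(getent hosts "$name" | awk '{print $1; exit}')
  printf '%-15s %s\n' "${ip:-UNRESOLVED}" "$name"
done < servers.txt
```

Run it on a schedule and print the output, and the list stays reasonably current.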
Virtual servers can be even more problematic. If all of your DNS servers are virtualized and cannot be started because of network or shared storage issues, you can run into problems starting other servers and services that rely on DNS. Consider having at least one physical DNS server, or having one or two DNS servers running on local storage instead of shared storage.
Another helpful insight: make sure you know the command-line procedures for administering your host servers. You may not be able to connect to a host via a graphical user interface (GUI) until certain systems are up, so the command line may be your only way to check host server health and perform VM operations. Again, it helps to have paper documentation of the host command-line utilities and their syntax.
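For ESX hosts, the service console commands are worth having in that paper documentation. A few examples from the ESX 3.x era; the .vmx path shown is a hypothetical placeholder:

```
vmware-cmd -l                                        # list registered VMs (.vmx paths)
vmware-cmd /vmfs/volumes/vol1/vm1/vm1.vmx getstate   # query a VM's power state
vmware-cmd /vmfs/volumes/vol1/vm1/vm1.vmx start      # power a VM on
esxcfg-nics -l                                       # physical NIC link status
esxcfg-vswitch -l                                    # virtual switch layout
```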
Finally, you want to make sure you start your servers back up in the proper order because of the dependencies that certain servers and applications have. Obviously, with the network unavailable, not much is going to function properly. The storage-area network (SAN) is also critical for host servers that use shared storage for VMs. Windows servers also take a very long time to boot if a DNS server and domain controller are not available when they start.
Below is a general order for restarting your servers and applications.
- DNS servers
- DHCP servers
- Database servers
- Application/Web servers
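A restart script can encode that order by waiting for each tier’s service port to answer before starting the next. A rough sketch in bash; the host names and power-on steps are placeholders for your environment:

```shell
#!/bin/bash
# Poll a TCP port until it answers, so dependent tiers are started only
# once the tier below them is reachable. Usage: wait_for host port [tries]
wait_for() {
  local host=$1 port=$2 tries=${3:-12} i
  for ((i = 0; i < tries; i++)); do
    # bash's /dev/tcp pseudo-device attempts a TCP connection
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    sleep 5
  done
  return 1
}

# Example order (replace the comments with your platform's power-on commands):
#   <power on DNS servers>       && wait_for dns1.example.com 53
#   <power on DHCP servers>
#   <power on database servers>  && wait_for db1.example.com 1433
#   <power on app/web servers>
```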
The Boy Scout motto ‘be prepared’ holds true. A little preparation and planning can go a long way toward ensuring a smoother recovery.
A product that provides a smooth transition between the host and virtual machine is a real help when selecting a virtualization product for a workstation. Sun xVM VirtualBox offers a seamless session that can make the transition between your guest and host quite transparent. The seamless window functionality is available on Windows and some Linux guest VMs for VirtualBox. The Guest Additions package is required on both platforms to use the seamless window feature.
To enable the seamless window, press the host key (which is the right CTRL key by default) and the letter ‘L’ together. VirtualBox will present the following information message before engaging the seamless Window functionality:
For Windows host systems, the VM will still reside as a separate window in the taskbar. When you select the VM in the taskbar, the active items are overlaid onto the host. For Windows systems, the guest VM desktop is not shown unless the show desktop command is sent. In the example below, a Windows Server 2008 guest VM is running in a seamless window on the Windows XP host:
This seamless functionality makes it feel less like a VM, and all keyboard and mouse operations are entirely smooth. To exit the seamless window feature, press the host key and the ‘L’ key together again to switch the VM back to a contained window. VirtualBox also handles mixed video modes in the seamless window quite well. For example, if the guest VM is running at a 16-bit color depth and the host is running at a higher depth, the lower depth is blended in well for the VM components.
The seamless window feature has been available in VirtualBox since version 1.5. More information on VirtualBox can be found online at the Sun website.
One of the features that make desktop virtualization packages attractive is the ability to move files from the host to the guest virtual machine without the use of a network. Sun xVM VirtualBox has this functionality, so let’s go through it for use on Windows systems.
Enabling shared folders is fairly straightforward within VirtualBox, but it must be configured while the virtual machine (VM) is turned off. In the properties of the VM, the share is configured as shown in the figure below:
Once the VM is powered on with this configuration, it can now access this shared folder. For Windows clients, the shared folder will be visible in My Network Places as shown in the figure below:
The security permissions are available as read-only or full control for the share, and multiple shares can be made available to a VM. The shared folder is presented to the VM as a server name of VBOXSVR, so be sure not to use that name on your network. The VBOXSVR name does not resolve to an IP address but is provided to the VM by the installation of Guest Additions. You can also script out the use of a shared folder with the VBoxManage command as shown below:
VBoxManage sharedfolder add "XP-RedPill" -name "zzVirtualMachineShared" -hostpath "C:\zzVirtualMachineShared"
Using the scripted option can be helpful in assigning a single shared folder to multiple VMs on the same VirtualBox installation. VirtualBox allows a shared folder to be created as a transient share, which is removed when the VM is powered off. This option is either configured in the interface or denoted by using the ‘-transient’ option in the VBoxManage command.
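Inside a Windows guest, the share can also be mapped to a drive letter from the command line; the share name here matches the example above:

```
net use x: \\vboxsvr\zzVirtualMachineShared
```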
Sun xVM VirtualBox 1.6.2 is freely available for download from the Sun website.
One of the more frequently overlooked placement discussions in the design or re-engineering phases of virtualization projects involves systems that sit on an external network.
The placement of external systems can be addressed many different ways, including the use of virtual private network (VPN) authentication servers, web servers or remediation systems for network access control. Consider the following architecture diagram where larger virtualization hosts contain all types of systems within the virtualized environment:
While the networking of these virtual machines may be configured with the same protections as their physical counterparts, there are some concerns with this configuration. It becomes even more of a concern when the firewall is itself a virtual machine in the same environment. An architecture that better protects the internal and external workloads is a separate environment with connectivity and workloads facing only the external interfaces. Consider the figure below for the same workload:
With this separation, more hosts may be needed for the same workload to account for maintenance mode and other factors. These additional hosts can be smaller systems with a smaller processor inventory, so as not to incur additional costs for anything licensed by processor.
If firewalls or other core network appliances are virtualized, their placement requires a little more thought because they may have a footprint on both the internal and external networks. When internal and external workloads share resources, an outbreak-type event on an external system may consume resources at the expense of the internal workload. By keeping the internal and external workloads separated, an attack within the operating system, or one that targets virtual machines, would initially be contained to either the internal or the external side.
This strategy can be applied to all virtualization products, and can also be applied more specifically to network and storage configurations to protect in the same fashion.
VMware has just updated their security hardening guide, which provides recommendations for hardening a VI3 environment.
In addition to the updates for virtual machines and the ESX Service Console, they have now added new recommendations for ESXi, VirtualCenter Add-on components (plug-ins) and for Client components.
Here’s a brief overview of the recommendations for VMs and ESX hosts that have been added to the guide. No new recommendations were made for VirtualCenter except for the plug-in ones.
Virtual machines:
- Disable copy and paste operations between the guest operating system and remote console
- Do not use nonpersistent disks
- Ensure unauthorized devices are not connected
- Prevent unauthorized removal or connection of devices
- Avoid Denial of Service (DoS) caused by virtual disk modification operations
- Specify the guest operating system correctly
- Verify proper file permissions for virtual machine files
ESX Service Console:
- Secure the SNMP configuration
- Protect against the root file system filling up
- Disable automatic mounting of USB devices
There are some general recommendations when using plug-ins and some specific ones when using Update Manager, Converter and Guided Consolidation. The guide recommends that the Update Manager and Converter plug-ins not be installed on the VirtualCenter server but should instead be installed on a separate server or virtual machine.
Also added is a section on client components. The guide recommends against the use of Linux-based clients when using the RCLI, VI Perl Toolkit scripts, VM console access initiated from a web access browser session and programs written using the VI SDK. The reason for this is that communications with Linux clients are vulnerable to man-in-the-middle attacks because the Linux versions of these components do not perform certificate validation. This risk can be partially mitigated by ensuring that the management interfaces (ESX Service Console and VirtualCenter) are on trusted, isolated networks.
The guide also suggests verifying the integrity of the VI Client because of the VI Client extensibility framework introduced in VirtualCenter 2.5, which provides the ability to extend the VI Client. It further recommends monitoring the usage of VI Client instances by inspecting log files on client systems. Both of these tasks can be quite difficult because there are no native methods for doing them.
Finally, a section was added for securing host-level management in ESXi. Many of the recommendations for ESXi are the same as those made for ESX. Some unique recommendations for ESXi include ensuring secure access to CIM (the hardware management APIs). Also, admins may want to audit or disable the special technical support mode, which is designed for emergencies but is sometimes used by administrators to access specific functions in ESXi.
You can read the updated guide in its entirety here.
A virtualization package is ready for prime time when it has a full array of device connectivity between the host and the guest virtual machine (VM). Let’s take a quick look at USB device redirection in Sun xVM VirtualBox 1.6.2.
The USB device functionality includes a nice feature that allows selective mapping of USB devices from the host to the guest. This can be beneficial if you want a USB device (such as a license key) to be available only to the guest VM and not the host, or vice versa. Within the VM’s configuration, you can specify all devices or selected devices to be connected to the guest VM through USB device filters in the properties of the guest VM. These changes must be made offline, and for Windows hosts the VirtualBox USB controller needs to be added with the native driver. Likewise, the USB root hub that arrives via plug and play needs to be installed with the driver on the guest VM (which happens automatically when Guest Additions is installed). The figure below shows a specific device being permitted to pass to the VM:
The filter icons highlighted on the right side let you present the VM with all USB devices, remove a filter, or add a filter based on user-entered criteria or a selection from the currently installed devices. The filters are incredibly versatile, as you can redirect specific devices to the guest VM by many factors.
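Filters can also be scripted with VBoxManage. A sketch assuming the usbfilter syntax of this release; the VM name and the vendor/product IDs below are hypothetical:

```
VBoxManage usbfilter add 0 -target "XP-RedPill" -name "LicenseKey" -vendorid 0529 -productid 0001
```

Scripting the filter is handy when the same license key must follow a VM that gets rebuilt or moved between hosts.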
When a device is designated to go to the guest VM, it becomes unavailable to the host system, so it may take some practice to get used to losing a device when the VM is powered on. One note of caution: if the mouse and keyboard are USB devices, they are inherently made available to both the host and guest. However, if you add a device filter that assigns the USB mouse to the guest VM, the mouse will be available only to the guest VM.
Likewise, when the VM is powered off, the USB device returns to the host and is again available for use. If snapshots are being used on the VM, the hardware inventory and specific configuration are managed in the snapshot, so USB filters will be deleted if you revert to a snapshot made before the filter was created.
Overall, this USB functionality is quite granular and a strong offering for desktop installation. More information on VirtualBox’s USB support can be found in the VirtualBox online user manual.
A general rule of thumb in virtual environments is to always treat virtual machines the same as you would physical servers. While this rule holds true in many cases, IT administrators should be aware of some exceptions to this rule. Let’s go over some reasons that you would not treat your virtual machines like physical servers:
- Patching – You should apply all the same operating system and application patches to a virtual machine as you would to a physical server. However, it is best to stagger your patch deployments so you do not patch and restart all of your virtual machines at the same time. Doing this concurrently can cause excessive resource utilization on your host servers, which can impact the other virtual machines running on the host.
- Securing – Secure the virtual machine operating system as you would a physical server’s. In addition, ensure that proper security is set up on the host server’s management console, which allows access to the VM, as well as on the virtual machine files located on the host server’s disk system. It does no good to have tight security inside your VM and weak security outside it.
- System Monitoring – This is one area that can be very different for virtual servers. There is no need to monitor virtual machine hardware; if you have converted physical servers to virtual machines, make sure you uninstall any hardware management agents from them. In addition, virtual machines boot much faster than physical servers. Because of this, many monitoring systems will not detect server reboots, because the boot process completes more quickly than the monitoring interval. You may find that you need to shorten your polling interval for virtual servers to detect these faster reboots.
- Performance Monitoring – Another area that is very different from physical servers. Traditional operating system performance reporting tools are often inaccurate when used on virtual machines because they are unaware of the virtualization layer and the underlying physical hardware. You should always use virtual server specific reporting tools to accurately measure performance on virtual machines.
- Anti-virus – Make sure you install anti-virus software on all your virtual machines, the same as on physical servers. Again, be careful to stagger any on-demand scans and definition updates so as not to overwhelm the host server. Having all your VMs run a full scan at the same time can completely bog down a host server.
- Backups – It’s OK to back up your virtual machines using traditional operating system backup agents. Just make sure you do not back up too many VMs on a single host at the same time. There are also more efficient ways to perform backups in a virtual environment that you may want to look into, either to complement or to replace traditional backup methods.
- Disk defragging – You should periodically defrag virtual machine disks using traditional operating system tools for maximum performance. However, be careful not to defrag a VM that has a snapshot running; doing so can cause the snapshots to grow rapidly in size and degrade host performance. As usual, do not defrag more than one VM on a host at a time because of the excessive disk activity it causes.
Be careful not to run too many of the same operations concurrently. With physical servers, only a single server is affected, but in virtual environments many other servers running on the same host can be impacted.