Today at Red Hat Summit in Boston, two of Red Hat’s emerging technology engineers, Dan Berrange and Richard Jones, presented the new tool sets that their team has developed for working with Xen virtual machines (VMs), including command line utilities that will become part of the oVirt tool set.
According to Berrange, “You won’t have to lock into any particular technology underneath,” because these new utilities don’t require installation on a guest, nor do they require administrators to log in. Like the forthcoming Red Hat Enterprise Linux (RHEL) KVM-based hypervisor, these tools can also be launched from disk. “That’s the competitive advantage to using our tools,” says Berrange.
Red Hat engineer Richard Jones says that these new command line monitoring tools allow for a wider range of kernels and filesystems to be used and will offer better Windows support. Some of the utilities featured today include the following:
- Virt-top. A top-like utility that shows statistics for virtualized domains (e.g., network traffic, disk throughput).
- Virt-df. A df-like utility for virtual guests, used to check how much disk space virtual machines are using.
- Virt-p2v. A graphical user interface, launched from a live CD, for physical-to-virtual migration.
A full list of these tools is available at Richard Jones’s website.
Red Hat has developed these tool sets for oVirt, its next-generation virtualization management console. Unlike the current Virtual Machine Manager for RHEL, oVirt creates a small “stateless” image of the host virtualization layer with no local disks or installation necessary.
Should you assign a virtual machine (VM) more than one virtual processor or not? It’s common for admins to configure virtual symmetric multiprocessing (VSMP), or VMs with multiple CPUs, whether it is needed or not. The decision to use more than one virtual processor in a VM should be based on an actual requirement of the applications installed on the VM, not simply on the assumption that two processors are better than one. Many physical servers have multiple CPUs regardless of whether their applications require them. While wasteful of server resources, this does not hurt a physical server. Most VMs, however, run better with one virtual processor and can actually run slower when more than one is assigned.
The reason for this is that the hypervisor’s CPU scheduler must find simultaneously available cores equal to the number assigned to the VM. A four-VCPU VM needs four free cores on the host for every CPU request it makes. If four cores are not available because other VMs are using them, the VM must wait until they free up. Single-VCPU VMs have a much easier time because the scheduler needs only a single free core to process their CPU requests.
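The co-scheduling penalty described above can be illustrated with a toy simulation. This is a simplified model for intuition only, not how any real hypervisor scheduler is implemented; the function name and the busy-probability parameter are inventions for this sketch:

```python
import random

def avg_scheduler_waits(vcpus_needed, host_cores, busy_prob, trials=5000, seed=42):
    """Estimate how many scheduler passes a VM waits, on average, before
    enough host cores are simultaneously free to run all of its VCPUs.
    On each pass, every core is independently busy with probability
    busy_prob. (A toy model of strict co-scheduling.)"""
    rng = random.Random(seed)
    total_waits = 0
    for _ in range(trials):
        waits = 0
        while True:
            free = sum(1 for _ in range(host_cores) if rng.random() >= busy_prob)
            if free >= vcpus_needed:
                break
            waits += 1
        total_waits += waits
    return total_waits / trials

# On a moderately loaded 8-core host, a 1-VCPU VM almost never waits,
# while a 4-VCPU VM waits far more often for 4 simultaneously free cores.
print(avg_scheduler_waits(1, 8, busy_prob=0.7))
print(avg_scheduler_waits(4, 8, busy_prob=0.7))
```

Running this shows the single-VCPU VM's average wait staying near zero while the four-VCPU VM queues up repeatedly, which is exactly the behavior the article warns about.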
Here are some tips on assigning VCPUs to VMs:
- Limit the number of VSMP VMs on your hosts. The fewer you have, the better your VMs will perform.
- Assign a VM multiple VCPUs only if you are running an application that requires it and will make use of them.
- Don’t assign a VM the same number of VCPUs as the total number of cores available on your host system.
- If you are going to use VSMP, have at least twice (preferably three or four times) as many cores on your host system as the VCPU count of your largest VM. So if you have a four-VCPU VM, have at least eight cores on your host server, and preferably 16.
- If you are converting a multi-CPU physical Windows server to a single VCPU VM, make sure you change the HAL from multiprocessor to uniprocessor.
- Don’t use CPU affinity as it restricts the scheduler and makes it harder to process CPU requests. The scheduler is very good at what it does, so let it do its job.
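The sizing rule of thumb in the tips above (host cores at least twice, preferably three to four times, the VCPU count of the largest VM) can be captured in a small helper. This is an illustrative sketch; the function and its rating labels are inventions here, and the thresholds come from the guidelines above rather than from any vendor tool:

```python
def vsmp_sizing_ok(host_cores, max_vm_vcpus):
    """Rate host core headroom relative to the largest VSMP VM,
    per the rule of thumb: at least 2x cores required, 3-4x preferred."""
    if max_vm_vcpus <= 0:
        raise ValueError("a VM needs at least one VCPU")
    ratio = host_cores / max_vm_vcpus
    if ratio >= 4:
        return "good"
    if ratio >= 2:
        return "acceptable"
    return "undersized"

# A 4-VCPU VM on an 8-core host meets the minimum; 16 cores is preferred.
print(vsmp_sizing_ok(8, 4))   # acceptable
print(vsmp_sizing_ok(16, 4))  # good
```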
The virtualization world is still waiting for the official release of the Open Virtual Machine Format, or OVF, once the Distributed Management Task Force (DMTF) puts the finishing touches on what will be an industry standard virtual machine (VM) format. According to DMTF’s Christy Leung, the organization plans to announce the release of OVF within the next couple of months.
OVF frees users from platform dependence in virtual environments, enabling them to mix and match platforms without incurring interoperability problems. Despite the clear benefits of a common format in a multiplatform virtualization landscape, a universal format has encountered some roadblocks.
DMTF has been working on OVF since late 2007, when Dell, HP, IBM, Microsoft, VMware and XenSource submitted a proposal for a standardized VM format. At the upcoming Burton Group Catalyst conference later this month, DMTF member organizations — including VMware, Citrix and Novell — will demonstrate OVF interoperability publicly for the first time. According to Burton Group analyst Chris Wolf, “some vendors moved OVF support higher up on their development roadmap in order to have it ready in time to demonstrate at the Catalyst conference.”
Wolf says that OVF is worth the wait — and the investment in the long term. “OVF has a nice long-term goal of standardizing the way hypervisors mount and run VMs,” says Wolf, “but its immediate use is primarily in importing VMs and standardizing how VM metadata is managed.”
Wolf goes on to say that while OVF VMs will soon be able to load onto any hypervisor, a virtual hard disk conversion may be required as part of the import process because of the presence of two primary virtual hard disk formats in play: Virtual Machine Disk Format for VMware and Virtual Hard Disk for Microsoft and Xen. “OVF would have even more value if all vendors could agree to use a single standardized virtual hard disk format,” according to Wolf. “Thus far, the reasons for not having a single virtual hard disk format are more political than technological.”
When DMTF finishes its work, OVF will greatly improve the functionality of virtual machines. “OVF metadata is extensible, so any software vendor could use OVF to embed their management metadata inside VMs, regardless of hypervisor,” says Wolf.
“That is a big deal, as vendors could have a consistent management methodology regardless of hypervisor.”
You may hear the term SCSI reservations frequently when dealing with VMware servers that utilize shared storage. SCSI reservations are used to ensure exclusive access to disk-based resources when multiple hosts are accessing the same shared storage resources. In addition to being used by VMware hosts, SCSI reservations are also used by Microsoft Cluster Server.
SCSI reservations are used only for specific operations that change metadata; they are necessary to prevent multiple hosts from writing to the metadata concurrently, which would corrupt data. Once the operation completes, the reservation is released and other operations can continue. Because of this exclusive lock, it is important to minimize the number of concurrent reservations. When too many reservations are made at once, you may see I/O failures because a host cannot obtain a reservation to complete an operation while another host has locked the logical unit number (LUN). A host that cannot obtain a reservation because of a conflict will keep retrying at random intervals until it succeeds; if too many attempts are made, however, the operation fails.
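The retry behavior described above — retry at random intervals, fail after too many attempts — can be sketched as follows. This is a hypothetical simulation, not VMkernel code; the function name, attempt limit and back-off range are all assumptions for illustration:

```python
import random

def acquire_with_retries(try_reserve, max_attempts=80, rng=None):
    """Attempt to take a SCSI-style exclusive reservation, retrying at
    random intervals on conflict; give up (an I/O failure) after
    max_attempts. try_reserve() returns True if the LUN was free."""
    rng = rng or random.Random()
    total_delay = 0.0
    for attempt in range(1, max_attempts + 1):
        if try_reserve():
            return attempt, total_delay          # reservation obtained
        total_delay += rng.uniform(0.001, 0.1)   # random back-off interval
    raise IOError("reservation conflict: retries exhausted")

# Simulate a LUN that another host holds locked 90% of the time.
rng = random.Random(1)
attempts, waited = acquire_with_retries(lambda: rng.random() < 0.1, rng=rng)
print(f"reserved after {attempts} attempts")
```

The busier the LUN, the more attempts the loop burns, and past the attempt limit the operation surfaces as an I/O error — the same failure mode the article describes.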
Some examples of operations that require metadata updates include:
- Creating or deleting a VMFS datastore
- Expanding a VMFS datastore onto additional extents
- Powering on or off a VM
- Acquiring or releasing a lock on a file
- Creating or deleting a file
- Creating a template
- Deploying a VM from a template
- Creating a new VM
- Migrating a VM with VMotion
- Growing a file (e.g., a Snapshot file or a thin provisioned Virtual Disk)
A minimal number of reservation conflicts is generally unavoidable and will not have a big impact on your hosts and VMs. To avoid too many conflicts, limit the number of operations that can cause reservations and stagger them so that too many are not happening simultaneously. All reservation errors are logged to the /var/log/vmkernel log file on each ESX host. To reduce the number of conflicts:
- Limit the number of snapshots you have running, as snapshots grow in 16MB increments and every time they grow they cause SCSI reservations.
- Only vMotion a single VM per LUN at any one time.
- Only cold migrate a single VM per LUN at any one time.
- Do not power on/off too many VMs simultaneously.
- Limit VM/template creations and deployments to a single VM per LUN at any one time.
- Consider using smaller LUN sizes (<600GB) and do not use extents to extend a VMFS volume.
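Since reservation errors land in /var/log/vmkernel, a short script can give a rough per-day count of conflicts so you can tell whether staggering helped. This is an illustrative sketch; the exact message text varies by ESX version, so the search pattern and the sample lines below are assumptions:

```python
import re
from collections import Counter

def count_conflicts(lines, pattern=r"reservation conflict"):
    """Count log lines matching a reservation-conflict pattern,
    grouped by the leading date stamp (first two whitespace fields)."""
    hits = Counter()
    rx = re.compile(pattern, re.IGNORECASE)
    for line in lines:
        if rx.search(line):
            day = " ".join(line.split()[:2])   # e.g. "Jun 9"
            hits[day] += 1
    return hits

# Hypothetical sample lines standing in for /var/log/vmkernel contents:
sample = [
    "Jun 9 10:01:12 vmkernel: SCSI: RESERVATION CONFLICT on vmhba1:0:3",
    "Jun 9 10:01:13 vmkernel: SCSI: cmd completed",
    "Jun 10 02:14:55 vmkernel: SCSI: reservation conflict on vmhba1:0:3",
]
print(count_conflicts(sample))
```

In practice you would feed it the real file, e.g. `count_conflicts(open("/var/log/vmkernel"))`, and watch for days where the count spikes.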
This blog post was written by Megan Santosus, features writer for SearchServerVirtualization.com.
By now, server virtualization has pretty much proved its mettle as a way to consolidate data centers and reduce costs. As virtualization has gone mainstream, some of the management challenges have become top of mind. Consider the situation for a senior IT manager at a financial services company, who spoke on the condition of anonymity. “Virtualization is great stuff,” he said. “But it does change the way you manage things.”
Two years ago, the financial services company began implementing virtualization — specifically VMware ESX Server, although the company has since deployed virtualization with Sun Solaris clusters. At that time, the company realized that it had a gap in virtual server management capabilities. “We are making a large push with ESX servers, and we want to manage them holistically with some of the other servers in our environment,” the IT manager said.
To that end, four months ago the company began beta testing CA Advanced Systems Management r11.2; the company already uses the previous version of the software, and one of the enhancements with 11.2 is integration with VMware VirtualCenter. By installing an agent on VirtualCenter and another on the CA management server, the company now collects and aggregates the performance data for virtual machines into a centralized Web-based system. “We take the performance data on the physical ESX server and provide that to our capacity team so they can plan and manage our virtual environment,” the IT manager said.
For the capacity team, virtualization means being able to figure things out in advance, such as how many VMs can run on an ESX server, what an application’s footprint is, and whether it’s best to put components on the same physical box or spread them out. “We can now give the capacity team the performance data they need to make the decisions about moving things around,” the IT manager said. Rather than planning, the IT manager likens the process now to capacity modeling. “If we want to move virtual servers running Oracle, Apache and WebLogic, we look at the performance data to make our decisions.”
VMware Inc.’s recent acquisition of B-hive Networks is indicative of just how much of a wrench virtualization has thrown into the performance management arena. (To recap: B-hive’s Conductor software monitors application performance across virtual environments.) “First and foremost, the acquisition shows the importance of being able to manage performance in a virtualized environment,” said Trevor Matz, the president and CEO of Aternity Inc., a provider of end-user performance management software. “The system metrics normally associated with performance tools are pretty meaningless in virtual environments.”
Traditional performance metrics — CPU, memory usage — that are used to monitor the performance of the hardware that provides service to end users don’t have much relevance in virtual environments, Matz said. “Those metrics are associated with a host machine or virtual box itself and don’t indicate what the end user is experiencing,” he added.
Matz said that Citrix Systems Inc., Microsoft and Parallels are all at work on creating tools that collect meaningful metrics in a virtualized environment. “Having comprehensive tools is not enough,” Matz said, adding that there are already more than enough metrics to parse through. “The next big frontier is the ability to transform huge amounts of data into actionable business intelligence that correlates across platforms.”
On Monday, June 9, Symantec Corp. of Cupertino, Calif., announced the release of Veritas Virtual Infrastructure (VxVI), a server and storage virtualization product built on Citrix Systems Inc.’s XenServer technology. By exploiting Veritas’ block storage management model, VxVI aims to compete with VMware Infrastructure 3 in production environments by offering increased capabilities for storage- and availability-critical systems.
The new Xen-based virtual infrastructure platform from Symantec provides storage management and high availability with cross-platform connectivity for the virtual data center. It’s essentially XenServer with the Veritas storage management layer on top — all wrapped in a Symantec management console.
According to Symantec Senior Vice President of Storage and Availability Management Rob Soderbery, the time is right for a product that addresses the needs of testing and development, needs that have been underserved by VMware. “Users understand the storage management challenges with VMware,” he said. Symantec has delivered something “fundamentally new” in how server virtualization works with storage management, he noted.
The key difference between VMware and Veritas is in how each handles virtual machines (VMs). Soderbery argues that the Virtual Machine File System (VMFS) file-based system that VMware uses can’t compete with the block storage system of Veritas VxVI.
As enterprises build out the x86 data center, Symantec’s product seeks to serve those who want to bring physical capabilities into the virtual environment. Veritas Virtual Infrastructure brings dynamic storage layouts, enclosure and array mirroring and storage-area network (SAN) multipathing/load balancing to server virtualization, adding features such as shared VM boot images with which Symantec hopes to lure VMware customers that are not satisfied with the storage capabilities of the leading server virtualization platform.
Soderbery says that VxVI will work well with Microsoft’s forthcoming Xen-inspired hypervisor, Hyper-V. “Microsoft has done something pretty interesting here in being open to the Xen community and encouraging the Xen community to be open with Microsoft,” says Soderbery. “Veritas Virtual Infrastructure is technology that we can apply across the Xen ecosystem and Hyper-V as well.”
With another Xen-based virtualization product on the market engineered to be more compatible with the forthcoming Hyper-V, VMware may feel the pinch as users see more options with the other big players in the server virtualization market. But will the $4,595 per two-socket server for Veritas discourage VMware users from even running a demo?
What do you think? If you plan on deploying Veritas VxVI, we want to hear from you. Send us your thoughts via email.
Of the major hypervisors, I am particularly intrigued by Sun xVM. In this blog, I’ll give a quick tour of Sun xVM VirtualBox and show how to make a virtual machine.
Setting up Sun xVM
The download and install are quick and easy. VirtualBox has a small 23 MB download for Windows x86 platforms, and the install was very fast and did not require a reboot. Once VirtualBox is installed and running, creating a virtual machine is also a snap. The VirtualBox interface to create a virtual machine was quite intuitive. In fact, I created a few virtual machines without any issues at all. The figure below shows the VirtualBox console with the control pane for the two virtual machines I created:
Sun xVM good for client-side virtualization
All of the basic functionality of a virtualization product is present in VirtualBox, including .ISO mapping, snapshot technology, a hardware inventory manager and network placement technologies. One difference from the VMware products is a full editor for the MAC address, meaning you can specify a complete MAC address for the virtual machine. That adds a lot of flexibility, but it is a little dangerous for the masses, if you ask me.
I have used VMware Server and VMware Workstation for my client-side virtualization for a long time. However, I am going to go with VirtualBox now that I have a new workstation, and I will continue to share my feedback. Have you had any positive or negative experiences with VirtualBox? If so, please share them below.
Tripwire ConfigCheck, from Portland, Ore.-based Tripwire Inc., is a free Windows- and Linux-based utility that assesses the security of VMware ESX 3.5 hypervisor configurations against the VMware Infrastructure 3 Security Hardening guidelines, which were released in February.
The Security Hardening guidelines explain in detail the security-related configuration options of the components of VMware Infrastructure 3 and how security affects certain capabilities.
Tripwire ConfigCheck makes sure ESX environments are properly configured according to these guidelines and lends insight into vulnerabilities in virtual environments. It also provides the necessary steps towards full remediation.
Dan Schoenbaum, senior vice president of marketing and business development for Tripwire, said the utility is being offered for free to encourage the proliferation of VMware’s Hardening guidelines and to increase virtual machine (VM) security.
Tripwire hopes that by giving users a free taste of its technology, they will become familiar with it and invest in the company’s software products, which offer more security capabilities, Schoenbaum said.
Colorado Springs, Colo.-based Configuresoft Inc. also provides a toolkit for compliance with VMware’s security hardening guidelines. The toolkit consists of a set of rule-based templates, reports and dashboards that plug into Configuresoft’s Enterprise Configuration Manager (ECM).
A number of tools are available for migrating physical servers to virtual machines. Which tool to choose will depend on the source operating system, the target virtualization platform and the type of server being migrated. All the available tools support converting physical servers running Microsoft operating systems but only a few support Linux server conversions. I’ve compiled a brief list of some of the tools and options available for converting physical servers to virtual machines.
PlateSpin PowerConvert – A commercial product with the broadest operating system support. Supported source operating systems include Windows NT, 2000, XP and 2003, as well as Red Hat and SUSE Linux. Virtual platform support includes VMware Server/Workstation/ESX, Microsoft Virtual Server, XenSource and Virtual Iron. The Acronis True Image, Symantec Ghost and Symantec LiveState image formats are supported as well. PlateSpin’s product also converts virtual machines back to physical machines.
Ultimate P2V – A cheaper alternative with some freeware tools such as BartPE, but this product requires an imaging tool such as Symantec Ghost to perform the imaging. It also requires some work to build the boot CD and can be a bit complicated.
Vizioncore vConverter – Another very robust commercial product that has broad platform support. Optimized to work with VMware ESX but also supports Microsoft Virtual Server, XenSource and Virtual Iron as target formats. Only supports Windows 2000, 2003 and Vista source operating systems.
Microsoft Virtual Server 2005 Migration Toolkit – Only supports Windows NT, 2000 and 2003 source operating systems and only supports Microsoft’s virtual machine as a target format. Free but requires the Automated Deployment Services add-on to Windows Server 2003 Enterprise Edition. The conversion process is somewhat complicated and may not be a good choice for most users.
VMware Converter – A free tool (standard version) provided by VMware to convert physical servers running Windows NT, 2000, XP, 2003 and Linux servers to VMware virtual machines. Also supports Symantec Ghost, Livestate, Backup Exec, Acronis True Image and StorageCraft ShadowProtect image formats.
HP Server Migration Pack – Supports converting Windows 2000 and 2003 servers to VMware ESX, XenServer and Microsoft Virtual Server target formats running on HP ProLiant hardware. Also supports virtual-to-physical conversions.
Leostream P>V Direct – I thought I would mention this tool even though Leostream recently dropped it to focus on other areas. Supports conversion of Windows NT, 2000, XP and 2003 source servers to VMware Server/Workstation/ESX, Microsoft Virtual Server and Xen target servers.
Other useful tools:
Robocopy – A free tool from Microsoft that is included in the Windows Server 2003 Resource Kit. Simply create a new virtual machine and copy the data from the physical server to the virtual server. Robocopy preserves all the date/time stamps and security settings when copying data from server to server. This is useful for migrating non-application file servers where you only care about the data files on the server.
Sysprep – A Microsoft tool to prepare an existing server for cloning and restoration from a disk image.
NewSID – A Sysinternals utility to change a Windows server’s SID (security identifier) after the server has been cloned.
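Robocopy's role in this migration — copying a file server's data into the new VM while preserving timestamps — can be approximated in a cross-platform way with Python's standard library. This is a sketch only, with a hypothetical function name; unlike `robocopy /COPYALL`, it does not replicate NTFS security descriptors:

```python
import shutil
from pathlib import Path

def mirror_tree(src, dst):
    """Copy a directory tree, preserving modification times and
    permission bits via shutil.copy2 (NTFS ACLs are NOT preserved).
    Returns the number of files now present under dst."""
    shutil.copytree(src, dst, copy_function=shutil.copy2, dirs_exist_ok=True)
    return sum(1 for p in Path(dst).rglob("*") if p.is_file())

# Usage idea: mirror the physical server's data share into the new VM's disk,
# e.g. mirror_tree(r"\\oldserver\data", r"D:\data")  (hypothetical paths).
```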
Imaging tools like Symantec Ghost and Acronis True Image create images of your physical servers. Once the image is complete, you can create a new VM and restore the image to the virtual machine. VMTS.net has a good write-up of the procedure.
You can always do a P2V by backing up a physical server with your traditional backup software, creating a new virtual machine, installing the same operating system and a backup agent on the VM, and then restoring from the backup taken of the physical server. It’s not the cleanest method for doing a P2V, but it can serve as a cheap alternative to the other tools.
But for certain servers, it is less risky and just as easy to build a new server and migrate the data to it; examples include SQL Servers and domain controllers. It’s a simple process to build a new virtual machine, install the operating system, run dcpromo on it and then shut down your physical domain controllers. Likewise, building a new virtual SQL server and then detaching the databases from a physical SQL server and attaching them to the virtual one is an easy process that reduces the risk of data corruption that can occur during the P2V cloning process.