The Virtualization Room

A SearchServerVirtualization.com and SearchVMware.com blog


September 2, 2008  9:24 AM

Making a P2V conversion: Driver cleansing



Posted by: Rick Vanover
hardware, P2V, Rick Vanover, Virtual machine, Virtualization strategies

Successful physical-to-virtual (P2V) conversions revolve around getting the virtual environment correct before presenting the actual workload to the new VM. In this video blog, Rick Vanover discusses some conversion tasks that can help a VM function optimally, from a driver perspective, as the workload goes virtual.

[kml_flashembed movie="http://www.youtube.com/v/EpQNQj4JL4k" width="425" height="350" wmode="transparent" /]

September 2, 2008  9:19 AM

Embotics’ V-Scout enters management arena for free



Posted by: Rick Vanover
Product announcements, Rick Vanover, Virtualization management

Embotics has launched a new product, V-Scout, which provides administrators with an extra view into their VMware-based virtual environments. V-Scout is a free product that complements Embotics’ existing product, V-Commander. I have had a chance to work with the version 1.0 release of the product and will share some of the key features with you.

One of V-Scout’s main objectives is to give administrators a bridge between their business processes and the technology. How many times have you been frustrated with the limited built-in documentation of the annotations notes field? Much of V-Scout’s functionality revolves around putting attributes in place for objects in the VI3 environment, allowing your requirements and information to be configured within the environment. V-Scout takes this one step further with a Web-based management interface that allows reporting, chargeback and various views of the VI3 environment in an intuitive fashion. Because V-Scout uses the attributes field within various VI3 objects, don’t be surprised to see tasks in the VMware Infrastructure Client that reconfigure objects. Further, objects will start to accumulate various emboticsManager attributes as V-Scout interacts with VI3; these attributes are called fingerprints. For those of you familiar with V-Commander, V-Scout does not offer as many features. Here is a breakdown of the feature comparison between the two products:

Feature Comparison

Once in V-Scout, custom attributes, one of the core features, can be applied to a VM. As shown above, up to 10 custom attributes can be assigned within V-Scout. There are also many built-in attributes that can be applied, such as an expiration date and approval status for a virtual machine. In my evaluation of the product, I set up a custom attribute of a cost center for tracking purposes. The figure below shows the cost center custom attribute being configured:

Image 1

The other key feature of V-Scout is the built-in reporting, which offers six main categories that revolve around the guest OS, the host environment, managed systems, virtual machines, population trends and an overall infrastructure summary. While most of the out-of-the-box reports do not give you much information that you don’t already have access to, the reporting does organize it, makes the custom attributes available as criteria, and presents results based on your parameters. Below is a sample report for a particular virtual machine:

Figure 2

Overall, from my first pass at V-Scout, I was impressed with the free offering and will continue to use it primarily for the custom attributes feature to track virtual machines. More information on V-Scout can be found on the Embotics website.


August 28, 2008  7:44 AM

Xen version 3.3 brings performance, scalability enhancements to open source hypervisor



Posted by: Bridget Botelho
Citrix XenServer, Embedded Virtualization, hardware, Intel, Open source, Oracle VM, Servers, Sun xVM, Virtual machine, Virtualization platforms, Xen, XenSource

Xen.org today announced the release of Xen 3.3, a new version of the project’s open source hypervisor, with enhancements to security, performance and scalability.
The release is now available for download from the Xen.org community site and is the product of a distributed development effort by senior engineers from more than 50 hardware, software, and security vendors.

The new Xen 3.3 release provides users with new features including:

* Power management in the hypervisor
* Hardware Virtual Machine (HVM) emulation domains for better scalability, performance and security
* Shadow pagetable improvements for the best HVM performance ever
* Hardware Assisted Paging enhancements
* Device passthrough enhancements
* CPUID feature levelling that allows safe domain migration across systems with different CPU models (within the same vendor brand – Intel or AMD)

Xen 3.3 provides virtualization for x64, IA64 and ARM-based platforms, and through close links with CPU and chipset vendors in the Xen project, Xen 3.3 also supports the latest hardware virtualization enhancements, like Intel Virtualization Technology (Intel-VT).

With Xen’s memory ballooning feature, the hypervisor can reallocate memory between guest Virtual Machines (VMs) to guarantee performance and allow greater density of VMs per server. Xen 3.3 also offers CPU portability to allow live migration of VMs across different CPUs, active power optimization to reduce server power consumption, and significant security enhancements.
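The ballooning idea can be sketched roughly as follows. This is a deliberately simplified model, not Xen's actual balloon-driver algorithm, and all guest names and figures are hypothetical: the hypervisor shifts memory toward the guests that need it while keeping the host total fixed.

```python
# Simplified model of balloon-style memory reallocation (illustrative
# only; not Xen's actual algorithm). Memory shifts toward guests with
# larger working sets while the host total stays fixed.

def rebalance(demands, host_total):
    """Scale each guest's observed demand so the sum of allocations
    never exceeds the host's physical memory (all values in MiB)."""
    total_demand = sum(demands.values())
    scale = min(1.0, host_total / total_demand)
    return {guest: int(mib * scale) for guest, mib in demands.items()}

# Three guests on a 6 GiB host; "db" needs more memory than the others.
demands = {"web": 1024, "db": 4096, "batch": 1024}   # working sets in MiB
allocations = rebalance(demands, host_total=6144)
# allocations == {"web": 1024, "db": 4096, "batch": 1024}
```

If total demand exceeds physical memory, every guest is scaled down proportionally instead, which mirrors the density trade-off the feature enables.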

Simon Crosby, CTO, Virtualization and Management Division, Citrix Systems, said in a statement, “In just two years, Xen has rapidly gained share in virtualization, much as Linux did in operating systems – and in the same period Xen has driven the price of competing hypervisors to zero, allowing any vendor to include virtualization for free.”

In addition to its growing development community, the Xen hypervisor is the standard virtualization platform used by cloud computing providers like Amazon.com. It is also used in virtualization products from Citrix (XenServer), Fujitsu, Novell, Oracle (Oracle VM), Sun Microsystems (Sun xVM) and Virtual Iron, and is available as an embedded option in many x86 servers.


August 26, 2008  9:11 AM

VCP certification proves highly valuable



Posted by: Joe Foran
Joseph Foran, Virtualization

Salaries for the VCP (VMware Certified Professional) certification, which I have blogged about twice before, have gone through the roof. I have never seen a jump like this in all my years in IT. See for yourself.

VCP Chart 08-22-08

View Larger Salary Graph

On an interesting note, the graph is from a new feature on Indeed.com’s site for bloggers. WordPress doesn’t like the display, though, so I still did my usual upload-a-screencap-to-Photobucket, but tried to respect Indeed.com’s format.

What do I think about this data? I’m betting on lots of people with advanced degrees, lots of experience, and/or other high-end certs having added the VCP to their career-building portfolio, as well as more top-level management and executives. Also of note are the missing Red Hat Certified Architect numbers. The VCP trend does not seem to have continued across the pond, where the prevailing charts seem to show a consistent value for the VCP.

The lesson here is to watch the figures, but don’t be surprised when wild, near-impossible salaries turn out to be a case of too good to be true.


August 25, 2008  10:09 AM

Proxmox PVE offers VM mash-up for the virtualization market



Posted by: Joe Foran
Joseph Foran, Linux and virtualization, Virtualization, Virtualization platforms

The mashup market is for more than just those out there making rich-media web apps. Proxmox has taken the concept of the mashup to the virtualization market. The resulting product is a mash-up of two virtualization platforms, OpenVZ and KVM, combined into a delightful new offering that will run just about any operating system as a virtual machine. Virtual appliances are also included in the mashup.

Like VMware VI, VirtualIron and XenServer, Proxmox Virtual Environment (PVE) is a bare-metal, type-1 hypervisor that installs onto a fresh server and turns the machine into a dedicated virtualization host. It is an open source product based on open source products, making it transparent to developers, and it thereby has all the advantages and disadvantages associated with OSS development projects (I find few disadvantages myself, but I’m admittedly biased because I think that the transparency of OSS is highly valuable). The goal, outlined in their vision page, is to create an enterprise-class virtualization platform that affords unparalleled flexibility (my words, not theirs).

The short-list of what PVE supports:

  • Web-based Administration via SSL
  • Bare-metal installation based on Debian Etch (64-bit)
  • Your choice of Container-based (OpenVZ) or Fully-Virtualized (KVM) virtual machines, both on the same server, as well as Paravirtualization for Windows via AMD/Intel CPU extensions and KVM’s built-in ability to handle them.
  • Built-in backup and restore
  • Live and offline migration

This is one of those will-be-great-if-it-lives products. It has a lot going for it, particularly in its ability to manage multiple types of virtualization platform strategies. That said, there are still many drawbacks, as expected of a pre-1.0 release (currently at 0.9). As such, it’s got its share of issues to work through before it’s really ready.

PVE currently doesn’t seem to have much in the way of granular user management for the web interface (though the forums do state that it is on the roadmap). Physical-to-virtual (P2V) capabilities are still a little raw, without any in-house tools to handle migrations. The Wiki site for PVE does explain how to use existing tools such as vzdump, VMware Converter, etc. to migrate servers into formats that PVE can handle. There’s nothing in the way of DRS/HA equivalents, and while PVE does have tools for live migration, they don’t work due to a “kernel error,” according to the Wiki. KVM backup is limited to using LVM2, whereas OpenVZ has that option as well as vzdump, though a backup tool for KVM is on the roadmap for 1.0. Guest SMP is described as unstable as well.

The cluster management feature looks a little like this image, from their website:


The more day-to-day function of creating a new virtual machine looks like this:

Because it’s a Debian operating system, storage choices are limited only by the availability of drivers for the hardware platform.  iSCSI, NFS, and other remote storage file systems can be mounted and used to store virtual machines.

The product looks like it will shake up some thinking in the virtualization platform market and may get people thinking more about what it means to be limited to only one type of virtualization option. When it hits that magic 1.0 mark, and most of the major flaws above are fixed for the majority of users, this product could really shine. Overall, I rate this product a seven as a poker for stirring things up, down from a nine because it’s still cooking.


August 25, 2008  9:58 AM

VDI process selection revolves heavily around the endpoint device



Posted by: Rick Vanover
Desktop virtualization, hardware, Rick Vanover, Sun xVM, VDI, Virtualization

Selecting a VDI environment is a daunting process. As I begin to evaluate technologies for VDI design and implementation for an upcoming project, the first step is often to identify the requirements from the end-user perspective.

Administrators frequently get so wrapped up in the server side of a technology that the end-user experience may be overlooked. Two specific pieces of functionality, screen resolution and dual-monitor support, can be incredibly important to the endpoint experience, and may make an implementation fail if it does not meet the requirements of all applications involved. By comparison, other topics such as USB device support, printing and sound are more of a policy decision than a device-selection decision.

We strategically arrive at determining device capabilities to match the requirements. At that point, we can ‘back into’ various backend VDI solutions. Take, for example, the Sun Ray 2FS Virtual Display Client, which offers two DVI-I (digital video interface) ports that can drive one monitor at 1920 x 1200 or two monitors at a combined 3840 x 1200. Among VDI devices, however, the standard offering is 1600 x 1200, which will satisfy most resolution requirements. Dual DVI-I monitors may seem like overkill for a VDI-based thin client, but for many systems that perform archival by scanning documents, the high resolution and dual-monitor functionality may be a requirement. Just ask any accounts payable clerk.
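This matching step can be made concrete with a small sketch. The device resolutions below come from the post; the scanning-station requirement is a hypothetical example of a workload that only the dual-monitor device satisfies:

```python
# Sketch of matching endpoint display capabilities to workload
# requirements. Device resolutions are from the post; the scanning
# workload is a hypothetical example.

def effective_resolution(per_monitor, monitors=1):
    """Total desktop area as (width, height); side-by-side monitors add width."""
    width, height = per_monitor
    return (width * monitors, height)

def satisfies(requirement, capability):
    """True if the device's desktop area covers the required area."""
    return capability[0] >= requirement[0] and capability[1] >= requirement[1]

sun_ray_2fs = effective_resolution((1920, 1200), monitors=2)  # (3840, 1200)
standard_client = effective_resolution((1600, 1200))          # (1600, 1200)

# A document-scanning station needing two 1920 x 1200 displays:
scanning_requirement = (3840, 1200)
satisfies(scanning_requirement, sun_ray_2fs)      # True
satisfies(scanning_requirement, standard_client)  # False
```

Working from requirements like these toward the device list, rather than the other way around, is the ‘back into’ approach described above.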

Some of this functionality may be circumvented by the use of existing devices, specifically with VDI solutions that allow a Windows or other operating system PC to connect to the VDI broker. In this regard, if there are only a very limited number of systems with requirements that cannot be accommodated by standard endpoint devices, a typical full-install PC can be used to provide the VDI connection. While not ideal, it is a decent stop-gap measure and a way to make use of existing equipment.


August 25, 2008  9:09 AM

AMD Opteron powering top servers on VMmark list



Posted by: Bridget Botelho
AMD, AMD Opteron, hardware, HP, Intel, quad-core processor, Servers, Virtual machine, Virtualization, VMmark, VMware

AMD‘s quad-core Opteron processors powered the top three performing servers on VMware Inc.’s VMmark virtualization benchmark for 16-core x86 servers.

Hewlett-Packard’s (HP) ProLiant DL585 G5 with AMD’s Opteron processor is the top performer for 16-core systems on VMmark’s list. The Opteron is also used in HP’s 32-core, eight-socket ProLiant DL785 on VMmark’s list, which achieved a score of 21.88@16 tiles, or 96 virtual machines.

These results from AMD based systems aren’t surprising, since AMD Opteron’s virtualization assist technology has received high praise from VMware. One VMware engineer called AMD’s Nested Page Table (NPT) technology the answer to virtualizing large workloads.

Rapid Virtualization Indexing (RVI), a feature of AMD’s third-generation Opteron, includes NPT and is designed to offer near-native performance for virtualized applications and fast switching between virtual machines (VMs).

Intel Corp. has announced a technology similar to NPT, called Extended Page Tables (EPT), which will be available in its next-generation eight-core microarchitecture, code-named “Nehalem.” Nehalem is slated for production later this year.


August 20, 2008  3:21 PM

VMware helps hospital reduce data center power, increase performance



Posted by: Bridget Botelho
Desktop virtualization, hardware, High availability and virtualization, HP, Servers, VDI, Virtual machine, Virtualization, Virtualization management, Virtualization platforms, VMware

Palo Alto, Calif.-based VMware, Inc. announced that Rochester General Hospital (RGH) has deployed VMware Infrastructure 3 to scale and manage its growing IT environment.

RGH, a community-based teaching hospital, has an IT infrastructure supporting business applications and patient-critical systems as well as massive amounts of data storage that is growing exponentially.

“We started using virtualization to address power and space issues in our main datacenter. We quickly adopted VMWare ESX as our standard platform for new projects and consolidated existing servers,” Tom Gibaud, an IT manager at RGH, said in an email. “It allowed us to continue business as usual and we experienced no delay in completing projects on time. Today we are way below our power threshold and gained about 50% of our floor space even after we doubled the amount of Windows Servers.”

According to VMware’s statement, VMware Infrastructure has improved application performance and availability and strengthened the hospital’s disaster-recovery capabilities. “Before going virtual, our datacenter power supply was maxed out. We couldn’t plug in a toaster. Now, with less hardware, we have capacity to handle whatever comes our way,” Gibaud said.

The hospital now runs 50 virtual machine hosts with 400 guests, a mix of large and small workloads including terminal services, Gibaud said. In all, RGH has virtualized about 95% of its Windows-based applications, including Exchange, SQL Server, the ClinicalCare portal that physicians and nurses use to access electronic medical records, and RGH’s billing system.

In the initial phase of the virtualization deployment, Gibaud said the hospital used IBM Bladecenter servers (HS20, HS21, LS20). “This allowed us to condense many servers is a small amount of space. With VMware and IBM Bladecenters we were able to consolidate over a 150 Servers into one rack,” he said. “Today we use IBM x3850 and HP DL580 G5 to handle larger server workloads.”

In addition, the hospital is running 200 Windows XP desktops using VMware’s Virtual Desktop Infrastructure on just two IBM x3850s.


August 19, 2008  1:16 PM

Making a P2V conversion: Stage-phase configuration



Posted by: Rick Vanover
Networking, P2V, Rick Vanover, Virtual machine, Virtualization, Virtualization management, Virtualization strategies

A well-documented procedure for physical-to-virtual (P2V) conversions still lacks the valuable information learned by experience. In this video blog, Rick Vanover introduces the stage-configuration phase of a P2V conversion. When this phase and the other phases of a conversion are used in a procedural manner, the success rate will increase through accommodation of most scenarios that arise in virtualization environments.

[kml_flashembed movie="http://www.youtube.com/v/PPZBxfe8xKc" width="425" height="350" wmode="transparent" /]


August 19, 2008  8:39 AM

Is a 100% virtualized environment possible?



Posted by: Eric Siebert
Eric Siebert, Virtualization, Virtualization strategies

Organizations that have virtualized their environments often virtualize only a portion of their servers, leaving some servers running on standalone physical hardware. Is a 100% virtualized environment possible? Certainly it is, because almost all workloads can be virtualized, but there are some arguments against completely virtualizing your environment.

I recently wrote about an experience I had with a complete data center power failure. The problems resulted from all the DNS servers being virtualized: until the host servers and storage-area network were back online, no DNS was available, which made it difficult for anything in the environment to function properly. Having a DNS server and Active Directory domain controller running on a physical server would have been a great benefit in that situation.

Additionally, many organizations are leery of having too many servers virtualized because they want to avoid the risk of a single host outage causing many virtual machines to go down at once. This risk can be partially offset by some of the high availability features that are available in many of the virtualization products. In addition, if a virtual environment relies on a single shared storage device and that device has a major failure, it can take down all the virtual machines that reside on that storage. This risk can also be partially offset by having a well architected SAN environment with multiple switches and host bus adapters so multiple paths to the SAN are available.

Another reason you may not want to virtualize your whole environment is that many software vendors do not fully support running their applications on virtual machines and subsequently may require you to reproduce a problem on a physical system. Because of this, it is a good idea to have a few physical servers running applications that may be affected by these policies. For example, if you have multiple Oracle, SQL or Active Directory servers, consider leaving one or two of them on physical hardware.

Finally, you may consider leaving a few physical servers for applications that have non-virtualization friendly licensing and hardware requirements that can be difficult to virtualize (licensing dongles, fax boards, etc.) or for servers that have extremely high I/O requirements.

So is a 100% virtualized environment possible? Yes it is, but is it advisable? In most cases it is not recommended. The cost savings typically seen from implementing virtualization will increase the more an environment is virtualized, but you may want to stop at around 90% and leave a few physical servers for the reasons previously mentioned.
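That trade-off can be sketched with a back-of-the-envelope model. All costs and ratios below are hypothetical, not from this post; the point is that savings keep growing with the fraction virtualized, so stopping near 90% is a risk decision rather than a cost one:

```python
# Hypothetical cost model: each virtualized server avoids its standalone
# annual cost, but every consolidation host adds its own annual cost.
def annual_savings(servers, fraction, consolidation_ratio,
                   physical_cost, host_cost):
    virtualized = int(servers * fraction)
    hosts = -(-virtualized // consolidation_ratio)  # ceiling division
    return virtualized * physical_cost - hosts * host_cost

# 100 servers, 8:1 consolidation, $2,000/yr per standalone box,
# $6,000/yr per virtualization host (made-up figures):
annual_savings(100, 0.5, 8, 2000, 6000)  # 58000
annual_savings(100, 0.9, 8, 2000, 6000)  # 108000
annual_savings(100, 1.0, 8, 2000, 6000)  # 122000
```

Under these assumed numbers the last 10% still adds savings, so the argument for holding a few servers back rests on the support, licensing and availability concerns above, not on economics.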

