The Virtualization Room


August 26, 2008  9:11 AM

VCP certification proves highly valuable

Joseph Foran Profile: Joe Foran

Salaries for the VCP (VMware Certified Professional) certification, which I have blogged about twice before, have gone through the roof. I have never seen a jump like this in all my years in IT. See for yourself.

[VCP salary chart, 08-22-08, via Indeed.com]

On an interesting note, the graph comes from a new feature on Indeed.com's site for bloggers. WordPress doesn't like the embedded display, so I still did my usual upload-a-screencap-to-Photobucket routine, but tried to respect Indeed.com's format.

What do I think about this data? I'm betting that lots of people with advanced degrees, extensive experience and/or other high-end certs have added the VCP to their career-building portfolios, along with more top-level managers and executives. Also of note are the missing Red Hat Certified Architect numbers. The VCP trend does not seem to have continued across the pond, where the prevailing charts show a steady value for the VCP.

The lesson here is to watch the figures, but don't be surprised when wild, near-impossible salaries turn out to be a case of too-good-to-be-true.

August 25, 2008  10:09 AM

Proxmox PVE offers VM mash-up for the virtualization market

Joseph Foran Profile: Joe Foran

The mashup market is about more than rich-media web apps: Proxmox has taken the concept of the mashup to the virtualization market. The result is a mash-up of two virtualization platforms, OpenVZ and KVM, combined into a delightful new offering that will run just about any operating system as a virtual machine. Virtual appliances are also included in the mashup.

Like VMware VI, VirtualIron and XenServer, Proxmox Virtual Environment (PVE) is a bare-metal, type-1 hypervisor that installs onto a fresh server and turns the machine into a dedicated virtualization host. It is an open source product based on open source products, making it transparent to developers, and it thereby has all the advantages and disadvantages associated with OSS development projects (I find few disadvantages myself, but I'm admittedly biased because I think the transparency of OSS is highly valuable). The goal, outlined on their vision page, is to create an enterprise-class virtualization platform that affords unparalleled flexibility (my words, not theirs).

The short-list of what PVE supports:

  • Web-based Administration via SSL
  • Bare-metal installation based on Debian Etch (64-bit)
  • Your choice of container-based (OpenVZ) or fully virtualized (KVM) virtual machines, both on the same server, with Windows guests supported via AMD/Intel CPU virtualization extensions and KVM's built-in ability to use them
  • Built-in backup and restore
  • Live and offline migration

This is one of those will-be-great-if-it-lives products. It has a lot going for it, particularly the ability to manage multiple types of virtualization platform strategies. That said, there are still many drawbacks, as expected of a pre-1.0 release (currently at 0.9). As such, it's got its share of issues to work through before it's really ready.

PVE currently doesn't seem to have much in the way of granular user management for the web interface (though the forums do state that it is on the roadmap). Physical-to-virtual (P2V) capabilities are still a little raw, without any in-house tools to handle migrations. The wiki for PVE does explain how to use existing tools such as vzdump, VMware Converter, etc. to migrate servers into formats that PVE can handle. There's nothing in the way of DRS/HA equivalents, and while PVE does have tools for live migration, they don't work due to a "kernel error," according to the wiki. KVM backup is limited to using LVM2, whereas OpenVZ has that option as well as vzdump, though a vzdump-style tool for KVM is on the roadmap for 1.0. Guest SMP is described as unstable as well.
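
As a rough sketch of the backup tooling mentioned above (the container ID, paths and exact flags here are my illustrative assumptions; the PVE wiki documents the real syntax), a backup-and-restore round trip might look like:

# assumption: container 101 exists and its storage sits on LVM2,
# so vzdump can take a consistent snapshot while the container runs
vzdump --snapshot --compress --dumpdir /backup 101

# restore the resulting archive as a new container, ID 102
vzrestore /backup/vzdump-101.tgz 102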

The cluster management feature looks a little like this image, from their website:

[Screenshot: PVE cluster management web interface]

The more day-to-day function of creating a new virtual machine looks like this:

[Screenshot: PVE virtual machine creation form]
Because it’s a Debian operating system, storage choices are limited only by the availability of drivers for the hardware platform.  iSCSI, NFS, and other remote storage file systems can be mounted and used to store virtual machines.
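
For example, attaching NFS-backed VM storage is ordinary Debian administration; here is a minimal sketch (the server address, export path and mount point are hypothetical placeholders):

# install the NFS client tools, then mount a remote export for VM storage
apt-get install nfs-common
mkdir -p /mnt/vmstore
mount -t nfs 192.168.1.50:/export/vmstore /mnt/vmstore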

The product looks like it will shake up some thinking in the virtualization platform market and may get people thinking more about what it means to be limited to only one type of virtualization option. When it hits that magic 1.0 mark and most of the major flaws above are fixed, this product could really shine. Overall, I rate this product a seven as a poker for stirring things up, down from nine because it's still cooking.


August 25, 2008  9:58 AM

VDI selection revolves heavily around the endpoint device

Rick Vanover Profile: Rick Vanover

Selecting a VDI environment is a daunting process. As I begin to evaluate technologies for VDI design and implementation for an upcoming project, the first step is often to identify the requirements from the end-user perspective.

Administrators frequently get so wrapped up in the server side of a technology that the experience end of the solution may be overlooked. Two specific pieces of functionality, screen resolution and dual-monitor support, can be incredibly important to the endpoint experience and may make an implementation fail if it does not meet the requirements of all applications involved. By comparison, topics such as USB device support, printing and sound are more of a policy decision than a device-selection decision.

We strategically arrive at determining device capabilities to match the requirements; at that point, we can 'back into' various backend VDI solutions. Take, for example, the Sun Ray 2FS Virtual Display Client, which offers two DVI-I (Digital Visual Interface) ports that can drive one monitor at 1920 x 1200 or two monitors at a combined 3840 x 1200. Among VDI devices, however, the standard offering is 1600 x 1200, which will satisfy most situations. Dual DVI-I monitors may seem like overkill for a VDI-based thin client, but for many systems that perform archival by scanning documents, the high resolution and dual-monitor functionality may be a requirement. Just ask any accounts payable clerk.

Some of this functionality may be circumvented by the use of existing devices, specifically with VDI solutions that allow a Windows or other operating-system PC to connect to the VDI broker. In this regard, if there are a very limited number of systems whose requirements cannot be accommodated with standard endpoint devices, a typical full-install PC can provide the VDI connection. While not ideal, it is a decent stop-gap measure and a way to make use of existing equipment.


August 25, 2008  9:09 AM

AMD Opteron powering top servers on VMmark list

Bridget Botelho Profile: Bridget Botelho

AMD's quad-core Opteron processors powered the top three performing servers on VMware Inc.'s VMmark virtualization benchmark for 16-core x86 servers.

Hewlett-Packard's (HP) ProLiant DL585 G5 with AMD's Opteron processor is the top performer for 16-core systems on VMmark's list. The Opteron is also used in HP's 32-core, eight-socket ProLiant DL785 on VMmark's list, which achieved a score of 21.88@16 tiles, or 96 virtual machines (each VMmark tile comprises six workload VMs).

These results from AMD based systems aren’t surprising, since AMD Opteron’s virtualization assist technology has received high praise from VMware. One VMware engineer called AMD’s Nested Page Table (NPT) technology the answer to virtualizing large workloads.

Rapid Virtualization Indexing (RVI), a feature of AMD's third-generation Opteron that includes NPT, is designed to offer near-native performance for virtualized applications and allows fast switching between virtual machines (VMs).

Intel Corp. has announced a technology similar to NPT, called Extended Page Tables (EPT), which will be available in its next-generation eight-core microarchitecture, code-named "Nehalem." Nehalem is slated for production later this year.


August 20, 2008  3:21 PM

VMware helps hospital reduce data center power, increase performance

Bridget Botelho Profile: Bridget Botelho

Palo Alto, Calif.-based VMware, Inc. announced that Rochester General Hospital (RGH) deployed VMware Infrastructure 3 to scale and manage its growing IT environment.

RGH, a community-based teaching hospital, has an IT infrastructure supporting business applications and patient-critical systems as well as massive amounts of data storage that is growing exponentially.

“We started using virtualization to address power and space issues in our main datacenter. We quickly adopted VMWare ESX as our standard platform for new projects and consolidated existing servers,” Tom Gibaud, an IT manager at RGH, said in an email. “It allowed us to continue business as usual and we experienced no delay in completing projects on time. Today we are way below our power threshold and gained about 50% of our floor space even after we doubled the amount of Windows Servers.”

According to VMware's statement, VMware Infrastructure has improved application performance and availability and strengthened the hospital's disaster-recovery capabilities. "Before going virtual, our datacenter power supply was maxed out. We couldn't plug in a toaster. Now, with less hardware, we have capacity to handle whatever comes our way," Gibaud said.

The hospital now has 50 virtual machine hosts running 400 guests with a mix of large and small workloads, including terminal services, Gibaud said. In all, RGH has virtualized about 95% of its Windows-based applications, including Exchange, SQL Server, the ClinicalCare portal that physicians and nurses use to access electronic medical records, and RGH's billing system.

In the initial phase of the virtualization deployment, Gibaud said the hospital used IBM BladeCenter servers (HS20, HS21, LS20). "This allowed us to condense many servers in a small amount of space. With VMware and IBM BladeCenters we were able to consolidate over 150 servers into one rack," he said. "Today we use IBM x3850 and HP DL580 G5 to handle larger server workloads."

In addition, the hospital is running 200 Windows XP desktops using VMware's Virtual Desktop Infrastructure on just two IBM x3850s.


August 19, 2008  1:16 PM

Making a P2V conversion: Stage-phase configuration

Rick Vanover Profile: Rick Vanover

A well-documented procedure for physical-to-virtual (P2V) conversions still lacks the valuable information that comes with experience. In this video blog, Rick Vanover introduces the stage-configuration phase of a P2V conversion. When this phase and the other phases of a conversion are followed in a procedural manner, the success rate increases through accommodation of most scenarios that arise in virtualization environments.


August 19, 2008  8:39 AM

Is a 100% virtualized environment possible?

Eric Siebert Profile: Eric Siebert

Organizations that have virtualized their environments often virtualize only a portion of their servers, leaving some servers running on standalone physical hardware. Is a 100% virtualized environment possible? Certainly it is, because almost all workloads can be virtualized, but there are some arguments against completely virtualizing your environment.

I recently wrote about an experience I had with a complete data center power failure. The problems resulted from all the DNS servers being virtualized; until the host servers and storage-area network were back online, no DNS was available, which made it difficult for anything in the environment to function properly. Having a DNS server and Active Directory domain controller running on a physical server would have been a great benefit in that situation.

Additionally, many organizations are leery of having too many servers virtualized because they want to avoid the risk of a single host outage causing many virtual machines to go down at once. This risk can be partially offset by some of the high availability features that are available in many of the virtualization products. In addition, if a virtual environment relies on a single shared storage device and that device has a major failure, it can take down all the virtual machines that reside on that storage. This risk can also be partially offset by having a well architected SAN environment with multiple switches and host bus adapters so multiple paths to the SAN are available.

Another reason you may not want to virtualize your whole environment is that many software vendors do not fully support running their applications on virtual machines and may require you to reproduce a problem on a physical system. Because of this, it is a good idea to keep a few physical servers running applications that may be affected by these policies. For example, if you have multiple Oracle, SQL or Active Directory servers, consider leaving one or two of them on physical hardware.

Finally, you may consider leaving a few physical servers for applications that have non-virtualization friendly licensing and hardware requirements that can be difficult to virtualize (licensing dongles, fax boards, etc.) or for servers that have extremely high I/O requirements.

So is a 100% virtualized environment possible? Yes, but is it advisable? In most cases it is not recommended. The cost savings that typically come with virtualization increase the more an environment is virtualized, but you may want to stop at around 90% and leave a few physical servers for the reasons mentioned previously.


August 15, 2008  8:19 AM

News of the week: VMware’s ESX 3.5 bug causes VM failure

Bridget Botelho Profile: Bridget Botelho

This week, the biggest news item on SearchServerVirtualization.com was the havoc caused by VMware Inc.'s ESX 3.5 Update 2 bug, which kept virtual machines (VMs) from booting up and live migrating (VMotion) on Aug. 12.

Users posted their fury on IT forums like Ars Technica's The Server Room. One user on the forum summed up the situation perfectly: "This was a very big deal, make no excuses for VMware. It certainly had potential to completely disrupt a lot of customers. … At most it should have disabled VMotion and other extras but not starting a VM."

On the afternoon of Aug. 12, VMware issued an Express Patch on its Knowledgebase site and warned users not to install ESX 3.5 Update 2 or ESXi 3.5 Update 2 if it had been downloaded from VMware's website or elsewhere prior to Aug. 12, 2008.

VMware's new CEO, Paul Maritz, issued an apology letter the day of the bug explaining the issue.

When the time clock in a server running ESX 3.5 or ESXi 3.5 Update 2 hits 12:00AM on August 12th, 2008, the released code causes the product license to expire. The problem has also occurred with a recent patch to ESX 3.5 or ESXi 3.5 Update 2. When an ESX or ESXi 3.5 server thinks its license has expired, the following can happen:

  • Virtual machines that are powered off cannot be turned on;
  • Virtual machines that have been suspended fail to leave suspend mode; and,
  • Virtual machines cannot be migrated using VMotion.

The issue was caused by a piece of code that was mistakenly left enabled for the final release of Update 2. This piece of code was left over from the pre-release versions of Update 2 and was designed to ensure that customers are running on the supported generally available version of Update 2.

… I am sure you’re wondering how this could happen. We failed in two areas:

  • Not disabling the code in the final release of Update 2; and
  • Not catching it in our quality assurance process.

We are doing everything in our power to make sure this doesn’t happen again. VMware prides itself on the quality and reliability of our products, and this incident has prompted a thorough self-examination of how we create and deliver products to our customers. We have kicked off a comprehensive, in-depth review of our QA and release processes, and will quickly make the needed changes.

I want to apologize for the disruption and difficulty this issue may have caused to our customers and our partners. Your confidence in VMware is extremely important to us, and we are committed to restoring that confidence fully and quickly.

It remains to be seen whether Maritz's apology is enough to satisfy frustrated users. A major issue like this may prompt users to try other virtualization products. For instance, the day of the incident, some users were singing the praises of Microsoft Hyper-V on technical forums.

Either way, having to deal with this issue after only a month in charge is a real baptism by fire for Maritz.

And I imagine that VMware co-founder and ex-CEO Diane Greene, who was ousted by VMware’s board of directors July 8, might feel at least somewhat vindicated.


August 14, 2008  11:33 AM

Powerful scripting options with the VBoxManage command

Rick Vanover Profile: Rick Vanover

Sun xVM VirtualBox offers a powerful command-line interface (CLI) component, VBoxManage, which can perform most functions within VirtualBox. Having a robust CLI is key to automation and scripting, even in a workstation virtualization product. In my continued coverage of VirtualBox, this blog will highlight some of the parameters of VBoxManage with some examples and areas where this command can be of use.

Modifyvm parameter
Probably the most versatile VBoxManage command is modifyvm. This parameter can set memory, operating system type, PAE settings, monitor quantity and hardware inventory, as well as snapshot configuration. Here is a sample command that sets the memory amount, makes the DVD drive the first boot device and disables USB support:

vboxmanage modifyvm XP-TestSystem -memory 512 -boot1 dvd -usb off

The modifyvm parameter also has extended options such as BIOS display time, network interface driver type, host network interface assigned to the VM and enabling or disabling of the clipboard. Overall, modifyvm has over 50 parameters for an individual VM.
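
A few of those extended options in action, as a hedged sketch (the flag spellings below follow my reading of the manual for this release; verify them against chapter 8 for your version):

# first NIC behind NAT, two-way clipboard, no BIOS logo delay
vboxmanage modifyvm XP-TestSystem -nic1 nat
vboxmanage modifyvm XP-TestSystem -clipboard bidirectional
vboxmanage modifyvm XP-TestSystem -bioslogodisplaytime 0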

Controlvm parameter
From an automation standpoint, the startvm parameter launches a VM at host system boot, while controlvm manages the VM once it is running; controlvm can also be used to attach USB or DVD devices. The entries below disconnect the media of the two virtual Ethernet adapters and then reset the power state:

vboxmanage controlvm XP-TestSystem setlinkstate1 off
vboxmanage controlvm XP-TestSystem setlinkstate2 off
vboxmanage controlvm XP-TestSystem reset

Note that when multiple devices of the same type are present, a separate invocation is required for each operation, as with disabling the two network interfaces above.
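
Attaching media on the fly follows the same one-operation-per-invocation pattern; the ISO path in this sketch is a hypothetical placeholder:

# mount an ISO in the running VM's DVD drive, then detach it again
vboxmanage controlvm XP-TestSystem dvdattach /isos/drivers.iso
vboxmanage controlvm XP-TestSystem dvdattach none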

Snapshot parameter
The snapshot parameter can be used to manage all elements of a snapshot. In the case of a frequently used test system, it may be a good idea to automate the change back to the base snapshot. The following command would revert a VM to the existing snapshot:

vboxmanage snapshot XP-TestSystem discardcurrent -state

This command cannot be executed while the VM is running, but leading with a controlvm poweroff gets the system to a state where running the snapshot parameter will do the trick.
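
Strung together, the revert-to-base automation might look like the following sketch (the VM name is illustrative; the ordering is the point):

# power down, discard the current state back to the snapshot, restart clean
vboxmanage controlvm XP-TestSystem poweroff
vboxmanage snapshot XP-TestSystem discardcurrent -state
vboxmanage startvm XP-TestSystem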

Powerful Stuff
This is a very quick sample of what is possible with the VBoxManage command. I don't know of anything that can be done in the interface that cannot be done with this command. VBoxManage commands also work the same way across the different platforms that xVM VirtualBox runs on. This flexibility makes for a compelling automated-deployment solution at the zero cost of xVM VirtualBox. The online user manual, available for download from the VirtualBox website, dedicates its entire chapter 8 to the VBoxManage command.


August 14, 2008  11:27 AM

Is VMware’s apology enough?

Eric Siebert Profile: Eric Siebert

In the aftermath of the infamous bug in the latest release of VMware ESX, VMware CEO Paul Maritz has released a letter that apologizes for the incident, explains what went wrong and commits to ensuring it never happens again.

For customers who were affected by the widespread problem with ESX 3.5 Update 2, released several weeks ago, are VMware's apology and promise to improve its processes enough? Or will it leave lingering doubt in the minds of some that may inspire them to look at other virtualization products?

The letter provided an explanation of what happened:

The issue was caused by a piece of code that was mistakenly left enabled for the final release of Update 2.  This piece of code was left over from the pre-release versions of Update 2 and was designed to ensure that customers are running on the supported generally available version of Update 2.

And why it happened:

I am sure you’re wondering how this could happen.  We failed in two areas:

  • Not disabling the code in the final release of Update 2; and
  • Not catching it in our quality assurance process.

And finally what they will do to ensure it never happens again:

We are doing everything in our power to make sure this doesn’t happen again. VMware prides itself on the quality and reliability of our products, and this incident has prompted a thorough self-examination of how we create and deliver products to our customers.  We have kicked off a comprehensive, in-depth review of our QA and release processes, and will quickly make the needed changes.

Despite it all, VMware still has a great enterprise product, robust and mature, and it is still the virtualization software of choice for most Fortune 500 companies. This incident could easily have been prevented by following processes when promoting a beta build to a final build. In addition, VMware's QA processes, which are designed to ensure a quality product, also failed to detect that the time-bomb code was still present and active.

Will VMware learn from this incident? Absolutely. Sometimes it takes a big event like this to inspire changes and improvements in a company that may have been set in its ways and wasn’t paying attention to details.

One area where many users were critical was VMware's communication on the matter. The company was initially slow to issue public communications and to proactively contact customers about the issue. The thread in the VMware Technology Network (VMTN) forums that was started on this issue became the rallying point for many of the users who were experiencing problems as a result of the bug. VMware employees did provide some updates to the thread, which let users know the company was aware of the bug, but did not provide much other information until much later in the day. Another breakdown was that VMware's knowledgebase, which had information on the bug and is often the first place users go when experiencing a problem, became so overwhelmed by the number of requests that it was unavailable for over six hours.

VMware delivered the fix fairly quickly; it was available roughly 24 hours after the problem was first reported. Many users were hoping to get it quicker than that, but VMware needed time to package and test the fix before releasing it. VMware also provided good communication later in the day, with detailed updates and emails sent to customers.

So is VMware's apology enough? In my mind it is. Yes, it was an unfortunate incident that caused many customers a good deal of grief, but in the end VMware responded quickly and effectively, and the incident will serve as a lesson that the company won't soon forget, one that will help make its products and processes stronger going forward.

