The Virtualization Room

April 5, 2007  9:05 AM

Virtual Iron Offers up Performance Benchmark

cwolf Profile: cwolf

Following Simon Crosby’s release of a XenSource performance benchmark, I began to needle the folks at Virtual Iron about publishing a benchmark of their own. In short order, Chris Barclay, Virtual Iron’s Director of Product Management, sent me some numbers with his blessing to make them public.

Their benchmark ran the Windows Server 2003 OS on an Intel Xeon 2.66GHz dual-socket, dual-core server with a 1333MHz FSB and 4GB of DDR2 667MHz RAM. For the tests, 1GB of RAM was allocated to the guest OS, and the VM was connected to raw SAN storage. So the test environment, in my opinion, is very fair.
Now onto the results…



[Results table not reproduced: it listed Virtual Iron’s numbers for SPECInt 2000, netperf tcp stream send, and netperf tcp stream receive, along with Network (MB/sec), Disk (MB/sec), and Disk (Transfers/sec) figures.]

So overall, the Windows Server 2003 VM performed at or below a 3% performance degradation. The Virtual Iron tests followed the same benchmark pattern used by VMware. If you would like to see the VMware results, and also get more detail on what each individual benchmark is testing, take a look at VMware’s document “A Performance Comparison of Hypervisors.” Keep in mind that the Xen performance numbers in the VMware paper are under significant debate, with most of us (myself included) seeing Simon Crosby’s Xen benchmark numbers as the more accurate ones.
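For anyone who wants to reproduce the pattern, the two netperf rows boil down to a pair of command lines. A sketch only – the target hostname and the 60-second duration are my assumptions, not Virtual Iron’s, and a netserver daemon must already be running on the guest:

```shell
# Hypothetical guest hostname; adjust for your environment.
TARGET_VM=${TARGET_VM:-vm-w2k3}
DURATION=60   # seconds per run

# Build the command line for one test: TCP_STREAM measures send
# throughput toward the VM, TCP_MAERTS the receive direction.
netperf_cmd() {
    echo "netperf -H $TARGET_VM -l $DURATION -t $1"
}

netperf_cmd TCP_STREAM
netperf_cmd TCP_MAERTS
```

Printing the command lines (rather than running them) keeps the sketch usable on machines without netperf installed; pipe the output to sh on a host that has it.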

Throughput degradation has been a major consideration in many of the virtualization projects that I have been involved with, so having some hard numbers for performance comparison between VMware, XenSource, and Virtual Iron is extremely helpful. I’m hopeful that we’ll see a similar benchmark from Microsoft once the Windows Server Virtualization (WSV) service is available in Longhorn Server, or, in the meantime, for Microsoft Virtual Server 2005 R2 SP1. If not, I’ll churn WSV or Virtual Server through the VMware benchmarks and post some numbers myself.

~Chris Wolf

April 4, 2007  8:32 AM

Getting ESX 3.0.0 and 3.0.1 updates

Kutz Profile: Akutz

It can be a real hassle to download updates for ESX 3.0.0 and 3.0.1. Wouldn’t it be nice if there were some program that could take care of this for you? Now there is! Introducing my patented (not really) chk4updates script. Using arcane majiks and other-worldly sorceries, I have divined through intense enchanting a process by which you can grab all available updates for ESX versions 3.0.0 and 3.0.1. I have included the script inline below, and you can also grab it from


#!/bin/bash

# debug level. set to '1' to have extra debugging
# information sent to stdout.
DEBUG=0

# the version of esx server to download patches for
# valid values are '3.0.0' and '3.0.1'
ESX_VERSION="3.0.1"

# set to '1' to download tarballs
# set to '0' to just have a message printed to stdout
DOWNLOAD_TARBALLS=1

# the directory to place the downloaded tarballs in
OUTDIR="./esxpatches"
mkdir -p "$OUTDIR"

# the URLs of VMware's patch index page and of the individual
# patch pages (placeholders -- restore the original links here)
PATCH_INDEX_URL=""
PATCH_BASE_URL=""

# get a list of the already downloaded patch files
DOWNLOADED_FILES=`ls "$OUTDIR"`

if [ "$DEBUG" == "1" ]
then
    echo "already downloaded: $DOWNLOADED_FILES"
fi

# get a list of the updates
wget --quiet "$PATCH_INDEX_URL" -O vi3_patches.html

# parse the updates list for the actual links
PATCH_LINKS=`cat vi3_patches.html | egrep -io "[[:digit:]]*-patch\.html"`

# remove vi3_patches.html
rm -f vi3_patches.html

# iterate through the patch links
for LINK in $PATCH_LINKS
do
    # get the patch information
    wget -q "$PATCH_BASE_URL/$LINK"

    # parse the actual file name from the patch link
    LINK_FILE=`echo $LINK | sed "s/\(.*\)\(\/\)\([^\/]*\.html\)/\3/g"`

    # determine if this is an esx 3.0.0 or 3.0.1 patch
    egrep -ioq "Download Patch ESX-[[:digit:]]* for VMware ESX Server 3.0.0" $LINK_FILE
    IS_300=$?
    egrep -ioq "Download Patch ESX-[[:digit:]]* for VMware ESX Server 3.0.1" $LINK_FILE
    IS_301=$?

    GET_TARBALL=0

    # 3.0.0
    if [ "$IS_300" == "0" ] && [ "$ESX_VERSION" == "3.0.0" ]
    then
        GET_TARBALL=1
    fi

    # 3.0.1
    if [ "$IS_301" == "0" ] && [ "$ESX_VERSION" == "3.0.1" ]
    then
        GET_TARBALL=1
    fi

    if [ "$DEBUG" == "1" ]
    then
        echo "$LINK_FILE: is_300=$IS_300 is_301=$IS_301 get_tarball=$GET_TARBALL"
    fi

    if [ "$GET_TARBALL" == "1" ]
    then
        # get the link to the tarball we want to download
        TARBALL_LINK=`cat $LINK_FILE | egrep -io "http.*gz"`

        # strip the extraneous stuff off the link
        # so that only the name of the tarball remains
        TARBALL_LINK_FILE=`echo $TARBALL_LINK | sed "s/\(.*\)\(\/\)\([^\/]*gz\)/\3/g"`

        # parse the patch release date from the patch html
        PATCH_RELEASE_DATE=`cat $LINK_FILE | egrep -io "<b>released .*<\/b>" |
            sed "s/\(<b>released \)\(.*\)\(<\/b>\)/\2/gi"`

        # get the name of the directory that is
        # created once the patch is decompressed
        UNPACKED_DIR=`echo $TARBALL_LINK_FILE | sed "s/\(.*\)\(\.[^.]*$\)/\1/gi"`

        # prepend the patch release date to the decompressed directory
        DATED_DIR="$PATCH_RELEASE_DATE-$UNPACKED_DIR"

        # determine if the patch has already been downloaded by
        # comparing the name of the patch to the files that
        # exist in the output directory
        echo "$DOWNLOADED_FILES" | grep -q "$DATED_DIR"
        ALREADY_DOWNLOADED=$?

        if [ "$DEBUG" == "1" ]
        then
            echo "tarball=$TARBALL_LINK_FILE released=$PATCH_RELEASE_DATE"
        fi

        if [ "$ALREADY_DOWNLOADED" != "0" ] && [ "$TARBALL_LINK" != "" ]
        then
            if [ "$DOWNLOAD_TARBALLS" == "1" ]
            then
                echo "fetching $TARBALL_LINK_FILE ..."

                # fetch the patch
                wget -q $TARBALL_LINK

                # move the patch to the output directory
                mv $TARBALL_LINK_FILE "$OUTDIR"

                # change the working directory to the output directory
                cd "$OUTDIR"

                # decompress the patch
                tar -xzf $TARBALL_LINK_FILE

                # rename the decompressed patch dir to include the
                # patch release date
                mv "$UNPACKED_DIR" "$DATED_DIR"

                # remove the patch tarball
                rm -f $TARBALL_LINK_FILE

                # change into the previous working directory
                cd -
            else
                echo "new patch available: $TARBALL_LINK_FILE"
            fi
        fi
    fi

    # remove the patch information
    rm -f $LINK_FILE
done

April 3, 2007  12:38 PM

Virtual machine backup licensing challenges

Ryan Shopp Profile: Ryan Shopp

Recently, I chatted with a sys admin about his experience with VM migrations and management challenges. His primary goal right now is creating backup copies of his VMs as part of his disaster recovery plan. The biggest hurdle? Licensing. Or rather, the cost of licensing, because he wants to avoid the cost of treating each VM like a physical box.

Right now, he’s backing up the VMs that don’t need to be online 24/7 by shutting down his guest OSes, backing up the virtual hard disks using Backup Exec, and restarting. Backing up in this fashion costs less, because it only takes one Backup Exec license to back up the files on the host.
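His routine translates into a few lines of shell against ESX 3.x’s vmware-cmd. A sketch only – the paths and backup directory below are placeholders, and the script just prints the commands it would run unless you clear RUN on a real host:

```shell
RUN=${RUN:-echo}                       # dry-run by default; set RUN= to execute
BACKUP_DIR=${BACKUP_DIR:-/vmfs/volumes/backup}   # hypothetical backup volume

backup_vm() {
    VMX=$1
    $RUN vmware-cmd "$VMX" stop trysoft          # shut the guest down cleanly
    $RUN tar -czf "$BACKUP_DIR/$(basename "$VMX" .vmx).tgz" \
        "$(dirname "$VMX")"                      # archive disks plus config
    $RUN vmware-cmd "$VMX" start                 # bring the VM back online
}

backup_vm /vmfs/volumes/storage1/testvm/testvm.vmx
```

The host-side archive step is what lets a single backup agent license cover every VM on the box.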

But for his mission critical VMs, he’s stuck between a rock and a hard place. Take the VM offline, install the Backup Exec Client and pay the license fee for *each* VM (which could get pretty expensive pretty quickly), or… don’t back up.

He’s starting to research snapshots as a lower-cost solution for his mission-critical VMs, but when I last checked in hadn’t gotten too far in the process. Any suggestions? Leave a comment and I’ll pass your suggestions along.
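On the snapshot front, ESX 3.x’s vmware-cmd can take and remove snapshots of a running VM from the service console – one possible building block for the lower-cost approach he’s researching. A sketch; the VM path is a placeholder and the commands are only echoed unless you clear RUN:

```shell
RUN=${RUN:-echo}   # dry-run by default; set RUN= on a real ESX host
VMX=/vmfs/volumes/storage1/mailsrv/mailsrv.vmx   # hypothetical VM

# args: snapshot name, description, quiesce filesystem (1/0),
# include memory state (1/0)
$RUN vmware-cmd "$VMX" createsnapshot nightly pre-backup 1 0

# ...copy the base disks while the snapshot absorbs writes, then:
$RUN vmware-cmd "$VMX" removesnapshots
```

The VM stays online throughout; the trade-off is that the copy captures a crash-consistent (quiesced) image rather than an agent-verified backup.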

April 2, 2007  9:33 AM

A little off topic – Parallels Desktop for Mac “Coherence”

Joseph Foran Profile: Joe Foran

Ok, this blog is normally about server virtualization, but I thought I might digress into a slightly off-topic realm today to bring some opinions on a product I’ve been using for some time, called Parallels Desktop. This is the program that gets the little text box at the bottom of the “Run Windows and Mac” commercials. You know the ones: John Hodgman, the many-titled commentator/reporter from The Daily Show, and Justin Long, the guy from about a million small roles. I’ve been using it for some time, having the need to run such things as Visio and Access, as well as my custom MMC for all our Windows server management, and VirtualCenter to manage our VMware environment from my Mac. Parallels is one of the two ways to get Windows working on your Mac, and until the recent release of VMware’s Fusion beta, the only virtualized way, and I use it all the time. I don’t play the upgrade game with applications that I need on a daily basis, so I’ve been running a slightly older build for a while, at least until my curiosity about Coherence got the better of me.

So, I fire up my XP machine after the upgrade and click the button for Coherence. What do I see? A Control-Alt-Delete box in the middle of my Mac. Hilarity. True hilarity. Function-Control-Alt-Delete and I’m in. Yuck. My first non-enjoyable part of Coherence… let’s hope that it’s my last. The Windows taskbar has just invaded my Mac desktop. Start button, quick launch, and system tray… my favorites (not!). Easy to dispel… just a simple click in the options box and they’re gone. Getting back to the Start button is easy – just a double-click on the Parallels icon and I have my menu. Some re-arrangement of what I have pinned there vs. what I used to keep in my quick launch tray, and I’m ready to roll.

Windows command prompt. Adobe Designer. IE7. All working. Everything is working. My shared folders between the virtual machine and the physical Mac mean I can move docs back and forth. What about drag and drop? Works like a charm. It even pops up a box that lets me choose what access I give Parallels. The whole kit and kaboodle looks like this. You can see that the app creates icons in my dock, just as if they were Mac programs. You can see a couple of Mac-native apps running, like iChat, Grab, Thunderbird, and Firefox. I fired up my VMware Server management console to connect to the test lab, and sure enough, it worked great. It feels native. It’s seamless once I’m past that warning box. Its simplicity is brilliant.

What it means for the desktop is obvious. It means no more worrying about running legacy apps. Got a must-use app for Win9x? Need to run a Windows app on a Linux desktop, and WINE can’t run it? All possible… not with Parallels specifically, but with the technology in general. It won’t be long before other products are out there that do these things, that take advantage of this huge conceptual leap in client-side application virtualization.

Imagine a Linux client, talking to a Citrix server that is hosting Mac and Windows apps, and sharing them via http. Wow.

Off-topic a bit? Yes. But keep this kind of converged (Coherent?) virtualization approach in mind as the line between operating systems continues to blur in the server market. Will we one day see one server serving applications to end-users, customers, etc. from a virtualized environment similar to Coherence? I wonder what this means for streamed applications, like those pushed out via Citrix. Will Citrix take advantage of this kind of technology in its own app virtualization products? One can only hope. What will this do for sandboxed applications? It’s a bit off in the future, but expect this sort of innovation to make its way upstream in any number of ways.

March 30, 2007  5:10 PM

Steps along the path of enlightenment

Shayna Garlick Profile: Shayna Garlick

It’s great to see VMware finally embrace Paravirtualization. As a result of a tremendous community effort to develop a common interface between Xen and VMware, that benefited from the collaboration of VMware, IBM, Red Hat, Novell, XenSource, Intel and many contributors, the first common API between the two hypervisors will appear in Linux kernel 2.6.20.  It’s a pity VMware’s PR didn’t acknowledge the community contribution though…

Here’s what’s going on:  The issue at hand is Paravirtualization – a technique of modifying an operating system so it will run optimally on a hypervisor.    The best known example of a paravirtualizing hypervisor is the Xen hypervisor, but the concept originates from IBM mainframe OSes from the late 70’s and has been widely used on vertically integrated (hardware plus software) systems for some time.  Paravirtualization was first introduced to the x86 architecture by the Xen project instead of the binary patching technique used by VMware, because (as VMware recently acknowledged) it offers significantly improved performance for virtualized guests.

But Paravirtualization has a bit of a downside – it requires modifications to the guest operating system to enable it to co-operate with the hypervisor.  VMware gets around this today using binary patching, which modifies the guest “on the fly” by rewriting the code. Intel VT and AMD-V help a lot- but not with I/O.  The performance benefits of paravirtualization have led all x86 OS vendors to adopt paravirtualization for their next major OS release (though Microsoft calls it “enlightenment”).  Xen-style paravirtualization also allows OS vendors to ship the hypervisor with the OS – something VMware understandably isn’t that keen on.  In the case of Linux and Solaris this is achieved through the inclusion of the Xen hypervisor, and in the case of Microsoft the forthcoming enlightened Longhorn Server OS will be augmented at some point with the Windows Hypervisor, which is architecturally very similar to Xen.

Today many distros are delivering Xen as an embedded hypervisor. The Xen hypercall API paravirtualization hooks are added by the vendor to their kernel once they select a particular version from  This is tedious/painful, and the obvious right way to do this is to have the hooks included and maintained by  The Xen project was happily working away (albeit rather slowly) to get the Xen hypercall API upstreamed, when VMware introduced VMI at OLS in 2005. VMI is a lower-level interface than the Xen hypercall API. It’s much more suitable to a binary-rewriting hypervisor like VMware’s. But it deserved serious consideration because it offered a useful new feature – the same kernel could run native and virtualized. But VMI is closed source – an ABI, not an API – which is a serious problem for many in the open source community. Everyone agreed that having a single interface for multiple hypervisors would be preferable to having many. So, at the Ottawa Linux Symposium in 2006 the Xen project began to work with VMware to develop a common set of kernel hooks that could accommodate the VMI ABI and the open source Xen hypercall API. Since then, there has been a very positive effort on all sides, with IBM, HP, Red Hat, Novell, and many other core developers playing a key role in getting the work done.

So, what’s in 2.6.20 is a common API called paravirt_ops, developed collaboratively by a group of contributors, and the first implementation of the VMware VMI interface into paravirt_ops comes in 2.6.21.  The Xen interface into paravirt_ops should follow shortly, likely in the 2.6.22 time frame.  The Xen API is more extensive than VMI, and the work is taking a bit longer to get done.   Once this set of changes is complete, future Linux kernels will have the paravirtualization hooks built in, which will dramatically simplify the kernel development processes of the distros.

The bottom line: Future Linux kernels will have a common hypervisor interface called paravirt_ops that will allow Linux to run on either Xen or VMware with high performance.   Through XenSource’s relationship with Microsoft, it’s reasonable to expect that these Linux kernels will have the ability to run as first-class “enlightened” guests on the future Windows Hypervisor.  Of course, all of this is only relevant to the market when the next major enterprise Linux distributions take new kernels to market that include paravirt_ops, but overall it is good to see harmony emerging in this particular piece of the virtualization landscape.

March 26, 2007  3:34 PM

P2V wins, losses: VMware Converter

Jan Stafford Profile: Jan Stafford

The road from physical to virtual servers isn’t a freeway… yet. There are some potholes that hold up P2V migrations. Here are a couple of views from those who’ve taken a trip with VMware Converter.

Language support issues on VMware Converter caused several P2V mishaps for Robert Sieber of SHD System-Haus-Dresden GmbH in Dresden, Germany. Responding to my blog entry on physical-to-virtual (P2V) migration mishaps, he wrote:

“Mainly all of our tries to migrate from physical to virtual or virtual to virtual failing at 97%. It looks like if there is an issue with the language of the underlying OS. We used German OSes for installing VMware converter and now we are trying to use only English ones. Since we switched to English success rate is somewhat better.

“I really hope that the people who developed ESX server are much better than the one who developed Converter and Capacity Planner.”

Blogger Scott Lowe had mostly good experiences with VMware Converter. Check out his trip through online and CD-boot experiments on his blog. Lowe liked VMware’s network throughput of about 8-9GB per hour and its ability to import directly to VMFS on an ESX Server farm with no need to use a helper VM or vmkfstools. On the other hand, he had trouble logging in to VirtualCenter and had to connect to the back-end ESX server instead. Booting up seemed to take more time on VMware Converter than on VMware’s older tool, P2V Assistant; but, he says, “this is a very subjective assessment.”
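For context, the manual route that Converter renders unnecessary is a vmkfstools import on the ESX service console. A hedged one-liner – the paths are placeholders, and the command is only echoed unless you clear RUN on a real host:

```shell
RUN=${RUN:-echo}   # dry-run by default; set RUN= on a real ESX host

# import/clone a source disk into a VMFS volume
$RUN vmkfstools -i /vmfs/volumes/nfs/source.vmdk \
    /vmfs/volumes/storage1/newvm/newvm.vmdk
```

Converter folds this step (plus the guest reconfiguration around it) into one workflow, which is exactly why the helper-VM-free import impressed Lowe.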

What are your objective or subjective opinions about the state of P2V migration tools? Please comment here, or email me at

March 24, 2007  5:42 PM

Good News from Novell – Update from BrainShare 2007

cwolf Profile: cwolf

I just returned from the Novell BrainShare 2007 conference in Salt Lake City, and I have to say that I was very excited about the amount of attention that virtualization received at the conference. Here are some of the highlights:

  • Novell and Microsoft partnership – both Microsoft and Novell representatives co-presented on both virtualization and directory service integration
  • Plenty of talk on paravirtualized device drivers – with PV drivers, Microsoft Longhorn Server virtual machines will run at near-native performance on Xen running on SLES 10 SP1. With the planned official support for Windows 2000/2003 PV drivers, Xen on SLES 10 SP1 is emerging as a serious choice for virtualization.
  • Failover support for Xen on SLES 10
  • Virtualized NetWare 6.5 support in Xen
  • Cool management on the way – ZENworks Virtual Machine Management (beta coming soon) offers centralized management for VMware, Xen, and Microsoft virtualization engines

I have always been a big proponent of dynamic failover support when it comes to running virtual machines in production environments. With Heartbeat 2.0 integration, Xen VM failover support will be a part of SLES 10 SP1. I dug a little deeper into the Heartbeat integration: currently, failover will progress in the order of cluster node names. If a target node does not have the resources to support an additional VM, then the VM will fail over to the next node in the cluster (and repeat the process until it has found a suitable home). Novell engineers are working on better automation for failover, so that a VM’s first failover target will be a physical host system that has the capacity for the VM’s required resources. If you’re planning to build a 2-node Xen failover cluster, then this is really no big deal. However, if you’re planning an 8-node cluster, you’ll definitely want tighter control of the failover process. Still, this has been a big year for Xen, and I would not be surprised if Novell’s Xen failover automation is rock solid by the end of the year.
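Under the hood, Heartbeat 2 steers resources with XML location constraints in its CIB, which is presumably where smarter placement logic would land. A sketch of pinning a Xen VM resource toward a preferred host – the resource and node names here are my own hypothetical examples, not Novell’s shipped configuration:

```xml
<!-- Prefer running the (hypothetical) resource "xen-vm1" on node1;
     the score decides ordering when several nodes are eligible. -->
<rsc_location id="loc-xen-vm1" rsc="xen-vm1">
  <rule id="loc-xen-vm1-rule" score="1000">
    <expression id="loc-xen-vm1-expr"
                attribute="#uname" operation="eq" value="node1"/>
  </rule>
</rsc_location>
```

Today an admin would maintain scores like this by hand for each VM; the automation Novell is working on amounts to setting them from observed capacity instead.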

On my Novell Xen wishlist…

  1. Migration tools – I would love to have a tool that automatically converts a physical NetWare 6.5 server into a virtual machine. If Novell will not offer a migration tool, I’m sure that a vendor such as PlateSpin would love to jump in and help.
  2. Improved failover (see above)
  3. Consolidated backup support – I would love to see an answer to VMware’s VCB. Give us a well-documented backup scripting API and integrating Xen backups into enterprise backup software backup jobs will be a piece of cake.
  4. Common management APIs/metadata – It would be much easier for all of us (admins, ISVs, etc.) if there were a single common management API set for all virtualization platforms. I’m hopeful that one will be produced as a result of the Microsoft/Novell partnership. Moreover, getting all of the major virtualization vendors to agree on a common format would open plenty of new doors in terms of more robust backup methodologies, centralized management, and reporting.

I’m sure that time will tell whether or not my wishes are granted…

March 22, 2007  2:55 PM

VMTN gets a facelift.

Joseph Foran Profile: Joe Foran

From the desk of totally unimportant and frivolous items (also known as my inbox) came this timely bit of news in the VMTN Technical Newsletter:

“The new VMTN front page gives a dynamic view into the activity on the site and in the VMware community. Keep up to date on the latest in VMTN News, Virtual Appliances, Technical Resources, Discussions, Knowledge Base, Compatibility Guides, Security Alerts, VMware Blogs, and Virtualization Blogs. The page is updated throughout the day.”

Ok, so it wasn’t technical. It was informative, though, and I do like the new layout. I always had a problem with how difficult the old VMTN site was to navigate – how hard it was to go from one place to another without crossing through a third place that I didn’t really care about. Me, I like the forums and the virtual appliances, but I’m also getting to like the community-centered this-hardware-works-on-VI3 section.

March 22, 2007  2:54 PM

Benchmark Wars

Joseph Foran Profile: Joe Foran

And tonight on Friday Night Company-Fights:

The Undercard: AMD vs. Intel, Windows vs. Linux, and Pepsi vs. Coke.

The Main Event: XenSource vs. VMware.

Ok, here’s my beef. I hate any industry wherein marketing rules over substance, and in this case I’m calling out both XenSource and VMware for being pig-headed and small-minded.

XenSource – you posted test results of a BETA. A product that is, by definition, not ready for prime time. A product that still needs work. That ain’t done. That’s still raw in the center. Can I say it any other way? My constructive criticism is this – wait until the mature version is available in its production form and then do the proper benchmarking. Don’t get me wrong here, Xen is a great product, but reacting to VMware’s get-your-goat inflammatory benchmarking is ridiculous. All XenSource looks like now is another marketing-driven company that is more interested in fighting perceived “Cola Wars” than in putting out a class-A product. Benching a beta just looks cheesy, and worse, sneaky.

VMware – those were dirty benchmarks and you know it. You didn’t create a proper test between proper versions, under neutral conditions. And your EULA… only when it became obvious that the problem was public did you give XenSource permission to test your product. You need to drop that contingency against publishing benchmarks. It’s sneaky and cheesy too. Yes, you’re not the only ones to do it, but that doesn’t make it right. While you’re at it, why not post meaningful benchmarks instead of trying to raise the heat on Xen? That can only help them, your competition, get more publicity. And now that it’s out that the benchmarks weren’t fair, it makes VMware look bad.

I adore VMware. I think Xen is great too. I think in this case both of these companies stand to lose credibility, not gain market share.

March 22, 2007  2:53 PM

VMware SAN Guide

Joseph Foran Profile: Joe Foran

Recently posted to the VMware web site is this guide to configuring your SAN for maximum virtualization efficiency (wow, that almost sounded like marketspeak… help me Obi-Wan, help me!). It’s an excellent resource on both VMware architecture and SANs in general, containing a copious section on what makes a SAN a SAN. For anyone who doesn’t know, it’s a Storage Area Network – a way to take a big honkin’ system (or systems) with lots of disks and share them with your servers, which will think they are the same as physically attached disks. The guide goes on to discuss different kinds of SANs and how to configure them to work best with VMware’s various utilities. Failover is also discussed, both from the SAN and the VMware side, as are some aspects of optimization for performance. There is also the obligatory mention of NAS support (NFS 3 only) in VI3, a first for the ESX product line (VMware used to support it in earlier pre-ESX products, the descendants of GSX/Server).

Most of the reason that VMware published this document can be summed up by this quote from page 130:

“Many of the support requests that VMware receives concern performance optimization for specific applications. VMware has found that a majority of the performance problems are self-inflicted, with problems caused by misconfiguration or less-than-optimal configuration settings for the particular mix of virtual machines, post processors, and applications deployed in the environment.”

I have to admit, that had me laughing. It was the whole “blame the user” mentality that I found funny – I’m glad VMware put the paper out there, but really, they had to expect that the 80/20 rule of troubleshooting would apply to them too – 80% of all problems are human error. The guide does a good job of helping avoid those pitfalls, and goes into detail on setting up your SAN to perform well.

After perusing this document a bit, I’m going to stick with my anti-Fibre-Channel stance by saying that it’s just not worth the trouble to deploy new FC SANs for a VMware deployment. I’d stick with an iSCSI SAN or NFS NAS if you want the full benefit of shared storage and don’t already own FC SAN gear. Now, I have to admit that I’m biased here… I managed a SAN environment at one point in my career, and I hate Fibre Channel SANs with a passion that rivals how the Red Sox fans and Yankees fans feel about one another (except I don’t think EMC SANs hate me… at least not like human hate anyway, and if they did, I’d have to consider checking into an alternative cognitive function facility, aka the nuthouse).

Another reason I stand against rolling out new FC SANs for VMware is this article by SSV’s News Director Alex Barrett, in which EMC VP Charles Hollis calls for NAS as the best choice for VMware environments. I tend to agree, provided that a number of recommendations, also in the VMware SAN guide, are followed. First among these – forget sharing the storage network with anything else other than VMware. In fact, put it on a completely different set of equipment if you can, just to avoid any processor overhead that VLANing with the same network hardware may incur. It’s gotta be gig, too. That’s in the basic VMware VI3 docs, and repeated in the SAN guide.
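That dedicated-network advice is scriptable from the ESX 3 service console with the esxcfg-* tools. A sketch – the NIC name, port group, and addresses are placeholders, and it prints the commands rather than running them unless you clear RUN:

```shell
RUN=${RUN:-echo}   # dry-run by default; set RUN= on a real service console

$RUN esxcfg-vswitch -a vSwitch2                  # new vSwitch, storage only
$RUN esxcfg-vswitch -L vmnic2 vSwitch2           # dedicate a gigabit uplink
$RUN esxcfg-vswitch -A IPStorage vSwitch2        # port group for iSCSI/NFS
$RUN esxcfg-vmknic -a -i 192.168.50.10 -n 255.255.255.0 IPStorage
```

Keeping the VMkernel storage port group on its own vSwitch and uplink is the software half; putting that uplink on separate physical switch gear, as suggested above, is the other half.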

The optimization hints consist of a mix of technical and non-technical advice, some of which would generally be overlooked by a SAN admin, and some of which would be overlooked by a VMware admin, such as:

“Choose vmxlsilogic instead of vmxbuslogic as the LSI Logic SCSI driver. The LSI Logic driver provides more SAN-friendly operation and handles SCSI errors better. “

“No more than 16 virtual machines or virtual disks should share the same physical volume.”

“Enable write-caching (at the HBA level or at the disk array controller level)”

There are also equally obvious dummy-errors that are mentioned, things that must happen in real life, but for the life of me seem so stupid that only people who WANT to be fired would do them. My favorite:

“Optimize disk rotational latency for best performance by selecting the highest performance disk drive types, such as FC, and use the same disk drive types within the same RAID set. Do not create RAID sets of mixed disk drives or drives with different spindle speeds.”

This is saying the following – don’t mix 72GB 10K RPM drives within the same RAID array as 72GB 15K RPM drives. And don’t put a 72GB drive in with 144GB drives. And for Pete’s sake, if your SAN supports mixed drive types, don’t ever, ever, EVER mix SAS drives and FC drives. Duh.

As for what this document is not – it is NOT a how-to guide for configuring VMware’s many applications in a SAN environment beyond the direct purview of shared storage. There’s no guide to setting up VMware HA/DRS, though there are several pages dedicated to the storage aspects of these products, including how multipathing your HBAs can affect HA and DRS. That’s left for more product-specific papers, presumably because there’s no reason to be redundant.

Overall, the paper gets 8 pokers.
