(Editor’s Note: scrosby is Simon Crosby, CTO of XenSource)
As the enterprise Linux Distros go to market with integrated Xen virtualization, the jockeying for position based on the promise of integrated virtualization is going to heat up.
You may recall that last year, when Novell shipped SLES 10 with Xen 3.0.2 in Q306, they were criticized by Red Hat, who claimed that Xen was nowhere near enterprise ready. In fact, SLES 10 delivers what it promises: a basic implementation of Xen in Linux that allows SLES 10 to virtualize itself. Most of the flak that Novell received had nothing to do with Xen – for example, their use of the Linux loopback driver (which is known to be flaky) for guest block I/O caused some users a lot of pain, and their storage architecture is awkward: having all disks owned by the host makes it difficult to manage guest storage effectively.
The launch of Red Hat’s RHEL 5 with Xen is a very big deal. Why? Red Hat dominates the Linux market and RHEL 5 will deliver the Xen hypervisor to millions of servers world-wide, with a seven year support commitment. The implementation choices Red Hat has made for Xen will have a tremendous impact not only on the RHEL footprint itself, but also on the RHEL derivatives, such as Asianux, Red Flag, CentOS and (it must be said) Oracle “Unthinkable” Linux.
Fortunately, it appears from the beta I’ve been running that Red Hat has taken time to understand some of the key subtleties of virtualization. For example, RHEL 5 uses the blktap driver for guest block I/O that XenSource added last summer – a zero-copy, high-performance alternative to loopback that I expect will improve performance and avoid the stability issues seen with Linux loopback. Also promising are the Xen-ready RHEL 4.5, which allows RHEL 5 to virtualize some of the Red Hat installed base, and Red Hat’s plans to integrate its cluster technology and the GFS file system to deliver infrastructure components for high availability based on Xen.
It’s disappointing that Red Hat has elected not to package the Xen project’s implementation of the DMTF CIM standard for virtualization management. Every other virtualization vendor will support the DMTF API, so the virtualization management ecosystem vendors will have to re-jig their products to work with RHEL’s libvirt. Overall, the package still feels very much like Linux that virtualizes more Linux, and Red Hat has been wise not to over-promote what this release will allow users to achieve.
RHEL 5 ships Xen 3.0.3, which I find amusing, because at about the same time as VMware and Red Hat announced a deal, in which (amongst other things) Red Hat will certify RHEL on ESX, VMware published a performance benchmark of Xen 3.0.3 vs ESX, in which they claimed that Xen was nowhere near enterprise ready. Admittedly the VMware “study” used Windows as the guest, but for Linux, Xen 3.0.3 easily outperforms ESX. (We also debunked the VMware study, showing XenEnterprise typically equals or beats ESX for Windows and Linux).
You won’t find many users virtualizing Windows using RHEL 5, but Novell is clearly planning a Windows future for SLES 10 SP1. Living up to its chameleon logo, the SUSE team is surely hoping to get some of the new SLES 10 licensees delivered by Microsoft to use SLES 10 to virtualize Linux and Windows. SLES 10 SP1 will package Xen 3.0.4, which has good support for Windows, and Novell recently announced an arrangement with Intel for help developing Windows drivers for Xen. Hopefully SUSE will improve their storage management architecture for SP1 and overcome the need to emulate the 16-bit graphical SLES installer on Intel VT, which makes installation annoyingly slow.
Ultimately, the jury is still out on whether the Windows IT Pro wants to use Linux to virtualize Windows, even if the Linux comes for free from Microsoft. My guess is that the answer is “no” – and that the platform virtualization model of ESX and XenEnterprise will continue to dominate at least until the arrival of Microsoft’s Windows Hypervisor. <sales plug> XenEnterprise, through its upcoming VHD support and enhanced Windows feature set, allows Windows admins to achieve high performance consolidation today, and migrate to the Windows Hypervisor whenever it arrives. </sales plug> I’m similarly doubtful that having Windows support will cause RHEL customers to switch to SLES in large numbers, no matter how aggressive the Novell pricing. But the topic of Licensing is worth another blog entry on another day…
Bottom line: the Xen hypervisor is an engine, not a car. A powerful engine that needs a great gearbox, shocks, tires and a body to go with it. ESX Server is a Bentley, XenEnterprise a Lexus. To paraphrase Diane Greene, OS vendors are not virtualization experts. So we shouldn’t be surprised that the first OSV implementations of Xen felt more like a Ford Fiesta – but the Xen feature set has been improving rapidly. Don’t count the distros out just yet: they are very focused on building value around enterprise-ready Xen virtualization, and RHEL 5 marks a great achievement by Red Hat and the Xen community.
I just had a chat with Veeam Software’s Ratmir Timashev about the release of their first commercial product, Veeam Reporter for VMware Infrastructure 3. The Russian company may already be known to some of you for its free tools — Veeam FastSCP, which administrators can use for backup or to copy large ISO images, and Veeam RootAccess, which allows ESX administrators to gain root access to an ESX server remotely. At any rate, the deal with Veeam Reporter is as follows: it collects data on ESX virtual machines, virtual networks and virtual switches, stores that data, and then displays it as a Microsoft Visio document.
What’s this good for? “This is extremely useful for planning high availability with VMotion,” Timashev told me. There are no other convenient ways to get “a good representation of where you can move your VMs,” he claims.
Timashev admits that Veeam Reporter doesn’t expose any data that you couldn’t find with VMware Virtual Center; the advantage is that you can “conveniently save the data and report on it.” Currently, Veeam Reporter takes a point-in-time image that the administrator must manually launch. For future versions, Veeam is working on automating the change management capability, plus giving admins an easy way to see what has changed.
Other features might include reporting on security permissions and storage, plus storing data in a database rather than Visio. List price for Veeam Reporter is $120 per managed ESX CPU. You can get it here.
While researching P2V migrations, I came across some discussions of problems encountered during the process.
Dave Mast, Geek at Large, learned this lesson about P2V migrations and shared it on his blog:
“This Tuesday we attempted a P2V (Physical-to-Virtual) conversion on our domain controller. It worked, but not as well as I expected. We wound up losing the SYSVOL share (where all your group policy stuff is kept) during the conversion. We still had our physical machine present, so we fired it back up and all was well.”
Mast is prepared for his next P2V experience. “I set up a second domain controller and stuck it in one of the IDFs. Hopefully the domain data will replicate from the new DC to the old, and if not, we can always start from scratch with a new VM.”
Here’s a thread about Exchange problems in P2V migrations. It’s a tale of many crashes.
Another informative P2V migration tale comes from Scott Lowe’s blog. He talks about using VMware Converter. Here’s a tidbit, but you should check out the whole entry:
“The only odd thing we ran into was that Converter refused to log in to VirtualCenter. We tried short hostname, fully qualified hostname, and IP address, and still had zero luck getting Converter to connect to VC. Fortunately, connecting directly to one of the ESX servers using the root account worked without any problems whatsoever, and the overall conversion process took about 1 hour and 20 minutes to move the 12 to 13 gigabytes of data on the source server.”
I’m still looking for P2V migration stories — successes and failures — to help me build a best practices and “beware” guide. Got any tales to tell? Add a comment, please, or send me an email at firstname.lastname@example.org.
For those who enjoy calculating potential ROI, VMware put out a new ROI calculator at http://www.vmware.com/products/vi/roi_calculator.html. In general, I’m not fond of TCO/ROI/FUD calculators, since they’re undoubtedly biased in favor of the company that puts them out, but they have some merit in that they can be the launching point for a REAL investigation into TCO/ROI/SNAFU calculations. VMware’s calculator is easy to use, offers plenty of data entry points for plugging in applicable numbers, doesn’t rely on sketchy figures that can’t be validated, and most of all, it has the option to remove soft costs from the final tallies.
I like that. I like that a lot. In fact, that’s the main reason I like it.
It’s also done in Flash, which makes it fairly universal across platforms (I live on an Intel MacBook Pro running Camino) with a simple plugin. The ability to remove individual line-items, in addition to ignoring all soft-costs, is pleasant, and the design is simple and elegant – graphical without being gaudy, and efficient without being complicated.
What don’t I like about it? Two things, really. The first I’ve already mentioned: it’s very clearly a sales tool, and that makes it biased by design when it comes to spitting out the final ROI numbers. While the bias isn’t as bad as you might find when comparing HP’s and Dell’s calculators, it’s clearly there. I found that by taking it down to two physical servers, zero shared storage, and half the hourly numbers, there was still positive ROI when I had soft (aka indirect) costs included. It *did* calculate negative dollar figures when I removed them, but it *didn’t* give me a negative percentage (giving instead a 0% figure). I was able to get a negative with soft costs included, but I had to drop many of the numbers down to levels that fall even further outside the VI3 target-demographic range – like downtime costs and DR assumptions.

The second thing I didn’t like was the customization page. There’s an option to enter your own figures if you need to (rack size, burdened costs of staff, number of admins per server, etc.), but it’s limited. It doesn’t go into the cost per server, which is hugely variable from company to company and can severely affect the outcomes. Likewise, storage equipment cost wasn’t even a field, let alone something customizable, which means the huge differences in NFS, iSCSI, and FC storage aren’t accurately represented.
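To make the soft-cost point concrete, here’s a toy calculation of my own (purely illustrative figures, not VMware’s actual formula or numbers) showing how excluding indirect savings can flip a small consolidation project from positive to negative:

```python
# Illustrative only: a toy ROI calculation, NOT VMware's actual formula.
# All figures are made-up assumptions, chosen to show why removing
# "soft" (indirect) costs can turn a small project negative.

def roi(hard_savings, soft_savings, investment, include_soft=True):
    """Return (net_dollars, roi_percent) for a consolidation project."""
    savings = hard_savings + (soft_savings if include_soft else 0)
    net = savings - investment
    return net, 100.0 * net / investment

# Hypothetical two-server scenario, no shared storage:
hard = 8_000     # e.g. power, cooling, avoided hardware refresh
soft = 12_000    # e.g. admin time saved, downtime avoided (indirect)
invest = 15_000  # licenses plus migration effort

print(roi(hard, soft, invest))                      # positive with soft costs
print(roi(hard, soft, invest, include_soft=False))  # negative without them
```

The point is just that the soft-cost toggle dominates the outcome, which is exactly why I like that the calculator lets you turn it off.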
Those negatives aside, the fact that I was able to manipulate so many figures is a positive thing. So, on a scale of 1 to 10, with 11 being too good but not too loud, I’d give it a 7. I’d give most other tools a 5, if that means anything.
From Virtual Iron’s Website:
“Virtual Iron Software (www.virtualiron.com), a provider of server virtualization and virtual infrastructure management software solutions, today announced the general availability of Version 3.5 of its enterprise-class virtualization platform. The new release provides for more simple and cost-effective shared storage to facilitate the mainstream adoption of virtual infrastructure management capabilities such as LiveMigration, LiveRecovery, LiveCapacity and LiveMaintenance of virtual servers. The new release also adds single server installation to simplify deployment and configuration.
Version 3.5 provides comparable capabilities to VMware for one-fifth the cost. It is generally available now. Users can try it for free simply by downloading it at http://www.virtualiron.com/free.
‘Cost and complexity are the two major hurdles to broad adoption of server virtualization inside the enterprise,’ said Chris Wolf, Senior Analyst, Burton Group. ‘During the evaluation process, organizations should strongly consider server virtualization platforms that support automated capacity management and virtual machine failover. Virtualization solutions that reduce complexity while automating virtual machine availability are best positioned for widespread adoption and long term success.’
Virtual Iron is one of only two virtualization vendors with the advanced workload migration capabilities required to support automated capacity management and virtual machine failover. With Version 3.5, Virtual Iron has made these capabilities easier to use for the mainstream market by providing support for simpler and more cost-effective iSCSI-based shared storage. This will facilitate the broad adoption of Virtual Iron’s virtual infrastructure management capabilities including LiveMigration, LiveRecovery, LiveCapacity and LiveMaintenance. With Version 3.5, Virtual Iron is also providing the mainstream market with a server virtualization solution that is easier to install, deploy and configure than comparable alternatives. It combines advanced virtualization and policy-based management to deliver all the capabilities and benefits of primary virtualization use cases.”
This is interesting. This is very interesting. Why? Because it signals a full-on challenge to VMware not from a marketing strategy, not from a licensing-hardball strategy, and not from a completely proprietary strategy. Those are the hallmarks of VMware’s battle with Microsoft. This is the first time a separately available virtualization product, one not tied to the OS by an OS vendor (such as Red Hat and Novell with their Linux distros, which are single-server-oriented), has been released that challenges VMware on its home turf.
Here’s a rough rundown of the similarities in the products put out by VMware and Virtual Iron:
- VMware’s ESX / VI3 platforms are bare-metal installed. Virtual Iron 3.5’s virtualization services are likewise bare-metal installed.
- VMware’s Virtual Center manages VMware ESX/Server/VI3 environments. Virtual Iron’s equivalent product is called Virtual Manager. (Personally, I don’t think either name should have passed muster with the folks in marketing, but then again, who likes the folks in marketing?)
- VMware’s Converter is meant to take a physical machine and move it across the divide into the realm of the virtual. Virtual Iron’s LiveMigrate is designed to do the same thing.
What has me intrigued the most is this bit of information from Virtual Iron’s Product Technology Overview paper:
From what I see there, it looks like most of the main features VMware’s VI3 offers are present in Virtual Iron 3.5 (VI3 vs. VI3.5… oh, no, that’s not confusing at all…)
Virtual Iron, according to several reviews I’ve read of the beta product, does lack some of the capabilities of VMware’s VI3 suite, but it’s not missing 80% of them, which is roughly the price difference. There’s even a free version similar to VMware’s Virtual Server (though I’m unsure whether you can manage the free Virtual Iron product with the enterprise edition). It’s also an open-source application (the Xen hypervisor project) wrapped in proprietary software. This has become an increasingly common business model amongst OSS companies that wish to get valuable free labor and remarkable community-driven insight while trying to balance the OSS ideal of giving back to the community with the drive for corporate profitability.
So is it hype or huzzah? I’d say at this point it’s a bit of both. Hype because it’s not a solution that’s going to be everything to everyone; people aren’t going to be saying “hark-the-herald-angels-sing” because they’ve been delivered from some oppression in the marketspace, or anything like that… despite what the rhetoric coming out of Virtual Iron says. It’s huzzah because it really is a small-to-enterprise business product that will save a lot of money for organizations. It’s hype because it doesn’t have the sort of community built up around it that VMware has (NOT the OSS community, mind you… I’m talking about the user and vendor community), and so the company isn’t offering virtual appliances, it doesn’t have the huge base of expertise in Net-land, and it doesn’t have the sheer volume of vendors supporting it that VMware has. It’s huzzah because Virtual Iron is going to open up the market as it builds these things. As they expand, and expand they will, VMware will be forced to come into line with a competitive market rather than what was essentially a two-player field (VMware, MS, and that’s really it). Enomalous will grow too, and so will other Xen-based products, not to mention SWSoft and their interesting approach to virtualization (but that’s for another time).
It’s time to take this for a spin and see what it can really do… a lot of questions remain. Not so much questions like “Does Virtual Iron have X?”, but rather, “Is Virtual Iron’s X as good as VMware’s X?” Is Virtual Iron’s virtual networking as robust as that within VMware’s Virtual Infrastructure? How about smooth VMotion-like migration between virtual hosts? Both of these, and more, remain to be seen, but for now the product looks good, solid, and like a viable alternative for the small-to-medium business market, and perhaps the enterprise as well.
The huge cash infusion, the keeping-mum on a lot of deeper details… I think there’s something else going on aside from just a move by MS to set itself up for IP Power-Plays against Red Hat, Ubuntu, Linspire, etc. etc. I’d have to really re-read the fine print, well, ok, what little there is, but here’s my conspiracy theory of the day –
I think the whole thing reeks of payoff. It’s as if something like this (completely fictitious) conversation happened: “Oh, hello Novell… you found out about those huge chunks of stolen NetWare code in the Windows kernel, you say? You can prove it, you say? Oh, well, I understand your Linux business isn’t doing as well as it could have been. How would you like that to change? Well, yes, you’re quite right. We had been opposing Linux for a long time, but our opposition campaign isn’t doing well at all. So, here’s our offer, and sorry about the bad spelling, but as you can see we get to be non-litigating friends again!”
Complete bunk? Quite possibly. Rampant paranoia? Yes, maybe. But something else still stinks in that agreement.
I am (was) the unofficial Microsoft Entourage support person for my organization. When Susie Q. was having a problem with Entourage losing her e-mail, “Call Andrew,” her office-mate would say. Those days are no more. I now use Outlook 2007 on my Mac and let users know that until the next version of Entourage appears, this child of mother earth is not going to suffer the burden of two computers any longer. No more do I have to keep a Windows box on my desk just to manage VI3 or play games. The days of this systems administrator keeping up with the Joneses when it comes to many machines are over — behold the power that is Windows on a Mac. Behold the power that is Parallels.
I know Windows on a Mac is nothing new. People have been doing it since about this time last year; however, it is only recently that it has become so easy to seamlessly run the latest version of Office or the VI3 client on OS X. As I type this I am also composing a message in Outlook, checking on one of my ESX hosts, and trying to get a 5-man together for the Mana Tombs. With the release of the Core 2 Duo Mac laptops last year as well as the very new Parallels release, I can do all these things on my MacBook Pro, at the same time, and still have honk left over. The recent release of Parallels is the first production release to include their Coherence feature. Coherence lets you hide a VM’s background so that applications running inside the VM appear to be running as part of the host OS’s (in this case OS X) window manager. And it works very well. Copy and paste is supported between Windows applications and OS X, as is dragging and dropping files! You can even stick Outlook 2007 and Word 2007 in your OS X dock!
You may be wondering, “But that means he is still running a full version of Windows inside his copy of Parallels? How is that any better than two computers? And how does his laptop have any, what did he call it? Oh yeah, how does his laptop have any of this so called honk left over with a full version of Windows running in the background?” Those are very good questions, thank you for asking.
Yes, I am still running a full version of Windows. It is, however, far more efficient to do so in a VM than on real hardware because I do not have to keep up with two computers, a KVM switch, or Synergy and its keyboard/mouse synchronization errors. I have one system to rule them all. Besides, I don’t have enough Tycho and Gabe stickers for all of my boxes; it is good to whittle the number down.
Yes, my laptop still has plenty of honk. It screams. It is so bright with power it out-gleams Mr. Cruise’s teeth from that volleyball scene in Top Gun. Shiny! The reason for this is that I am using a modified, stripped-down version of Windows XP. I stripped all of the unnecessary services and other miscellaneous junk out of Windows using nLite (http://www.nliteos.com/). nLite allows you to create custom versions of Windows without all of the fluff. Before I installed Office inside my VM, it was running in around 100 MB of memory. Not 640K, but not bad. If you are going to go this route, it is my very sincere suggestion that you do not use Vista. It is a resource hog, and the point of running a Windows VM in the fashion I have described is for it to consume as few resources as possible. In fact, Windows has now become not unlike a Java Virtual Machine (JVM). It exists to make other applications possible on top of an existing OS.
You may have caught that I mentioned I am able to play games. Does this mean that the new release of Parallels supports DirectX inside of a VM? No, no it does not. That feature is exclusive to the latest beta of VMware Fusion. However, you can use Parallels to run your Boot Camp (http://www.apple.com/macosx/bootcamp/) version of Windows inside of a VM. That way you can boot into Windows bare-metal when you need to and use it inside of OS X the rest of the time. How well does this work? Well, I do not know. I have no need to do this, because the only game I play is World of Warcraft (until Spore comes out) and it runs just fine on my Mac.
I could keep typing, but I need to modify an Active Directory account’s attributes. Pardon me while I load AD Users and Computers from OS X. It’s evil, it’s delicious, it’s fun, and it’s a darn good business reason I can present to my boss (the one I’m married to) that I should be able to buy a quad-core Mac Pro. Please honey?
I was doing some blog-surfing the other day and came across Christian Saborio’s virtualization blog. He mentions this really cool tool published by Microsoft that helps IT directors figure out costs of virtual machine licenses in different editions of Windows Server 2003 R2: Standard, Enterprise and Datacenter.
From Christian’s blog:
Riddle me this: how many licenses of Windows Server Enterprise Edition would you need if you are planning on running 20 virtual machines on a server that has 2 processors? Very easy: you would need only 5 licenses. Too tough? How about this one… what would be the price difference if you were running 50 machines running Windows Server 2003 on a virtualization server with 2 processors if you chose to run the host machine with Windows Server Enterprise Edition vs. Windows Server Datacenter Edition? Very easy: running Datacenter Edition would be $25,580 cheaper.
Of course, he didn’t make those calculations off-hand! Check out the nifty Windows virtualization calculator here.
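The license-count half of that math is easy to sketch yourself. This is a rough illustration only, not Microsoft’s official calculator, and it assumes the Windows Server 2003 R2 rules as I understand them: Enterprise Edition covers up to four running virtual instances per license, while Datacenter Edition is licensed per physical processor and covers unlimited instances.

```python
# Rough sketch of Windows Server 2003 R2 virtualization license counts.
# Assumption: Enterprise Edition allows up to 4 running virtual
# instances per license; Datacenter Edition is licensed per processor
# with unlimited virtual instances. Check Microsoft's calculator for
# the real rules and pricing.
import math

def enterprise_licenses(num_vms):
    """Enterprise Edition licenses needed to cover num_vms running VMs."""
    return math.ceil(num_vms / 4)

def datacenter_licenses(num_processors):
    """Datacenter Edition: one license per physical processor."""
    return num_processors

print(enterprise_licenses(20))  # 5, matching Christian's example
print(datacenter_licenses(2))   # 2, regardless of VM count
```

Price comparisons like the $25,580 figure then come down to multiplying those counts by the per-edition list prices, which is exactly what Microsoft’s calculator does for you.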
Do you know how there are some questions that at first glance seem like they may generate long and complicated answers? Recently I was pinged with such a question, but then I came to realize that my original answer, and in fact the question itself, were both overthinking the situation. The question is, “What problems are there with running various virtualization solutions in the same data center?”
The answer is simple — there really are not any problems that do not already exist in any given data center that employs heterogeneous technologies. For example, there are the problems of redundancy, vendor support, free storage space, and staff expertise.
Redundancy
Imagine a data center that runs both Apache and IIS web servers. There are 2 Apache servers and 2 IIS web servers — the Apache servers run PHP applications and the IIS servers run ASP.NET applications. What happens if all of the Apache servers go down, or all of the IIS servers? Because the 4 servers do not all use the same web server software and the web applications are not written in the same language, the odds of quickly and successfully moving the load of the 2 downed servers to the other 2 are slim.
The same goes for a data center that implements various virtualization technologies. For every different technology employed, you potentially lose a little bit of redundancy.
Vendor Support
In a given data center, imagine there are Dell servers, HP servers, and IBM servers. That means 3 different vendors (unless you went through a VAR) supporting your infrastructure. The same is true for virtualization software — having multiple virtualization solutions means having different vendors to contact when problems occur.
Free Storage Space
Available storage space is usually scarce in any data center. There is almost always someone or some system clamoring for more spindles. Throwing additional virtualization solutions into a single data center results in additional servers trying to grab whatever free space they can off of your storage device(s). Since most virtualization solutions use different file system types — NTFS, VMFS-3, ext3, and so on — different virtualization solutions cannot efficiently make use of one another’s free space. There are of course workarounds, such as mounting another solution’s file system over a network file system, but these result in degraded speed and incur the overhead of a file system on top of a file system.
Staff Expertise
Implementing multiple virtualization solutions also means that your staff must now be experts on each piece of software used. This expectation is most likely unrealistic and can result in system administrators who are proficient in many areas but experts in none.
While there are no technical reasons you cannot utilize virtualization solutions from multiple vendors in one data center, there are many reasons why it makes sense to stick to one vendor. However, there are always exceptions. The trick is to simply be mindful of the pros and cons in using heterogeneous technologies and to make the decision that makes the most sense for your environment.
The brouhaha over VMware’s attack on Microsoft’s virtualization licensing has brewed up some good blogs.
“I guess this marks the beginning of a crazy roller coaster ride,” writes Andrew Dugdell on his ‘Dugie’s Pensieve’ virtualization blog. He doesn’t think Microsoft’s licensing story is as full of “doom and gloom” as VMware says. “Obviously all forms of virtualization licensing and interoperability is going to get better,” Dugdell wrote. “It has to. I don’t think the market/customers will tolerate anything less.”
VMware is “foaming at the mouth,” says Alex Vasilevsky, founder and CTO of Virtual Iron Software, on Virtual Iron’s Virtualization Blog. He thinks VMware is the pot calling the kettle (Microsoft) black. He wrote: “Of course, if VMware truly felt that ‘customers require an any to any interoperability model’ then wouldn’t their virtual disks be in an open format, as opposed to the proprietary format they continue to use? (For what it’s worth, we’re using Microsoft’s VHD format.)”
On his virtualization.info blog, Alessandro Perilli predicts that VMware may feel the ire of its parent company, EMC. “While suggesting a pacific resolution of this case (which would require a public rectification from VMware), Microsoft is clearly recalling its partner EMC for the unprecedented attack of its virtualization subsidiary,” Perilli wrote. He notes that EMC plans to launch a VMware Initial Public Offering (IPO) this summer, “and a compromising of Microsoft partnership could lead to a remarkable damage for stock performance.” That’s an “undesired risk” for EMC, which hasn’t been in Wall Street’s good graces for a while, Perilli concluded.
Dugdell wants to move beyond the age-old software licensing arguments. “Here’s the exciting burning question, how much better will Virtualization interoperability get? How aggressive is that curve going to be? I want to see that curve so steep, you can just feel the gforces kicking in!”