The Virtualization Room

Jul 23 2007   3:16PM GMT

Is Xen ready for the data center? Is that the right question?

Barney Beal Profile: Barney Beal

Article after article and post after post have compared and contrasted Xen, VMware, Viridian, and a host of other virtualization technologies, with opinions on performance, management tools, implementations, etc., etc. in abundant supply. Inevitably, when it comes to Xen, the story comes full circle with some sort of declaration about “data center readiness.” The definition of “ready for the data center” is quite subjective, of course, based largely on the author’s personal experience, skills, and opinion of the technical capabilities of those managing this vague “data center” to which they are referring.

Sadly, most seem to think that IT professionals managing the data center are buffoons who are somehow incapable of working with anything that doesn’t include a highly refined set of GUI tools and setup wizards. Personal experience shines through when an author balks at the notion of editing a text or XML configuration file – a common task for any system administrator. Consequently, a declaration of immaturity is often the result, without regard for the performance or functionality of the technology. In the case of Xen, this is particularly prevalent, as the Xen engine and management tools are distinctly separate. In fact, there are already several dozen management and provisioning tools available or in development for the highly capable Xen engine, in varying degrees of maturity.

And yet, I can’t help but think that comparing features of management tools is completely missing the point. Why are we focusing on the tools, rather than the technology? Shouldn’t we be asking, “Where is virtualization heading?” and “Which of these technologies has the most long-term viability?”

Where is virtualization technology heading?

To even the most passive observers it has to be obvious that virtualization is here to stay. What may not be so obvious are the trends, the first being integrated virtualization. Within a year, every major server operating system will have virtualization technology integrated at its core. Within a few short years, virtualization functionality will simply be assumed – an expected capability of every server-class operating system. As it is with RHEL now, administrators will simply click on a “virtualization” checkbox at install time.

The second trend is in the technology itself: the “virtualization aware” operating system. In other words, the operating system will know that it is being virtualized, and will be optimized to perform as such. Every major, and even most minor, operating systems either have or will soon have a virtualization-aware core. Performance- and scalability-sapping binary translation layers and dynamic recompilers will be a thing of the past, replaced by thin hypervisors and paravirtualized guests. Just look at every major Linux distro, Solaris, BSD, and even Microsoft’s upcoming Viridian technology on Windows Server 2008, and you can’t help but recognize the trend.
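
To make the "virtualization aware" point concrete, a paravirtualized guest can actually see the hypervisor it runs on. The following is a minimal sketch in Python; the sysfs and procfs paths are assumptions that vary by kernel version and distribution, so treat it as an illustration rather than a portable tool:

    import os

    def hypervisor_type():
        """Return the hypervisor name reported by the kernel, or None."""
        path = "/sys/hypervisor/type"          # exposed by virtualization-aware (Xen) kernels
        if os.path.exists(path):
            return open(path).read().strip()   # e.g. "xen"
        return None

    def is_dom0():
        """Rough check for the Xen control domain; assumes /proc/xen is available."""
        caps = "/proc/xen/capabilities"
        return os.path.exists(caps) and "control_d" in open(caps).read()

    if __name__ == "__main__":
        hv = hypervisor_type()
        if hv:
            role = "dom0 (host)" if is_dom0() else "domU (guest)"
            print("virtualization-aware kernel, hypervisor=%s, role=%s" % (hv, role))
        else:
            print("no hypervisor interface found; likely bare metal or full virtualization")

A fully virtualized guest sitting on a binary translation layer generally reports nothing here, which is exactly the point: that operating system neither knows nor cares that it is being virtualized, and pays for it in trap-and-emulate overhead.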

Which of these technologies has the most long-term viability?

Since we now know the trends, the next logical step is to determine which technology to bet on long term. Obviously, the current crop of technologies based on full virtualization, like KVM and VMware (it’s not a hypervisor, no matter what they say), will prosper in the near term, capitalizing on the initial wave of interest and simplicity. But, considering the trends, the question should be, “Will they be the best technology choice for the future?” The reality is that, in their current state and with their stated evolutionary goals, full virtualization solutions offer little long-term viability as integrated virtualization continues to evolve.

And which technology has everyone moved to? That’s simple – paravirtualization on the Xen hypervisor. Solaris, Linux, several Unix variants, and, as a result of their partnership with Novell, Microsoft will all either run Xen directly or be Xen-compatible in a very short time.

Of course, those with the most market share will continue to sell their solutions as “more mature” and/or “enterprise ready” while continuing to improve their tools. Unfortunately, they will continue to lean on an outdated, albeit refined, technology core. The core may continue to evolve, but the approach is fundamentally less efficient, and will therefore never achieve the performance of the more logical solution. It reminds me of the ice farmers’ response to the refrigerator – rather than evolving their business, they tried to find better, more efficient ways to make ice, and ultimately went out of business because the technology simply wasn’t as good.

So then, is Xen ready for the “data center?”

The simple answer is – that depends. As a long-time (as these things go, anyway) user of the Xen engine in production, I can say with confidence that the engine is more than ready. All of the functionality of competing systems, and arguably more, is working and rock solid. And because the system is open, the flexibility is simply unmatched. Choose your storage or clustering scheme, upgrade to a better one when it becomes available, use whatever configuration matches your needs – without restriction. For *nix virtualization, start today.
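
For those starting today, a quick way to poke at the engine is through the libvirt Python bindings, which ship with recent Fedora and Red Hat releases, among others. This is a minimal sketch, assuming a local Xen host and the "xen:///" connection URI, with error handling kept to a bare minimum:

    import libvirt

    # Read-only connection to the local Xen hypervisor (the URI is an
    # assumption; adjust for your platform).
    conn = libvirt.openReadOnly("xen:///")
    if conn is None:
        raise SystemExit("failed to connect to the hypervisor")

    print("Running domains:")
    for dom_id in conn.listDomainsID():        # IDs of running guests; 0 is dom0
        dom = conn.lookupByID(dom_id)
        state, max_mem, mem, vcpus, cpu_time = dom.info()
        print("  %-20s id=%-3d vcpus=%d mem=%dMB" % (dom.name(), dom_id, vcpus, mem / 1024))

    conn.close()

Tools like Red Hat’s virt-manager sit on top of this same API, which is part of why the engine and the management layer can evolve separately.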

For Windows virtualization, the answer is a bit more complex. Pending Viridian, the stopgap is to install Windows on Xen with so-called “paravirtualized drivers” for I/O. Currently, these are only available using XenSource’s own XenServer line, but will soon be available on both Novell and Red Hat platforms (according to Novell press releases and direct conversations with Red Hat engineers). While these drivers easily match the performance of fully virtualized competitors, they are not as fast as a paravirtualized guest.

Of course, you could simply choose to wait for Viridian, but I would assert that there are several advantages to going with Xen now. First, you’ll already be running on Xen, so you’ll be comfortable with the tools and will likely incur little, if any, conversion cost when Viridian goes gold. And second, you get to take advantage of unmatched, multi-platform virtualization technology, such as native 64-bit guests and 32-bit paravirtualized guests on 64-bit hosts.

So what’s the weak spot? Complexity and management. While the engine is solid, the management tools are distinctly separate and still evolving. Do you go with XenSource’s excellent, yet more restrictive, tool set, a more open platform such as Red Hat or Novell, or even a free release such as Fedora 7? That depends on your skills and intestinal fortitude, I suppose. If you are lost without wizards and a mouse, I’d say XenSource is the way to go. For the rest of us, a good review of all the available options is in order.

What about that “long term?”

So we know that virtualization-aware operating systems are the future, but how might they evolve? Well, one of the key benefits of virtualization is that it makes the guest operating system hardware agnostic, and virtualization-aware guests on hypervisors are the future, so it seems reasonable to conclude that most server operating systems will install as paravirtualized guests by default, even if only one guest will be run on the hardware. This will, by its very nature, create more stable servers and applications, facilitate easy-to-implement scalability, and offer improved platform performance and manageability.

As for my data center, this is how we install all our new hardware, even single-task equipment – Xen goes on first, followed by the OS of choice. We get great performance and stability, along with the comfort of knowing that if we need more performance or run into any problems, we can simply move the guest operating system to new hardware with almost no downtime. It’s a truly liberating approach to data center management.
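
For the curious, that kind of move is a single command with the xm toolstack. Here is a small Python wrapper as a sketch; the guest name "web01" and the target host are hypothetical, and it assumes the guest's disks live on shared storage and that relocation is enabled (xend-config.sxp) on the receiving host:

    import subprocess

    def live_migrate(domain, target_host):
        """Move a running Xen guest to target_host with minimal downtime."""
        # Equivalent to running: xm migrate --live <domain> <target_host>
        subprocess.check_call(["xm", "migrate", "--live", domain, target_host])

    if __name__ == "__main__":
        live_migrate("web01", "newhost.example.com")

Live migration only transfers memory and CPU state, so without shared storage the disks would have to be moved separately.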

9 Comments on this Post

 
  • Great article, but Xen for Windows is a solved problem. You forget that our ability to get very high performance for Windows on VT, with PV I/O, gives us equivalent performance or better than VMware. I'd recommend you try the XenSource 4.0 beta. We have better support for Windows than VMware does. Simon Crosby
  • I'm not suggesting in some way that Xen doesn't virtualize Windows well - I know that is not the case. The point I'm trying to make is that the whole system, not just basic I/O, could be even faster and more scalable once the OS is aware that it is being virtualized and has been optimized to run in that environment. There is still a lot of blocking and trapping going on, which would be eliminated.
  • It sounds like you do not like VMware (for whatever reason). The point is VMware ESX really is a hypervisor (no matter what they say? Have you heard of ESX Lite?). Try not to become a Microsoft representative (sounds like someone here is spreading FUD). Best regards, Jose R
  • If you research any of my other posts or articles, you will definitely come to the conclusion that I am not a Microsoft fan. As for the hypervisor issue, I suppose it depends on your definition of hypervisor. If you define it simply as software that sits underneath a guest operating system, then you are right - VMware can be considered a hypervisor. If, on the other hand, you define it as cooperative software that interacts directly with a guest operating system, then VMware's trap-and-patch binary translation layer simply doesn't match up. The key distinction is that the hypervisor and guest are aware of each other's presence and are optimized to work cooperatively. Don't get me wrong, VMware is great software. But they have a distinct problem - OS vendors aren't writing code to support them, which means that VMware has to do it all themselves. Long term, they will have a hard time keeping up with the OS vendors, who are going in a different direction.
  • When portraying yourself as an expert and posting information for the world to see, this information would be much more meaningful if you took an objective stance. By your own admission, you are not a Microsoft fan, thus anything you state will now be taken with a grain of salt except by those who agree with your "perspective". If you really want to help people make a decision, give them all the facts. If Xen is truly king of the hill, the facts will support your claims. Just like "no one ever got fired for buying IBM", VMware is the safe, de facto standard for most Windows environments. FYI - I do appreciate and read your postings. Sincerely, David Hutchison www.excipio.net
  • Jim, I agree that it depends on your definition of hypervisor. Most people, however, would agree with the Wikipedia definition of hypervisor here: http://en.wikipedia.org/wiki/Hypervisor ... where I don't see any reference to "cooperative software that interacts directly with a guest operating system". But this is an academic discussion in my opinion. It is not ESX being or not being a hypervisor that ultimately will determine its success or its failure. I think that you miss two points in your analysis though: 1) Long term, the real game will not be the hypervisor but the tools around it (with most functionality draining into the CPUs and systems, the hypervisor will be commoditized soon, so it's those with the best tools that will survive). 2) Your paravirtualization analysis is pretty much correct, but you seem to assume that no paravirtualization standardization will take place. This has actually already happened in the Linux space with things like "paravirt_ops" (i.e., a single kernel image that runs on physical hardware, VMware, Xen, etc.). We still don't know what we are going to see in the Windows space ... certainly MS will have to find a compromise between looking good in front of the DOJ and fighting VMware ... :-) Massimo.
  • I totally agree with both points. The hypervisor will certainly become a commodity - one could argue that it already has with paravirt_ops. Paravirt_ops is largely Xen based, and, according to XenSource, will be the foundation for the next release. The point I'm trying to stress is that the software industry is rallying around a Xen-compatible paravirt model and, while I agree that the best tools will survive long term, the decision-making process in the near term should be based on the technology that has the most long-term viability. "Brand X is safe" should not be the deciding factor when developing a virtualization strategy. As for the comment about bias, I believe I am being objective here. I think I made the point rather clearly that I believe Microsoft is heading in the right direction with their virtualization technology. My prior comment was simply an effort to stress that I am not a Microsoft fanboy, as was asserted earlier.
  • Hey there everyone! Though I have a strong IT background and have been in the IT service industry for several years, when I came to work here with my new company I knew very little about network automation. Of course I’d heard tidbits here and there when people would talk about ITIL automation and trying to get their networks to process workflow better. Data center automation has and will continue to change the way IT is handled and planned for. I was really and pleasantly surprised when I found out that automation helped greatly to improve application availability and give the people in the department time to get more done. Of course I’ve been playing a lot of catch-up, because most of the people I work with have been with the company many years and so they are quite familiar with database administration and automation. Kind of a culture shock at first, but a good one.
  • I’m a believer in change, especially when it comes to technology. Though technology naturally evolves and gets better and more efficient with time, the changes can sometimes be frustrating. Over the last few years we’ve seen network automation become more and more relevant in the workplace, which has really helped to improve application availability for companies’ respective networks. That said, it is frustrating to see change when it involves a degradation of the tools we use or are made available to us. I have been witness to the implementation of a new set of automation tools within my own network, and these tools are severely outdated, even when compared to our former set of automation solutions. In this way change in technology is frustrating, because the sea of garbage floating around out there makes it difficult for even experienced IT pros to find a good set of database tools to help manage their networks.
