The Virtualization Room

A SearchServerVirtualization.com and SearchVMware.com blog


August 2, 2007  1:22 PM

The case for chargeback and virtual appliances



Posted by: Alex Barrett
Uncategorized

One of the nice things about covering a hot topic like virtualization is that you get to talk to people who are really enthusiastic. Yesterday, Alex Bakman, founder and CEO of a new company called V-Kernel came to my office, and he was as fired up about the virtualization market as any vendor I’ve met in a while.

What V-Kernel has done so far is relatively straightforward. The company has developed capacity and chargeback software that monitors the CPU, memory, network and disk resources consumed by VMware virtual machines and maps those resources to a so-called business service – a grouping of VMs that performs a business function, like “email” or “CRM.” From there, V-Kernel can generate chargeback reports.
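
For the curious, here’s a rough sketch of the kind of roll-up such a tool has to do. This is not V-Kernel’s code, just an illustration in Python with invented VM names, usage figures and unit rates:

```python
from collections import defaultdict

# Hypothetical unit rates; a real chargeback policy would set these per resource.
RATES = {
    "cpu_hours": 0.05,        # dollars per vCPU-hour
    "memory_gb_hours": 0.02,  # dollars per GB-hour of RAM
    "disk_gb": 0.10,          # dollars per GB of allocated disk
    "network_gb": 0.01,       # dollars per GB transferred
}

# Invented per-VM usage samples, each tagged with its business service.
usage = [
    {"vm": "mail-01", "service": "email", "cpu_hours": 120, "memory_gb_hours": 480, "disk_gb": 200, "network_gb": 35},
    {"vm": "mail-02", "service": "email", "cpu_hours": 95, "memory_gb_hours": 360, "disk_gb": 150, "network_gb": 28},
    {"vm": "crm-db", "service": "CRM", "cpu_hours": 210, "memory_gb_hours": 720, "disk_gb": 500, "network_gb": 12},
]

def chargeback_report(samples):
    """Roll up VM resource usage by business service and price it out."""
    costs = defaultdict(float)
    for sample in samples:
        for resource, rate in RATES.items():
            costs[sample["service"]] += sample[resource] * rate
    return dict(costs)

for service, cost in chargeback_report(usage).items():
    print(f"{service}: ${cost:.2f}")
```

The interesting work in a real product is collecting those samples from the hypervisor and keeping the VM-to-service mapping current; the pricing math itself is the easy part.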

Now, the idea of chargeback has always been nice in theory, but in my limited experience, it never really materialized. Bakman has an explanation. In the world of one-server-to-one-app distributed computing that virtualization is supplanting, chargeback was kind of silly, since resources weren’t shared and it was easy to tell who owned what application. However, if you look at the mainframe – a massively shared resource – chargeback is alive and well. And when you look at today’s high-end x86 platforms, “I don’t care what you call them, they’re mainframes,” he said. Because they are expensive shared resources, Bakman believes IT will move quickly to charge the cost of virtual infrastructure back to the departments that use it.

Virtualization also has the developer and independent software vendor (ISV) in Bakman all excited. Unlike previous systems management software he’s developed, the big thing that differentiates V-Kernel is that it’s delivered as a virtual appliance.

In V-Kernel’s case, that appliance includes a stripped-down SUSE Linux operating system, Apache, Tomcat, MySQL and a Java/Ajax application server, all in 420MB that can be “dropped” right onto any VMware host. For updates, the appliance includes a one-button self-update feature that “calls home,” as it were, for any new security patches or code updates. And because it’s just a virtual machine, “it’s infinitely scalable,” Bakman said. To add more performance, just assign more memory or virtual CPUs to the VM.

The fact that V-Kernel runs Linux is important to data center managers because it’s the same reliable operating system that powers a lot of the other applications in the data center. Plus, writing the application to run on Linux instead of Windows sidesteps a lot of licensing fees. “If I were to ship it on a Microsoft OS, there would be licensing fees for every appliance I ship,” he said. Microsoft’s current licensing policies are already conspiring to turn smaller ISVs away from developing for the Windows platform, he said.

Furthermore, the advent of Java/Ajax means that developers don’t need to give up any functionality to run over the Web. “Ajax is an absolute Microsoft killer,” Bakman said.

For now, V-Kernel is available only as a VMware virtual appliance, and as Bakman sees the market, there’s no reason to change that – for now. Over time, Bakman’s vision is to add more systems management appliances to V-Kernel’s bag of tricks. In the meantime, you can download a beta at http://www.vkernel.com.

August 1, 2007  1:50 PM

Virtualization and the Singularity



Posted by: Joe Foran
Joseph Foran, Uncategorized, Virtualization

I’m waxing philosophical for a bit here… What is the future of virtualization? What role will it play in the unfolding technological evolution of our society? Is the Technological Singularity really coming, and if it does, how will virtualization be important?

For those eternally optimistic about our species’ future, there’s a great book that pairs incredible optimism with predictions about the merger of biology and technology: The Singularity is Near: When Humans Transcend Biology, by Ray Kurzweil (the same Ray Kurzweil who first made the electronic piano sound like, well, a piano). I’ve been reading it, and two technologies keep popping into my head to solve some of the problems that crop up when I try to wrap my mind around the many charts and predictions in the book – namely grid computing and virtualization. I highly recommend reading this little gem, although I also recommend taking a lot of his timelines with a grain of salt. More on that later…

It’s my opinion that of all the many technologies that will push the man-machine merger Kurzweil so eloquently professes is coming, systems-level virtualization will be the most important, followed closely by grid computing. Many of the calculations RK uses to describe the increasing power of computers are based on Moore’s Law, his own Law of Accelerating Returns, and other mathematical formulae. One thing Kurzweil doesn’t take into account in the book is the impact of utilization levels, which are far, far below the exponentially increasing numbers he presents in his prediction of future raw computing power. He may have considered this and come up with the same conclusion I have (or another, better one), but it’s not in the text, so I’ll go on with the assumption that it’s a blank slate.

We’re going to have that raw computational power, sure, but we’re not going to make the best use of it (at least until our own intelligence is enhanced by non-biological intelligence, which he predicts in the 2030s time frame) without virtualization and grid technology. Virtualization will do what it was originally marketed for – take that pile of 5-20% utilized resources and merge them together to get as close to 100% as possible. When it comes to controlling nanofactories (which take component elements and make goods from them – literally making a car from atoms, or for that matter making a 100% real steak without a cow, or growing replacement organs from within your own body), expanding virtual reality into realms as immersive as the real world (or more so), even expanding our own consciousness directly into nonbiological substrates, there are going to have to be computers, powerful computers, running the show. And not all of these computers will be 100% active all the time. Add in the upcoming pervasive grid computing that will eventually connect all spare computing resources, and you can see the power of virtualization. Virtualized systems on hardware – to ensure portability between platforms, to maintain the integrity of a system’s purpose, and to house what is needed where it’s needed – along with raw computational resource sharing via a grid will maximize the potential of our raw computing power.

The disaster-recovery-friendliness of virtualization is going to make a huge difference in bringing about the singularity as well. One of RK’s predictions – and his beliefs are clear that he feels this is our destiny – is that we will overcome our biology and become what others call posthuman (he does not use this term, seeing humanity as consciousness, whatever the underlying origin, and independent of hardware, software or material). He foresees the gradual transformation of humanity through the increasing use of medical implants (such as are used now to replace limbs and hearts, stop seizures, restore sight, restore hearing, etc.), cosmetic implants (anyone ever seen the Tiger Man?), and eventually nanomedicine – nanoscale machines working in conjunction to replace entire biological systems (such as the circulatory, respiratory, digestive and even nervous systems). What are we then if not the Ghost in the Shell? And what happens when a shell crashes, as it will? That’s what DR is for, and that’s where virtualization plays such a key part. Suppose you have a severe blood disease and in the future you are able to replace your blood with respirocytes (tiny machines that act like blood cells). What happens when there’s a massive systems failure of the OS controlling the little things? Probably not much, because that’s an application for a distributed OS on a grid system. What happens down the road when your consciousness has moved, via the natural process of extending your life by replacing failing organs, to a completely computerized substrate, and THEN the underlying hardware fails? That gets trickier. Essentially, you die. Unless of course you happen to have VMotion / LiveMigrate / FutureVirtualizationDRTool’sNameHere, at which point you smoothly slide over onto the next bank of hardware and keep on living.

This all assumes you consider computerized consciousness living (I do).

Sound a little far-fetched? It does, doesn’t it. But then again, I’m reminded that if I took many mundane items back a few hundred years in time, I would be worshipped as a god for my ability to cure, kill, and perform feats that defy the “laws” of nature as understood at the time. As Arthur C. Clarke once said, “Any sufficiently advanced technology is indistinguishable from magic.” This isn’t to say I agree with Ray Kurzweil’s timeline – I don’t. He’s a self-professed optimist, and while technology will undoubtedly advance unabated by boom or bust just as it has always done, I think he forgets about the pessimistic side, particularly greed – those who develop the technology to power the singularity first will hoard it, patent it, and sell it for exorbitant sums of money for a very, very long time before competition is able to thrive and prices come down to the point where the cures for life’s many ills become affordable. The price of immortality is immeasurable – and the rich and powerful will pay dearly for it. Dearly enough that it won’t be economical to sell to the masses of ordinary folks for a very, very long time, and for a sadly long time people all over the world will have to endure disease, hunger, and death.

But when it does come about, rest assured that the Technological Singularity will be enabled by virtualization. It may not look like the virtualization we know today, much as Mac OS X looks nothing like PWB/UNIX, but it will still play the same underlying role it plays today. So that’s my two cents on virtualization in the future – not much different than it is now, but oh-so-important to what will be.


July 31, 2007  6:27 PM

Is virtualization where Linux will top Windows…perhaps stealthily?



Posted by: Jan Stafford
Microsoft, Red Hat, SUSE/Novell, Virtual appliances, Xen

Virtualization is the theme of the LinuxWorld 2007 Conference & Expo this year, and that’s as it should be, according to industry veterans I’ve interviewed recently. Virtualization is a big boon for Linux adoption and a way to steal some of Microsoft Windows’ thunder, they say. Overall, they agreed that succeeding in that space is a make-or-break proposition for Linux.

“It’s appropriate that LinuxWorld focuses on virtualization this year, because virtualization is a must-have for Linux,” said Jim Klein, Information Services and Technology Director for Saugus Union School District in Santa Clarita, Calif. “Without virtualization, Linux will fade away in the data center.”

Klein doesn’t think that doomsday scenario is going to happen, however.

“From my experience, Linux and open source virtualization technologies are top-notch, certainly superior to Microsoft’s and reaching parity with VMware’s. The openness of Linux virtualization technologies makes it easier to run multiple operating systems in one box.”

On the other hand, some industry vets think that Linux and Xen in its various forms have a lot of catching up to do, and they hope to see some significant announcements at LinuxWorld. Summing up this side of the equation, Alex Fletcher, principal analyst for Entiva Group Inc., said:

“Xen is definitely mature enough to warrant consideration by corporate accounts. Recent moves, such as Red Hat Enterprise Linux (RHEL) 5.0 adding Xen as its fully integrated server virtualization functionality, are intended to spur corporate adoption of Xen, but will need time to play out. Granted, RHEL is a fully robust operating system, but this is the first release that’s included Xen, giving risk-averse decision makers reason to hesitate. Efforts like libvirt, an attempt to provide a stable C API for virtualization mechanisms, have potential but need to mature.”
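
As a side note for readers who haven’t poked at libvirt, here’s a minimal sketch of what a stable virtualization API looks like from its Python bindings. It assumes a local, libvirt-managed Xen host reachable at the xen:/// URI:

```python
import libvirt  # Python bindings over libvirt's C API

# Read-only connection to a local Xen host; assumes libvirt and Xen are installed.
conn = libvirt.openReadOnly("xen:///")

for dom_id in conn.listDomainsID():   # IDs of currently running domains
    dom = conn.lookupByID(dom_id)
    state, max_mem_kb, mem_kb, vcpus, cpu_time_ns = dom.info()
    print(f"{dom.name()}: {vcpus} vCPU(s), {mem_kb // 1024} MB in use")

conn.close()
```

The appeal Fletcher alludes to is that the same calls work regardless of which hypervisor sits underneath, once the corresponding libvirt driver matures.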

Then again, others said, many factors weigh in Linux’s favor in the virtualization arena. For one thing, RHEL and SUSE are very robust enterprise-level operating systems. For another, Linux is not fully dependent on Xen’s success, because VMware is optimized for Linux. The proven reliability of Linux in data center deployments is another plus. Indeed, consultant and author Bernard Golden believes that virtualization will pave the way for wider usage of Linux. Virtualization makes stability much more important, he said, because after virtualization more systems run on a single piece of hardware. In this situation, he thinks Linux is a better choice than Windows, as Linux has a better track record for both stability and uptime.

Virtualizing Windows-centric applications on top of Linux is a good path to follow, said Golden, author of the soon-to-be-released book, Virtualization for Dummies:

“For those companies that need to move aging Windows applications onto new hardware and want a more stable underlying OS, virtualizing Windows on top of Linux is a perfect solution. Also, Linux’s scalability marries well to two trends driving virtualization: the increasing power of hardware and Linux’s ability to scale across multi-processor machines.”

Microsoft-centric IT organizations probably won’t rush into virtualizing on Linux. In particular, said Golden, sticking with Windows could suit companies that are not ready to make a full commitment to building a virtualization-based infrastructure. He explained:

“The upcoming virtualization capability in Windows Server 2008 — and beyond, given that much of the previously-targeted functionality for Server 2008 has been dropped — will enable [those organizations] to extend the life of aging Windows-based apps. Of course, being able to extend the life of those apps will, to some extent, reduce pressure to migrate those apps to Linux or replace those apps with Linux-based apps.”

Such IT organizations usually move to virtualization using their existing hardware, rather than bringing in more modern, highly scalable hardware, said Golden. In these cases, there is less need to move to Linux. This strategy and the efficacy of using old hardware will be short-lived, in his opinion.

Microsoft-centric shops will also be encouraged to stay that way if Microsoft delivers the promised virtualization-friendly licensing terms for its upcoming Longhorn-plus-hypervisor release, said John Lair, business development manager for Prowess Consulting.

Linux may not gain even if Microsoft’s operating system and virtualization platform price tags are more than those of Linux and, say, Xen, according to Fletcher.

“There is a chance that the savings gained from consolidation will actually work to make Linux’s lower software acquisition costs less of a selling point,” Fletcher said. “Higher licensing costs for Windows aren’t as much an issue when fewer servers are running.”

Then again, Lair and others noted, virtualization will probably decrease the importance of operating system (OS) selection, shifting attention to application and virtualization platform choices. Kamini Rupani, product management director at Avocent, summed up this side of the equation, saying:

“Virtualization doesn’t help or hinder adoption of either Linux or Windows on the server side, because virtualization isn’t directly related to operating systems. Virtualization is about the hardware, about adding more virtual machines running on top of an existing hardware environment.”

In this point-counterpoint discussion, others said that Linux stands to gain even if virtualization devalues OS selection. These folks think that Linux will be the power, or platform, behind the scenes in virtualized environments.

“Linux is so easy to use and reliable that I think it will be used ubiquitously and not get much attention,” said Klein. “People won’t care that their VMs are running on Linux. Choosing Linux will stop being a big deal. Also, I believe that the majority of virtual appliances will be running on Linux, so that people will just drop them in without a thought about which operating system is inside.”

If this scenario plays out, Linux will return to its roots as a stealth OS. IT managers brought Linux into IT shops through the proverbial back door to use for applications that didn’t need top-level approvals. While it moved up to a more visible position in data centers, Linux also infiltrated cell phones and numerous other devices without fanfare. Today, Linux appears to be a front-runner as ISVs’ top OS choice for virtual appliances. Perhaps even Microsoft’s resistance is futile.


July 26, 2007  10:03 AM

Server virtualization pushes storage virtualization



Posted by: Ryan Shopp
Uncategorized

We recently published a story by Alex Barrett about NetApp and VMware working together so that clients can get a consistent copy of a VMware virtual machine using NetApp’s snapshots, which are taken on the disk array.

Then, Byte and Switch published a story about HBA vendors such as Emulex and QLogic pouring their energy into virtual HBAs to improve storage virtualization (see Virtual HBAs Hitch Servers & Storage). From the story:

“Virtual HBAs are supposed to make it easier to manage VMs in SAN environments.”

Has the server virtualization inferno finally caused a storage virtualization spark in the virtualization industry? You be the judge.


July 24, 2007  3:24 PM

Virtualization Fashion Update: Thin is In!



Posted by: Marcia Savage
Anil Desai, VDI, Virtualization strategies

IT professionals may wear many hats in their organizations, but we tend not to be known for our fashion sense.  To assist in that area, I’d like to cover one of the latest styles in virtualization: the return of the thin client.  Case in point: see Alex Barrett’s coverage of HP’s acquisition of thin-client vendor Neoware, Inc.: Virtualization informs HP’s Neoware Acquisition.  It’s time for traditional fat desktops to start becoming even more self-conscious.  Of course, thin clients never really went away – they’ve been around since the popularization of “network computing,” which started in the late ’90s.  Lest any of you commit a social faux pas while strutting down your data center’s loading ramps, I wanted to point out some of the issues that prevented the predicted takeover of thin clients:

  • Cheaper desktops:  Reducing hardware acquisition costs was a goal for thin client proponents.  As desktop computers hit the sub-$500 range, however, the cost advantages of using thin client computers became far harder to justify. 
  • Fatter apps and OS’s: A while ago, I heard someone ask the most pertinent question I’d heard in years: “Is hardware getting faster faster than software is getting slower?”  The answer, my friends, seems to be “no”.  As hardware gets more capacity, OS’s and applications tend to swallow it up like a supermodel at a salad bar.
  • Single points of failure:  Thin clients (and their users) rely on centralized servers and the network that allows access to them.  Failures in these areas mean major downtime for many users. 
  • The Application Experience:  Remote desktop protocols could provide a basic user experience for the types of people that use a mouse to click on their password fields when logging on to the computer.  Single-task users adapted well to this model.  But what about the rest of us?  I’d like the ability to run 3-D apps and use all of my keyboard shortcuts.  And, I’d like to be able to use USB devices such as scanners and music players.
  • Server-side issues:  Server-side platforms from Citrix, Microsoft, and other vendors had limitations on certain functionality (such as printing). 

So, is it possible for these super-skinny client computers to address these issues?  I certainly think it’s possible.  Server and network reliability have improved over the years, forming a good foundation for dependable thin-client computing.  Thin clients are inexpensive, and server-side hardware and software have improved in usability features.  For example, Windows Server 2008’s Terminal Services feature provides the ability to run specific applications (rather than the entire desktop) over a remote connection.  And multi-core processors that support large amounts of RAM help enable scalability.  Overall, thin clients are cheap dates, they’re more readily available, and they’re less needy than in the past.  What IT admin wouldn’t like that?  Only time will tell if this relationship will last.

Oh, and one last fashion tip: Don’t throw away your old fat clients just yet.  Like so many other fads, they may be back in style sooner than you think.  Order a slice of cheesecake and think about that!


July 23, 2007  7:19 PM

Server consolidation via virtualization: Advice on pitches, multi-purpose server conversion and P2V



Posted by: Jan Stafford
Chris Wolf, Servers, Virtualization strategies, Why choose server virtualization?

Burton Group analyst Chris Wolf shared some good advice about consolidating servers with virtualization in our recent interview. Here are some quick tips gleaned from our conversation and some more-info links and questions for you about these topics.

Making a pitch

Make these key points when pitching server consolidation via virtualization to upper management:

  • Virtualization is a means to running fewer physical servers and thus consuming less power in the data center (a back-of-the-envelope sketch follows this list).
  • With fewer physical servers, hardware maintenance and upkeep costs go down.
  • Virtualization increases server availability via dynamic failover enacted at the virtual machine level. So, any application running in a VM can support high availability, and that is a big difference with virtualization compared to traditional clustering solutions.
  • (Have you made this pitch? What did you say? What were the results? Let me know in the comments below or by writing to jstafford@techtarget.com.)
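
Here’s the back-of-the-envelope sketch referenced above. The figures are invented, not from the interview, so plug in your own server counts, power draw and rates:

```python
# Hypothetical figures for a consolidation pitch; substitute your own numbers.
physical_servers = 40          # existing one-app-per-box servers
consolidation_ratio = 8        # VMs hosted per virtualization host
watts_per_server = 350         # average draw per physical box
power_cost_per_kwh = 0.10      # dollars per kWh
maintenance_per_server = 800   # annual hardware maintenance per box, dollars

hosts_needed = -(-physical_servers // consolidation_ratio)  # ceiling division
servers_retired = physical_servers - hosts_needed

kwh_saved = servers_retired * watts_per_server * 24 * 365 / 1000
annual_savings = kwh_saved * power_cost_per_kwh + servers_retired * maintenance_per_server

print(f"Hosts needed: {hosts_needed}")
print(f"Servers retired: {servers_retired}")
print(f"Estimated annual savings: ${annual_savings:,.0f}")
```

Even with conservative numbers, retiring dozens of lightly loaded boxes is usually enough to get upper management’s attention.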

Converting multi-purpose servers to VMs

Watch out. This is tricky territory, says Wolf.

“When I have multi-purpose servers, I generally want to take each application or service on that server that I need and run it as its own VM instance. So, in those cases, you are better off manually reprovisioning those services as separate virtual machines, because in a dynamic failover environment, the VM itself is the point of failover. So, if I have a multi-purpose server and I am looking at failover, every application on that server is going to be offline for the period of the failover. If I have a single application per virtual machine and the VM fails over, only a single application would be down.”

(Wolf talks more about this process in the interview. Has anyone out there tackled multi-purpose server-to-virtualization conversions? If so, please share your experiences with me at jstafford@techtarget.com.)

Physical-to-virtual (P2V) migration

There are several approaches, says Wolf. Some common practices that work in small environments — such as manually staging a VM and migrating the data or relying on a backup product to help with the migration — are not a good fit for larger data center environments. When migrating many servers, use a product designed for that job to do a hot clone of each physical machine.

“Not only does it let me move each VM in a live state, I can schedule when the VMs get converted so I can do a conversion during off-business hours.”

More P2V info can be found here:

SSV’s P2V news and expert advice;
Measuring the success of your server consolidation project.

Got other good P2V links or advice? Let me know: jstafford@techtarget.com.


July 23, 2007  3:16 PM

Is Xen ready for the data center? Is that the right question?



Posted by: Barney Beal
Red Hat, SUSE/Novell, Virtualization, Virtualization management, Virtualization platforms, Virtualization strategies, Xen, XenSource

Article after article and post after post have compared and contrasted Xen, VMware, Viridian and a host of other virtualization technologies, with opinions on performance, management tools, implementations, etc., in abundant supply. Inevitably, when it comes to Xen, the story comes full circle with some sort of declaration about “data center readiness.” The definition of “ready for the data center” is quite subjective, of course, based largely on the author’s personal experience, skills and opinion of the technical capabilities of those managing this vague “data center” to which they refer.

Sadly, most seem to think that IT professionals managing the data center are buffoons who are somehow incapable of working with anything that doesn’t include a highly refined set of GUI tools and setup wizards. Personal experience shines through when an author balks at the notion of editing a text or XML configuration file – a common task for any system administrator. Consequently, a declaration of immaturity is often the result, without regard for the performance or functionality of the technology. In the case of Xen, this is particularly prevalent, as the Xen engine and management tools are distinctly separate. In fact, there are already several dozen management and provisioning tools available or in development for the highly capable Xen engine, in varying stages of maturity.

And yet, I can’t help but think that comparing features of management tools is completely missing the point. Why are we focusing on the tools, rather than the technology? Shouldn’t we be asking, “where is virtualization heading” and “which of these technologies has the most long term viability?”

Where is virtualization technology heading?

To even the most passive observers it has to be obvious that virtualization is here to stay. What may not be so obvious are the trends, the first being integrated virtualization. Within a year, every major server operating system will have virtualization technology integrated at its core. Within a few short years, virtualization functionality will simply be assumed – an expected capability of every server class operating system. As it is with RHEL now, administrators will simply click on a “virtualization” checkbox at install time.

The second trend is in the technology itself: the “virtualization aware” operating system. In other words, the operating system will know that it is being virtualized and will be optimized to perform as such. Every major, and even most minor, operating systems either have or will soon have a virtualization-aware core. Performance- and scalability-sapping binary translation layers and dynamic recompilers will be a thing of the past, replaced by thin hypervisors and paravirtualized guests. Just look at every major Linux distro, Solaris, BSD, and even Microsoft’s upcoming Viridian technology on Windows Server 2008, and you can’t help but recognize the trend.

Which of these technologies has the most long term viability?

Since we now know the trends, the next logical step is to determine which technology to bet on long term. Obviously, the current crop of technologies based on full virtualization, like KVM and VMware (it’s not a hypervisor, no matter what they say), will be prosperous in the near term, capitalizing on the initial wave of interest and simplicity. But, considering the trends, the question should be, “will they be the best technology choice for the future?” The reality is that, in their current state and with their stated evolutionary goals, full virtualization solutions offer little long-term viability as integrated virtualization continues to evolve.

And which technology has everyone moved to? That’s simple – paravirtualization on the Xen hypervisor. Solaris, Linux, several Unix variants, and, as a result of their partnership with Novell, Microsoft will all either run Xen directly or will be Xen compatible in a very short time.

Of course, those with the most market share will continue to sell their solutions as “more mature” and/or “enterprise ready” while continuing to improve their tools. Unfortunately, they will continue to lean on an outdated, albeit refined technology core. The core may continue to evolve, but the approach is fundamentally less efficient, and will therefore never achieve the performance of the more logical solution. It reminds me of the ice farmers’ response to the refrigerator – rather than evolving their business, they tried to find better, more efficient ways to make ice, and ultimately went out of business because the technology simply wasn’t as good.

So then, is Xen ready for the “data center?”

The simple answer is – that depends. As a long time (as these things go, anyway) user of the Xen engine in production, I can say with confidence that the engine is more than ready. All of the functionality of competing systems, and arguably more, is working and rock solid. And because the system is open, the flexibility is simply unmatched. Choose your storage or clustering scheme, upgrade to a better one when it becomes available, use whatever configuration matches your needs – without restriction. For *nix virtualization, start today.

For Windows virtualization, the answer is a bit more complex. Pending Viridian, the stopgap is to install Windows on Xen with so-called “paravirtualized drivers” for I/O. Currently, these are only available using XenSource’s own XenServer line, but they will soon be available on both Novell and Red Hat platforms (according to Novell press releases and direct conversations with Red Hat engineers). While these drivers easily match the performance of fully virtualized competitors, they are not as fast as a paravirtualized guest.

Of course, you could simply choose to wait for Viridian, but I would assert that there are several advantages to going with Xen now. First, you’ll already be running on Xen, so you’ll be comfortable with the tools and will likely incur little, if any, conversion cost when Viridian goes gold. And second, you get to take advantage of unmatched, multi-platform virtualization technology, such as native 64-bit guests and 32-bit paravirtualized guests on 64-bit hosts.

So what’s the weak spot? Complexity and management. While the engine is solid, the management tools are distinctly separate and still evolving. Do you go with XenSource’s excellent yet more restrictive tool set, a more open platform such as Red Hat or Novell, or even a free release such as Fedora 7? That depends on your skills and intestinal fortitude, I suppose. If you are lost without wizards and a mouse, I’d say XenSource is the way to go. For the rest of us, a good review of all the available options is in order.

What about that “long term?”

So we know that virtualization-aware operating systems are the future, but how might they evolve? Well, since we know that one of the key benefits of virtualization is that it makes the guest operating system hardware agnostic, and we know that virtualization-aware guests on hypervisors are the future, it seems reasonable to conclude that most server operating systems will install as a paravirtualized guest by default, even if only one guest will be run on the hardware. This will, by its very nature, create more stable servers and applications, facilitate easy-to-implement scalability, and offer improved performance and manageability of platforms.

As for my data center, this is how we install all our new hardware, even single task equipment – Xen goes on first, followed by the OS of choice. We get great performance and stability, along with the comfort of knowing that if we need more performance or run into any problems, we can simply move the guest operating system to new hardware with almost no down time. It’s a truly liberating approach to data center management.
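
For readers who haven’t seen it, this is roughly what moving a guest looks like through libvirt’s Python bindings. It is a hedged sketch only, assuming libvirt-managed Xen on both hosts, shared storage for the guest’s disks, and invented hostnames and guest names:

```python
import libvirt

# Hypothetical hosts; assumes libvirt-managed Xen on both, shared storage for the
# guest's disks, and working xen+ssh connections (hostnames are invented).
src = libvirt.open("xen+ssh://root@xen-host-a/")
dst = libvirt.open("xen+ssh://root@xen-host-b/")

dom = src.lookupByName("web-01")  # the running guest we want to move

# Live-migrate: the guest keeps running while its memory is copied to the new host.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```

Whether you drive it through libvirt or Xen’s own tools, the point stands: the guest never needs to know the hardware underneath it changed.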


July 20, 2007  3:30 PM

Virtualization security advice: Hypervisors, switches are vulnerable



Posted by: Jan Stafford
Virtual machine, Virtualization security

Beware of hackers attacking virtual machines (VMs) via the hypervisor or virtual switch. These two avenues of attack will probably pose the most problems to IT security managers in virtualized data centers, Burton Group analyst Chris Wolf told me in a recent interview.

Here are some quick takes from that interview, offered as a heads-up about security and management issues one might face with virtual machines. At the end of this post, I’ve put in some links to other resources on virtualization security.

It’s not so easy to compromise each operating system (OS) living within VMs on a server; but an attack on the underlying hypervisor layer in a virtual environment wouldn’t be too hard to accomplish. Such an attack can take down or limit access to several VMs in one fell swoop, Wolf said. Even worse, the hacker could introduce his own virtual machine to a network without the administrative staff knowing about it.

There’s no silver bullet for protecting the hypervisor. The best practice is, of course, keeping it up to date with patches and software updates.
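
Beyond patching, one simple operational check against the rogue-VM scenario Wolf describes is to compare what a host reports against an approved inventory. This is my own illustration, not advice from the interview; it assumes a local libvirt-managed host and an invented inventory list:

```python
import libvirt

# Hypothetical approved inventory; in practice this would come from a CMDB or asset list.
APPROVED = {"web-01", "mail-01", "crm-db"}

conn = libvirt.openReadOnly("xen:///")  # assumes a local libvirt-managed host

running = {conn.lookupByID(i).name() for i in conn.listDomainsID()}
defined = set(conn.listDefinedDomains())  # defined but not currently running

rogue = (running | defined) - APPROVED
if rogue:
    print("Unapproved virtual machines found:", ", ".join(sorted(rogue)))
else:
    print("All VMs on this host match the approved inventory.")

conn.close()
```
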

As for virtual switches, Wolf said:

“Not every virtual switch provides the layer of isolation that it should in comparison to a physical switch. Hardware-assisted virtualization is starting to do a lot to provide more hardware-level isolation between virtual machines, but as of today you really have isolation on the address level, but no isolation currently in terms of memory, and that is something that is coming with forthcoming virtualization architectures.”

Chris Wolf offers more advice on data protection and server virtualization management in this webcast. (It requires registration.)

You’ll find more VM security tips in the article by SearchServerVirtualization.com resident expert Anil Desai on VM security best practices.

Ed Skoudis and Tom Liston give a detailed rundown on Thwarting VM Detection in this white paper. I found it in a post on Stephen R. Moore’s blog. Thanks, Stephen.

If you’ve had any problems with or can offer any advice on virtualization security, please sound off here, or write to me at jstafford@techtarget.com.


July 20, 2007  3:02 PM

Virtualization is a real life game of RISK (A fun analogy)



Posted by: Ryan Shopp
Links we like, Microsoft, Microsoft Virtual Server, Virtualization, Virtualization platforms, Virtualization strategies, VMware

I was cruising the Web just now, trying to find some interesting blogs that aren’t chock-full of code that an associate editor simply does not understand. I clicked on Roudy Bob’s blog (see our blogroll for his link) and, lo and behold, my boredom was alleviated!

To read the following analogy of the virtualization game and the boardgame RISK from the source, visit RoudyBob’s blog.

Enjoy.

“I somewhat miss the days when virtualization was at the fringe of the market and just about everything that came along was new and exciting. Now, it’s a high-stakes game – with hundreds of millions (if not billions) of dollars of software and services to be had for the company that plays it right. Along with maturity come incremental, conservative product releases aimed to grow cautiously while nurturing the existing customer base. Also involved now are the politics and strategy of mergers and acquisitions – not the typical fare for your standard geek. The more I thought about my last post, the more I realized what we’re seeing in the market today is a lot like the RISK game most of us played as a kid. Take, for example, the game board:

Microsoft, VMware, SWsoft, XENSource and other smaller players are trying to carve out their piece of the total virtualization pie. The company that claims the most territory (share of the market) wins. Sure, it’s probably a bit of an obvious analogy to make – but it does provide a little different perspective on things.

“Let’s say for the sake of argument that the virtualization RISK map is laid out like this:

“North America – Data Center Virtualization
South America – Development and Test
Africa – Virtual Infrastructure Management (a.k.a., utility computing)
Europe – Linux Virtualization
Asia – Virtualized Desktops
Australia / Pacific Rim – OS X Virtualization
“Each time we observe the likes of Microsoft and VMware (EMC) opening the war chests to dole out large sums of money for smaller companies doing interesting things, the map shifts a little more in favor of one or the other. New entrants also shake up the dynamics of the map.

“Take the Microsoft acquisition of Softricity, for example – having the ability to virtualize applications on the desktop would significantly advance Microsoft’s position in the Virtualized Desktops arena – a place that has seen little traction to date. Previously, VMware’s ACE product was really the only large player in that game. When VMware acquired Akimbi this month, they definitely made a further push in two areas they are already strong in – Development and Test as well as Virtual Infrastructure Management.

“Continuing the RISK analogy, then, which players occupy the most territory, and where should a company like Microsoft (amazingly the underdog, for once…) focus its efforts?

“I think it’s safe to say that the North American continent, er, the Data Center Virtualization space is occupied in a big way by VMware. The fact that they were first to market with an enterprise-class virtualization product (ESX Server) made it easy to make headway in IT organizations who made the early move to virtualization. The ESX Server product is fairly well positioned to satisfy companies’ urge to consolidate and rationalize their physical servers onto virtual machines. Microsoft’s Virtual Server product, despite the company’s efforts, has made little progress in getting into these larger-scale virtual machine environments. Remember, though, that the first player to advance isn’t always the winner.

“Development and Test is a different story. I think Microsoft has an amazing opportunity to leverage the Windows platform and its broad developer tools offering to really win this part of the market. And, if you want my opinion, that’s a much better strategy for going after Data Center Virtualization than trying to fight an uphill battle against ESX Server. A large presence in this space and some strategic offensive moves to the north (remember the analogy, right!?) could turn the tide away from VMware. Everyone is waiting with eager anticipation for the release of the Windows-based hypervisor due sometime after “Longhorn”. But in a year and a half, the market will have likely left Microsoft behind. I think it’s a very large bet on their part that will most likely not pay off.

“Virtual Infrastructure Management is where all of the major players (and other folks like Altiris, BMC, Acronis, etc.) seem to be focusing these days. And rightly so. Being able to manage a large virtualized infrastructure easily and bring the concept of “utility computing” to reality is a guaranteed way to differentiate yourself. Again, I think VMware has the early lead as its VMotion and VirtualCenter solutions have helped them to garner mindshare in this area. But, products like System Center Virtual Machine Manager, Systems Management Server and Operations Manager from Microsoft give that company at least a way to make inroads.

“This is undoubtedly the biggest portion of the virtualization market (the greatest customer need) and would be the place where I would choose to play if I were an up-and-coming company that wanted to focus on the space. The reason management is so appealing is that there are all sorts of interesting problems to solve – management, monitoring, backup, restore, provisioning, auditing, asset management, etc. And for the most part, they’re problems that customers are willing to spend some money to address. Startups can grow quickly by providing something customers need and folks like VMware, Microsoft and SWsoft can easily differentiate themselves from one another by leveraging the management “story” around virtualization.

“Linux Virtualization, analogous to the Europe of RISK, is where companies like SWsoft with their Virtuozzo product and XENSource with their Xen product have dominated. Sure, VMware Workstation and VMware Server both run on Linux and the ESX Server hypervisor is based on it. But, in terms of catering to the needs of the open source community and the requirements of large-scale hosting providers running Linux, the Virtuozzo and Xen products have the most traction. SWsoft used their Virtuozzo for Linux product as a foothold into the broader Windows market when it released Virtuozzo for Windows. And Xen is scrambling to provide Windows guest OS support based on the new virtualization support in the latest generation of Intel processors. Your starting position on the game board doesn’t dictate the outcome, just the strategy.

“The biggest untapped market for virtualization has to be leveraging virtualization as part of the end user experience on the desktop. VMware’s ACE product was the first to focus on this, but no one company – even VMware – has seemed to get any traction. The potential opportunity for an interesting solution to problems like mobile workforce empowerment, workstation security, etc. is enormous. The sheer numbers dictate that a successful solution could yield impressive financial returns.

“Ironically, Microsoft is probably best positioned to do something in this space and hasn’t. There are plans for providing Virtual PC capabilities to enterprise Vista customers, but in reality this is just more of the same. What if users could run their browser in a seamless window running as part of a background virtual machine that was isolated from the corporate network? What if the applications and user data for a workstation PC were somehow virtualized so that users could move easily between different pieces of hardware? These are some of the possibilities that Microsoft could start to address by leveraging its Windows monopoly on the desktop and the pervasiveness of centralized management solutions like Active Directory and Group Policy. And their “innovation” in this area is to bundle a couple of licenses together and call it Virtual PC Express.

“Lastly, there’s the OS X Virtualization market. In the game of RISK, completely occupying Australia is one way to gain an advantage early – leveraging the additional armies provided by controlling the entire continent. As far as virtualization is concerned, I don’t think owning the Mac market is going to yield any huge advantage in areas like Data Center Virtualization or Virtual Infrastructure Management. It’s still an interesting space – especially with the switch to Intel-based Macs. What was once dominated by Microsoft’s Virtual PC product is now up for grabs again, with products like Parallels Workstation for OS X gobbling up early adopters who bought new Intel-based machines and want to virtualize Windows. Apple may also have a play here if rumors are true that they are looking to integrate virtualization into the next version of the OS X operating system.”

Well done, Roudy Bob!


June 29, 2007  5:31 PM

New open source business models based on Xen



Posted by: Jan Stafford
Virtualization, Xen

Editor: This is a post by Simon Crosby, Xen Project leader. In the first sentences he refers to a previous blog posting on this site.

I wanted to re-phrase some key points from my earlier blog posting on this topic (which I have withdrawn) because I failed to tease out and succinctly articulate the core argument, and in doing so unintentionally aroused the ire of some in the community.  Thanks to those who offered feedback — you were right, and I stand corrected.  Let me try to get it right.

Novell’s announcement of its Windows driver pack for the Xen hypervisor implementation in SLES is interesting because it both challenges the existing business models of the Linux distros and offers them previously inaccessible opportunities through the delivery of mixed-source offerings.

When Linux was just Linux, and not capable of virtualizing other operating systems, the concept of the Linux OSV Supporting the OS and all of the open source components in the app stack that they deliver as part of the distro was straightforward.  The business model of the major distros is based on their ability to Support (that is, take a phone call from their customer, and deliver fixes where necessary) any of the technology they deliver in their product (whether or not they originally developed it). An open source product philosophy enables them to develop, debug and build expertise in the entire stack that they deliver. 

But with virtualization as an integral component of the distro (whether Xen, KVM or one of the other open source virtualization technologies), Linux is only one (arguably the key) component of the stack, and when a different OSV’s product is virtualized on Linux (Windows, perhaps, or another open source OS),  two new opportunities emerge: First, a Linux OSV can extend its value proposition to its customers by offering to Support other open source OSes virtualized; and second, by adding to their offerings the requisite closed source add-ons such as the Novell Windows Driver Pack for closed source OSes, the distros can artfully deliver high value mixed-source offerings that “price to value”, and protect themselves from the kind of discounting attack that Oracle used on Red Hat.

Both Novell and Sun have announced their intention to support their customers’ use of other open source operating systems virtualized on their implementation of the Xen hypervisor.  Thus, one might expect Novell to see new business opportunities to support competitive Linux distros on SLES, and in so doing give customers a migration path to SLES as an OS while leveraging SLES and the hypervisor to virtualize existing competitive Linux installations in use by the customer.

The fact that Linux, BSD and OpenSolaris source code is available to the virtualization vendor, and the fact that the key vendors and communities behind those OSes work within the context of the Xen project to develop a common open source “standard” hypervisor, means that from a virtualization perspective at least, all are compatible with the same hypervisor ABI, hopefully reducing any support complexity.  Thus far, only Red Hat has maintained a steady focus on RHEL alone, and possibly on future Windows support.

The possible move by the Linux OSVs toward the delivery of mixed-source offerings is extremely important.   Upcoming releases of NetWare and OES to run on SLES/Xen give Novell an important opportunity to price to value, specifically because the mixed-source nature of the combined product contains IP, and the market is good at determining the value of such things.  Contrast this with the traditional open source business model, in which there is no IP, but the vendor markets a high value brand (such as RHEL or SLES) and an associated service offering.

This is vulnerable to attack by lower labor cost and/or competitive offerings – a problem that the mixed-source offering does not seem to me to have.

It is a specific goal of the Xen Project to develop an open source “engine” that can be delivered to market by multiple players, in multiple products.   Virtualization of closed source OSes forces (in the case of Windows) the delivery of closed source value-added components that are not part of the core hypervisor itself.   The value-added components that vendors must add to the “engine” in order to deliver a complete “car” to their customers allow them to differentiate their products, and give customers choice.   By contrast, had it been the Xen project’s goal to deliver a complete open source “car”, there would be no value proposition for the different vendors seeking to add virtualization to their products, and it would put Xen in conflict with the Linux OSVs — some of the most important contributors to the project.

The nutshell: I think that Xen has pioneered a new model of open source business – one which uses open source as a reference standard implementation of a component of the offering, but which stops short of a whole product.  This encourages multiple vendors to contribute, because adopting that model allows them to add value to the final product and be compensated for it.
 

