The Virtualization Room

Apr 28 2008   2:16PM GMT

Is hypervisor-based virtualization doomed?

Keith Harrell Profile: SAS70ExPERT

The following is a guest blog written by Schorschi Decker, an IT professional specializing in virtualization and enterprise-level management with over 25 years of experience in the industry.

Operating system isolation, or hypervisor-based virtualization, remains popular, but are we settling for less than we should? Hiding its limitations behind modest incremental gains, hypervisor-based virtualization persists because it continues to conceal an ugly secret: poor-quality code.

Many who have worked with hypervisor-based virtualization may already know this, but anyone who has attempted to implement application instancing has undoubtedly seen where hypervisors fail. Replicating the operating system within each virtual instance is waste, waste driven by bad code. Faster cores, more cores per package, limited improvements in memory and device bus design, marginal gains in mechanical drive design, and shared storage models have all helped mask how inefficiently hypervisors utilize processors.

If customer adoption rates are an indicator of success, past attempts at application instancing have not succeeded to any consistent degree (there is not even a buzzword for an application instancing method). To be clear, homogeneous applications have benefited, such as Microsoft SQL Server and IIS, Oracle, and even Citrix. In the case of Citrix, however, application instancing has been environment-dependent to a degree.

Resource management within a common operating system instance has not changed significantly since the introduction of mainframe logical partitions (LPARs). Solaris Zones follow a container-based model, whereas AIX micro-partitions follow a truer application instancing model. Even Apple Computer introduced simple memory partitioning in Macintosh System 7.x. DEC (yes, Digital Equipment Corporation) leveraged Microsoft's Job Engine API, effectively a processor affinity layer, in a groundbreaking concept product that Compaq buried. Does anyone remember that product?
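The details of that DEC product are lost to history, but the general technique survives in the Windows job object API. Below is a minimal, hypothetical sketch of the kind of "processor affinity layer" described above; the job name and CPU mask are illustrative choices of mine, not anything documented from the DEC product.

```c
/*
 * Hypothetical sketch: constraining a process tree with the Windows
 * job object API, i.e. a processor affinity layer for one
 * application instance. Job name and affinity mask are illustrative.
 */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Create a named job object to hold one application instance. */
    HANDLE hJob = CreateJobObject(NULL, TEXT("AppInstanceJob"));
    if (hJob == NULL) {
        fprintf(stderr, "CreateJobObject failed: %lu\n", GetLastError());
        return 1;
    }

    /* Pin every process in the job to CPUs 0 and 1 only. */
    JOBOBJECT_BASIC_LIMIT_INFORMATION limits = {0};
    limits.LimitFlags = JOB_OBJECT_LIMIT_AFFINITY;
    limits.Affinity   = 0x3;  /* bitmask: processors 0 and 1 */
    if (!SetInformationJobObject(hJob, JobObjectBasicLimitInformation,
                                 &limits, sizeof(limits))) {
        fprintf(stderr, "SetInformationJobObject failed: %lu\n",
                GetLastError());
        return 1;
    }

    /* Assign the current process (and its children) to the job. */
    if (!AssignProcessToJobObject(hJob, GetCurrentProcess())) {
        fprintf(stderr, "AssignProcessToJobObject failed: %lu\n",
                GetLastError());
        return 1;
    }

    printf("Process now confined to CPUs 0-1 by the job object.\n");
    CloseHandle(hJob);
    return 0;
}
```

Note how little machinery this takes compared to a hypervisor: the instance is constrained, but no second operating system is booted.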

The hypervisor foundation grew out of the failures of heterogeneous application partitioning. On Windows, application instancing has stalled at times or has otherwise been overshadowed by operating system instance isolation techniques. Windows SRM is a weak attempt to crack the hypervisor foundation, but it is so immature at this point that it is useless. Microsoft SoftGrid, now Microsoft Application Virtualization, has greater potential but is not yet widely accepted. Should Microsoft provide it for free to drive acceptance?

The technology industry has attempted some rather interesting implementations to offset the impact of operating system instance isolation, for example, thin-disking and image sharing, both of which reclaim the under-utilized space in disk partitions. Several attempts at addressing the DLL and .NET issues (e.g., Microsoft SoftGrid, as well as Citrix) have been implemented to support heterogeneous application instancing, but they mask the true issue that has always existed: the lack of quality code.
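For readers unfamiliar with thin-disking: at bottom it relies on allocate-on-write storage. Here is a minimal sketch of the underlying mechanism using a plain POSIX sparse file; the file name and sizes are my own illustrative choices, and commercial products layer virtual disk formats on the same idea.

```c
/*
 * Minimal sketch of the mechanism behind "thin-disking": a sparse
 * backing file whose logical size is large but whose blocks are
 * allocated only when the guest actually writes. File name and
 * sizes are illustrative, not from any particular product.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const off_t logical_size = 20LL * 1024 * 1024 * 1024; /* 20 GiB */

    int fd = open("guest-disk.img", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Set the logical length without allocating any data blocks. */
    if (ftruncate(fd, logical_size) != 0) { perror("ftruncate"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    /* st_size reports 20 GiB, but st_blocks (512-byte units) stays
       near zero until the "guest" writes -- that gap is the saving. */
    printf("logical: %lld bytes, allocated: %lld bytes\n",
           (long long)st.st_size, (long long)st.st_blocks * 512);

    close(fd);
    return 0;
}
```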

Why do I make this point? Because the hypervisor is essentially a band-aid on the boo-boo of bad coding. Quality code makes for stable environments. In a stable and predictable environment, applications can run without fear of crashing, and it is this fear that gives hypervisor virtualization its strength.

Did someone just say “operating system isolation”? Case in point: the recent Symantec Antivirus issue with VMware ESX. Code quality is going to become a green issue, just as watts per core and total power consumption have in the data center. Enterprise customers who purchase significant code-based products will demand better code as a way to reduce non-hardware-oriented costs. Just how many lines of executed code are redundant processing when hypervisor-based virtualization is leveraged? Billions? Wake up and smell the binary-generated ozone! Those cycles cost real money and introduce a very large surface area for bug discovery.
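Is “billions” hyperbole? A back-of-envelope estimate suggests it is conservative. The figures below (guests per host, guest timer frequency, instructions per tick) are assumptions of mine, not measurements:

```c
/*
 * Back-of-envelope estimate: even an idle guest OS services periodic
 * timer interrupts. Every input here is an assumption, chosen only
 * to show the order of magnitude.
 */
#include <stdio.h>

int main(void)
{
    const double guests           = 40;    /* VMs per host (assumed) */
    const double ticks_per_second = 1000;  /* guest timer frequency (assumed) */
    const double instrs_per_tick  = 2000;  /* interrupt handler cost (assumed) */
    const double seconds_per_day  = 86400;

    double per_day = guests * ticks_per_second * instrs_per_tick
                   * seconds_per_day;
    printf("~%.2e redundant instructions per host per day\n", per_day);
    /* Prints ~6.91e+12: trillions per host per day, before any real work. */
    return 0;
}
```

Under these assumptions, idle housekeeping alone is in the trillions per host per day; "billions" understates it.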

Poor software quality makes hypervisor-based virtualization more expensive than it should be and the publishers of operating systems love it. After all, the total number of operating system licenses purchased has not gone down with hypervisor virtualization. The industry has known for years that poor quality software has been an issue. One radical solution is to hold software publishers to a higher standard, but that idea has not gained enough grassroots support – yet. When it does, the hypervisor will be history.

8 Comments on this Post

 
  • SAS70ExPERT
    I think your argument definitely has merit, but don't blame the technology just because of the implementation. Hypervisor-based virtualization brings a tremendous number of benefits, most notably management. You have to be able to manage virtualization in the server data center, and today that's via the hypervisor. There's no better access to data for virtual operating systems than through their parent running platform. Might we someday get to a SoftGrid model for server-based applications? Maybe, but we're not even close to it today. There are advancements being made in building more intelligent hypervisors, such as Hyper-V and ESX 3i. I do agree that the hypervisors most people are using on their desktops today, with Virtual Server, VMware Desktop/Workstation/Server, and Parallels, leave a lot to be desired. But again, don't blame hypervisor technologies just because they've been poorly implemented today. Let's think about how we can perfect this technology and reap the benefits rather than jump to killing it. -Alan
  • SAS70ExPERT
    This sounds like biting the hand that feeds you. Xen, Virtual Iron, ESX, Hyper-V... they're all junk? I have a hard time accepting that analysis. I am not saying that he is wrong about the bloated code, but what is the alternative, rewriting all the code in assembly language? Virtualization is now an essential part of a datacenter infrastructure. Whether for Dev, QA, Training, or Production servers, these technologies allow more efficient use of hardware and power, and provide high availability.
  • SAS70ExPERT
    This is my 44th year in mainframe technology. IBM announced mainframe VM in 1972 and it has been running on IBM mainframes ever since, alongside z/OS, VM itself, VSE, TPF (the old airlines control program), Linux, and many others. All aspects of the hardware, software, and I/O have been fully virtualized in IBM's VM for years and are in use today. IBM used VM in the past to test new systems before rollout - and that continued into the present with the rollout of their z10 machine this past February. Every modern IBM mainframe today runs under the PR/SM hypervisor (Processor Resource/Systems Manager), which is, in reality, a no-charge specialized version of IBM's VM that loads automatically at POR (Power On Reset) - which many of you call "boot up" - from the mainframe's Service Elements (ThinkPad PCs), yet the PR/SM overhead is about 1% in our mainframes. I don't think anyone in my world thinks VM is a waste of time or effort, least of all IBM. It has the functionality many just dream about today - and it also has at least 30 years of development, testing, and debugging behind it. One final comment: "It works."
  • SAS70ExPERT
    Well, I guess that along the same lines we could even say: 1) in Scotland you wouldn't need an umbrella if it wasn't raining.... or 2) if I could walk at 130 Km/hour I wouldn't need a car, or 3) etc etc etc. Point is that in Scotland... it rains (a lot), and I can only walk at about 3 Km/hour (sustained).. so we need umbrellas and cars to overcome facts of nature. We all know that x86 operating systems are not anywhere close to the quality of other proprietary OSes on different platforms.... this is another fact.... and "asking" the OS vendor to "fix them" is not going to work. How can an OS (read Windows) that is able to run Flight Simulator as well as a backend 32-socket SQL database at the same time be engineered to be secure and reliable from the ground up? It's certainly flexible and cheap.... and that's pretty much it.... there is no secret sauce for building something that is as flexible as Windows and as reliable as z/OS.... it's either one or the other. Your choice. That's why I (personally) think that it's not the hypervisor that is doomed in the long run..... but the operating systems that you are referring to. This is my side of the story: http://it20.info/files/3/documentation/entry54.aspx http://it20.info/blogs/main/archive/2007/06/25/32.aspx Massimo. P.S. Oh, btw.... >Poor software quality makes hypervisor-based virtualization >more expensive than it should be and the publishers of >operating systems love it Well... ask MS if they are happy about having VMware around.... ;-)
  • SAS70ExPERT
    "Case in point, the Case in point, the recent Symantec Antivirus issue with VMware ESX OS". Eh? What? You can't just have a big rant and throw in a comment like that without hyperlinking it to additional info. I've been Googling that comment and can't find any articles relating to a "recent Symantec Antivirus issue with VMware ESX OS". Clarify please!
  • SAS70ExPERT
    Schorschi, Nice article and quite informative, especially the references to little-known leading-edge technologies like DEC's MS Job Engine API, which predated VMware. There is not much new under this sun. BTW: DEC is actually Digital Equipment Corp (not Digital Electronics Corp.). People oft forget the once-#2 IT company whose legacy has greatly influenced the shape of technology today; unfortunately they were not as good as M$ at telling the world. HP did more to bury the DEC legacy than Compaq.
  • SAS70ExPERT
    This is only my 39th year in IT, the first 25 of them as an IBM employee (now retired). I joined IBM Australia in 1970, the year that System/370 was announced, this range of systems being amongst the very first to make widespread use of VM as a fundamental part of the architecture (as referred to by Dick Yexek just above). Then in 1978 I had my first visit to the IBM Rochester (Minnesota) Development Lab and became seriously involved with the IBM System/38, a radical new machine architecture which built VM capabilities into the very hardware and microcode itself. This architecture lived on as the IBM Application System/400 -- or just AS/400, with an operating system called OS/400 -- then the iSeries, then the System i, and as of a month or two ago simply the "i" brand (I think). Whatever the branding, IMHO this second very clean IBM virtual machine architecture also puts the other VMs and hypervisor offerings to shame!
  • SAS70ExPERT
    You seem to presume the motivation for hypervisor virtualization is fault isolation. In my world, the primary motivation is to decouple the software platform (OS + middleware, etc.) and change-management requirements of two applications. AppA wants OS version 1.2.3, but AppB wants OS version 1.2.4 plus patch XYZ, or even a completely different OS family (e.g. Windows and Linux). Or even if we can get the suppliers of AppA and AppB to deliver code that can peacefully co-exist on a common platform, AppA wants to upgrade next month to a new version that has new platform requirements, while AppB can't accommodate a corresponding upgrade until next year.
