VMware is reaching out to Virtual Iron users, following Oracle’s decision to kill off the Virtual Iron product line last week. (Oracle acquired Virtual Iron in May.) As my colleague Alex Barrett reports today, VMware is offering Virtual Iron customers 40% off the list price of vSphere and vCenter.
The offer is an apparent attempt to keep Virtual Iron’s customers from moving to Oracle VM (which is what Oracle wants them to do). But that raises the question: Why does VMware care?
There are only about 2,000 Virtual Iron customers out there, and Virtual Iron had less than a 2% share of the market. On top of that, Oracle’s market share is even smaller, by most estimates.
Even if VMware were to successfully woo all 2,000 Virtual Iron customers, their effect on VMware’s market share and profits (especially with this discount) would be minimal. And on the flip side, even if they were all to move to Oracle, Oracle would still be nothing more than a bit player in the market.
VMware clearly sees Larry Ellison’s deep pockets as a threat. It’s good to recognize that, but letting it become a distraction — like this Virtual Iron discount program — could lead to problems down the line.
And since I’ve now put the song in your head, here’s Tom Petty and the Heartbreakers with “Refugee”:
[kml_flashembed movie="http://www.youtube.com/v/gJ-bhM-xuec" width="425" height="350" wmode="transparent" /]
We still don’t know what Oracle’s plans are for the Virtual Iron technology it acquired in May, but in light of these developments, a much bigger question is arising about a much bigger acquisition: Will Oracle kill off Sun Microsystems’ virtualization line too?
To answer this question, I did what any responsible journalist would do. I consulted the Magic 8-Ball. Its response?
On one hand, the future doesn’t look good for Sun virtualization. In the wake of the acquisition by Oracle — which is expected to close this summer — Sun has backed down on its plans to offer its xVM Server as a standalone hypervisor. It will only be available as part of the xVM Ops Center management console or the OpenSolaris operating system (but not the commercial version of Solaris).
Judging from this move, you’d think Sun sees the writing on the wall: Oracle wants to present a unified virtualization front, and all these different products from three different vendors won’t help meet that goal.
But on the other hand, Sun just yesterday released its VirtualBox 3.0 virtualization software. Why would Sun execs go through all the trouble of putting out a new release if they thought Oracle was just going to squash it in a few months?
It’s all very confusing for the three companies’ customers and partners. Oracle’s relative silence about the acquisitions isn’t helping matters. (Instead of responding to our questions about Virtual Iron’s future, the company just emailed us a link to its specifics-deficient Virtual Iron FAQ.)
Oracle knew what its plans were for both Virtual Iron and Sun when it bought them. Whether the Magic 8-Ball says “signs point to yes” or “outlook not so good,” any sort of definitive answer would be welcomed.
I received a lot of feedback on this blog from people defending VMware, and thought, why not get some Hyper-V users to talk to me about the product – how it performs, its related management tools, features, etc. I asked Microsoft’s press team to send some users my way for interviews, and about a week later Microsoft’s “Rapid Response” team sent me a couple of links to case studies.
Thanks, but I would like to interview some users myself, outside of Microsoft filters. How about at least sending me the contact info for the users profiled in these case studies?
Microsoft’s response was, “Unfortunately regarding direct contact information for the Hyper-V case studies, we have no further information to share.”
This strikes me as odd, because Microsoft’s competitors, from VMware down to smaller virtualization companies like Virtual Iron, refer me to real users to interview about their products.
Does this mean that Microsoft doesn’t have the same level of product confidence as the competition? VMware has offered plenty of customer references, and while those users do complain about the acquisition cost of VMware’s software, I don’t think I’ve heard any serious gripes about the product itself.
So I’m interested in hearing from Hyper-V users about the product’s performance because, as users and analysts have said, Microsoft won’t sail past VMware on price alone.
I think that Alex and the editorial staff did a great job with selecting products, but I thought I would take a moment to highlight some vendors with excellent products that did not make the list. After all, it’s just as much fun to debate the vendors that were not recognized as the ones that were.
Yes, VMware’s on the list, but at the same time they’re not on the list. If you didn’t notice, VMware ESX Server 3.5 is nowhere to be found in the article. The SearchServerVirtualization.com editors informed me that ESX 3.5 missed the cutoff date for award consideration (November 30th), and therefore wasn’t eligible. Editors do need time to work with a released product in order to make a fair judgment, so I understand the reasoning for the cutoff. Still, ESX 3.5 was a significant release from VMware, with features such as Storage VMotion adding significant value to VMware deployments.
Novell quietly had a great 2007 from a virtualization product perspective. Novell was right behind Citrix/XenSource in achieving Microsoft support for its Xen-based virtualization platform, and it was pushing the innovation envelope throughout the year. Novell was the very first virtualization vendor to demonstrate N_Port ID Virtualization (NPIV) on its Xen platform, and it was even showing its work with the Open Virtual Machine Format (OVF) last September at its booth at VMworld. When you factor in Novell’s work on its heterogeneous virtualization platform management tool, ZENworks Virtual Machine Manager, you’re left with a pretty nice virtualization package. The vendors mentioned in the virtualization platform category (VMware, Citrix/XenSource, SWsoft) are all worthy of recognition, and I think it’s equally fair to recognize Novell’s work in 2007. Perhaps Novell’s heavy lifting in 2007 will result in recognition in 2008; however, it’s safe to say that Novell will have some stiff competition from VMware, Citrix/XenSource, Microsoft, Sun, Parallels, and Virtual Iron.
I think it’s hard to leave Symantec Veritas NetBackup 6.5 out of the discussion. In fact, among backup products, I’d rank it first, right alongside CommVault. Symantec was the first major backup vendor to announce support for Citrix XenServer backup, while all other backup products officially supported just one virtualization platform – VMware ESX Server. The NetBackup team was also very innovative with VMware Consolidated Backup (VCB): NetBackup 6.5 includes the capability to perform file-level recoveries from VCB image-level backups. Typically, a backup product performs two VCB backup jobs – an image-level backup for DR purposes, and a file-level backup for day-to-day recovery tasks. NetBackup 6.5 can do both in a single pass, which I found pretty innovative. Factor in data deduplication (extremely valuable considering the high degree of file redundancy on VM host systems), also available in NetBackup 6.5, and it’s hard to see how NetBackup could be ignored.
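For what it’s worth, the single-pass idea is simple to sketch. Here’s a toy illustration in Python (the paths and helper are hypothetical, and this is emphatically not NetBackup’s actual implementation) of one walk over a mounted VM image producing both an image-level archive and a file-level catalog:

```python
import hashlib
import tarfile
from pathlib import Path

def single_pass_backup(vm_mount: Path, image_out: Path, catalog_out: Path) -> None:
    """One walk over a mounted VM image yields both artifacts: an
    image-level archive (for DR) and a file-level catalog (for
    day-to-day restores), instead of two separate backup jobs."""
    entries = []
    with tarfile.open(image_out, "w:gz") as image:
        for path in sorted(vm_mount.rglob("*")):
            if not path.is_file():
                continue
            rel = path.relative_to(vm_mount)
            image.add(path, arcname=str(rel))  # the image-level stream
            digest = hashlib.sha1(path.read_bytes()).hexdigest()
            entries.append(f"{digest}\t{path.stat().st_size}\t{rel}")
    catalog_out.write_text("\n".join(entries))  # the file-level catalog

# Usage (hypothetical mount point for a VM snapshot):
# single_pass_backup(Path("/mnt/vcb/vm01"), Path("vm01.tar.gz"), Path("vm01.catalog"))
```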
SteelEye is another vendor in the data protection category that I’m surprised did not make the list. VMware HA by itself will not detect an application failure and initiate a failover in response; it’s primarily designed to monitor and react to hardware failures and some failures within the guest OS. SteelEye LifeKeeper, on the other hand, provides automated VM failover in response to application and service failures (in addition to guest OS and physical server failures). Many failures are software-specific, and products that can automate VM failover or restarts in response to software failures go far to improve the availability of VMs in production. (There’s a sketch of this distinction below.)

I’m limiting my comments to the award categories, so I’m only listing some of the products I worked with in 2007 that fit into one of the SSV categories. I hope that for the 2008 awards, we’ll see a higher number of award categories, so that all products in the virtualization ecosystem are represented.
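About that sketch: the essential difference is that the probe watches the application inside the guest rather than the host’s heartbeat, which is exactly the failure mode a hardware-level monitor won’t catch. (This is illustrative Python with made-up endpoints and a stand-in restart_vm() helper, not anything LifeKeeper-specific.)

```python
import socket
import time

# Made-up inventory: guest VM name -> an application endpoint inside it.
WATCHED = {"mail01": ("10.0.0.21", 25), "web01": ("10.0.0.22", 80)}

def service_alive(host: str, port: int, timeout: float = 3.0) -> bool:
    """Probe the application itself, not the host's power state."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def restart_vm(vm: str) -> None:
    # Stand-in for a call to the virtualization platform's management API.
    print(f"[failover] restarting {vm} (or failing it over to another host)")

while True:
    for vm, (host, port) in WATCHED.items():
        if not service_alive(host, port):
            restart_vm(vm)  # a hardware-level monitor would see a healthy host here
    time.sleep(30)
```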
Do you agree with the editors’ choice of winners? Which deserving vendors do you feel were left off the list? I’d love to hear your thoughts.
If you haven’t seen Mike DiPetrillo’s latest blog post, “VMware Patch Tuesday,” it’s definitely worth a few minutes of your time. Mike’s post contrasts patch management on the ESX hypervisor with that of competing platforms. I think the picture DiPetrillo paints is much darker than reality, at least for Windows hosts: a given Windows Server 2003 host will not require every available patch (many are service-specific), and not all updates require a reboot. Reboot requirements will diminish further in Windows Server 2008, thanks to hot patching support.
That being said, Mike’s latest post is about much more than VMware’s patch management strategy. Instead, consider it the start of the VMware Offensive. In 2007, VMware for the most part smiled and waved at their competition. That’s not going to be the case in 2008. Citrix, Microsoft, Novell, SWsoft, Sun, Oracle, and Virtual Iron all have plans to chip away at VMware’s market share, and rather than ignoring their competitors, I expect VMware to be much more aggressive at highlighting what makes their approach to virtualization different from the competition.
Read the rest of this post at Burton Group’s Data Center Strategies blog.
Then there’s always the price war Virtual Iron started with VMware. Virtual Iron is not kidding when it says its prices are 20% of the cost of VMware’s VI3 Enterprise. Couple that with the fact that VMware still can’t manage to get the SKU out for its Mid-Sized Acceleration Kit, and Virtual Iron has a strong chance of remaining a serious (if small) competitor to VMware over the long term. In the end, this can only be good for customers in the smaller enterprises that Virtual Iron targets. With the backing of Intel, AMD, and PlateSpin, plus the OEM alliances VI has made (HP and IBM offer Virtual Iron and VMware on their hardware), Virtual Iron is looking strong in the face of all comers – Citrix and VMware included.
What about Viridian? I’m still waiting on that… given what I think of Virtual Server (nice toy), Vista (insert expletives here), and Server 2k8 (hyper-hype), I’m nowhere near convinced that Microsoft will put out a real hypervisor to compete with VMware or Xen. Truthfully, I’m more interested in what Phoenix is doing… but that’s for another blog. Time will tell.
Is VMware a better product? Yes: it’s far more mature and has a much greater support base, and it’s not limited the way Virtual Iron is by Xen’s requirement for newer AMD or Intel virtualization-friendly CPUs to run Windows natively. But I think the real question is this – Is VMware a superior product? On that, I’d have to say no – the little Xengine That Could has caught up quickly, serves similar markets, and beats VMware on price.
“While having the technology is one thing, bringing it to market is an entirely separate issue. This is where the Citrix acquisition makes great sense for XenSource. Financially fueled by Citrix, XenSource now has the financial clout, sales, and channel resources to go after the large stake of unclaimed virtualization market share in the enterprise. Don’t get me wrong. This will not be easy, as Citrix and XenSource are competing against powerhouse vendors with strong sales, channel, and IHV partnerships. VMware, Microsoft, Red Hat, and Novell are well established in the enterprise, and are all looking to add to their share of the market. Virtual Iron has been making a lot of noise in the SMB space lately, and they should see the explosion of the XenSource sales channel as a serious threat.”
Wolf sees the acquisition as a win for Citrix and Xen and for users, too.
“In the coming months and years, we should expect to get enterprise-class virtualization technologies at lower costs, with more features, and a motivated group of vendors that are eager to push innovation to remain competitive.”
Read his blog in its entirety on the Burton Group Data Center Strategies blog.
[kml_flashembed movie="http://www.youtube.com/v/H5I5TAGZ-1w" width="425" height="350" wmode="transparent" /]
Virtual Iron's Mike Grandinetti provides insights about the synergies between virtualization, blades, server consolidation and iSCSI in this interview with Jan Stafford, SearchServerVirtualization.com's senior site editor.
DISCLAIMER: I’m not a lawyer. In fact, I don’t ever even want to be a lawyer. I’m happy as an IT Director and SysAdmin, and don’t want to ever be a source of legal advice. The below is informed opinion, not legal advice. Tell it to the Judge.

Under the recently-relaxed “obviousness” rule that governs patents, a patent is useless if the idea behind it is only an obvious improvement over an existing idea. eGenera’s patent seems to fall squarely under that rule, judging from the language in its patent application. Just for kicks, I read it, and will quote it. Under the quotes, I’ll comment on what each section means, in my not-so-humble and not-so-attorney opinion.
It starts out with a SCREAMING cry of obviousness in section 1…
“A method of providing processing resources to respond to a fail-over condition in which a primary site includes a configuration of processing resources, comprising: generating a specification that describes a configuration of processing resources of the primary site; providing the specification to a fail-over site having a configurable processing platform capable of deploying processing area networks in response to software commands; using the specification to generate software commands to the configurable platform to deploy processing resources corresponding to the specification; wherein the processing resources at the primary site include a plurality of independent processing area networks and wherein the specification describes only a subset of the plurality of independent processing area networks. “
That, my friends, is commonly known in the IT field as a failover cluster. The link even defines the N+1 method that eGenera is using in its product. The short of it: you have multiple boxes on a network that are mirrors of one another. One fails, another takes over its role. There’s usually hardware or software in between that keeps things synchronized and detects the failure. This part of the patent is worded to be host-, network- and processor-inclusive, which would be obvious, because most clusters are situated on networks, don’t necessarily need to run the same processors, and are hosts. The “big” improvement is in the use of the term “site” – the product is meant to restore an entire data center’s configuration. Per the press release, this means that if you have four data centers and one disaster site, and any one data center fails, the disaster site takes on the complete configuration of the failed site (i.e., all nodes, network configurations, etc.). This is a huge step forward in disaster recovery, but it’s not patent-worthy, because there are a zillion ways to do this.
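For the record, the mechanism really is simple enough to sketch in a few lines. Here’s the heart of an N+1 cluster in illustrative Python (the promote() hook is a stand-in for the storage, IP, and service re-pointing a real cluster product does):

```python
import time

HEARTBEAT_TIMEOUT = 10.0  # seconds of silence before a node is presumed dead
last_seen = {"node-a": time.time(), "node-b": time.time()}  # the "N" active nodes
spare = "node-spare"  # the "+1"

def record_heartbeat(node: str) -> None:
    """Called whenever a node checks in over the network."""
    last_seen[node] = time.time()

def promote(standby: str, failed: str) -> None:
    # Stand-in: a real cluster re-points storage, IPs, and services here.
    print(f"{standby} is taking over the role of {failed}")

def check_cluster() -> None:
    """One failure, one promotion: the essence of N+1."""
    global spare
    now = time.time()
    for node, seen in list(last_seen.items()):
        if now - seen > HEARTBEAT_TIMEOUT and spare is not None:
            promote(spare, node)
            last_seen[spare] = now  # the spare is now an active node
            del last_seen[node]
            spare = None  # a single spare only covers one failure
```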
Here’s one of those zillion ways: if you put 100% of your data center(s) onto VMware’s VI3 with shared storage, and had a huge WAN pipe between each site to cover the overhead, you would have this “network in a box” N+1 approach, because ESX provides virtual network switching and virtual guest machines, without your having to worry about the value of N except in scalability terms. The same is true of most Xen-based products, like Virtual Iron. I’ve been doing this for years on a much smaller scale. If my main data center drops off the face of the earth, I’ve got all of my critical systems in VMware, with snapshots and backups of the machines and the storage they’re on, as well as the configuration of the virtual switches. If the worst happens and my data center goes down, my staff and I drive over to a remote office, restore, and have exactly what eGenera is talking about – a complete restoration of all configurations at a remote data center. The method: backing up virtualized systems. The process: recovery to virtualized systems. It’s not as slick as an automated system, but we’re getting to the point that eGenera talks about in its patent (thanks to an open source project called Openfiler and some extra bandwidth on our WAN to handle sending snapshots between sites rather than driving tapes around).
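Here’s roughly what that automation will look like when we finish it: Python driving rsync, in this sketch. The host names, paths, and the register_and_boot() helper are all hypothetical (our real setup ships Openfiler snapshots, not these directories):

```python
import subprocess
from pathlib import Path

SNAPSHOT_DIR = Path("/vmfs/snapshots")  # where the nightly VM snapshots land
DR_TARGET = "drsite:/vmfs/replicas"     # the remote office, across the WAN

def replicate_snapshots() -> None:
    """Push last night's snapshots over the WAN instead of driving tapes."""
    for snap in sorted(SNAPSHOT_DIR.glob("*.snap")):
        subprocess.run(["rsync", "-az", "--partial", str(snap), DR_TARGET],
                       check=True)

def register_and_boot(vm: str) -> None:
    # Stand-in for the virtualization platform's management API.
    print(f"[DR] registering and powering on {vm}")

def switch_over(vms: list) -> None:
    """On a site failure: bring the replicated VMs up at the DR site."""
    for vm in vms:
        register_and_boot(vm)

# Nightly: replicate_snapshots()
# On disaster: switch_over(["dc01-ad", "mail01", "erp01"])
```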
Soon, the site backups will be done automatically over WAN links, meaning that when something fails, I switch over to the DR network and everything comes back online from the virtual machine snapshots and backups. It won’t be long after that until we automate the switchover itself, and have exactly what eGenera is describing. It’s been a long process, because that deep a level of DR isn’t a critical requirement for the business, but it was obvious where we wanted to go, and obvious how we needed to get there: through the use of virtualized host and network environments.

This brings me to the next few sections:
“2. The method of claim 1, wherein the act of using the specification to generate commands to deploy processing resources is in response to the receipt of a fail-over condition.
3. The method of claim 1, wherein the act of generating includes specifying data specific to device configuration at the primary site.
4. The method of claim 1, wherein the act of generating includes specifying management information relevant to the primary site. “
Summary – we’re patenting how we’re going to do what we claim, how we’re going to encode the configuration, and how we’re going to send notifications when it kicks off. All irrelevant if the concept of the patent is obvious. And these are all obvious in themselves: any virtualized system has to have information on the configuration of the underlying virtualized hardware. Any cluster sends notification on the failure of a node. Any mirrored port has awareness of the cache in its partner port. Outside of the patent office, and in the IT office, this stuff goes without saying.
Next up, section 5. This is identical to section 1 except for the last few words: “wherein the specification describes all of the independent processing area networks.” There’s no significant difference as to obviousness here; it’s just that only the parts of the patent that differ from one another get referenced. This is a matter of scale – rather than a “subset of the plurality” (English: part of the whole), this is a “master file” of the entire environment being monitored. It adds grid technology into the mix, another obvious case for virtualization.
Section 6 changes the plurality part to “a minimum configuration of processing resources at the primary site”, which is just saying that the system will use the most efficient (i.e., minimal) configuration possible to get the job done. Duh. Do I have the same number of VMware hosts at my remote sites? No, I don’t. I don’t even always have the same builds or even the same versions! Do I have all of the same configurations? No. Can I really bring up 100% of my data center at a remote site? Sure. And eat a performance bullet on the servers.
So what would I do? I would bring up critical systems only – Active Directory, DNS, email, mission-critical apps. My Jabber server would stay down. My dev environment would stay down. I would run the minimal configuration I need to get the business working. Can it get any more obvious than “if I don’t have all the resources I need, I’ll get by with what I have, the best that I can”?
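And if you wanted to automate even that judgment call, it’s a dozen lines: rank the VMs, fill whatever capacity the DR site has left, and stop. (Illustrative Python with a made-up inventory and made-up capacity numbers.)

```python
# Made-up inventory: (vm, priority tier, memory needed in GB).
INVENTORY = [
    ("dc01-ad", 1, 4), ("dns01", 1, 2), ("mail01", 1, 8),
    ("erp01", 1, 16), ("jabber01", 3, 2), ("dev-build", 3, 8),
]

def minimal_configuration(capacity_gb: int) -> list:
    """Bring up the most critical VMs that fit; leave the rest down."""
    chosen, used = [], 0
    for vm, tier, mem in sorted(INVENTORY, key=lambda entry: entry[1]):
        if used + mem <= capacity_gb:
            chosen.append(vm)
            used += mem
    return chosen

print(minimal_configuration(capacity_gb=30))
# -> ['dc01-ad', 'dns01', 'mail01', 'erp01']; Jabber and dev stay down
```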
The seventh section tacks this part onto the end: “…wherein the primary site includes a configurable processing platform having a pool of processors and wherein the act of generating includes specifying information descriptive of how the processors are pooled at the primary site.”

A pool of processors. Also known as a computing grid. And it goes on to describe having a documented system for how that grid works, and tying that to the application. This is truly obvious. If you have a system, you document it. If you have an automation system, it’s documented, and it uses documentation on how the system it automates functions. This sort of thing has been around forever.
On a non-grid level, Detroit has been outsourcing human labor to robots using this exact methodology for decades. On grids, this is how they work… regardless of distance. Each node is aware of each other node, and so the grid has an internal documentation of the pool of resources that are available.
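In fact, the “specification” the patent keeps invoking is just a machine-readable inventory, which is what any automation system keeps anyway. Something like this (the field names and command syntax are invented for illustration) is all a failover site would need to consume in order to rebuild the pools:

```python
import json

# A toy "specification" of how processing resources are pooled at the
# primary site; exactly the kind of document the claims describe.
PRIMARY_SITE_SPEC = {
    "site": "primary",
    "pools": [
        {"name": "web-pan", "nodes": 4, "cpus_per_node": 2, "vlan": 110},
        {"name": "db-pan", "nodes": 2, "cpus_per_node": 4, "vlan": 120},
    ],
}

def commands_for(spec: dict) -> list:
    """Turn the spec into deployment commands for the failover site
    (the command syntax here is invented)."""
    cmds = []
    for pool in spec["pools"]:
        cmds.append(
            f"deploy-pan {pool['name']} --nodes {pool['nodes']} "
            f"--cpus {pool['cpus_per_node']} --vlan {pool['vlan']}"
        )
    return cmds

print(json.dumps(PRIMARY_SITE_SPEC, indent=2))
print("\n".join(commands_for(PRIMARY_SITE_SPEC)))
```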
Section 8: “The method of claim 7 wherein the act of generating includes specifying data to describe logical partitioning of processors and interconnectivity into logical processing networks.” This is the virtualization component. In virtualized systems and virtualized DR products like VMware’s VMotion, this is a core component of how the systems work to provide fault tolerance. The service console knows what virtual machines are out there, and what host systems are out there. It has descriptive information about the resources, how they’re pooled, and how they’ll be moved in the event of failure. eGenera’s idea is obviously a small improvement to the process, applying it toward a virtualized grid concept, but it’s not a huge leap forward (again). Virtualized grids have already been in the works for some time. See here and here.
Section 9 states:
“A system of providing processing resources to respond to a fail-over condition in which a primary site includes a configuration of processing resources, comprising; a computer-readable specification that describes a configuration of processing resources of the primary site; a configurable processing platform capable of deploying processing area networks in response to software commands; logic to generate software commands to the configurable platform to deploy processing resources corresponding to the specification; wherein the processing resources at the primary site include a plurality of independent processing area networks and wherein the specification describes only a subset of the plurality of independent processing area networks. ”
In other words: a system implementing the methods described above.
Section 10 is similar, switching plurality for totality as sections 1 and 5 did. So eGenera is going to build a computer system to do what DR specialists and virtualization specialists have been doing for some time now, only under a commercial brand. Seems obvious to me.
The next sections reference art that I won’t reprint or link to here, as it’s not very original. In fact, the art is as obvious as the concept of this patent. I won’t need the art to describe the obviousness of some of what is printed in the text. Here’s my favorite example so far:
“To date, a considerable body of expertise has been developed in addressing disaster recovery with specific emphasis on replicating the data. Processor-side issues have not received adequate attention. To date, processor-side aspects of disaster recovery have largely been handled by requiring processing resources on the secondary site to be identical to those of the first site and to wait in standby mode.”
Processors have not received adequate attention because in virtualized environments, they are largely irrelevant as long as you’re not mixing widely different types (such as AMD and Intel). You do not need to maintain identical processors, or quantities of processors, or anything like this. I can restore the virtual machines running on my Intel dual core Xeon servers on my Intel single core Xeon machines with a great deal of flexibility amongst processor family types. Does it matter if one is 2.8GHz and another is 1.6GHz? Not really. The processors at my remote sites aren’t sitting in standby mode, either. They’re running apps on the local servers. They are live, running, and chugging along. They’re ready to load up more virtual machines and take over the load at any time.
So, considering the giant logical fallacy presented here, I’m left wondering if there’s even a need for this patent. I could get REALLY brave and open up a huge pipe to the remote sites and run shared storage and VI3 over my WAN… assuming I had unlimited funds for a 1+ gigabit WAN pipe… and then I could get away with having no other process beyond VMware’s built-in recovery with VMotion, VCB, and HA.
And yes, I recognize that not everything can be virtualized, but in all honesty, what eGenera proposes is no less disruptive to a data center than virtualization itself. The rest of the document gets into specifics that sound very patent-like: detailed diagrams, how the parts of the product will work, a definition of PAN (processor area network, as opposed to personal area network), how control nodes manage the environment, etc.
Here’s the summary: we’re going to build a set of interconnected boxes that will virtualize your environment down to the tiniest level. Then, when something fails, we’ll load up resources at a remote site and make it all come back online.
Can it get any more obvious than this? It seems like eGenera is using patents to block competition. It strikes me that the folks at eGenera collectively went, “Oh, I have an idea to improve this and this and this. It seems kind of logical, but we should patent it so nobody else with the same idea can compete with us without licensing from us.” It’s a great product, but it uses existing technology and existing ideas about how to use that technology to provide a product that is already out there, just not in a commercial package. I personally don’t think the patent will stand up to a challenge, given the recent changes to patent law.