Speculations are overflowing within the virtualization community following Diane Greene’s resignation from VMware. As a glass-half-full kind of guy, I’d like to offer my reasons why VMware may thrive in the next several years.
First and foremost, I feel that VMware’s technology has the potential to remain superior to the competition. While price is among the more important decision points, a superior product can hold its own in the marketplace despite a higher price. The standing example in this arena is enterprise databases: Oracle is a better database platform than Microsoft’s offerings, yet both hold solid positions in the marketplace. Some normalization of VMware’s market share is to be expected as other hypervisors and management products arrive, but many organizations have yet to enter the market as customers at all.
Storage integration will continue to drive the richest virtualization platforms. The most underrated technology that VMware has ever produced is the virtual machine file system or VMFS. VMware’s implementation of this technology will improve over time, and the competition is not yet there in this space.
VMware will have a lower host hardware cost per VM for the same performance deliverables. While this is incredibly difficult to precisely quantify, my experience is that VMware can run more virtual machines than Hyper-V on the same hardware. Again, because price is among the most important decision points, this point may help VMware as hardware becomes more capable for virtualization technologies.
VMware can continue to innovate within the virtualization space. VMware has the virtualization expertise to bring new products to market, and among the major players in the field it is the best suited to innovate at this point.
It is a given that the other platforms will make gains in market share with the relative flood of products into the space. But considering VMware’s proven ability to innovate in this space, they have the chance to retain their lead and keep going in the correct direction. We will see!
Clabby Analytics analyst Joe Clabby is 100% convinced that Microsoft’s Hyper-V will overtake VMware in market share over the next three to five years, and he makes some strong points in his recent report, Six Reasons Why Microsoft’s Hyper-V will Overtake VMware to Become the Major Player in the x86 Server Virtualization Marketplace.
The report came out prior to the shake-up at VMware on July 8, when the company announced that its board of directors had replaced co-founder and CEO Diane Greene, and then lowered its revenue forecast.
VMware had the vision to see the value of virtualization and took the technology to the top unchallenged due to strategy, innovation and sales execution, but that ride is about to come to an end, Clabby said.
“With the introduction of Hyper-V by Microsoft, VMware is about to experience some very serious competition from a vendor with deep pockets, with a massive worldwide marketing and sales organization, with major market penetration across Fortune 500 and small and medium business markets, and with extensive and complementary infrastructure and management product depth,” Clabby reported.
Among the reasons Clabby believes Microsoft will crush VMware are that Microsoft already has an expansive installed base, a mammoth network of direct sales and indirect business partners, and is offering lower-priced alternatives to VMware’s hypervisor and related infrastructure/management software products.
Unfortunately, I have to agree. History tends to repeat itself, and this has been Microsoft’s strategy for a very long time: see a great technology, copy it, and outprice the rest of the market.
Vanity Fair’s July issue had a great article that illustrates this, called “How the Web Was Won,” which looks at the evolution of the Internet over the past 50 years, including details of how Microsoft overtook Netscape Navigator by developing Internet Explorer.
Lou Montulli, a founding engineer at Netscape Communications, told Vanity Fair, “From a scientific point of view none of us really respected Microsoft. There was definitely a sense of: They’ve put out of business three or four major companies, and they did it simply by copying what they did and outpricing or outmaneuvering them in the market. This is a general feeling of computer scientists everywhere, that Microsoft doesn’t tend to innovate as much and really just enters the market late, takes it over, and then stays at the top.”
Pricing aside, Microsoft already has a massive installed base.
“It will leverage this installed base, and price its products to out-function/undercut VMware’s pricing,” Clabby wrote. “The computing industry saw this same situation arise when Citrix built a leadership base for its terminal server products — only to have Microsoft enter the market and claim significant market share after Citrix pioneered the terminal server market. Almost the exact same situation is about to happen again — this time between VMware and Microsoft.”
Microsoft also has a packaging advantage with its Hyper-V hypervisor, as it can be delivered with every single version of 64-bit Windows Server 2008, and installing Hyper-V is a cake walk, according to Clabby.
“A box simply needs to be checked during installation and Hyper-V becomes active. By not requiring IT buyers to find/acquire/download additional virtualization software, the job of deploying and testing virtualization within a Windows Server 2008 is greatly simplified. VMware cannot counter this packaging advantage,” Clabby wrote.
The most damning problem for VMware, according to Clabby, is product depth.
Though VMware has the advantage of technologies like VMotion, which moves live VMs, and all of the handy add-on management and infrastructure software integrated into its platform, Clabby said Microsoft’s management and infrastructure offerings are far deeper.
Microsoft’s System Center product portfolio includes systems management tools such as Configuration Manager, Operations Manager, Data Protection Manager, Virtual Machine Manager, System Center Essentials and Capacity Planner, and the list goes on.
Besides all of those points, Microsoft is a $51 billion software company, while VMware’s revenue is just over $1 billion.
In short, given its deep pockets, large installed base and virtualization strategy, it is safe to say Microsoft will, once again, be laughing all the way to the bank.
Given that virtual environments for x86 servers are relatively new, most lack direct experience in performing major in-place upgrades. While there are many ways to approach a key upgrade to a virtual environment, we’ll take a look at one example of a server virtualization upgrade: VMware ESX 3.5 and VirtualCenter 2.5 to the Update 1 release of both products. This release resolved some major issues, putting the spotlight back on the new features of ESX 3.5, namely Storage VMotion.
Keeping a virtualization platform current is in the best interest of ongoing administration. With VMware environments, this is illustrated by the sequential upgrade tasks required with older versions of ESX and VirtualCenter. The first step in a successful upgrade is to go through the release notes and scour the Internet for existing resources that can make the task less daunting. One particularly helpful resource is the RTFM Education ESX and VirtualCenter upgrade guide by Mike Laverick, which covers many scenarios with specific, step-by-step guidance on almost every aspect of the upgrade.
Having all of the resources in the world may still not be enough to ensure a smooth upgrade of the virtual environment. This is where a test environment for the upgrades can prove critical to a successful project. Provisioning an accurate test environment can become increasingly expensive, but can provide a beneficial test ground to ensure there are no surprises during the upgrade. Consider the test environment shown in the figure below:
This test environment is a smaller yet representative version of the larger environment: it may use the same storage system and the same base drivers on the host systems while carrying a smaller workload. Such an environment is adequate for testing all of the basic functions involved in an upgrade. As for provisioning it, a few tricks are available, such as running the systems in unlicensed or evaluation mode, reducing processor inventories, or borrowing resources from the live environment if the loss can be sustained.
Planning and testing are the best defenses against an upgrade failure. Furthermore, because the scope of a virtual environment is so broad, the investment in testing and planning should be a no-brainer.
If you have not noticed, I have been on a Sun xVM VirtualBox kick recently. I think it is beneficial to virtualization administrators and managers to be familiar with at least two hypervisors — so why not learn more about xVM?
VirtualBox has a smooth interface for a version 1 release, but the one area that would require the most adjustment is the virtual networking. Let’s take a closer look at network functionality in VirtualBox.
Virtual networking on VirtualBox has a few key differences that VMware users need to understand before fully utilizing the potential of the product. The first is the virtual networking hardware itself. VirtualBox allows a virtual machine (VM) to be assigned one of four virtual network interface cards: the AMD PCNet PCI II, AMD PCNet FAST III, Intel Pro/1000 T and Intel Pro/1000 MT. This array of virtual adapters gives a VM broad support across multiple guest operating systems, but the corresponding bridging functionality may make network administrators a little uneasy.
For Windows hosts, VirtualBox relies on the native operating system’s bridging, which uses a spanning tree algorithm and may cause issues on systems with multiple interfaces in managed network environments. The bridged network functionality puts the VMs on the same physical network as the VirtualBox host system. In this fashion, a VM can retrieve a DHCP address from the physical network and interact as if it were placed on the network parallel to the host. The bridging functionality of Windows XP and Server 2003 is explained on the TechNet website.
Another key difference is that, for a VM to use a bridged network, a bridging interface must be added. Adding an interface is fairly straightforward with the VBoxManage command. The following command adds a bridging interface named “VM-Bridge”:
VBoxManage createhostif "VM-Bridge"
Once this command completes, the VM-Bridge interface is present in the network connections inventory of the Windows Control Panel. A VM can then be configured to use bridged networking with the newly created interface, as shown in the figure below:
At this point, the VM-Bridge interface can transparently place the VM on the same network as the host when the Windows bridged connections are correctly configured. Note also that in the network configuration you can fully edit the MAC address of the VM. While exceptionally convenient, this can introduce risk for some environments and situations.
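Putting the steps above together, the bridge setup can be sketched entirely from the command line. This is a hedged sketch: the VM name “WinXP” is hypothetical, and the option names reflect the 1.x-era VBoxManage syntax discussed here, which later VirtualBox releases changed.

```shell
# Create a host bridging interface; it then appears in the Windows
# Network Connections inventory as "VM-Bridge".
VBoxManage createhostif "VM-Bridge"

# Attach a hypothetical VM named "WinXP" to the bridge on its first NIC.
# (1.x-era option names; newer VBoxManage versions use different syntax.)
VBoxManage modifyvm "WinXP" -nic1 hostif -hostifdev1 "VM-Bridge"
```

The host’s Windows bridged connection must still be configured correctly, as noted above, for the VM’s traffic to reach the physical network.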
Now that we have taken a quick look at VirtualBox’s implementation of bridged network connections for VMs, I have to give the nod to the VMware products for being a little more seamless in this category. By using the VMware bridge protocol binding instead of a separate series of adapters for the same purpose, VMware’s bridging fits better in most environments.
Make no mistake, though: the comprehensive VirtualBox networking implementation is fully competitive with VMware’s. There is much more on VirtualBox networking in section 6 of the online user guide.
Sun xVM VirtualBox for Windows offers the capability to import VMware-based VMDK files into a virtual machine (VM), making a migration or cross-platform deployment quite enticing. VirtualBox 1.6.2 does not yet support the Open Virtual Machine Format (OVF) implementation; however, native handling of the VMDK files will suffice for most situations. Let’s go through importing a VMDK file for use in VirtualBox.
The critical tool in VirtualBox for disk access is the Virtual Disk Manager (VDM). For most of us with a VMware-centric background, this will be a new concept. The VDM is a single tool in which all virtual disks are inventoried. The inventory can span multiple locations as well as multiple disk types, such as floppy, CD-ROM and hard drives. Further, the hard drive inventory is agnostic as to whether a disk is a VMware VMDK file or a VirtualBox VDI file. The figure below shows the VDM with an inventory of both VMDK and VDI files:
When a VM is created (or when modifying an existing VM), the drive configuration can either create a new virtual disk or use a disk listed in the VDM inventory. By managing the virtual disks within the VDM, VMs can pull directly from this inventory based on your configuration. The VDM can provide disks of all types from remote locations, such as a UNC path or a mapped drive.
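As a sketch of that workflow from the command line, a VMDK can be registered with the VDM inventory and then attached to a VM. The path and VM name below are hypothetical, and the commands reflect the 1.x-era VBoxManage syntax, which later releases replaced.

```shell
# Register an existing VMware disk file with the Virtual Disk Manager
VBoxManage registerimage disk "C:\VMs\legacy-server.vmdk"

# Attach the registered VMDK as the primary hard disk of a hypothetical VM
VBoxManage modifyvm "LegacyServer" -hda "C:\VMs\legacy-server.vmdk"
```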
There are a few important notes on the use of VMDK files within VirtualBox. First, snapshot functionality is not yet supported for VMDK files within VirtualBox. Second, if you intend to boot from the VMDK file, the VM may need boot device modifications. Lastly, the VMDK file is modified when used by VirtualBox, so if you go back to using it with a VMware product, it may no longer be accessible, depending on what you have done to it. For non-boot drives, the exchange should be transparent.
More information on the use of VMDK files within VirtualBox can be found in the online user guide for VirtualBox in section 5.4.
Anyone with five minutes of IT experience knows that vendors sometimes publish bogus “benchmarks” that portray their products in the best of all possible lights. Virtualization guru and Burton Group analyst Chris Wolf recently uncovered a particularly spectacular example of this, courtesy of QLogic and Microsoft.
In a release, QLogic Corp., a networking technology provider, said it tested virtual machines running on Windows Server 2008 Hyper-V and attached to a storage area network (SAN) via its 8 Gbps Fibre Channel (FC) host bus adapters, and saw near-native performance of 200,000 I/O operations per second (IOPS).
But, as Wolf discovered, what QLogic failed to mention was that it ran its tests against a very unusual SAN array: the Texas Memory RamSan 325 FC, which uses solid-state storage. Further, the benchmark used block sizes of just 512 bytes, compared with a more real-world block size of 8 KB or 16 KB.
This left Wolf feeling duped and betrayed:
If I was watching an Olympic event, this would be the moment where after thinking I witnessed an incredible athletic event, I learned that the athlete tested positive for steroids.
Wolf ran this benchmark by a colleague, who calculated that had the same benchmark been performed using “real disks” with latency of 7 milliseconds, it would have limited throughput to a much less impressive 9,142 IOPS. Hardly anything to write home about.
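The arithmetic behind that estimate is worth making explicit. A queue of outstanding I/Os divided by per-I/O latency bounds throughput (Little’s law); the queue depth of 64 below is my assumption, since the article does not state the figures used, but it reproduces the quoted number for 7-millisecond disks.

```python
# Back-of-the-envelope IOPS ceiling from per-I/O latency (Little's law).
# The queue depth of 64 outstanding I/Os is an assumption on my part.

def max_iops(latency_s: float, outstanding_ios: int = 1) -> float:
    """Upper bound on I/O operations per second given average latency."""
    return outstanding_ios / latency_s

# A 7 ms rotating disk with 64 outstanding I/Os tops out near the quoted
# 9,142 IOPS -- a far cry from the 200,000 IOPS of the solid-state rig.
print(int(max_iops(0.007, 64)))  # 9142
```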
Thanks to Wolf for taking the time to look into this.
Now that Microsoft has finally delivered Hyper-V, everyone is waiting to see how many VMware shops will make the switch. Are there any compelling reasons for a company that already has a large investment in VMware products to switch to another product? Here are some reasons why companies may or may not make the switch from VMware to Hyper-V:
Some reasons why companies may choose Microsoft Hyper-V:
- It’s Microsoft. Companies that mainly use Microsoft products could switch to get better support for running their products on virtual hosts and to avoid relying on a separate vendor for virtualization.
- Cost. It’s definitely cheaper than ESX, but I’m a firm believer that you get what you pay for. Yes, Hyper-V is a lot cheaper than ESX, but it lacks the maturity and high-end features that ESX has. It’s probably just a matter of time, though, before VMware lowers its cost for large enterprises, as it has already done for the SMB market with its bundled foundation and acceleration kits.
- Versatility. Hyper-V will run on pretty much any hardware that Windows will run on, while ESX supports only a very specific set of hardware. VMware has recently expanded its hardware support, though, and will continue to do so.
Some reasons why companies stick with VMware ESX:
- Cost (again). Companies with a lot of in-house VMware experience will have to re-train staff to learn Hyper-V and basically start from scratch. There is a large pool of skilled and experienced VMware architects and administrators available today as well as many VMware consulting firms and business partners.
- Fewer features. ESX and VirtualCenter have a very rich tool set, including VMotion, DRS and HA. Hyper-V lacks the ability to team NICs on virtual switches, and its Quick Migration feature requires downtime.
- Fewer third-party products. A large number of third-party products and add-ons are available to enhance ESX. It will take time for vendors to release comparable products for Hyper-V.
- It’s VMware. ESX is a mature, stable product that has been around for many years, while Hyper-V is a 1.0 product that will take time to develop and to have the bugs worked out.
Will I make the switch? Probably not anytime soon. I’ll definitely be looking at Hyper-V and will make my own comparisons, but the lack of certain features is a show stopper for me right now. I’ll keep an eye on Hyper-V to see how it develops, re-evaluating it later as new versions are released.
The competition in the virtualization market is going to be great, as it helps drive down costs and forces vendors to innovate. The race is on between VMware and Microsoft, with VMware already miles ahead. Nevertheless, Microsoft has a lot of money and the determination to be on top (take Lotus Domino, Novell NetWare and Netscape as examples). Expect Microsoft to slowly whittle away at VMware’s dominance as its product matures, and expect VMware to do whatever it can to maintain superiority in the virtualization market.
I recently came across an article revealing that one out of three IT administrators has used elevated privileges to snoop on confidential information. It is always possible to lock administrators out of sensitive data through operating system access controls; however, a virtual environment opens up other avenues for exposing sensitive data.
With physical servers, imaging a server’s hard drive for offline examination is not always easy. An administrator of a virtual environment, by contrast, can easily and stealthily snapshot a virtual machine to temporarily suspend writes to its disk file, make a file system copy of the VM’s disk file from the host server while the VM is running, and then take that copy to a workstation where it can be mounted in an attempt to access information that would normally be off-limits.
Whether by mounting the disk file as an additional hard drive on an existing VM, or by creating a new VM and booting a live CD with utilities for defeating operating system security, admins can bypass operating-system-level controls simply by making a copy of the disk file and mounting it elsewhere.
Virtual servers open up additional attack vectors over physical servers, illustrating why proper security measures must be in place to ensure that sensitive data is adequately protected in virtual environments. In addition to properly securing host servers, auditing and logging should be used to track all logins and activities on those servers. Administrators typically need access to sensitive data to do their jobs, but this access should be limited as much as possible to only what they actually need.
Many administrators snoop because they know they can get away with it. Restricting access and logging events makes snooping much harder for nosy admins, and it honors the two-thirds of IT administrators who set the better example.
Traditionally, developing and testing applications is a labor-intensive and time-consuming process that requires IT departments to create testing environments that mirror production environments. Once a testing environment is created—with production operating systems, network configurations and the like all painstakingly recreated—the test-and-development crew may need the machines only for a few days before the environment is scrapped. For IT operations, creating and tearing down test environments is just one more activity in already overtaxed schedules.
Virtualization technology – with its inherent ability to quickly create virtual machines – has been widely embraced for test-and-dev applications. Now virtual lab management software further helps IT administrators by automating and consolidating the processes required to establish lab IT infrastructure. Many virtualization proponents view these tools as the perfect antidote to the legwork required to set up and break down lab environments.
Easing IT’s burden
Providers VMlogix, Surgient and, naturally, VMware offer virtual lab management products designed to make the build-and-tear-down process required for test and development faster and easier. (VMware Lab Manager works only with VMware environments.) The software typically enables the configuration of multiple VMs in multiple environments and integrates with third-party quality assurance and testing tools, such as HP Quality Center, Borland SilkTest, IBM Rational Build Forge and IBM Rational ClearQuest, among others. For test-and-dev folks, the payoff of such tools is faster testing and development. For IT operations, the value of such tools has more to do with labor savings and cost overhead.
For about two years, Brian Boresi, manager of client engineering at Sisters of Mercy Health System, has used Surgient’s Virtual QA/Test Lab Management System (QTMS) to test applications as part of an enterprise desktop refresh.
Before getting the tool, a subject matter expert would spend more than a week in a central lab testing a new system against core applications. Today, that process has been whittled down to about four hours. “An SME creates testing scripts based on a onetime visit to the lab,” Boresi said. “The virtual test tool automates the scripts which we run in a test environment on a VMware ESX server.”
Theresa Lanowitz, president of voke Inc., an IT research firm, has studied the benefits of virtual lab management technology and said that results such as Boresi’s are fairly typical. With virtualized lab environments, Lanowitz said, “developers want to test in an environment as close to production as possible, and operations don’t have to set up a lab.”
At Vignette Corp., a software company, virtual lab technology enables developers and QA testers to provision their own test environments. The company uses LabManager from VMLogix, which includes self-service automation technology, allowing end users to create their own VMs without the intervention of IT operations. “Users now log in and self-service images for themselves,” said Rob O’Neill, Vignette’s senior manager of IT. “With automated workflows, users can check out machines, run them for testing, and then tear them down once they are finished.” The turnaround time for creating test environments ranges from about five minutes to 20 minutes, O’Neill said.
While VM sprawl has become an issue in production environments, it’s also a challenge for test and development. Bart Burkhard, manager of engineering for Overwatch Systems, a provider of software for military command and business information analysis, is currently piloting VMLogix’s LabManager in part to contain VM sprawl. “We have a number of disconnected labs and data centers used by developers and testers,” Burkhard said. “The disconnected labs and parallel projects make physical resource allocation and discovery difficult for us.”
Saving money, improving access to resources
For this reason, Overwatch opted to move test and dev from a physical to a virtual environment, Burkhard said, but the company was wary of the sprawl that could result. With LabManager, Overwatch now maintains a single repository of VMs that track how they are utilized by the company’s test and development staff. “As leases come up for various desktops in the labs, we’ll incrementally replace physical machines with VMs.”
From Burkhard’s perspective, the benefits of using a lab management environment are twofold. From a business perspective, it helps save money on items such as leases, power and cooling because it facilitates the move from physical to virtual environments. For end users, the use of lab management software is getting them access to resources faster. “The time we spent to allocate a machine into a lab with any disk size and memory based on the VMs we have is down from three days to one hour,” Burkhard said.
Word has it that Microsoft is finally getting it together and releasing Hyper-V, putting the tech world on notice that it is now safe to exhale.
Phew, we were all about to turn blue.
Has someone ever told you a story about some aging celebrity, and your first thought is, “Wait, you mean they’re not dead yet?” I probably shouldn’t admit this, but when I read that Hyper-V was coming out, I thought, “What do you mean, it’s being released? I thought that already happened!”
My mistake, I had confused the release with another important Microsoft — ahem, milestone — in March: the Hyper-V release candidate (RC).
Excuse me for being flip, but I was bored to tears by this whole Viridian-cum-Hyper-V saga long ago. Two years ago, when I first started covering virtualization, the big news was that Microsoft had made Virtual Server 2005 available for free. Immediately thereafter, VMware returned the volley and made its hosted virtualization platform VMware Server free too, eliminating any real advantage Virtual Server 2005 may have had over the better-established GSX. So much for that story line.
Since then, we’ve lived through name changes (Viridian to Hyper-V), release candidates, pricing announcements (why $28? Why not $25, or $29.99?), delays (will Microsoft meet its 180-days-after-Longhorn deadline? Will it beat it?), feature cuts, feature clarifications (“Quick Migration,” anyone?), and countless press articles with VMware cast as David to Microsoft’s Goliath — or is it the other way around?
Everything except an actually shipping, nonbeta, nonrelease candidate product.
As a journalist, I’m just happy that the wait is over, and we can all stop walking around on tenterhooks, expected to drop everything every time Microsoft comes knocking at our inbox with some virtualization-related announcement that may or may not pertain to the release of Hyper-V.
Now we can all get on with our job of waiting for Microsoft to update us on the status of all the product features that it excised from Hyper-V last year: quick migration, hot add of system resources, increased numbers of CPUs, etc. What a relief!