Traditionally, developing and testing applications is a labor-intensive and time-consuming process that requires IT departments to create testing environments that mirror production environments. Once a testing environment is created—with production operating systems, network configurations and the like all painstakingly recreated—the test-and-development crew may need the machines only for a few days before the environment is scrapped. For IT operations, creating and tearing down test environments is just one more activity in already overtaxed schedules.
Virtualization technology – with its inherent ability to quickly create virtual machines – has been widely embraced for test-and-dev applications. Now virtual lab management software further helps IT administrators by automating and consolidating the processes required to establish lab IT infrastructure. Many virtualization proponents view these tools as the perfect antidote to the legwork required to set up and break down lab environments.
Easing IT’s burden
Providers VMLogix, Surgient and, naturally, VMware offer virtual lab management products designed to make the build-and-tear-down process required for test and development faster and easier. (VMware Lab Manager works only with VMware environments.) The software typically enables the configuration of multiple VMs in multiple environments and integrates with third-party quality assurance and testing tools, such as HP Quality Center, Borland SilkTest, IBM Rational Build Forge and IBM Rational ClearQuest, among others. For test-and-dev folks, the payoff of such tools is faster testing and development. For IT operations, the value of such tools has more to do with labor savings and lower overhead costs.
For about two years, Brian Boresi, manager of client engineering at Sisters of Mercy Health System, has used Surgient’s Virtual QA/Test Lab Management System (QTMS) to test applications as part of an enterprise desktop refresh.
Before getting the tool, a subject matter expert would spend more than a week in a central lab testing a new system against core applications. Today, that process has been whittled down to about four hours. “An SME creates testing scripts based on a onetime visit to the lab,” Boresi said. “The virtual test tool automates the scripts, which we run in a test environment on a VMware ESX server.”
Theresa Lanowitz, president of voke Inc., an IT research firm, has studied the benefits of virtual lab management technology and said that results like Boresi’s are fairly typical. With virtualized lab environments, Lanowitz said, “developers want to test in an environment as close to production as possible, and operations don’t have to set up a lab.”
At Vignette Corp., a software company, virtual lab technology enables developers and QA testers to provision their own test environments. The company uses LabManager from VMLogix, which includes self-service automation technology, allowing end users to create their own VMs without the intervention of IT operations. “Users now log in and self-service images for themselves,” said Rob O’Neill, Vignette’s senior manager of IT. “With automated workflows, users can check out machines, run them for testing, and then tear them down once they are finished.” The turnaround time for creating test environments ranges from about five minutes to 20 minutes, O’Neill said.
While VM sprawl has become an issue in production environments, it’s also a challenge for test and development. Bart Burkhard, manager of engineering for Overwatch Systems, a provider of software for military command and business information analysis, is currently piloting VMLogix’s LabManager in part to contain VM sprawl. “We have a number of disconnected labs and data centers used by developers and testers,” Burkhard said. “The disconnected labs and parallel projects make physical resource allocation and discovery difficult for us.”
Saving money, improving access to resources
For this reason, Overwatch opted to move test and dev from a physical to a virtual environment, Burkhard said, but the company was wary of the sprawl that could result. With LabManager, Overwatch now maintains a single repository of VMs that tracks how they are utilized by the company’s test and development staff. “As leases come up for various desktops in the labs, we’ll incrementally replace physical machines with VMs.”
From Burkhard’s perspective, the benefits of a lab management environment are twofold. From a business standpoint, it saves money on items such as leases, power and cooling because it facilitates the move from physical to virtual environments. For end users, lab management software means faster access to resources. “The time we spent to allocate a machine into a lab with any disk size and memory based on the VMs we have is down from three days to one hour,” Burkhard said.
Word has it that Microsoft is finally getting it together and releasing Hyper-V, putting the tech world on notice that it is now safe to exhale.
Phew, we were all about to turn blue.
Has someone ever told you a story about some aging celebrity, and your first thought is, “Wait, you mean they’re not dead yet?” I probably shouldn’t admit this, but when I read that Hyper-V was coming out, I thought, “What do you mean, it’s being released? I thought that already happened!”
My mistake, I had confused the release with another important Microsoft — ahem, milestone — in March: the Hyper-V release candidate (RC).
Excuse me for being flip, but I was bored to tears by this whole Viridian-cum-Hyper-V saga long ago. Two years ago, when I first started covering virtualization, the big news was that Microsoft had made Virtual Server 2005 available for free. Immediately thereafter, VMware returned the volley and made its hosted virtualization platform VMware Server (the successor to the better-established GSX Server) free too, eliminating any real advantage Virtual Server 2005 may have had. So much for that story line.
Since then, we’ve lived through name changes (Viridian to Hyper-V), release candidates, pricing announcements (why $28? Why not $25, or $29.99?), delays (will Microsoft meet its 180-days-after-Longhorn deadline? Will it beat it?), feature cuts, feature clarifications (“Quick migration,” anyone?), and countless press articles with VMware cast as David to Microsoft’s Goliath — or is it the other way around?
Everything except an actual shipping, non-beta, non-release-candidate product.
As a journalist, I’m just happy that the wait is over, and we can all stop walking around on tenterhooks, expected to drop everything every time Microsoft comes knocking at our inbox with some virtualization-related announcement that may or may not pertain to the release of Hyper-V.
Now we can all get on with our job of waiting for Microsoft to update us on the status of all the product features that it excised from Hyper-V last year: quick migration, hot add of system resources, increased numbers of CPUs, etc. What a relief!
Overheard on the blogosphere:
“Together VMware and Apple could make a run at redefining the desktop (it’s clear that VMware/Microsoft synergy isn’t going to happen). VMware delivers the virtual infrastructure and Apple delivers the OS. Combined, we could have a new generation of desktop delivery that might eventually supplant Microsoft.”
– Chris Wolf on his blog, ChrisWolf.com.
In this post, Wolf — a Burton Group senior analyst and virtualization expert — urges VMware and Apple to get it together and end Microsoft’s desktop dominance. Wolf cogently explains why the operating system-based desktop will fade away as desktop virtualization and new personal desktop models emerge.
What’s your take on where the desktop is going, and which model will dominate in businesses and with consumers? Sound off here, or write to me at email@example.com.
Looking for more info about desktop virtualization? For a complete analysis of current desktop virtualization products and models, check out Andrew Kutz’s article in SearchDataCenter.com’s Virtual Data Center ezine.
Desktop virtualization packages rely on snapshots and virtual drive functionality. The de facto functionality standard here is found in VMware Workstation and VMware Server, but the tools in Sun’s VirtualBox may be setting a new standard. Let’s take a quick look at how snapshots and virtual drives work within Sun xVM VirtualBox.
VirtualBox snapshot technology provides the same basic functionality as the VMware products in that snapshots can be taken while the virtual machine (VM) is running or offline. Snapshots are taken from two different places depending on the state of the VM. For a running VM, the snapshot is taken from the running console as shown in the figure below.
When a VM is powered off, snapshots may be taken from the properties of the VM. This difference is a slight inconvenience, but the learning curve is easy to overcome. This same location is also where a VM is reverted to a saved snapshot. VirtualBox gives the option to build snapshots on top of one another, so there can be multiple point-in-time restores for a single VM. Snapshots in VirtualBox are kept in the .VirtualBox\Machines\VMName\Snapshots location by default, and are a collection of .VDI and .SAV files. The figure below shows three point-in-time restores for a single VM:
As with any snapshot restore, be sure you actually want to revert, because the process discards the VM’s current state. Reverting to a VirtualBox snapshot taken while the system was running restores the VM to precisely that point, still running, rather than to a powered-off state. Overall, VirtualBox snapshot functionality works as advertised and is another point in favor of this promising virtualization platform.
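Snapshots can also be driven from the VBoxManage command line rather than the GUI. The sketch below only builds the command lines in Python instead of executing them; the `snapshot take` and `discardcurrent` subcommand names follow the 1.6-era CLI and should be treated as assumptions to verify against your VirtualBox version.

```python
# Sketch: assembling VBoxManage snapshot commands for scripted use.
# Subcommand names reflect the VirtualBox 1.6-era CLI (an assumption).

def take_snapshot_cmd(vm_name, snap_name):
    """Command line to take a snapshot of a VM (running or powered off)."""
    return ["VBoxManage", "snapshot", vm_name, "take", snap_name]

def revert_cmd(vm_name):
    """Command line to discard the current state, reverting to the last snapshot."""
    return ["VBoxManage", "snapshot", vm_name, "discardcurrent", "-state"]

# Print the commands rather than executing them.
print(" ".join(take_snapshot_cmd("WinXP-Test", "before-patch")))
print(" ".join(revert_cmd("WinXP-Test")))
```

The VM name "WinXP-Test" and snapshot name "before-patch" are placeholders; substitute your own.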
More information on the VirtualBox 1.6.x product can be found in the online user guide.
As server virtualization technology makes its way from test environments into production, IT organizations are struggling to keep up with the inherent management challenges involved in dealing with virtual environments.
The ease with which VMs can be created makes it that much easier for them to be launched and moved willy-nilly, regardless of security and software licensing costs, to name just two common problems. Vendors, of course, have been hip to these challenges. This month, Embotics Corp. released version 2.0 of its V-Commander management software, designed to automatically nip virtual sprawl in the bud. One way the software does this is by automatically enforcing policies dictating such things as VM expiration dates, and through role-based security access that defines just who can do what in terms of VM creation and migration.
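As a rough illustration of the two policy ideas just described (this is a hypothetical sketch, not Embotics’ implementation), expiration dates and role-based access can be modeled in a few lines:

```python
# Hypothetical sketch of two sprawl-control policies: role-based rights
# over VM actions, and VM expiration dates. Roles, rights and dates are
# illustrative assumptions, not V-Commander's actual model.

from datetime import date

# Assumed role-to-rights mapping for illustration.
ROLE_RIGHTS = {"admin": {"create", "migrate"}, "tester": {"create"}}

def allowed(role, action):
    """Role-based access check: may this role perform this VM action?"""
    return action in ROLE_RIGHTS.get(role, set())

def expired(vm_expiry, today=date(2008, 7, 1)):
    """Expiration policy: a VM past its expiry date should be retired."""
    return today > vm_expiry

print(allowed("tester", "migrate"))   # testers may create but not migrate
print(expired(date(2008, 6, 15)))     # past its expiration date
```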
Also this month, Netuitive Inc. revamped its Service Analyzer business service management (BSM) software to include virtualization management capabilities. Nick Sanna, Netuitive’s president and CEO, said the company’s self-learning correlation software can monitor the status of applications across the environment, whether they are physical or virtual. “The idea [behind Service Analyzer],” said Sanna, “is to eliminate IT management silos by automating performance management and providing end-to-end visibility into business services.”
Attention, college students: your tuition may soon decrease!
Well, maybe not. However, VMware Inc. reported today that 900 universities, including top-tier schools such as Harvard and Yale, are saving big bucks using VMware virtualization.
Renowned universities that have deployed VMware to reduce capital and operating costs, increase application and system uptime, decrease power consumption and improve disaster preparedness include Cambridge, Princeton, Stanford, Purdue, the University of Maryland, the University of Auckland, and the University of California campuses at Berkeley, Los Angeles and San Diego.
These schools and hundreds more around the world are running their mission-critical enterprise applications, database systems, and education-specific applications such as CollegeNET and the Blackboard Academic Suite in VMware virtualized environments, the company reported.
Others are using VMware for disaster recovery (DR).
Bowdoin College in Maine partnered with Los Angeles-based Loyola Marymount University to build a co-located datacenter for cross-country DR. By partnering and using VMware to create backup systems, the schools have achieved higher availability and better load balancing, with more than 70% of their environment virtualized and more than 100 virtual machines (VMs). They are saving $15,000 annually on server maintenance and have avoided $500,000 in hardware costs, according to VMware.
Ohio State University has been a VMware virtualization customer since 2003, when the College of Humanities needed to upgrade its IT infrastructure and found there was not enough room to expand. After deploying VMware virtualization, the college was able to meet its upgrade needs with 54 VMs running on three physical host servers. The college avoided $160,000 in hardware costs and cut server provisioning time from three weeks to five minutes, and the IT staff can now manage all of its VMware VMs from a single console.
Clearly, the education sector is a strong market for VMware, with 900 universities and colleges now using the virtualization platform. Because of this, VMware created a free online resource, the VMware Academic Program, staffed with IT professionals from higher-education institutions to answer questions about overall IT best practices. In addition to these experts, the site also includes case studies to help users understand how others have implemented VMware.
In last week’s blog, I wrote about my first experiences with Sun’s xVM VirtualBox 1.6.2. I like the interface and the features available to this free desktop virtualization product. Among these great options is one that lets users configure the VirtualBox server to view virtual machines remotely with VRDP, or VirtualBox Remote Desktop Protocol.
VRDP is a compatible implementation of Microsoft’s Remote Desktop Protocol (RDP) that provides easy console access to the guest platform from remote systems. Unlike the web-based interfaces offered by competing products, it is configurable per virtual machine. Let’s take a look at how to configure VRDP for a virtual machine in the steps below.
The first step is to enable VRDP, or remote console as it is called within the interface. By default, VRDP is disabled for all virtual machines and is enabled with a specified security method. The security methods are referred to as null, guest and external. The null method is a no-security model in which any VRDP connection is accepted; Sun documents this configuration as intended only for testing on private networks. To enable VRDP on a virtual machine, click on the settings tab while the virtual machine is powered off and configure the remote display option:
Once VRDP is configured, the virtual machine will accept connections the next time it starts. The tricky part is the port and IP address configuration. By default, port 3389 is used for the VRDP session on the host. If your host is a Windows system running Remote Desktop, another port should be specified. The virtual machine can also be started remotely with the VBoxHeadless command. Once the virtual machine is running, a connection is made to the host system running VirtualBox on the specified port (or 3389 if none was specified). This connection provides the redirected console within a standard rdesktop or mstsc session, and works in all VM states, regardless of whether the guest has a network interface. In this configuration, an operating system can be installed, the virtual BIOS can be accessed, and other tasks below the operating system can be performed.
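The same setup can be scripted. The sketch below assembles the relevant command lines in Python rather than running them; the `-vrdp`/`-vrdpport` options and `VBoxHeadless -startvm` syntax follow the 1.6-era documentation and are assumptions to verify against your VirtualBox version. The VM name and host name are placeholders.

```python
# Sketch: scripting the VRDP setup described above. Option names are
# assumed from the VirtualBox 1.6-era CLI; verify before use.

DEFAULT_RDP_PORT = 3389

def pick_port(host_runs_rdp):
    """Avoid 3389 when the Windows host already runs Remote Desktop."""
    return 3390 if host_runs_rdp else DEFAULT_RDP_PORT

def enable_vrdp_cmd(vm_name, port):
    """Enable VRDP on a powered-off VM and pin it to a specific port."""
    return ["VBoxManage", "modifyvm", vm_name, "-vrdp", "on", "-vrdpport", str(port)]

def headless_start_cmd(vm_name):
    """Start the VM with no local console; access is via VRDP only."""
    return ["VBoxHeadless", "-startvm", vm_name]

port = pick_port(host_runs_rdp=True)
print(" ".join(enable_vrdp_cmd("Ubuntu-Guest", port)))
print(" ".join(headless_start_cmd("Ubuntu-Guest")))
print(f"connect with: rdesktop vbox-host:{port}")
```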
More information on the VRDP implementation can be found in the VirtualBox online user manual from the VirtualBox community website in section 7.4.
Today at the Red Hat Summit in Boston, two of Red Hat’s emerging technology engineers, Dan Berrange and Richard Jones, presented the new tool sets their team has developed for working with Xen virtual machines (VMs), including command line utilities that will become part of the oVirt tool set.
According to Berrange, “You won’t have to lock into any particular technology underneath,” because these new utilities don’t require installation on a guest or require administrators to log in. Like the forthcoming Red Hat Enterprise Linux (RHEL) KVM-based hypervisor, these tools can also be launched from disk. “That’s the competitive advantage to using our tools,” says Berrange.
Red Hat engineer Richard Jones says that these new command line monitoring tools allow for a wider range of kernels and filesystems to be used and will offer better Windows support. Some of the utilities featured today include the following:
- Virt-top. A top-like utility for showing stats of virtualized domains (e.g. for reading network traffic, disk throughput, etc.).
- Virt-df. A df-like utility for virtual guests, used for checking how much disk space virtual machines are using.
- Virt-p2v. A graphical user interface launched from live CD for physical to virtual migration.
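As an illustration of how virt-df output might be consumed in monitoring scripts, the sketch below totals per-guest disk usage. The df-style column layout and the sample figures are assumptions for illustration; check the actual output of your virt-df version.

```python
# Sketch: summing the 'Used' column of (assumed) df-style virt-df output
# per guest. The sample output below is fabricated for illustration.

SAMPLE = """\
Filesystem                           1K-blocks       Used  Available
rhel5guest:/dev/sda1                    101086      11976      83891
rhel5guest:/dev/VolGroup00/LogVol00    7063712    4409376    2289764
"""

def used_kb_by_guest(output):
    """Total the 'Used' column per guest (the part before the colon)."""
    totals = {}
    for line in output.splitlines()[1:]:          # skip the header row
        fields = line.split()
        guest = fields[0].split(":", 1)[0]
        totals[guest] = totals.get(guest, 0) + int(fields[2])
    return totals

print(used_kb_by_guest(SAMPLE))   # {'rhel5guest': 4421352}
```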
A full list of these tools is available at Richard Jones’s website.
Red Hat has developed these tool sets for oVirt, its next-generation virtualization management console. Unlike the current Virtual Machine Manager for RHEL, oVirt creates a small “stateless” image of the host virtualization layer with no local disks or installation necessary.
Should you assign a virtual machine (VM) more than one virtual processor or not? It’s common for admins to configure virtual symmetric multiprocessing (VSMP), or VMs with multiple CPUs, whether it is needed or not. The decision to use more than one virtual processor in a VM should be based on an actual requirement of the applications installed on the VM, not simply on the notion that two processors are better than one. Many physical servers have multiple CPUs regardless of whether their applications require them; while wasteful of server resources, this does not negatively impact a physical server. Most VMs, however, will run better with one virtual processor and can actually run slower when more than one is assigned.
The reason for this is that the hypervisor’s CPU scheduler must find simultaneously available cores equal to the number of VCPUs assigned to the VM. A four-VCPU VM needs four free cores on the host for every CPU request it makes; if four cores are not available because other VMs are using them, the VM must wait until they free up. Single-VCPU VMs have a much easier time because the scheduler needs only one available core to process their CPU requests.
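The constraint described above can be captured in a toy model (illustrative only, not the actual ESX scheduling algorithm):

```python
# Toy model of the co-scheduling constraint: a VM is runnable only when it
# finds as many simultaneously free cores as it has VCPUs. The host size
# and load numbers are illustrative.

def can_schedule(free_cores, vcpus):
    """True if enough host cores are simultaneously free for this VM."""
    return free_cores >= vcpus

# An eight-core host with five cores busy leaves three free.
free = 8 - 5
print(can_schedule(free, 1))   # single-VCPU VM can run: True
print(can_schedule(free, 4))   # four-VCPU VM must wait: False
```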
Here are some tips on assigning VCPUs to VMs:
- Limit the number of VSMP VMs on your hosts. The fewer you have, the better your VMs will perform.
- Assign a VM multiple VCPUs only if you are running an application that requires it and will make use of them.
- Don’t assign a VM the same number of VCPUs as the total number of cores available on your host system.
- If you are going to use VSMP, have at least twice (preferably three or four times) as many cores on your host system as the VCPU count of your largest VM. So if you have a four-VCPU VM, have at least eight cores on your host server, and preferably 16.
- If you are converting a multi-CPU physical Windows server to a single VCPU VM, make sure you change the HAL from multiprocessor to uniprocessor.
- Don’t use CPU affinity as it restricts the scheduler and makes it harder to process CPU requests. The scheduler is very good at what it does, so let it do its job.
The virtualization world is still waiting for the official release of the Open Virtual Machine Format, or OVF, as the Distributed Management Task Force (DMTF) puts the finishing touches on what will be an industry-standard virtual machine (VM) format. According to DMTF’s Christy Leung, the organization plans to announce the release of OVF over the next couple of months.
OVF frees users from platform dependence in virtual environments, enabling them to mix and match platforms without incurring interoperability problems. Despite the clear benefits of a common format in a multiplatform virtualization landscape, a universal format has encountered some roadblocks.
DMTF has worked on OVF since late 2007, when Dell, HP, IBM, Microsoft, VMware and XenSource submitted a proposal for a standardized VM format. At the Burton Group Catalyst conference later this month, DMTF member organizations — including VMware, Citrix and Novell — will demonstrate OVF interoperability publicly for the first time. According to Burton Group analyst Chris Wolf, “some vendors moved OVF support higher up on their development roadmap in order to have it ready in time to demonstrate at the Catalyst conference.”
Wolf says that OVF is worth the wait — and the investment in the long term. “OVF has a nice long-term goal of standardizing the way hypervisors mount and run VMs,” says Wolf, “but its immediate use is primarily in importing VMs and standardizing how VM metadata is managed.”
Wolf goes on to say that while OVF VMs will soon be able to load onto any hypervisor, a virtual hard disk conversion may be required as part of the import process because two primary virtual hard disk formats are in play: VMware’s Virtual Machine Disk Format (VMDK) and the Virtual Hard Disk (VHD) format used by Microsoft and Xen. “OVF would have even more value if all vendors could agree to use a single standardized virtual hard disk format,” according to Wolf. “Thus far, the reasons for not having a single virtual hard disk format are more political than technological.”
When DMTF finishes its work, OVF will greatly improve the functionality of virtual machines. “OVF metadata is extensible, so any software vendor could use OVF to embed their management metadata inside VMs, regardless of hypervisor,” says Wolf.
“That is a big deal, as vendors could have a consistent management methodology regardless of hypervisor.”
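To make Wolf’s point about extensibility concrete: an OVF descriptor is XML, so a vendor can carry its own namespaced metadata alongside the standard sections and any OVF-aware tool will still parse the file. The element names below are simplified illustrations rather than the full OVF schema, and the vendor namespace is hypothetical.

```python
# Sketch: embedding vendor management metadata in a simplified OVF-style
# XML envelope. Element names are illustrative, not the real OVF schema;
# the "mgmt" namespace is hypothetical.

import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"
VENDOR_NS = "http://example.com/mgmt/1"   # hypothetical vendor namespace

ET.register_namespace("ovf", OVF_NS)
ET.register_namespace("mgmt", VENDOR_NS)

envelope = ET.Element(f"{{{OVF_NS}}}Envelope")
vsys = ET.SubElement(envelope, f"{{{OVF_NS}}}VirtualSystem",
                     {f"{{{OVF_NS}}}id": "web01"})

# The vendor-specific section travels inside the same descriptor,
# whatever hypervisor eventually runs the VM.
mgmt = ET.SubElement(vsys, f"{{{VENDOR_NS}}}ManagementInfo")
ET.SubElement(mgmt, f"{{{VENDOR_NS}}}Expiration").text = "2008-12-31"

print(ET.tostring(envelope, encoding="unicode"))
```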