You may hear the term SCSI reservations frequently when dealing with VMware servers that utilize shared storage. SCSI reservations are used to ensure exclusive access to disk-based resources when multiple hosts are accessing the same shared storage resources. In addition to being used by VMware hosts, SCSI reservations are also used by Microsoft Cluster Server.
SCSI reservations are used only for specific operations that modify metadata; they prevent multiple hosts from writing to the metadata concurrently, which would cause data corruption. Once the operation completes, the reservation is released and other operations can continue. Because of this exclusive lock, it is important to minimize the number of concurrent reservations. When too many reservations are made at once, you may see I/O failures: a host cannot make the reservation it needs to complete an operation because another host has locked the logical unit number (LUN). A host that fails to make a reservation because of a conflict with another host will retry at random intervals until it succeeds; if too many attempts fail, the operation fails.
Some examples of operations that require metadata updates include:
- Creating or deleting a VMFS datastore
- Expanding a VMFS datastore onto additional extents
- Powering on or off a VM
- Acquiring or releasing a lock on a file
- Creating or deleting a file
- Creating a template
- Deploying a VM from a template
- Creating a new VM
- Migrating a VM with VMotion
- Growing a file (e.g., a snapshot file or a thin-provisioned virtual disk)
A small number of reservation conflicts is generally unavoidable and will not have a big impact on your hosts and VMs. To avoid too many conflicts, limit the number of operations that cause reservations and stagger them so that too many do not happen simultaneously. All reservation errors are logged to the /var/log/vmkernel log file on each ESX host. To reduce the number of conflicts:
- Limit the number of snapshots you have running; snapshots grow in 16MB increments, and every growth causes a SCSI reservation.
- Only VMotion a single VM per LUN at any one time.
- Only cold migrate a single VM per LUN at any one time.
- Do not power on/off too many VMs simultaneously.
- Limit VM/template creations and deployments to a single VM per LUN at any one time.
- Consider using smaller LUN sizes (<600GB) and do not use extents to extend a VMFS volume.
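Since reservation errors land in /var/log/vmkernel, a quick way to gauge whether conflicts are becoming a problem is simply to count them. Below is a minimal sketch; the default log path and the "reservation conflict" message wording are assumptions based on ESX 3.x-era defaults, so adjust both for your release:

```shell
#!/bin/sh
# Count SCSI reservation conflict messages in an ESX vmkernel log.
# The log path and the matched phrase are assumptions (ESX 3.x-style);
# verify them against an actual conflict entry on your hosts.
LOG="${1:-/var/log/vmkernel}"
COUNT=$(grep -ic "reservation conflict" "$LOG" 2>/dev/null)
echo "SCSI reservation conflicts in ${LOG}: ${COUNT:-0}"
```

Run it on each host (or point it at a copied log file); a steadily climbing count is a sign that the operations listed above need to be staggered more aggressively.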
This blog post was written by Megan Santosus, features writer for SearchServerVirtualization.com.
By now, server virtualization has pretty much proved its mettle as a way to consolidate data centers and reduce costs. As virtualization has gone mainstream, some of the management challenges have become top of mind. Consider the situation for a senior IT manager at a financial services company, who spoke on the condition of anonymity. “Virtualization is great stuff,” he said. “But it does change the way you manage things.”
Two years ago, the financial services company began implementing virtualization — specifically VMware and ESX Server, although the company has since deployed virtualization with Sun Solaris clusters. At that time, the company realized that it had a gap in virtual server management capabilities. “We are making a large push with ESX servers, and we want to manage them holistically with some of the other servers in our environment,” the IT manager said.
To that end, four months ago the company began beta testing CA Advanced Systems Management r11.2; the company already uses the previous version of the software, and one of the enhancements with 11.2 is integration with VMware VirtualCenter. By installing an agent on VirtualCenter and another on the CA management server, the company now collects and aggregates the performance data for virtual machines into a centralized Web-based system. “We take the performance data on the physical ESX server and provide that to our capacity team so they can plan and manage our virtual environment,” the IT manager said.
For the capacity team, virtualization means being able to figure things out in advance, such as how many virtual machines can run on an ESX server, what the footprint of an application is, and whether it’s best to put components on the same physical box or spread them out. “We can now give the capacity team the performance data they need to make decisions about moving things around,” the IT manager said. Rather than capacity planning, the IT manager now likens the process to capacity modeling. “If we want to move virtual servers running Oracle, Apache and WebLogic, we look at the performance data to make our decisions.”
VMware Inc.’s recent acquisition of B-hive Networks is indicative of just how much of a wrench virtualization has thrown into the performance management arena. (To recap: B-hive’s Conductor software monitors application performance across virtual environments.) “First and foremost, the acquisition shows the importance of being able to manage performance in a virtualized environment,” said Trevor Matz, the president and CEO of Aternity Inc., a provider of end-user performance management software. “The system metrics normally associated with performance tools are pretty meaningless in virtual environments.”
Traditional performance metrics — CPU, memory usage — that are used to monitor the performance of the hardware that provides service to end users don’t have much relevance in virtual environments, Matz said. “Those metrics are associated with a host machine or virtual box itself and don’t indicate what the end user is experiencing,” he added.
Matz said that Citrix Systems Inc., Microsoft and Parallels are all at work on creating tools that collect meaningful metrics in a virtualized environment. “Having comprehensive tools is not enough,” Matz said, adding that there are already more than enough metrics to parse through. “The next big frontier is the ability to transform huge amounts of data into actionable business intelligence that correlates across platforms.”
On Monday, June 9, Symantec Corp. of Cupertino, Calif., announced the release of Veritas Virtual Infrastructure (VxVI), a server and storage virtualization product built on Citrix Systems Inc.’s XenServer technology. By exploiting Veritas’ block storage management model, VxVI aims to compete with VMware Infrastructure 3 in production environments by offering increased capabilities for storage- and availability-critical systems.
The new Xen-based virtual infrastructure platform from Symantec provides storage management and high availability with cross-platform connectivity for the virtual data center. It’s essentially XenServer with the Veritas storage management layer on top — all wrapped in a Symantec management console.
According to Symantec Senior Vice President of Storage and Availability Management Rob Soderbery, the time is right for a product that addresses the needs of testing and development, needs that have been underserved by VMware. “Users understand the storage management challenges with VMware,” he said. Symantec has delivered something “fundamentally new” in how server virtualization works with storage management, he noted.
The key difference between VMware and Veritas is in how each handles virtual machines (VMs). Soderbery argues that the Virtual Machine File System (VMFS) file-based system that VMware uses can’t compete with the block storage system of Veritas VxVI.
As enterprises build out the x86 data center, Symantec’s product seeks to serve those who want to bring physical capabilities into the virtual environment. Veritas Virtual Infrastructure brings dynamic storage layouts, enclosure and array mirroring and storage-area network (SAN) multipathing/load balancing to server virtualization, adding features such as shared VM boot images with which Symantec hopes to lure VMware customers that are not satisfied with the storage capabilities of the leading server virtualization platform.
Soderbery says that VxVI will work well with Microsoft’s forthcoming Xen-inspired hypervisor, Hyper-V. “Microsoft has done something pretty interesting here in being open to the Xen community and encouraging the Xen community to be open with Microsoft,” says Soderbery. “Veritas Virtual Infrastructure is technology that we can apply across the Xen ecosystem and Hyper-V as well.”
With another Xen-based virtualization product on the market engineered to be more compatible with the forthcoming Hyper-V, VMware may feel the pinch as users see more options with the other big players in the server virtualization market. But will the $4,595 per two-socket server for Veritas discourage VMware users from even running a demo?
What do you think? If you plan on deploying Veritas VxVI, we want to hear from you. Send us your thoughts via email.
Of the major hypervisors, I am particularly intrigued by Sun xVM. In this blog, I’ll give a quick tour of Sun xVM VirtualBox and show how to make a virtual machine.
Setting up Sun xVM
The download and install are quick and easy. VirtualBox has a small 23 MB download for Windows x86 platforms, and the install was very fast and did not require a reboot. Once VirtualBox is installed and running, creating a virtual machine is also a snap. The VirtualBox interface to create a virtual machine was quite intuitive. In fact, I created a few virtual machines without any issues at all. The figure below shows the VirtualBox console with the control pane for the two virtual machines I created:
Sun xVM good for client-side virtualization
All of the basic functionality of a virtualization product is present in VirtualBox, including .ISO mapping, snapshot technology, a hardware inventory manager and network placement technologies. One difference from the VMware products is a full editor for the MAC address, meaning you can specify the complete MAC address for a virtual machine. That offers a lot of flexibility, but it is a little dangerous for the masses, if you ask me.
I have used VMware Server and VMware Workstation for my client-side virtualization for a long time. However, now that I have a new workstation, I am going to go with VirtualBox, and I will continue to share my feedback. Have you had any positive or negative experiences with VirtualBox? If so, please share them below.
Tripwire ConfigCheck, from Portland, Ore.-based Tripwire Inc., is a free Windows- and Linux-based utility that assesses the security of VMware ESX 3.5 hypervisor configurations against the VMware Infrastructure 3 Security Hardening guidelines, which were released in February.
The Security Hardening guidelines explain in detail the security-related configuration options of the components of VMware Infrastructure 3 and how security affects certain capabilities.
Tripwire ConfigCheck makes sure ESX environments are properly configured according to these guidelines and lends insight into vulnerabilities in virtual environments. It also provides the necessary steps towards full remediation.
Dan Schoenbaum, senior vice president of marketing and business development for Tripwire, said the utility is being offered for free to encourage the proliferation of VMware’s Hardening guidelines and to increase virtual machine (VM) security.
Tripwire hopes that by giving users a taste of its technology for free, they will become familiar with it and invest in the company’s commercial products, which offer more security capabilities, Schoenbaum said.
Colorado Springs, Colo.-based Configuresoft Inc. also provides a toolkit for compliance with VMware’s security hardening guidelines. The toolkit consists of a set of rule-based templates, reports and dashboards that plug into Configuresoft’s Enterprise Configuration Manager (ECM).
A number of tools are available for migrating physical servers to virtual machines. Which tool to choose will depend on the source operating system, the target virtualization platform and the type of server being migrated. All the available tools support converting physical servers running Microsoft operating systems but only a few support Linux server conversions. I’ve compiled a brief list of some of the tools and options available for converting physical servers to virtual machines.
PlateSpin PowerConvert – A commercial product that has the broadest operating system support. Supported source operating systems include Windows NT, 2000, XP and 2003 as well as Red Hat and SUSE Linux. Virtual platform support includes VMware Server/Workstation/ESX, Microsoft Virtual Server, XenSource and Virtual Iron. The Acronis True Image, Symantec Ghost and LiveState image formats are supported as well. PlateSpin’s product also converts virtual machines back to physical machines.
Ultimate P2V – A cheaper alternative built on freeware tools such as BartPE, but it requires a separate imaging tool such as Symantec Ghost to perform the imaging. It also requires some work to build the boot CD and can be a bit complicated.
Vizioncore vConverter – Another very robust commercial product that has broad platform support. Optimized to work with VMware ESX but also supports Microsoft Virtual Server, XenSource and Virtual Iron as target formats. Only supports Windows 2000, 2003 and Vista source operating systems.
Microsoft Virtual Server 2005 Migration Toolkit – Only supports Windows NT, 2000 and 2003 source operating systems and only supports Microsoft’s virtual machine as a target format. Free but requires the Automated Deployment Services add-on to Windows Server 2003 Enterprise Edition. The conversion process is somewhat complicated and may not be a good choice for most users.
VMware Converter – A free tool (standard version) provided by VMware that converts physical servers running Windows NT, 2000, XP, 2003 and Linux to VMware virtual machines. Also supports Symantec Ghost, LiveState, Backup Exec, Acronis True Image and StorageCraft ShadowProtect image formats.
HP Server Migration Pack – Supports converting Windows 2000 and 2003 servers to VMware ESX, XenServer and Microsoft Virtual Server target formats running on HP ProLiant hardware. Also supports virtual-to-physical conversions.
Leostream P>V Direct – I thought I would mention this tool even though Leostream recently dropped it to focus on other areas. Supports conversion of Windows NT, 2000, XP and 2003 source servers to VMware Server/Workstation/ESX, Microsoft Virtual Server and Xen target servers.
Other useful tools:
Robocopy – A free tool from Microsoft that is included in the Windows Server 2003 Resource Kit. Simply create a new virtual machine and copy the data from the physical server to the virtual server. Robocopy preserves all the date/time stamps and security settings when copying data from server to server. This is useful for migrating file servers (rather than application servers), where you only care about the data files on the server.
Sysprep – A Microsoft tool to prepare an existing server for cloning and restoration from a disk image.
NewSID – A Sysinternals utility to change a Windows server’s SID (security identifier) after a server has been cloned.
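The Robocopy copy itself is a one-line command. This is a sketch rather than a tested migration script; the \\oldserver share and D:\data paths are hypothetical examples you would replace with your own:

```
:: Run inside the new virtual machine (Windows command prompt).
:: \\oldserver\d$\data and D:\data are hypothetical example paths.
:: /E        copy subdirectories, including empty ones
:: /COPYALL  copy data, attributes, timestamps, security, owner and audit info
:: /R:3 /W:5 retry a failed file 3 times, waiting 5 seconds between attempts
robocopy \\oldserver\d$\data D:\data /E /COPYALL /R:3 /W:5
```

Pair this with Sysprep or NewSID only if you are cloning a system drive; for a pure data migration, Robocopy alone is enough.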
Imaging tools like Symantec Ghost and Acronis True Image create images of your physical servers. Once completed, you can create a new VM and restore the image to the virtual machine. VMTS.net has a good write-up on the procedure here.
You can always do a P2V by backing up a physical server using your traditional backup software, creating a new virtual machine, installing the same operating system on the VM and a backup agent, and then performing a restore from the backup taken of the physical server. Not the cleanest method for doing a P2V but can be used as a cheap alternative to other tools.
But for certain servers, it is less risky and just as easy to build a new server and migrate the data to it. Examples of this include SQL Servers and domain controllers. It’s a simple process to build a new virtual machine, install the operating system, dcpromo it and then shut down your physical domain controllers. Likewise, building a new virtual SQL Server and then detaching the databases from a physical SQL Server and attaching them to the virtual one is an easy process that reduces the risk of data corruption that can occur during the P2V cloning process.
If Hyper-V doesn’t convert the VMware faithful as soon as Microsoft makes its hypervisor generally available later this year, it may get a little help from its friends: Xen-based virtualization platforms.
Some like IT consultant Ardalan Dlawar believe that Microsoft will leverage support for Xen-based platforms to increase competition with VMware. “And Xen will have more third-party support and fewer compatibility issues,” according to Dlawar.
Despite user arguments that Hyper-V will have to deliver more than a lower price tag to win users, Hyper-V will certainly get consideration from many VMware customers. While organizations want to maximize their VMware investment, especially enterprise customers that deploy tens or hundreds of VMware virtual machines, Hyper-V evaluations will most likely be deployed, according to Andi Mann, the research director at Boulder, Colo.-based Enterprise Management Associates (EMA).
Based on a survey of more than 600 enterprises, EMA found that about 30% of enterprises have already planned a Hyper-V deployment, even with Hyper-V’s general availability several months away. In addition, Microsoft is actually within 10% of VMware in current and planned enterprise deployments, according to EMA’s data. Also consider this EMA finding: Xen-based platforms already account for more than 40% of current or planned deployments, which suggests that the market demand for VMware alternatives won’t disappear.
“VMware is still way out in front in server virtualization,” said Mann, “but both Microsoft and Citrix Systems are definitely catching up.”
Of course, VMware and Microsoft aren’t the only options available. As managers continue using the toolsets available from Xen-based products such as Citrix’s XenServer and Virtual Iron Software, VMware and Microsoft are each working on tool sets that enable users to manage their competitors’ platforms.
“Both VMware and Microsoft understand that they are not going to be the only players on the market; they recognize that customers are leveraging their competitors’ technology in different parts of their businesses,” according to Adnan Hindi, the VP of operations at ScienceLogic in Reston, Va. Hindi said that companies like his, which produces cross-platform appliances, will benefit from multiple-platform virtual landscapes. As shops continue to see benefit in the utilities that Xen-based products offer, Hindi sees a universal virtualization tool set ultimately emerging; such tools would essentially equalize the platforms on the market and reduce the choice of a virtualization platform largely to a matter of cost.
Over the past year, there’s been a lot of talk about the cost of VMware. But the price of VMware Server is right for small businesses, said Brett Riale, an IT consultant in Pittsburgh, who feels “truly blessed that programs as functional as VMware Server have been released for free.” Riale is hesitant to trust another Microsoft virtualization product after “the debacle” that was Virtual Server 2005. “Unless it absolutely outperforms VMware,” Riale said that he won’t consider Hyper-V in the near future. And Dave Baughman, a systems administrator for Muncie, Ind.-based Ontario Systems, thinks that his ESX system is “a consistent platform” and that the price of support is worth the investment. “Most of the cost is for support and (VMware’s) support is very good,” says Baughman.
But what will happen when all the Microsoft customers with enterprise agreements get a taste of Hyper-V support? Or if Microsoft offers more third-party support for Xen?
Howard Holton, a system engineer, said that market share will shift in Hyper-V’s favor.
“Hyper-V is an excellent solution for many of those that cannot afford the steep cost that ESX server requires,” says Holton, who has already had a positive experience working with the release candidate and points out that for most data center operations, VMotion’s High Availability (HA) is overkill. “Hyper-V fits into the market below VMware for hosts that do not need true HA.”
Holton said that in the long run Hyper-V might win out over VMware because Citrix’s XenServer has finally given Xen a roadmap. XenServer is the spoiler, with a lower TCO than VMware. Although price hasn’t deterred Holton from delivering VMware to his customers in the past, he predicted that Hyper-V will only increase in value.
“As a value-added reseller in the small to midsized space, VMware is the leading virtualization product that I offer. That is changing.”
New vendors, strategies, technologies and capabilities seem to present themselves daily to the virtualization administrator and manager. One resource that can help is the Intel Premier IT Professional (IPIP) community.
Today I had the opportunity to attend the IPIP event here in Columbus, Ohio. The meeting provided a great vendor-independent view of virtualization products that revolve around Intel technologies. Planning your virtualization hardware environment is critical to the decisions that will be made in your current and future virtualization implementations.
Between now and the end of the year, Intel is conducting ten more of these events throughout North America. The agenda of these events includes sessions in the following areas:
- Intel product roadmap
- Client virtualization strategies
- Consolidation efficiencies through virtualization
- Application virtualization strategies
One important advantage of attending the events is access to non-disclosure information about the processor product line, a key part of planning virtual environments. But the live events are only the tip of the iceberg. On the IPIP website, members can access case studies, presentations, videos and white papers anytime. Also, every page on the IPIP site carries popularity tags, so content of all types can be browsed by tag.
The best part of these resources is that they are free. Check out the Intel Premier IT Professional website and register for an event in your area.
Big players in the virtualization world griped about the absence of performance benchmarks for virtual machines on CIO Talk Radio yesterday and discussed some of the issues surrounding virtualization standards.
Guests on the show included Simon Crosby, Chief Technology Officer of the Virtualization and Management Division of Citrix; Tom Bishop, Chief Technology Officer of BMC Software; Dr. Tim Marsland, Sun Fellow and Chief Technology Officer for the Software Organization at Sun Microsystems Inc.; and Brian Stevens, Chief Technology Officer and Vice President of Engineering at Red Hat.
The glaring omission in this lineup: VMware, Inc.
The panelists on CIO Talk Radio didn’t mention VMware by name, but did complain that some companies aren’t being open with their performance data, thus prohibiting the virtualization industry from publishing comparative performance data.
VMware’s licensing agreement for ESX allows users to conduct internal performance testing and benchmarking studies, and allows those users (and not unauthorized third parties) to publish or publicly disseminate the data provided that VMware has reviewed and approved of the methodology, assumptions and other parameters of the study.
Users that have published benchmark data, like Sr. Systems Engineer Mark Foster did on his blog, have had to unpublish results because of VMware’s stipulations.
Meanwhile, the SPEC Virtualization Committee has been working to create standard benchmarks for VMs. The committee’s goals are to deliver a benchmark that will model server consolidation of commonly virtualized systems such as application servers, web servers and file servers; provide a means to compare server performance while running a number of VMs; and produce a benchmark designed to scale across a wide range of systems.
SPEC expects these benchmarks to be available by the end of this year, but the timeline is not set in stone, according to the website.
Sun’s Marsland said benchmarking progress has been slow because there isn’t an easy way to define a workload, and a large number of benchmarks are required.
“We are talking about a virtual computer, with lots of aspects that need to be benchmarked,” Marsland said. “Every component that gets virtualized needs to be benchmarked.”
Having an open, standardized way of benchmarking is expected to push virtualization further into the mainstream because it will eliminate false perceptions about performance, panelists said. For instance, “there is the thought that I/O intensive workloads cannot be virtualized, and the absence of benchmarks prevents us from proving otherwise. It is important for us to have good benchmarks out there,” one panelist on the show said.
Though users look at benchmarks, this type of data is most useful to vendors and OEMs who can use the performance standards to improve the technology, and of course, market their products.
“More open scrutiny of performance results will help us to improve as an industry overall,” Bishop said. “There are ways to measure performance in non-virtual environments, and people are adapting those techniques to get the most out of their virtualized environments.”
In terms of application performance in virtual environments, the issues differ depending on the data center infrastructure. The network, the servers and the storage all affect performance, said Stevens of Red Hat.
Another problem with virtualization? There are support challenges. If an application running in a VM starts acting wacky, the application vendor may not support it, Crosby said.
Licensing and support in virtual environments has been a major gripe with Oracle, for example, which does not support running its applications with VMware.
“It is a reasonable concern…right now there is irrational market-based control. Some folks are abstaining from supporting certain apps [in virtual environments]. As customers demand support, things will hopefully get rational, by next year I hope,” Crosby said.