Tripwire ConfigCheck, from Portland, Ore.-based Tripwire Inc., is a free Windows- and Linux-based utility that assesses the security of VMware ESX 3.5 hypervisor configurations against the VMware Infrastructure 3 Security Hardening guidelines, which were released in February.
The Security Hardening guidelines explain in detail the security-related configuration options of the components of VMware Infrastructure 3 and how security affects certain capabilities.
Tripwire ConfigCheck makes sure ESX environments are properly configured according to these guidelines and lends insight into vulnerabilities in virtual environments. It also provides the necessary steps towards full remediation.
Dan Schoenbaum, senior vice president of marketing and business development for Tripwire, said the utility is being offered for free to encourage the proliferation of VMware’s Hardening guidelines and to increase virtual machine (VM) security.
Tripwire hopes that by offering a free taste of its technology, users will become familiar with it and invest in the company’s software products, which have more security capabilities, Schoenbaum said.
Colorado Springs, Colo.-based Configuresoft Inc. also provides a toolkit for compliance with VMware’s security hardening guidelines. The toolkit consists of a set of rule-based templates, reports and dashboards that plug into Configuresoft’s Enterprise Configuration Manager (ECM).
A number of tools are available for migrating physical servers to virtual machines. Which tool to choose will depend on the source operating system, the target virtualization platform and the type of server being migrated. All the available tools support converting physical servers running Microsoft operating systems but only a few support Linux server conversions. I’ve compiled a brief list of some of the tools and options available for converting physical servers to virtual machines.
PlateSpin PowerConvert – A commercial product with the broadest operating system support. Supported source operating systems include Windows NT, 2000, XP and 2003 as well as Red Hat and SUSE Linux. Virtual platform support includes VMware Server/Workstation/ESX, Microsoft Virtual Server, XenSource and Virtual Iron. The Acronis True Image, Symantec Ghost and LiveState image formats are supported as well. PlateSpin’s product also converts virtual machines back to physical machines.
Ultimate P2V – A cheaper alternative built on freeware tools such as BartPE, but it requires an imaging tool such as Symantec Ghost to perform the imaging. It also requires some work to build the boot CD and can be a bit complicated.
Vizioncore vConverter – Another very robust commercial product that has broad platform support. Optimized to work with VMware ESX but also supports Microsoft Virtual Server, XenSource and Virtual Iron as target formats. Only supports Windows 2000, 2003 and Vista source operating systems.
Microsoft Virtual Server 2005 Migration Toolkit – Only supports Windows NT, 2000 and 2003 source operating systems and only supports Microsoft’s virtual machine as a target format. Free but requires the Automated Deployment Services add-on to Windows Server 2003 Enterprise Edition. The conversion process is somewhat complicated and may not be a good choice for most users.
VMware Converter – A free tool (standard version) provided by VMware to convert physical servers running Windows NT, 2000, XP and 2003, as well as Linux servers, to VMware virtual machines. Also supports Symantec Ghost, LiveState and Backup Exec, Acronis True Image and StorageCraft ShadowProtect image formats.
HP Server Migration Pack – Supports converting Windows 2000 and 2003 servers to VMware ESX, XenServer and Microsoft Virtual Server target formats running on HP ProLiant hardware. Also supports virtual-to-physical conversions.
Leostream P>V Direct – I thought I would mention this tool even though Leostream recently dropped this product to focus on other areas. Supports conversion of Windows NT, 2000, XP and 2003 source servers to VMware Server/Workstation/ESX, Microsoft Virtual Server and Xen target servers.
Other useful tools:
Robocopy – A free tool from Microsoft that is included in the Windows Server 2003 Resource Kit. Simply create a new virtual machine and copy the data from the physical server to the virtual server. Robocopy preserves all the date/time stamps and security settings when copying data from server to server. This is useful for migrating non-application file servers where you only care about the data files on the server.
Sysprep – A Microsoft tool to prepare an existing server for cloning and restoration from a disk image.
NewSID – A Sysinternals utility to change a Windows server’s SID (security identifier) after the server has been cloned.
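The Robocopy approach can be scripted ahead of time. The sketch below builds (but does not run) a Robocopy command line; the UNC paths are hypothetical examples, while /MIR, /COPYALL, /R and /W are standard Robocopy options.

```python
# Sketch of a Robocopy-based data migration from a physical file server
# to a freshly built VM. The share paths are hypothetical.
source = r"\\physical-server\d$\data"
target = r"\\new-vm\d$\data"

# /MIR mirrors the whole tree, /COPYALL preserves timestamps, ACLs and
# ownership, and /R and /W limit retries so a locked file doesn't stall
# the migration indefinitely.
cmd = ["robocopy", source, target, "/MIR", "/COPYALL", "/R:2", "/W:5"]

# Review the command before executing it, e.g. via subprocess.run(cmd).
print(" ".join(cmd))
```

Building the argument list first makes it easy to log or dry-run the migration before touching production data.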
Imaging tools like Symantec Ghost and Acronis True Image create images of your physical servers. Once completed, you can create a new VM and restore the image to the virtual machine. VMTS.net has a good write-up on the procedure here.
You can always do a P2V by backing up a physical server using your traditional backup software, creating a new virtual machine, installing the same operating system on the VM and a backup agent, and then performing a restore from the backup taken of the physical server. Not the cleanest method for doing a P2V but can be used as a cheap alternative to other tools.
But for certain servers, it is less risky and just as easy to build a new server and migrate the data to it. Examples of this include SQL Servers and Domain Controllers. It’s a simple process to build a new virtual machine, install the operating system, dcpromo it and then shutdown your physical domain controllers. Likewise, building a new virtual SQL server then detaching the databases from a physical SQL server and attaching them to the virtual one is an easy process and reduces the risk of data corruption that can occur during the P2V cloning process.
If Hyper-V doesn’t convert the VMware faithful as soon as Microsoft makes its hypervisor generally available later this year, it may get a little help from its friends: Xen-based virtualization platforms.
Some like IT consultant Ardalan Dlawar believe that Microsoft will leverage support for Xen-based platforms to increase competition with VMware. “And Xen will have more third-party support and fewer compatibility issues,” according to Dlawar.
Despite user arguments that Hyper-V will have to deliver more than a lower price tag to win users, Hyper-V will certainly get consideration from many VMware customers. While organizations want to maximize their VMware investment, especially enterprise customers that deploy tens or hundreds of VMware virtual machines, Hyper-V evaluations will most likely be deployed, according to Andi Mann, research director at Boulder, Colo.-based Enterprise Management Associates (EMA).
Based on a survey of more than 600 enterprises, EMA found about 30% of enterprises have already planned a Hyper-V deployment, even with Hyper-V’s general availability several months away. In addition, Microsoft is actually within 10% of VMware in current and planned enterprise deployments, according to EMA’s data. Also consider this EMA finding: Xen-based platforms already account for more than 40% of current or planned deployments, which suggests that the market demand for VMware alternatives won’t disappear.
“VMware is still way out in front in server virtualization,” said Mann, “but both Microsoft and Citrix Systems are definitely catching up.”
Of course, VMware and Microsoft aren’t the only options available. As managers continue to use the tool sets available from Xen-based products such as Citrix’s XenServer and Virtual Iron Software’s offering, VMware and Microsoft are both working on tool sets that let users manage their competitors’ platforms.
“Both VMware and Microsoft understand that they are not going to be the only players on the market; they recognize that customers are leveraging their competitors’ technology in different parts of their businesses,” said Adnan Hindi, vice president of operations at ScienceLogic in Reston, Va. Hindi said that companies like his, which produces cross-platform appliances, will benefit from multiple-platform virtual landscapes. As shops continue to see benefit in the utilities that Xen-based products offer, Hindi sees a universal virtualization tool set ultimately emerging; such tools would essentially equalize the platforms on the market and reduce the choice of a virtualization platform largely to a matter of cost.
Over the past year, there’s been a lot of talk about the cost of VMware. But the price of VMware Server is right for small businesses, said Brett Riale, an IT consultant in Pittsburgh, who feels “truly blessed that programs as functional as VMware Server have been released for free.” Riale is hesitant to trust another Microsoft virtualization product after “the debacle” that was Virtual Server 2005. “Unless it absolutely outperforms VMware,” Riale said, he won’t consider Hyper-V in the near future. And Dave Baughman, a systems administrator for Muncie, Ind.-based Ontario Systems, thinks that his ESX system is “a consistent platform” and that the price of support is worth the investment. “Most of the cost is for support and [VMware’s] support is very good,” says Baughman.
But what will happen when all the Microsoft customers with enterprise agreements get a taste of Hyper-V support? Or if Microsoft offers more third-party support for Xen?
Howard Holton, a system engineer, said that market share will shift in Hyper-V’s favor.
“Hyper-V is an excellent solution for many of those that cannot afford the steep cost that ESX Server requires,” says Holton, who has already had a positive experience working with the release candidate and points out that for most data center operations, VMotion and High Availability (HA) are overkill. “Hyper-V fits into the market below VMware for hosts that do not need true HA.”
Holton said that in the long run Hyper-V might win out over VMware because Citrix’s XenServer has finally given Xen a roadmap. XenServer is the spoiler, with a lower TCO than VMware. Although price hasn’t deterred Holton from delivering VMware to his customers in the past, he predicted that Hyper-V will only increase in value.
“As a value-added reseller in the small to midsized space, VMware is the leading virtualization product that I offer. That is changing.”
New vendors, strategies, technologies and capabilities seem to present themselves daily to the virtualization administrator and manager. One resource that can help is the Intel Premier IT Professional (IPIP) community.
Today I had the opportunity to attend the IPIP event here in Columbus, Ohio. The meeting provided a great vendor-independent view of virtualization products that revolve around Intel technologies. Planning your virtualization hardware environment is critical to the decisions that will be made in your current and future virtualization implementations.
Between now and the end of the year, Intel is conducting ten more of these events throughout North America. The agenda of these events includes sessions in the following areas:
- Intel product roadmap
- Client virtualization strategies
- Consolidation efficiencies through virtualization
- Application virtualization strategies
One important advantage of attending the events is access to non-disclosure information about the processor product line, a key input for planning virtual environments. But the live events are only the tip of the iceberg. On the IPIP website, members can access case studies, presentations, videos and white papers anytime. Also, every page on the IPIP site carries popularity tags, so content of all types can be browsed by tag.
The best part of these resources is that they are free. Check out the Intel Premier IT Professional website and register for an event in your area.
Big players in the virtualization world griped about the absence of performance benchmarks for virtual machines on CIO Talk Radio yesterday and discussed some of the issues surrounding virtualization standards.
Guests on the show included: Simon Crosby, Chief Technology Officer of the Virtualization and Management Division of Citrix; Tom Bishop, Chief Technology Officer, of BMC Software; Dr. Tim Marsland, Sun Fellow, Chief Technology Officer, for the Software Organization at Sun Microsystems Inc.; and Brian Stevens, Chief Technology Officer and Vice President of Engineering at Red Hat.
The glaring omission in this lineup: VMware, Inc.
The panelists on CIO Talk Radio didn’t mention VMware by name, but did complain that some companies aren’t being open with their performance data, thus preventing the virtualization industry from publishing comparative performance data.
VMware’s licensing agreement for ESX allows users to conduct internal performance testing and benchmarking studies, and allows those users (and not unauthorized third parties) to publish or publicly disseminate the data provided that VMware has reviewed and approved of the methodology, assumptions and other parameters of the study.
Users that have published benchmark data, like Sr. Systems Engineer Mark Foster did on his blog, have had to unpublish results because of VMware’s stipulations.
Meanwhile, the SPEC Virtualization Committee has been working to create standard benchmarks for VMs. The committee’s goals are to deliver a benchmark that will model server consolidation of commonly virtualized systems such as application servers, web servers and file servers; provide a means to compare server performance while running a number of VMs; and produce a benchmark designed to scale across a wide range of systems.
SPEC expects these benchmarks to be available by the end of this year, but the timeline is not set in stone, according to the website.
Sun’s Marsland said benchmarking progress has been slow because there isn’t an easy way to define a workload, and a large number of benchmarks are required.
“We are talking about a virtual computer, with lots of aspects that need to be benchmarked,” Marsland said. “Every component that gets virtualized needs to be benchmarked.”
Having an open, standardized way of benchmarking is expected to push virtualization further into the mainstream because it will eliminate false perceptions about performance, panelists said. For instance, “there is the thought that I/O-intensive workloads cannot be virtualized, and the absence of benchmarks prevents us from proving otherwise. It is important for us to have good benchmarks out there,” one panelist on the show said.
Though users look at benchmarks, this type of data is most useful to vendors and OEMs who can use the performance standards to improve the technology, and of course, market their products.
“More open scrutiny of performance results will help us to improve as an industry overall,” Bishop said. “There are ways to measure performance in non-virtual environments, and people are adapting those techniques to get the most out of their virtualized environments.”
In terms of application performance in virtual environments, the issues differ depending on the data center infrastructure. The network, the servers and the storage all affect performance, said Stevens of Red Hat.
Another problem with virtualization? There are support challenges. If an application running in a VM starts acting wacky, the application vendor may not support it, Crosby said.
Licensing and support in virtual environments have been a major gripe; Oracle, for example, does not support running its applications with VMware.
“It is a reasonable concern…right now there is irrational market-based control. Some folks are abstaining from supporting certain apps [in virtual environments]. As customers demand support, things will hopefully get rational, by next year I hope,” Crosby said.
While Citrix Systems’ Xen’s ubiquity may help the technology earn a legacy as the invisible hypervisor, it may also prove the most challenging next step for IT administrators and developers who want to find or develop software that leverages, supports or extends the Xen hypervisor.
To understand the problem that Xen faces, take Java as an example. Java is great, and I am committed to developing applications that are truly cross-platform using what I consider this fantastic creation. But in all the years that Java has been around, it has failed to gain the traction that .NET has achieved in less time. Why?
Although Java is slower, it offers a greater advantage than .NET in terms of portability, yet Java still hasn’t managed to gain a majority mindshare of developers. This is because Java’s true worth is its portability: its ability to blend into any system. Java has succeeded so well at being invisible that it has lost the sexiness associated with languages used to construct desktop and Web applications. Every once in a while, something like the Google Web Toolkit comes along that makes people take a step back and re-evaluate Java’s usefulness for end-user applications. Ultimately, Java has been left to the obscurity of providing enterprise, back-end applications.
Is Xen destined for a Java-like fate? While it ultimately may not prove difficult to develop cutting-edge technology compatible with the Xen hypervisor, it may prove difficult to market it. If you are in the business of selling virtualization add-on products, you want to ensure that your product is compatible with VMware Infrastructure, because that is where the sales are.
The marketplace has not been especially kind to Xen for two reasons: it was not first to market, which is an important factor in any industry, and Xen resellers do not have the power of the VMware PR machine. All major virtualization vendors, including VMware, say that hypervisors should be ubiquitous — the difference is that when VMware CEO Diane Greene says it, she is quoted on virtualizationreview.com and in person. VMware shouts the same thing everyone else is casually discussing, and this makes headlines.
Xen’s legacy may be to become the ubiquitous, embedded hypervisor for all to use, but that strength may also be the greatest detriment to Xen-based virtualization platforms. Xen’s strength is its practical application as the invisible, reused, resold, embedded hypervisor, but invisibility just hasn’t worked in Citrix’s favor. Instead, it has discouraged partners from building ecosystems around Xen and has marginalized the brand name.
Hypervisor price competition will get intense this year as Microsoft enters the market with Hyper-V, and VMware will have to respond, says Chris Wolf.
Wolf is Burton Group’s senior analyst for data center strategies, and he’s probably everyone’s go-to expert on virtualization. He predicted how the price wars will shake out in this short interview and said not to expect any vendors’ prices on production-level features to plummet.
I talked with Chris during TechTarget’s Advanced Enterprise Virtualization Seminar in San Francisco last week. He says to expect price drops and new virtualization product announcements in June, just before Burton Group’s Catalyst Conference in San Diego June 23-27.
If you haven’t heard Chris speak, you’re missing out. He knows virtualization inside out, way beyond the basics. And he isn’t shy about telling it like it is — much to vendors’ chagrin. Check out his sessions at the TechTarget seminars or the Catalyst Conference, or any other chance you get. I read his blog at ChrisWolf.com daily, and he has also written virtualization books, including Virtualization: From the Desktop to the Enterprise.
SearchServerVirtualization.com and SearchVMware.com have been covering cost and pricing wars between virtualization vendors, looking beyond the marketing hype to offer useful info for IT pros who are evaluating products. I particularly liked Rich Brambley’s blog post about the real story behind vendors’ competition for hypervisor market share.
Now that you know Chris Wolf’s views on virtualization product pricing, how about sharing yours? Will better pricing cause you to re-evaluate whether you continue using VMware? Are you sticking with VMware because you’ve invested in its ecosystem and in staff training? Have you been waiting for Microsoft or Xen technologies to catch up before starting virtualization projects? Tell me about it in the comments below or by emailing me at email@example.com. Even better, respond to Chris Wolf’s interview or sound off on VMware in a video on YouTube, and send me the link. We’ll post it on this blog.
This blog post was written by Megan Santosus, Features Writer.
A recent white paper published by Embotics Corp. on the hidden costs of virtual machines (VMs) paints just the kind of picture one might expect from a vendor of VM lifecycle management software. According to the paper, an IT shop with 150 virtual machines will typically spend between $50,000 and $150,000 on VMs that are redundant. Those costs stem from four areas: infrastructure (processing, storage, memory and the like); management systems (backup, change and configuration management, etc.); server software (licenses for operating systems and applications); and administration (labor and training). David Lynch, Embotics’ vice president of marketing, said that it’s not unusual for customers to discover that half of their VMs are redundant.
Are VMs really sieves leaking that much money?
Todd Monahan, data center manager at Alcatel-Lucent’s Ottawa, Ont., facility (and an Embotics customer, although he didn’t talk about his own company’s experience), finds the white paper’s conclusions on the money, so to speak. Monahan estimates that the typical licensing costs incurred by a data center of his size – 500 servers split 50:50 between physical and virtual boxes – break down per machine as follows: monitoring, $250 to $300; backup, $600 to $700; and operating system (standard Windows), $600 to $700. Add on application licensing costs, which vary widely, and you’ve got quite a bit more than chump change at stake.
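Monahan’s per-machine figures make the white paper’s range easy to sanity-check. A rough back-of-the-envelope calculation, using the midpoints of his ranges and assuming (as Embotics suggests) that half of a 150-VM shop’s machines are redundant:

```python
# Per-VM annual licensing costs, midpoints of Monahan's ranges.
monitoring = 275   # $250-$300
backup = 650       # $600-$700
windows_os = 650   # $600-$700

per_vm = monitoring + backup + windows_os  # $1,575 before application licenses

# Embotics claims roughly half of a 150-VM estate can be redundant.
redundant_vms = 150 // 2
wasted = redundant_vms * per_vm

print(f"~${wasted:,} in base licensing alone")
```

That lands comfortably inside the $50,000-to-$150,000 range the white paper cites, before any application licenses are counted.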
And as for half the number of VMs being unnecessary, that resonates with Monahan as well.
“It’s so easy to create VMs when you go through a consolidation exercise,” Monahan said. “And because you can’t see them, it really becomes an issue of out of sight, out of mind.”
I was recently asked, “Do you have any visibility into the storage utilization you provide your virtual machines?” I stopped, thought about it and said no. However, in my situation, this is not yet a problem.
A pitfall of most enterprise server virtualization strategies is that storage is reserved up to the defined maximum, regardless of how much the virtual machine has actually written to the virtualized filesystem. For example, with a base installation of a Windows Server 2003 system, the footprint after my server build is around 5 GB, while my standard build allocation is 32 GB. That makes the system only 15.6% utilized from inception. This rule of thumb applies to most servers, and a standard build has 32 GB as an accepted footprint per system.
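The utilization figure follows directly from those numbers; the same two-line calculation works for any thick-provisioned allocation:

```python
# Thick-provisioned allocation vs. actual footprint for a fresh
# Windows Server 2003 build, per the figures above.
allocated_gb = 32
used_gb = 5

utilization = used_gb / allocated_gb * 100
print(f"{utilization:.1f}% utilized")  # prints "15.6% utilized"
```

The remaining 84% of the allocation is reserved on the shared storage but sits idle, which is exactly the visibility gap the opening question is about.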
Excluding backend storage virtualization and de-duplication strategies, what about systems that have a storage footprint larger than 32 GB? Well, luckily we’ve been down this path before:
The storage is the storage, virtual or physical.
Managing the percentage of utilization for shared storage should be a task of continuing diligence. I don’t (yet) have a large number of virtual servers with a footprint above the standard build, but those systems face the same battles we have fought for years with general-purpose servers. As an example, take a main file and print server that holds 2 TB on a general-purpose server: it will be about 2 TB on a virtual server as well from the storage perspective. For large storage footprints using iSCSI or storage area network (SAN) technologies, the difference in configuration is minimal.
However, how do we address the first question about under-utilized storage footprints for the virtualized systems? Is it best to look only at operating system metrics? That may be an adequate solution for each operating system, but the aggregation will be from different sources and outputs. What are you doing to address storage utilization when you are not using storage virtualization?
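One vendor-neutral starting point is to collect the same OS-level metric from every guest and normalize it in one place. A minimal sketch using only Python’s standard library (in a real deployment each VM would run this locally and ship the result to a central collector over SSH, WMI or an agent; here it just samples the local root filesystem):

```python
import shutil

def volume_utilization(path="/"):
    """Return (used_gb, total_gb, percent) for a mounted filesystem."""
    usage = shutil.disk_usage(path)
    gb = 1024 ** 3
    return usage.used / gb, usage.total / gb, usage.used / usage.total * 100

# Sample the local root filesystem; a collector would aggregate these
# tuples across all guests into one utilization report.
used, total, pct = volume_utilization("/")
print(f"{used:.1f} GB of {total:.1f} GB ({pct:.1f}%)")
```

The point is that once every guest emits the same (used, total, percent) tuple, the aggregation problem reduces to transport and reporting rather than reconciling different tools’ outputs.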
As new vendors enter the x86 virtualization space, pioneer VMware, Inc. is moving on to the next frontier, cloud computing, said VMware President and Chief Executive Officer Diane Greene in her keynote address at the JP Morgan Technology Conference in Boston on May 21.
“The dream of cloud computing is fast becoming reality,” she said.
With cloud computing, workloads are assigned to connections, software and services, which are accessed over a network of servers and connections in various locations, collectively known as “the cloud.” Using a thin client or other access point, like an iPhone or laptop, users can access the cloud for resources on demand.
Greene told the event attendees that the evolution of virtualization begins with users deploying VMs for testing and development, then easing into server consolidations for production environments. The third phase is resource aggregation, with entire data centers being virtualized, followed by automation of all of those aggregated workloads. The final “liberation” phase is cloud computing, Greene said.
“We now have competition going after the first two phases of virtualization evolution with 1.0 products, but we are very much in the aggregate, automate and liberate phase,” Greene said.
Other vendors have their sights set on cloud computing as well. IBM Corp. and Google announced plans in October to promote cloud computing by investing over $20 million in hardware, software and services at universities, and Reuters reported this week that Microsoft expects companies will abandon their own in-house computer systems and shift to cloud computing as a less expensive alternative.
While VMware moves towards cloud computing, the company is in the thick of the automation phase and has released a number of virtualization automation products recently, including VMware Site Recovery Manager for Disaster Recovery, VMware Stage Manager and VMware Lifecycle Manager for lifecycle management and VMware Lab Manager, as well as product and service bundles.
The company is also focusing on desktop virtualization with Virtual Desktop Infrastructure and has introduced services and products to move that initiative forward.
“Desktop virtualization does require a major change in the infrastructure, so it could be 2011 before we see desktop virtualization adoption in the millions. We do have hosted desktop virtualization customers with large deployments…but [adoption] will happen at a measured pace,” Greene said. “I do think someday everyone’s desktop will run in a virtual machine, whether it be on PCs or Macs, thin clients or phones. With the advantages from a security, manageability and flexibility standpoint, it will become mainstream.”
The cost of desktop virtualization is a barrier to adoption, but Greene said the price per user of desktop virtualization will come down steadily over the next few years. It is in the $800 per user range today, she said.