The Virtualization Room


May 23, 2008  10:00 AM

Virtual machines bleed money

Lauren Horwitz

This blog post was written by Megan Santosus, Features Writer.
A recent white paper published by Embotics Corp. on the hidden costs of virtual machines (VMs) paints just the kind of picture one might expect from a vendor of VM lifecycle management software. According to the paper, an IT shop with 150 virtual machines will typically spend between $50,000 and $150,000 on VMs that are redundant. Those costs stem from four areas: infrastructure (processing, storage, memory and the like); management systems (backup, change and configuration management, etc.); server software (licenses for operating systems and applications); and administration (labor and training). David Lynch, Embotics’ vice president of marketing, said that it’s not unusual for customers to discover that half of their VMs are redundant.
Are VMs really sieves leaking that much money?

Todd Monahan, data center manager at Alcatel-Lucent’s Ottawa, Ont., facility (and an Embotics customer, although he didn’t talk about his own company’s experience), finds the white paper’s conclusions on the money, so to speak. Monahan estimates that the typical licensing costs incurred by a data center of his size – 500 servers split 50:50 between physical and virtual boxes – break down per machine as follows: monitoring, $250 to $300; backup, $600 to $700; and a standard Windows operating system, $600 to $700. Add the application licensing costs, which vary widely, and you’ve got quite a bit more than chump change at stake.

And the claim that half of all VMs are unnecessary resonates with Monahan as well.

“It’s so easy to create VMs when you go through a consolidation exercise,” Monahan said. “And because you can’t see them, it really becomes an issue of out of sight, out of mind.”

May 22, 2008  9:51 AM

Storage utilization is a new battle

Rick Vanover

I was recently asked, “Do you have any visibility of the storage utilization you provide your virtual machines?” I stopped, thought about it and said “no.” However, in my situation, this is not yet a problem.

A pitfall for most enterprise server virtualization strategies is that storage is reserved up to the defined maximum, regardless of how much the virtual machine has actually written to its virtualized filesystem. For example, if I have a base installation of a Windows Server 2003 system, the footprint as I do my server builds will be around 5 GB. My standard build allocation is 32 GB, which makes the system only 15.6% utilized from inception. This rule of thumb applies to most servers, and a standard build has 32 GB as an accepted footprint per system.
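As a rough illustration of that arithmetic, here is a minimal sketch using the figures above (the footprint and allocation numbers come from the example; everything else is hypothetical):

    # Back-of-envelope utilization for a thick-provisioned guest disk.
    # Figures from the example above: ~5 GB written vs. a 32 GB standard allocation.
    footprint_gb = 5.0     # space the guest OS has actually written
    allocation_gb = 32.0   # space reserved on shared storage for the virtual disk

    utilization_pct = footprint_gb / allocation_gb * 100
    unused_gb = allocation_gb - footprint_gb

    print(f"utilized: {utilization_pct:.1f}%")         # ~15.6%
    print(f"reserved but unused: {unused_gb:.0f} GB")  # 27 GB per VM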

Excluding backend storage virtualization and de-duplication strategies, what about systems that have a storage footprint larger than 32 GB? Well, luckily we’ve been down this path before:

The storage is the storage, virtual or physical.

Managing the percentage of utilization for shared storage should be a task of continuing diligence. I don’t (yet) have a large number of virtual servers with a footprint above the standard build, but these systems face the same battles we have had for years with general-purpose servers. As an example, take a main file and print server that is 2 TB on a general-purpose server: it will be about 2 TB on a virtual server as well from the storage perspective. For large storage footprints using iSCSI or storage-area network (SAN) technologies, the difference in configuration is minimal.

However, how do we address the first question about under-utilized storage footprints for virtualized systems? Is it best to look only at operating system metrics? That may be an adequate solution for each operating system, but the aggregation will come from different sources and outputs. What are you doing to address storage utilization when you are not using storage virtualization?
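One hedged way to approach that aggregation problem is to collect each guest’s used and allocated figures separately (by whatever means each operating system offers) and roll them up into a single report. A minimal sketch, with hypothetical guest names and numbers:

    # Roll up per-guest storage figures (collected separately from each OS)
    # into a single utilization report. Names and numbers are hypothetical.
    guests = [
        # (name, used_gb, allocated_gb)
        ("file-print-01", 1850.0, 2048.0),
        ("web-01",           6.2,   32.0),
        ("app-02",          11.4,   32.0),
    ]

    total_used = sum(used for _, used, _ in guests)
    total_alloc = sum(alloc for _, _, alloc in guests)

    for name, used, alloc in guests:
        print(f"{name:15} {used:8.1f} / {alloc:8.1f} GB  ({used / alloc:6.1%})")

    print(f"{'TOTAL':15} {total_used:8.1f} / {total_alloc:8.1f} GB  "
          f"({total_used / total_alloc:6.1%})")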


May 22, 2008  9:34 AM

VMware entering final phase of virtualization evolution: Cloud computing

Bridget Botelho

As new vendors enter the x86 virtualization space, pioneer VMware, Inc. is moving on to the next frontier, cloud computing, said VMware President and Chief Executive Officer Diane Greene in her keynote address at the JP Morgan Technology Conference in Boston on May 21.

“The dream of cloud computing is fast becoming reality,” she said.

With cloud computing, workloads are assigned to connections, software and services, which are accessed over a network of servers and connections in various locations, collectively known as “the cloud.” Using a thin client or other access point, like an iPhone or laptop, users can access the cloud for resources on demand.

Greene told the event attendees that the evolution of virtualization begins with users deploying VMs for testing and development, then easing into server consolidations for production environments. The third phase is resource aggregation, with entire data centers being virtualized, followed by automation of all of those aggregated workloads. The final “liberation” phase is cloud computing, Greene said.

“We now have competition going after the first two phases of virtualization evolution with 1.0 products, but we are very much in the aggregate, automate and liberate phase,” Greene said.

Other vendors have their sights set on cloud computing as well. In October, IBM Corp. and Google announced plans to promote cloud computing by investing over $20 million in hardware, software and services at universities, and Reuters reported this week that Microsoft expects companies will abandon their own in-house computer systems and shift to cloud computing as a less expensive alternative.

While VMware moves toward cloud computing, the company is in the thick of the automation phase and has recently released a number of virtualization automation products, including VMware Site Recovery Manager for disaster recovery, VMware Stage Manager, VMware Lifecycle Manager for lifecycle management and VMware Lab Manager, as well as product and service bundles.

The company is also focusing on desktop virtualization with Virtual Desktop Infrastructure and has introduced services and products to move that initiative forward.

“Desktop virtualization does require a major change in the infrastructure, so it could be 2011 before we see desktop virtualization adoption in the millions. We do have hosted desktop virtualization customers with large deployments…but [adoption] will happen at a measured pace,” Greene said. “I do think someday everyone’s desktop will run in a virtual machine, whether it be on PCs or Macs, thin clients or phones. With the advantages from a security, manageability and flexibility standpoint, it will become mainstream.”

The cost of desktop virtualization is a barrier to adoption, but Greene said the price per user of desktop virtualization will come down steadily over the next few years. It is in the $800 per user range today, she said.


May 20, 2008  9:26 AM

Using virtual appliances with VMware

Eric Siebert

VMware’s Virtual Appliance Marketplace has over 800 appliances, across a wide range of categories, available for download and use in your VMware environment. For those who may not be familiar, virtual appliances are pre-built, pre-configured virtual machines designed to serve specific functions in virtual environments.

Some of the types of appliances available for VMware include anti-spam, database/app/web servers, firewalls, network monitoring, operating systems and administration tools. There is even an appliance for running DOS-based games from the early ’90s. Most of the appliances are free to download and use, except for some of the certified appliances from vendors such as IBM, Symantec, VMSight, Blue Lane and BEA, which must be purchased. Almost all of the appliances run various distributions of Linux to avoid operating system licensing costs, and many utilize free open-source applications.

These appliances are compatible with any of VMware’s products, including Player, Workstation, Server, Fusion and ESX. Appliances range in size from a few megabytes for some of the small router or firewall appliances to a few gigabytes for some of the bigger, more full-featured ones. A typical appliance download will usually include the virtual disk (vmdk) file(s), a configuration (vmx or ovf) file and often a few companion files. Once you locate an appliance that you want to use, simply download it and copy the files to your VMware server or workstation. After adding the VM via your management interface, you’re ready to power it on and start using it. Most appliances are pre-configured to use DHCP to automatically obtain an IP address, but they will usually allow you to configure a static IP address if needed. A new feature in VirtualCenter 2.5 allows you to automatically download and import OVF-format appliances via a simple wizard interface. You can also use VMware Converter to import virtual appliances into ESX.
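As a rough sketch of that download-and-stage step, the snippet below fetches an archive, unpacks it and lists the VM files it contains. The URL and datastore path are hypothetical placeholders, and registering the VM still happens through your VMware management interface or Converter:

    # Fetch a virtual appliance archive, unpack it and list the VM files inside.
    # The URL and destination path below are hypothetical placeholders.
    import urllib.request
    import zipfile
    from pathlib import Path

    appliance_url = "http://example.com/appliances/monitoring-appliance.zip"
    dest = Path("/vmfs/volumes/datastore1/appliances")

    dest.mkdir(parents=True, exist_ok=True)
    archive = dest / "appliance.zip"
    urllib.request.urlretrieve(appliance_url, str(archive))

    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)

    # An appliance download should include at least one .vmx (or .ovf) file
    # and one or more .vmdk virtual disks.
    for pattern in ("*.vmx", "*.ovf", "*.vmdk"):
        for vm_file in dest.rglob(pattern):
            print(vm_file)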

Below are some notable appliances that you might want to check out:

ESX Deployment Appliance – (free) Makes deploying new ESX servers simple and fast

X-M0n0wall – (free) A great little firewall for protecting virtual networks

Network Security Toolkit – (free) Contains many open-source network security applications

NagiosVMA – (free) All-in-one open-source host and network monitoring system

LAMP Appliance – (free) A complete web environment including web server (Apache), database (MySQL) and scripting language (PHP)

Remote CLI – (free) Remote command line utility for managing ESXi servers

Browser Appliance – (free) Safely browse the internet inside a virtual machine to prevent malware from infecting your desktop.

LeftHand Virtual SAN Appliance – ($) Converts internal storage of VI3 servers into an iSCSI SAN

SpamTitan – ($) A full-featured email security appliance

VM Sight – ($) Provides virtual network reporting and analytics

VMware Infrastructure Perl Toolkit 1.5 – (free) Provides a Perl interface to manage and control a VI3 environment

vKernel – ($) ESX resource monitoring and reporting including chargeback reports


May 16, 2008  9:45 AM

Still no Linux VMware VI client

Eric Siebert

As this long-running thread in the VMware forums indicates, many users are frustrated with VMware’s lack of support for a Linux-based Virtual Infrastructure client to manage VI3 environments. Currently, the VI Client runs only under Windows (it’s written in .NET), so Linux shops are forced to purchase and install Windows to run it. An alternative web interface does exist; however, it can manage only virtual machine operations and not the ESX hosts themselves, which severely limits its usefulness to VMware administrators.

While VMware has not officially announced any plans to develop cross-platform versions of the VI Client or any of its other Windows-only applications, the above-mentioned thread includes one response from a VMware employee who hints that VMware may eventually release a Linux version. A Linux version of the VI Client would be a welcome addition for many VMware customers, if not an essential feature for those running ESX servers in non-Windows environments.

Many customers have also been asking for a Linux version of VirtualCenter, VMware’s centralized management product for ESX, along with support for open-source databases like MySQL. VirtualCenter installs only on a Windows server, and its required database supports only Microsoft SQL Server or Oracle. You can also use SQL Server Express with VirtualCenter, but it is not recommended or supported for production environments. Because of this limitation, customers who wish to use VirtualCenter must also plan on the additional expense of a Windows operating system license for the VirtualCenter server, as well as a database license if they do not already have an existing SQL Server or Oracle database server they can use for the VirtualCenter database.

Unless more customers speak up and request cross-platform versions of its current Windows-only applications, VMware will probably not develop them. If the demand exists, there’s a better likelihood of it happening. Having Linux versions would also help VMware compete in an increasingly competitive virtualization market. If you would like to see VMware develop a Linux version of the VI Client and other applications, contact your VMware sales representative and let them know.


May 16, 2008  9:28 AM

Virtual environment architecting requires network zone placement

Rick Vanover

Almost every virtualization admin I interact with has materially changed strategy at some point during their first generation of server virtualization, before the entire project is complete. Among the strategy changes are those related to network zoning, which becomes a more important consideration as organizations approach higher levels of virtualization.

Specifically, placing external-facing systems on the same virtual host as internal systems can put both sides of the network at risk if the hypervisor is compromised from the external-facing side. This becomes especially important as the virtual appliance space makes it easy for organizations to move firewall, intrusion detection, VPN and other external-facing roles into the virtual environment, alongside the frequent goal of virtualizing everything.

A more isolating strategy creates a separate environment with hosts dedicated to external-facing virtual machines (VMs) that do not simultaneously host VMs on the internal network. While those hosts may be connected to both the internal and external networks in a DMZ role, a compromise of the hypervisor or host system would not have as direct an impact on VMs running only on the internal networks. This also helps in emergency remediation by allowing a virtual host to be fully isolated or powered off until the issue is identified, without affecting the internal-network VMs.
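As a hedged illustration, a simple audit script can flag any host that mixes zones, given a mapping of which zone each VM belongs to. The host and VM names below are hypothetical, and how you export the inventory from your management tools will vary:

    # Flag hosts that mix external-facing (DMZ) and internal-only VMs.
    # Host and VM names are hypothetical; build this map from your own inventory.
    inventory = {
        "esx-dmz-01":  {"proxy-01": "external", "vpn-01": "external"},
        "esx-prod-03": {"hr-db-01": "internal", "web-ext-02": "external"},  # mixed
    }

    for host, vms in inventory.items():
        zones = set(vms.values())
        if len(zones) > 1:
            print(f"WARNING: {host} mixes zones {sorted(zones)}: {sorted(vms)}")
        else:
            print(f"OK: {host} hosts only {zones.pop()} VMs")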

When planning your next generation of server-side virtualization, consider the risks of placing internal and external network zones on the same resources. Keeping them separate can bake some inherent security into your environment that may save the day in the event of a zero-day vulnerability affecting the guest operating system or the virtualization hypervisor.


May 15, 2008  7:06 AM

Changing the default web access page in VMware VI3

Eric Siebert

The page that displays by default when you enter the hostname of an ESX or VirtualCenter server has links that some administrators like to suppress. In addition to the option to log into the server, there are also links to download the installation program for the VI Client application and the VI SDK, and to log into the scripted installer. Because authentication is not required for the server’s default web page, anybody can download the VI Client, much to the chagrin of administrators.

Many administrators would prefer to display only the Web Access login page instead. To get to the login page, you have to click the link from the default page or add “/ui” to the URL.

You can modify the default web page on both ESX and VirtualCenter servers so that the download links are not displayed, or simply force a redirect to the login page instead. To do this, follow the steps below. Please note that future upgrades to VirtualCenter or ESX will usually overwrite your changes, so you will need to repeat them after any upgrade.

For ESX servers, you will need to modify the index.html file located in the /var/lib/vmware/hostd/docroot directory on the ESX server. Be sure to make a backup of this file before editing it. You can log into the Service Console and edit it with nano, or use an application like WinSCP to connect to the server and edit the file. For VirtualCenter servers, you will need to modify the index.html file located in the C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\docRoot directory.

 

If you simply want to hide the download links, you can comment out those lines of code in the index.html file. HTML comment tags tell the browser not to render a section of code and are often used to add documentation. The start tag is “<!--” and the end tag is “-->” (without the quotes). Add a start tag on a new line before the first download link and an end tag on a new line after the second link to comment out the two download links under Getting Started. Then save the file and refresh your browser, and the links should be gone. Alternatively, you can remove the code completely so the links will not appear even if someone views the source of the web page.

If you wish to automatically redirect the default welcome page to the login page, add a meta refresh line such as <meta http-equiv="refresh" content="0;url=/ui"> at the very beginning of the index.html file. If you do not want the main page to show before the redirect, either delete all the other code in the file or add a “<!--” on the line after the redirect and a “-->” at the very bottom of the file, which comments out all the code in the file except the first line.


May 14, 2008  9:20 AM

VMware pushes desktop virtualization on management and security benefits

Bridget Botelho

VMware Inc. Senior Director of Enterprise Desktops Gerald Chen visited our office on Tuesday morning to discuss the different types of desktop virtualization and answer common questions about Virtual Desktop Infrastructure (VDI), such as how it differs from terminal services and what it costs.

Here’s how VDI works: each end user gets a virtual machine (VM) that is deployed from a server in the data center directly to a PC, laptop or thin client computer. Each VM is customizable, so all of the user’s settings are saved and restored each time the user signs in, Chen said.

When a user logs off for the day, their VM goes idle, and wakes back up when the user logs into their system again, according to Chen. Chen believes that the advantage of VDI is that sensitive data is not being stored on desktops, which can easily be lost or stolen, and these virtual desktops are easier to manage than physical ones.

“VDI is great for industries like health care that are really concerned about information security and compliance. The real value though, is in management. All of the information is safe in the data center, and centrally managed through Virtual Infrastructure,” Chen said. “For instance, if you have 100 new employees who need desktops, you can deploy a VM for each of them in just minutes, and manage all of them centrally.”

VDI is different from server-based computing (SBC) systems like Citrix Systems Inc.’s XenApp in that VDI connects a single user to a single operating system (OS), instead of having multiple users share one OS.

“Not every application likes to share an OS, and there is also bad isolation; if one application crashes, everyone sharing that OS crashes as well. Those desktops can’t be customized either. It is a locked environment.”

Chen went on to explain that VDI supports four to 10 VMs per server core, so a server with one quad-core processor can theoretically house up to 40 VMs. Of course, that varies depending on things like workload, applications and memory. If the VMs become too heavy for the server to handle, management features in VI3 intervene: VMotion can move live VMs from one server to another when capacity issues arise, as can the Distributed Resource Scheduler (DRS), which allocates and balances computing resources as needed using VMotion.
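A quick back-of-envelope check of that sizing guidance (the four-to-10-per-core range is Chen’s; the host configurations below are hypothetical examples):

    # Rough VDI capacity estimate from the 4-to-10-VMs-per-core guideline above.
    vms_per_core = (4, 10)

    for sockets, cores_per_socket in [(1, 4), (2, 4)]:  # hypothetical host configs
        cores = sockets * cores_per_socket
        low, high = cores * vms_per_core[0], cores * vms_per_core[1]
        print(f"{sockets} x {cores_per_socket}-core: roughly {low}-{high} desktop VMs")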

Desktop virtualization case study
VMware announced customer case studies in February, including one at Huntsville Hospital in Huntsville, Alabama.

The hospital needed to implement a new medical information application throughout its network while protecting HIPAA-related data. By deploying hosted desktops on VMware, the hospital was able to lock down sensitive patient data and reduce the cost and complexity of desktop management.

The hospital used a combination of thin clients and blade servers to access the centralized virtual desktops and, in turn, reduced power consumption across the hospital by 78%, improved hardware longevity with lower maintenance needs and made wireless thin clients on wheeled carts available to hospital staff. Doctors can also remotely access their VMs over the Internet using a web browser when necessary.

The downside to desktop virtualization
While the benefits are clear, there are some downsides to desktop virtualization: extra storage and initial cost.

Chen told SearchServerVirtualization.com that VMware is working on reducing image sizes and has designed a way to keep only one copy of files that are identical among many users, like icons and other graphics, to reduce the amount of storage necessary.

The cost of implementing desktop virtualization turns some users off. According to Ars Open Forum blogger ‘Bright Wire,’ the cost and the magnitude of the system upgrades required are not worth the benefits.

“The cost of deploying virtual desktops is massive,” Bright Wire wrote. “You will need to re-gear your existing desktops to run the virtual or you will need vendor equipment that costs twice as much as a new desktop. Either way, the cost is big in manpower. On top of that, your infrastructure will need serious review.”

According to VMware’s product specifications, local desktop virtualization requires a 500 MHz or faster processor with recommended 256 MB of memory, though Forrester reports that PCs must be faster and have more RAM to work efficiently.

“In addition you need to look into the server infrastructure,” Bright Wire said. “You are talking about needing a lot of iron on the backside to handle the needs of the server to supply two to 16 desktops. All this adds up quickly and can easily swamp a datacenter.”

As for pricing complaints, VMware is used to hearing them and holds firm to the ‘you get what you pay for’ mantra, saying the management benefits are worth the price.

The company charges $150 per concurrent user plus additional costs for support, either Gold or Platinum levels. Both bundles include VMware Infrastructure Enterprise Edition for VDI (which consists of VMware ESX Server 3.5 and VirtualCenter 2.5) and the VMware Virtual Desktop Manager 2. The VMware VDI Starter Edition, which enables 10 virtual desktops, has a list price of $1,500. The VMware VDI Bundle 100 Pack, which enables 100 virtual desktops, has a list price of $15,000.

The market indicates a demand for desktop virtualization, as a number of other vendors have also entered the desktop virtualization space, including Sun Microsystems Inc., Citrix, Pano Logic Inc. and Symantec. Chen would argue that many customers come for the reduction in hardware but stay for the management applications.

“Reducing hardware costs is not a reason to use VDI, it is management. We have customers who have seen 40% to 50% ROI in terms of management costs and the amount of time it frees up.”


May 13, 2008  9:36 AM

Burning in virtual server RAM prevents headaches

Eric Siebert

When system administrators receive new servers, they are often anxious to get them unpacked, into the rack and loaded up with ESX so they can start creating virtual machines. But an important first step should come before installing virtualization software on the server: always burn in the memory to test for defective memory modules.

Defective memory usually goes unnoticed in a newly deployed server, and it may be months before signs of a problem start to show. In one group of five HP servers, I had to replace seven memory DIMMs over an 18-month period. Most of these were eventually detected by the HP Insight Manager agents that reside on the server, but two of them caused hard crashes of VMware ESX servers, commonly known as a PSOD (Purple Screen of Death). A PSOD on one of your production servers, loaded up with important virtual machines, is never a good thing. You can reduce the chances of this happening by burning in your memory.

Most servers do a brief memory test on startup as part of their POST procedure. This is not a very good test and will only detect the most obvious of memory problems. A more thorough test checks the interaction of adjacent memory cells to ensure that writing to one cell does not overwrite an adjacent cell.
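Purely as a conceptual sketch of that adjacent-cell idea, and not a usable memory test, the snippet below writes to one cell and verifies its neighbors are untouched; a user-space script cannot exercise physical DIMMs the way a bootable tester does:

    # Conceptual sketch of an adjacent-cell test pattern: write to one cell and
    # verify its neighbors were not disturbed. A user-space script cannot target
    # physical addresses, so real testing requires a bare-metal tool like Memtest86+.
    region = bytearray(1024 * 1024)  # stand-in for a region of memory

    def adjacent_cells_ok(buf):
        for i in range(1, len(buf) - 1):
            buf[i - 1] = buf[i + 1] = 0x00  # clear the neighboring cells
            buf[i] = 0xFF                   # write the cell under test
            if buf[i - 1] != 0x00 or buf[i + 1] != 0x00:
                return False                # the write bled into a neighbor
        return True

    print("pattern check:", "pass" if adjacent_cells_ok(region) else "fail")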

A good, free memory test utility called Memtest86+ performs many different tests to thoroughly exercise your server’s memory. You can download it as a small 2 MB ISO file that can be burned to a CD and booted on your new server. Let the memory burn in for at least 24 hours (the longer the better). Memtest86+ will run indefinitely, and the pass counter will increment as all of the tests are run. The more RAM you have in your system, the longer it will take to complete one pass; a system with 32 GB will generally take about a day. Memtest86+ not only tests your system’s RAM but also the CPU’s L1 and L2 caches. Should it detect an error, the easiest way to identify the offending memory module is to remove a DIMM and run the test again, repeating until it passes. The Memtest86+ documentation includes troubleshooting methods, detailed test descriptions and the causes of errors.

If you already have ESX servers running and want to test their memory, you can use the little-known Ramcheck service to do this while ESX is running. The service is non-disruptive and runs in the background, consuming minimal CPU cycles.

The extra time you spend testing memory before deploying servers helps eliminate potential problems down the road.


May 12, 2008  7:51 PM

Savings from a green data center take time to grow

Keith Harrell

Last month, SearchServerVirtualization.com blogger Eric Siebert discussed the cost benefits of virtualization, which stirred some discussion about the role these savings play in the larger scheme of server virtualization strategies.

It seems that the virtualization gospel of cost reduction has drawn criticism from some who see these claims as pie in the sky, or at least not as awe-inspiring once less apparent expenses are factored in. While Siebert focused on the savings created by decreased data center power consumption, his blog post drew this response on the Virtual Data Center blog:

I think that the core message behind Eric’s post is a good thing, but it’s missing the big picture. Thinking that saving on raw power is going to translate dollar-for-dollar into OpEx savings is short-sighted. Please do begin looking into power consumption as one of your data center cost metrics and as part of your overall virtualization strategy, but also factor in everything else that’s going to be required to complete this task. You may find that you save a ton of money within 12 months of converting, or you may find that savings is much less than you originally anticipated; just make sure you know that before hand and know what you’re getting into so you don’t promise your CIO $1M in savings only to spend $950k getting there.

While Siebert’s original comments were limited to the savings associated with a 10-to-15-cent reduction per kilowatt-hour (resulting in estimated savings of between $219,000 and $328,500 for this particular project), Siebert agrees that any enterprise virtualization project requires a financial investment up front. “ROI will occur over time,” according to Siebert, “and will be a big factor in offsetting the costs of the project.” But the Virtual Data Center blogger Alan Murphy insists that savings can be misleading, a virtual “red herring” that drives customers to adopt virtualization under the mistaken impression that the technology amounts to free money.
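For context, the cited range is consistent with roughly 2.19 million kWh of avoided consumption per year, or about 250 kW running around the clock; that underlying figure is inferred from the dollar amounts rather than stated in the post:

    # Back-of-envelope check on the cited savings range. The ~250 kW figure is
    # inferred from the dollar amounts above, not stated in the original post.
    kw_reduction = 250                 # assumed continuous power reduction, in kW
    hours_per_year = 8760
    kwh_saved = kw_reduction * hours_per_year   # 2,190,000 kWh per year

    for rate in (0.10, 0.15):          # $/kWh range cited above
        print(f"${rate:.2f}/kWh -> ${kwh_saved * rate:,.0f} per year")
    # -> $219,000 and $328,500, matching the estimates above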

Decreased power consumption is not the only way to save on utilities. As Bridget Botelho reported a few weeks ago, utility companies now offer rebate incentives to data centers that adopt power-saving virtualization technologies. Apparently, though, few adopters have cashed in on these rebates because of some loopholes.

Other data center changes that accompany virtualization projects can also contribute to power-related savings. Jacinda Duffy, a network administrator at Ecom Atlantic Inc., tells us that when her organization virtualized its data center six months ago, it brought in a heating, ventilation and air conditioning (HVAC) company to diagnose the airflow in its server room. After determining that hot air from the ceiling actually flowed back into the server room on weekends, HVAC technicians redirected the ceiling airflow to alleviate the room’s cooling demands. As a result, the settings on the company’s cooling units’ thermostats were adjusted to a higher temperature. Finally, after having consolidated its servers, Ecom Atlantic decided to space them out to allow for a more efficient airflow between servers. While it has only been a few months with the room’s new layout, Duffy anticipates some “significant savings in the near future.”

If you are cultivating your own green data center savings, we’d like to hear about your experience. Feel free to drop us a line and let us know how you are doing it.

