Portsmouth, NH-based VKernel is deep into the beta stages for its new capacity bottleneck analyzer virtual appliance (VA). I had a chance to preview a pre-release candidate this week and was very happy with the product. The capacity analyzer, a follow-up to VKernel's chargeback product that launched last year, plugs into a VMware VirtualCenter server or ESX host to gather storage, networking, processor and memory usage data for elements of the virtual environment. The capacity analyzer also includes new features designed to keep pace with a growing virtualization environment.
Downloading the virtual appliance, assigning an IP address and pointing it to the VirtualCenter server was a breeze, and I was looking at a base dashboard in less than 20 minutes. Probably the most notable feature of this VA is the main dashboard, which provides a snapshot view of the environment. Anyone can learn something about their virtual environment from this cumulative view. Upon first look, I learned a few new things about mine. For starters, I have one virtual machine with an abnormally large storage requirement, and I had no idea it had grown as big as it has.
The dashboard is element-sensitive, so if I have multiple hosts, datacenters or VMware ESX clusters, the bottleneck dashboard can display the status relevant to the selected object. For example, the figure below shows the dashboard summary for one particular datacenter:
One thing I find very beneficial to the virtualization administrator is the storage details. The storage dashboards within the VA can get very granular. For example, when looking at the dashboard's datastore statistics, important information about the environment is displayed. Storage is usually one of the virtualization administrator's biggest pain points, and any tool that increases visibility into storage usage and trending is welcome. I mentioned a large virtual machine earlier; this figure shows that virtual machine at the top of the storage resource consumers list with 81 GB allocated for its storage:
Note also that the LUNs are enumerated and their serial numbers are presented. Determining the LUN serial number is not possible through the Virtual Infrastructure Client, so I have frequently had to use the esxcfg-mpath command to get it. Depending on your storage environment, you may need to reconcile LUNs by serial number. When many LUNs share the same geometry, the serial number is the only reliable way to match a drive to the virtual environment. This is important in the case of a LUN return, as you do not want the wrong LUN removed from the storage system while it is in use.
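In case it helps anyone, here is roughly what that looks like from the ESX 3.x service console. This is a minimal sketch; the exact fields in the listing depend on your storage array, and the vmhba address below is only illustrative.

```
# List every LUN with its paths and device identifiers, which can be
# matched against the storage array's own records:
esxcfg-mpath -l

# Narrow the output to a single device (vmhba1:0:0 is an example address):
esxcfg-mpath -l | grep -A 3 "vmhba1:0:0"
```

Either way, get the serial number in hand before anyone touches the array.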
Release candidate coming soon
The capacity bottleneck analyzer VA is soon to go into release candidate mode, so be sure to check out the VKernel evaluation site for the standard edition evaluation download.
Hypervisors and VMs are becoming commoditized, resulting in a shifting emphasis toward user interfaces and management tools. In other words, anyone can make a virtualization platform, but the platform that survives Hyper-V's entry into the virtualization space will be the one that develops stand-out management features.
Virtualization pro Andrew Kutz discusses the components of an evolving virtualization ecosystem.
[kml_flashembed movie="http://www.youtube.com/v/cbYfZPlFCDA" width="425" height="350" wmode="transparent" /]
System administrators can’t seem to stop gushing about virtualization benefits. Data center folks reduce hardware footprints and lower power and cooling costs by consolidating servers. When IT pros take virtualization to the next level, such as implementing desktop and application virtualization, the benefits seem to expand exponentially.
However, system admins beware: some management issues cannot be glossed over, such as what to do about multiple OS images. In this video, virtualization expert Barb Goldworm discusses some potential risks when extending virtualization to the desktop and how to avoid them.
[kml_flashembed movie="http://www.youtube.com/v/4xVNW4A0cVs" width="425" height="350" wmode="transparent" /]
Atlanta-based virtualization pro Mark Dean shares his thoughts in a guest blog for SearchServerVirtualization.com
One of the more popular products in the growing virtualization market is Parallels Virtuozzo Containers. Virtuozzo Containers provides a stable, high-performance virtualization platform. However, this same technology has drawbacks in restricting which operating systems can be used for both hosts and VMs.
I deployed Virtuozzo for a customer that wanted to leverage virtualization but was uneasy about the performance of its database server. I suggested Virtuozzo because it straddles the physical-virtual line with OS containers, an approach similar to the container technology Sun uses in Solaris.
The strengths of Virtuozzo Containers lie in that blurred line between the physical and virtual platforms. Instead of placing a hypervisor between the OS and the hardware, Virtuozzo Containers virtualizes the OS by sharing OS code, files, memory and cache from the root OS, which in Virtuozzo terms is called the Hardware Node (and is represented as Container ID 0). This means the VMs use the hardware directly rather than having calls to the hardware trapped by a hypervisor and then executed, which translates into better performance for I/O workloads. This increase in performance is one of the main reasons companies deploy Virtuozzo: high-I/O workloads, such as heavy transaction-based databases, benefit from the shared-code nature of the containers.
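To make the model concrete, here is a hedged sketch of what day-to-day container management looks like on the Hardware Node, using the vzctl utility that Virtuozzo shares with its open-source sibling OpenVZ. The container ID, OS template name and IP address are all hypothetical.

```
# Run from the Hardware Node (Container ID 0).
# Create a container from an OS template that shares the node's kernel:
vzctl create 101 --ostemplate centos-5-x86   # hypothetical template name

# Assign an IP address and persist the setting to the container config:
vzctl set 101 --ipadd 10.0.0.101 --save

# Start the container; its processes run directly on the shared kernel,
# with no hypervisor trapping hardware calls in between:
vzctl start 101

# Run a command inside the container from the Hardware Node:
vzctl exec 101 ps aux
```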
But as hardware advances with the option of CPU hardware assistance from the processor manufacturers (AMD-V and Intel VT), I see Virtuozzo's technology becoming irrelevant. Since I can now run unmodified VMs using Xen or KVM on Linux, and no longer take the (over-exaggerated) performance hit of a binary translation hypervisor (as in ESX), why go with a limiting technology?
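As a quick aside, on a Linux host you can check whether the CPUs expose these extensions before committing to hardware-assisted KVM or Xen guests. A minimal sketch:

```
# Look for hardware virtualization flags in the CPU feature list:
# 'vmx' indicates Intel VT, 'svm' indicates AMD-V.
egrep '(vmx|svm)' /proc/cpuinfo
```

If the command prints nothing, the host either lacks the extensions or has them disabled in the BIOS.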
Virtuozzo imposes some limitations on what you can run in your farm. Since the VMs are basically sharing code from the root container OS, you can only run that type of VM on the host. Virtuozzo Containers currently supports only Windows Server 2003 and the major Linux distributions; you cannot run Solaris, BSD or NetWare. For some IT shops that may not be a problem, but just about everywhere I've seen runs a mix of Unix and Windows, and many government sites still run NetWare.
If a virtualization vendor does not enable live migration and host/VM isolation or embrace the concept of a farm, its product is not good for production workloads. Virtuozzo does have some of these capabilities, but its isolation is not as strong as I like to see. Right now, I find that only VMware VI3 with ESX Server 3.5 has all of those concepts down. Xen Enterprise, when coupled with CPU-assisted virtualization, is the best contender to challenge VMware's space right now.
Mark Dean is a VMware Professional Partner and a Microsoft Certified Partner, with certifications from VMware, Microsoft, Novell, Citrix, IBM and HP, including HP and IBM hardware certifications. Dean runs his own virtualization consulting company, VM Computing.
Continuing our review of Hyper-V, the recently released Microsoft virtualization product for Windows Server 2008, we focus on the management aspect of the hypervisor. In two other recent blogs, I took a quick look at the Hyper-V Manager and the simple creation of a virtual machine. Also on SearchServerVirtualization.com, fellow contributor Anil Desai gave advice on using the Hyper-V Manager. Now we’ll take a closer look at System Center Virtual Machine Manager.
I installed System Center Virtual Machine Manager, or VMM, to manage virtual machines within Windows Server 2008. Installing VMM is fairly straightforward, but it is worth noting the following prerequisites:
- .NET Framework 2.0 to start the install, which is automatically upgraded to version 3.0
- Windows PowerShell (included as a feature in Windows Server 2008)
- A SQL Server database instance (either a local SQL Server 2005 Express Edition instance or a hosted database)
- Domain membership
After a few quick passes through the VMM installer, you will have the console application available. From there, you can add a host to the console, as I have done below with the HYPERV-TEST-RWV host:
Inside the VMM console, I added the single host (the same server on which the console is installed) and spent most of my time afterwards navigating the help file. Surprisingly, the VMM help file (C:\Program Files\Microsoft System Center Virtual Machine Manager 2007\help\Help.chm) is quite comprehensive and provides a good starting point for most configuration tasks in a single resource. One of the more striking first impressions of the VMM console is the Windows integration for delegated security and rights options, referred to within VMM as a self-service policy. These policies offer one option beyond the competition in that you can assign users to be local administrators through the self-service policy; in other products, this is typically addressed through group policy instead. Below is the configuration of a self-service policy:
As with other virtualization platforms, the management layer is the key to an enterprise implementation. System Center VMM is no exception, and this initial release includes support for migrating virtual machines and for shared storage. However, this part of my evaluation will not focus on those components just yet, so stay tuned for my next entry on Hyper-V.
Microsoft resources for VMM
System Center is a comprehensive set of management products that includes Systems Management Server (SMS) and other components. VMM can function standalone or alongside the rest of the suite. Microsoft has a nice video that showcases how VMM fits into the virtual machine management space. If you are familiar with virtual machine management, it is somewhat introductory, but toward the end there are some quick tasks performed on-screen that give a feel for managing virtual machines in a VMM environment.
Palo Alto, Calif.-based VMware, Inc. announced the general availability of VMware Lifecycle Manager, which was first announced in February and covered by SearchServerVirtualization.com at the time.
The product is VMware’s attempt to control virtual machine sprawl by showing who owns a virtual machine, when it was requested, who approved it, where it is deployed, how long it has been in operation and when it is scheduled to be decommissioned.
VMware Lifecycle Manager also gives IT managers the ability to measure the use of virtual machines and charge it back to individual department owners.
So while VMware Lifecycle Manager helps manage the creation, operation and decommissioning of virtual machines in compliance with company policies and standards, VMware Stage Manager helps transition application stacks of multiple virtual machines through the integration and staging process prior to production. VMware Lab Manager, meanwhile, helps with provisioning and allows manageability over the entire virtual environment.
There are a number of companies that offer virtual machine lifecycle management software similar to VMware’s, and the majority of them are based on VMware virtualization. They have the pleasure of selling against VMware while also supporting the virtualization giant’s product.
Some virtual machine lifecycle management vendors to consider if you are in the market for one of these tools: a new Alpharetta, GA-based company called vmSight offers software for application performance, capacity planning, VM sprawl control, billing and chargeback, and regulatory compliance for VMware virtualization. Similarly, Buffalo Grove, IL-based vizioncore Inc., with vCharter, and Portsmouth, NH-based VKernel, which received kudos from Gartner Inc. for its VKernel Virtual Appliance Suite for Systems Management, offer virtualization lifecycle management products that go head to head with VMware's offering.
VMware Lifecycle Manager is now available for purchase through VMware's network of distributors, resellers and OEMs. VMware Lifecycle Manager is purchased a la carte and requires the purchase of the standard product offering per processor. There is also a customization option (one per VMware Lifecycle Manager server) that allows customers to tailor VMware Lifecycle Manager to existing organizational tools and processes.
The companies I speak with about their virtualization projects always list the same reasons for going virtual: they don’t have enough space in their data center to add more physical servers; they can’t afford power and cooling bills; they want to consolidate physical machines; and they want to consolidate physical people.
That’s right. The majority of people I speak with – employers and employees alike – say they deploy virtual machines to avoid deploying more IT staff. While this is great for corporations, it isn’t so good for IT job seekers.
For example, I went to a VMware Inc. user group meeting in Boston on March 27 where one user gave a presentation about the virtualization project he oversaw at SAPPi, a Maine-based paper manufacturing company. "One reason we wanted to virtualize is we needed to lower our IT headcount," the systems engineer said. "We needed to get rid of high-end support and just keep desktop support."
Qualcomm Inc. has seen a similar side effect of virtualization. At the VMware Virtualization Seminar Series in Providence on Feb. 26, VMware presented a case study of the wireless technology company, which started with 1,200 servers and consolidated down to 100 physical servers (a 12:1 ratio), freeing up data center space and cutting back on power and cooling.
That’s great. And the cherry on top? They have not had to increase their IT staff at all in almost three years.
And at the growing Owen Bird Law Corp. in Vancouver, British Columbia, the firm's sole IT staffer, Stephen Bakerman, went with Virtual Iron virtualization to avoid adding more physical servers and hiring more staff to help him manage it all.
“The cost savings is probably $100,000, and the time savings for me are incredible. Once everything is virtualized, I can run everything from my desktop remotely from my office or at home. I don’t have to hire someone else, and I would have if we kept adding servers,” Bakerman said.
Sure, I get how cool virtualization is and the benefits it brings from a savings and management standpoint, but is anyone else concerned about those IT college kids who dream of days spent engineering systems? Or about those system administrators who may get consolidated from many to few, right along with their servers?
On March 28, 2008, VMware released the second beta of the Server 2.0 platform. Version 2.0 beta 1 introduced sweeping changes to the user interface that drew plenty of feedback from the beta team. VMware has also confirmed that a beta 3 of the free server virtualization product will be released soon.
The release of VMware Server 2.0 beta 2 continues VMware's initiative to position Server as the starter product for companies that are new to virtualization. Among the core changes in beta 2 are:
- Auto start of virtual machines
- USB 2.0 support
- Additional guest OS support (Windows Server 2008 and Vista)
- Links to the VMware marketplace for virtual appliance downloads
Beta 2 is available now for both Linux and Windows from VMware's website. VMware has made an effort to be active in the VMware Communities blogs for the beta, and is very attentive to beta users' concerns.
Warning: The following blog post contains biting sarcasm and marginally humorous commentary that may offend sensitive VMware executives. Reader discretion is advised.
An open letter to VMware:
Hey VMware, it’s me again. I know you’re probably still mad at me for last week. Well, I’m going out on a very public limb here to apologize for something that I did.
I’m sorry that I forgot your version.
Yes, you let everyone know that your version was coming up, but I forgot to create a calendar reminder for it and I just plain forgot. You know how that goes, right?
Now I don't mind owning up to my bad memory, but here's the thing: you have sooo many versions! Most people just have one version per year; you have at least five. There's the version for VMware Infrastructure (VI), currently at 3.5. ESX is already 3.5 versions old, and ESX 3i has its own version too. Then there are VirtualCenter and the VI Client at 2.5. VMware Consolidated Backup (VCB) is straggling behind at 1.1. I think the VI SDK is also 2.5 versions old, but with the VI Perl Toolkit at version 1.5 and the VI Toolkit (for Windows) in beta, it is hard to keep up.
VMware, your enterprise portfolio has expanded far beyond just ESX, and all but two of the version numbers are out of alignment. With so many products available, it is fast becoming impossible to understand which version works with which. You should use minor point releases between major revisions in order to maintain a consistent major version number across your enterprise product offerings.
I know you're a busy company, and it is hard to get everybody together on one day of the year to celebrate your version, but I beg you, please try. For everyone but those closest to you, it is getting extremely difficult to remember your versions, or to figure out which version we actually mean. Here's an idea: for the rest of the year, skip all of your versions, and then restart them all at once on a single day. Maybe even at VMworld? It can be your special version day. I'll even bring party hats and cake (if you invite me).
VMware Infrastructure 4 (VI4) can include:
– ESX 4
– ESX 4i
– VirtualCenter 4
– VI SDK 4
– VI Perl 4
– VI Toolkit (for Windows) 4
– VCB 4
I know it will throw people off at first; your customers might think they missed some of your versions. However, I think in the end you’ll have a lot of people thanking you.
I feel really bad about missing your version, and I don't want to let the announcement pass me by again. Maybe I should use Outlook?
If one thing annoys me to no end, it is unused capacity.
That's why I like virtualization. It's also why I like grid computing. Heck, it's why I like cheeseburgers (there's no empty space in my stomach after a trip to In-N-Out). Virtualization makes efficient use of existing hardware to control costs. VMware environments often have host servers with more than a small RAID 1 array in them, because they were existing servers retasked as ESX hosts. Sometimes this space gets used for ISO file storage, sometimes for virtual machine toys (so-called gray boxes or black boxes), and sometimes for production VMs. Labs with machines that have no production value are often set up on this local storage, which makes great use of the space (though perhaps not as good a use of the CPU or memory intended for production machines).
A case study in space, storage and VMs
Then there are times when I walk into sites like the one I saw last week. They had a virtual-machine-based iSCSI SAN set up on each ESX host, homed to the local storage. This was in addition to their FC SAN, by the way. They even ran part of their production environment off of it, using the unused internal disk space on the ESX servers to store virtual appliances that ran iSCSI targets, similar to what I described in an earlier post. What a great use of space. Kudos to them!
I was concerned about how they would keep those virtual machines up in the event of a host failure. First I was told that they had put in a poor man's round robin, which is to say, host 1 has iSCSI SAN 1, host 2 has iSCSI SAN 2, 3 has 3, and 4 has 4, and they all replicate to each other. VMs hosted by ESX 1 were on SAN 2, those on ESX 2 were on SAN 3, those on ESX 3 were on SAN 4, and those on ESX 4 were on SAN 1. Then, anti-affinity rules were used to prevent VMotioned VMs from winding up on the same host as the SAN holding their files. The replication prevented any single SAN failure from becoming a nightmare. They hadn't done anything unusual on the network side, though, which bothered me (I would prefer dedicated physical NICs for the SAN VMs!), but their performance testing showed no need for the extra NICs to be added. It's a little hard to play follow-the-leader, but it worked reasonably well. This was done with a variety of open-source packages, some of which I had never seen before. I recognized IET (iSCSI Enterprise Target) right away, but the SAN appliances were all custom builds, and it took me some time to figure out what was what and what was going on where. It was not an efficient use of that internal disk space, because all the replication across servers amounted to, essentially, mirrors of mirrors.
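For anyone who has not run into IET before, the target definitions inside those appliances would have looked something like the following. This is a minimal sketch; the IQN, backing file path and CHAP credentials are all hypothetical, not their actual configuration.

```
# /etc/ietd.conf inside the iSCSI SAN appliance on ESX host 1.
# The LUN is backed by a file carved out of the host's unused local disk:
Target iqn.2008-05.local.lab:esx1.san1
    Lun 0 Path=/srv/san1/lun0.img,Type=fileio
    # Simple one-way CHAP so only the ESX initiators can log in:
    IncomingUser esxinitiator longchapsecret
```

Each appliance then replicated its LUNs to the next one in the ring, which is exactly where the mirrors-of-mirrors overhead came from.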
Ingenious? Yep. Complicated to track? Yep. Functional? Yep. In need of a little less work to manage? Yep.
There are commercial products that do this; one of them is LeftHand Networks' Virtual Storage Appliance. I'll post more about my experiences with it very soon.