Scott Lowe recently blogged about things that he loves but that are not yet available. VMware, it seems, is getting as bad as Apple at announcing products far ahead of their release dates. This generates great market buzz and sends stock prices soaring (or, in the case of VMworld Europe, leaves them as low as they have been), but it does little to satiate the desires of systems administrators; it only increases want and provides little respite. Well then, I am about to send you into the Sahara with the promise of an oasis, only for you to find a sign-post that says “Coming Soon!” at the end of your journey.
Please join me in anticipation of VMware Application High Availability (AHA).
AHA has not been announced by VMware, but it is coming. How do I know this? It is the next logical step for their HA portfolio. ESX 3 brought us server HA, 3.5 introduced us to VM HA, and the recent announcement of VMsafe all but secures the eventual release of AHA.
What is AHA? Simply put, you will be able to right-click on a VM from the VI client and indicate that if Microsoft Exchange fails then the Exchange service should be restarted, or the VM should be failed over to another ESX server. Or perhaps you want to monitor the Apache web server — just check a box. How will VMware achieve this level of fine-grained control? Allow me to refer to the VMsafe product page:
VMsafe provides in-guest, in-process APIs that enable complete monitoring and control of process execution.
This API will allow VMware to monitor and control processes within the guest OS. That, my friends, is how AHA will work.
AHA will allow VMware to take even more market share away from companies like Sun and Microsoft, both of which have their own clustering technologies. Why cluster at the OS-specific level when you can create clusters the same way no matter the OS or application underneath!
VMsafe is set to debut later this year, and I am quite certain that alongside VMsafe, or shortly thereafter, we will see VMware announcing and releasing its application level high availability software. I hope you aren’t too parched : )
Fellow SearchVMware.com blogger Andrew Kutz has done it again with his release of the integrated SSH plug-in for use in the VMware Infrastructure Client (VIC). This plug-in saves ESX administrators the hassle of launching PuTTY or another mechanism to start a session to the ESX host. Details on the plug-in and the download link are available in this post on the VMware communities thread, and I had a chance to use the plug-in successfully for a while.
Installing the console plug-in from ConsoleClientSetup-0.1.5.msi is straightforward, and it is easily added from the plug-ins menu in the VIC. Once added, each ESX host in your inventory will have a new tab called Console that functions much like the Console tab does for virtual machines. The difference is that authentication to the ESX host is passed through the plug-in. This requires that SSH be enabled on the ESX host, and should you wish to use the root login, a small configuration change is needed to enable root SSH access, which David Davis explains here on ITKE.
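For reference, the root-login change David describes boils down to flipping PermitRootLogin in the service console's sshd_config and restarting sshd. Here is a sketch run against a throwaway copy of the file so you can see the edit (on the host itself you would edit /etc/ssh/sshd_config directly):

```shell
#!/bin/bash
# Demonstrate the PermitRootLogin flip on a sample copy of sshd_config.
# On an actual ESX host: edit /etc/ssh/sshd_config, then 'service sshd restart'.
CFG=$(mktemp)
printf 'Protocol 2\nPermitRootLogin no\n' > "$CFG"   # sample config contents

# Flip the setting in place.
sed -i 's/^PermitRootLogin no/PermitRootLogin yes/' "$CFG"

LINE=$(grep '^PermitRootLogin' "$CFG")
rm -f "$CFG"
echo "$LINE"
```

Remember that allowing root over SSH widens your attack surface; many administrators prefer logging in as a regular user and using su instead.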
The Console tab is now visible at the far end of the VIC tab row and is context-sensitive, so it is only displayed when you have an ESX host selected in the left window pane. Once installed, the plug-in allows you to authenticate directly to the host with a username and password:
Now it is very easy to log in to the ESX host and perform simple tasks such as running esxtop, pinging from the console, and browsing the file system within ESX. Note that once you have established a console connection, it runs for as long as your VIC session is running, so use the disconnect link when you are done with a session. If you close the VIC, the session is disconnected from the ESX host. Here is my session running the esxtop command, while in maintenance mode, from within the VIC:
While this plug-in definitely makes a quick look at the ESX console quite accessible, there is one limitation that I came across: the embedded VIC console does not scale its resolution the way a PuTTY session does. This is especially noticeable if you are running some of the esxcfg-xxxxxxx commands. Take, for example, the following command:
esxcfg-mpath -l -v
This command produces up to 140 characters of text on its longest line, which fits on one line in PuTTY if you have the resolution. The console plug-in, however, is limited to a width of 80 characters, so long commands should be piped through a pager to show the full output. For the example above, that looks like this:
esxcfg-mpath -l -v | more
Important note on plug-ins
While the ESX plug-ins work in the VIC, they are not supported by VMware. With this warning in mind, adequate testing is required to ensure that you do not have any issues in your environment that may affect live systems. As a matter of practice, I do not install any non-essential components on the VirtualCenter server, so a tool like this is best suited to client systems only. All that said, Andrew has put out another great plug-in that makes administration of an ESX environment more centered on the VIC. I can’t wait for his next plug-in!
When VirtualCenter (VC) 2.5 was released, I, like many others, started on a path to migrate to the new version for my VMware implementation. After the VC installation, my ESX hosts, which had only one network interface, displayed a message similar to the one shown below:
Initially, I was somewhat irritated by this message. I had already planned out my connectivity for the ESX hosts based on the VC 2.0 and ESX 3.02 behavior. But after some thought, I determined that management network redundancy is actually a good idea despite the slight hassle. Here is what I did to quickly and solidly get rid of this message and the corresponding yellow indicator on the cluster:
Get an additional IP address
An additional TCP/IP address is required to resolve the lack of redundancy for the VMware service console role. In my environment, it worked best to have this additional IP on the same VLAN as the primary VMware service console address. Furthermore, I have one DNS entry for the ESX host, which I will leave pointed at the primary interface unless an issue requires me to migrate all service console traffic to the secondary interface. In that scenario, I would also change the DNS entry.
Stack the roles
I chose to “stack” the role of service console on top of an existing vSwitch that was, up to this point, configured to only provide virtual machine traffic. Here are the steps to do this:
- Go to the ESX host in the VMware Infrastructure Client (VIC)
- Select the Configuration tab
- In the Networking section, select Add Networking
- Add the Service Console role on top of the existing vSwitch as shown below:
This interface will not have significant traffic back to the VC server unless it is configured to be the primary interface. In my case, the DNS entry will still point to the primary interface on vmnic0. In this configuration, I am not taking precious bandwidth from the virtual machines (not shown) on the PROD-VLAN network across the two physical interfaces vmnic1 and vmnic2.
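For those who prefer the command line, the same stacking can be done from the service console with the esxcfg tools. Here is a sketch; the vSwitch name, port group label, and addresses are examples only, and the commands are printed rather than executed since they exist only on an ESX host:

```shell
#!/bin/bash
# Build the two esxcfg commands that stack a second Service Console
# port group onto an existing vSwitch. Names and addresses are examples.
VSWITCH="vSwitch1"
PORTGROUP="Service Console 2"
IP="192.168.10.11"
MASK="255.255.255.0"

# esxcfg-vswitch -A adds a port group; esxcfg-vswif -a creates the
# service console interface on that port group.
CMD_PG="esxcfg-vswitch -A \"$PORTGROUP\" $VSWITCH"
CMD_IF="esxcfg-vswif -a vswif1 -p \"$PORTGROUP\" -i $IP -n $MASK"

# Print for review; run them on the ESX service console as root.
echo "$CMD_PG"
echo "$CMD_IF"
```

After running the real commands, the new vswif interface shows up in the VIC networking view just as if it had been added through the wizard.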
In general, I do not wish to stack roles on physical adapters. My initial design was based on having dedicated interfaces for virtual machines, VMotion, the service console and a hardware management interface (Dell DRAC or HP iLO). In this situation, however, there is virtually no traffic on the new service console interface, and the benefit of clearing the degraded cluster condition is worth bending that practice.
Because the redundancy requirement is genuinely fulfilled, this approach does not mask a true issue that would put the cluster into a degraded state with the yellow icon. The other, more intuitive option is to add or allocate a physical interface dedicated to the additional service console role. Most implementations, however, don’t have extra physical network interfaces available.
Feel free to share your own strategies in addressing this annoying growing pain of VC 2.5 by commenting.
The advent of virtualization has created a burgeoning market for systems management tools that can handle the complexities of virtualized environments. Now along comes KACE Networks Inc. with a systems management tool with a virtual twist: The Mountain View, Calif.-based vendor has just introduced the Virtual KBOX System Management Appliance or V-KBOX, the first virtual systems management appliance.
As with other virtual appliances, V-KBOX is a fully pre-installed and pre-configured application. After installing the V-KBOX on a network, all that’s required to get the application up and running is an IP address.
The new V-KBOX provides the same systems management functionality as KBOX, KACE’s physical hardware-based systems management appliance. The big difference, of course, is that V-KBOX is solely a software-based tool, and it runs in VMware’s Infrastructure 3 environment. Unlike physical systems management tools, V-KBOX does not require dedicated hardware to run, which makes it a less expensive alternative, particularly for organizations that have not already invested in systems management tools.
Once installed, V-KBOX performs typical lifecycle systems management functions, such as hardware and software discovery and inventory, software distribution, scripting and configuration, patch management and alerting.
According to KACE, there are two significant advantages of a virtual systems management appliance that are similar to the advantages of virtualization itself: a virtual appliance is quick to provision and it is easy to scale.
Jason Cummins, an IS services manager at retailer Jordan’s Furniture, is in the process of evaluating the V-KBOX appliance. Jordan’s already uses KACE’s physical appliance for systems management, and Cummins says the functionality is identical. He added that Jordan’s may implement the V-KBOX for testing situations, due to VMware’s inherent ability to provide snapshots. (Currently, Jordan’s doesn’t run VMware, but it may do so within the year.) “The ability to take snapshots is really attractive about the virtual appliance,” he said. Such a capability would allow Jordan’s to quickly test and recover snapshots for its Windows-based environment of 1,100 networked PCs and servers.
The V-KBOX systems management appliance supports VMware ESX Server 3.x and 3.i, VMware Server 1.0 and 2.0, and VMware Player. Managed operating systems include Linux (Red Hat Linux AS and ES versions 3, 4 and 5), Mac (OS X 10.2+), Solaris (9 and 10) and Windows (Vista, XP, 2003, 2000).
The V-KBOX systems management appliance is specifically geared to midsized organizations that have already invested in VMware platforms, but don’t have the financial resources or manpower for traditional systems management tools. There are two models of the V-KBOX systems management appliance: The 1100 (recommended for 100 to 1,000 nodes) and the 1200 (recommended for 1,000 to 8,500 nodes).
With the addition of a virtual systems management appliance into the mix, the ongoing debate among systems administrators is bound to get more interesting. It’s no longer a question of appliance versus traditional software tool for systems management. Now system administrators can argue the multidimensional pros and cons of physical appliance versus virtual appliance versus systems management software.
The VMware Infrastructure 3.5 Plugin and Extension Programming Guide – Revision 1 is now available at VIPlugins.com. This document is not sponsored or supported by VMware in any way. In fact, here is an excerpt from the text:
“While the succeeding pages may give the impression that this paper was written in cooperation with VMware, this work is the result of hours of using Lutz’s Reflector to peer into VMware’s intermediate language (IL), Lutz’s Resourcer to figure out where icons come from (it’s not the icon stork), ProcessMon, FileMon, and RegMon to take a look at things happening in real time, and finally the Microsoft structured query language (SQL) manager to explore the new VI 3.5 database schema. In summary, although the knowledge from these explorations resulted in an idea of the VI plugin architecture and a working plugin, do not consider it to be the final word on anything. We will simply have to wait for VMware to provide finality to this matter.
In short, all the information contained in this document may be entirely and completely wrong. Read it at your own risk. If you find yourself stuck in an infinite time loop once you finish, remember two things: 1) ice sculptures impress the heck out of the ladies and 2) you are not god. You may be a god, but not the god. That honor is left to Mr. Morgan Freeman.”
This paper focuses on educating developers on:

Client Plugin Architecture
– Includes where plugins are installed, how the VI client discovers local plugins and ones advertised on the VC server, and finally how to create a client plugin.

Server Extension Architecture
– Includes how to register server extensions and how to make client plugins centrally available.
– Discusses how the VirtualCenter Tomcat installation impacts server extension daemons.
– Reviews the new VirtualCenter database tables that are related to extensions.

Creating Windows Installers
– Reveals some problems with creating Windows Installers for server extensions.

The paper also:
– Details the namespaces and assemblies VMware provides to create plugins and extensions.
– Coins new terminology that developers can use when discussing the above concepts.
Hope this helps!
The VMware Infrastructure client allows for plug-ins for third party applications and other enhancements. Andrew Kutz of lostcreations has released a plug-in for Storage VMotion that you can use outside of the VirtualCenter Remote Command Line Interface (Remote CLI). I had a chance to test the plug-in, and I can tell you that it works and is quite easy to use.
The plug-in is discussed here on the VMware communities Website with some feedback from other ESX administrators. A link for the download as well as installation instructions are available there as well. Once the plug-in is installed, a right-click option Migrate Storage appears when clicking on a virtual machine. When the plug-in activates, a window appears that lets you select what storage you want to migrate the running virtual machine to:
Once you select the destination storage, the task is submitted to VirtualCenter, and from that point it is indistinguishable from the same task performed via the Remote CLI. The plug-in does not break compatibility rules. For example, if you attempt to migrate the storage of a virtual machine running on an ESX 3.02 or lower host, VirtualCenter will prohibit the task due to compatibility.
The plug-in is freely available under the new BSD license. Give it a test drive and post your feedback here or on the community board.
The VI3 Remote CLI (command line interface) virtual appliance (VA) is provided by VMware to perform VirtualCenter tasks that you cannot do in the interface, or tasks that you want to script for automation. The Remote CLI is built on the VI Perl Toolkit for many of its tasks. Be sure to check out some resources by Schley Andrew Kutz on the VI Perl Toolkit. I tend to prefer the VA model for something like the Remote CLI for the following reasons:
– The functionality is centrally accessible and contained in one place (the Remote CLI can also be installed on Linux and Windows systems), and you can have multiple Remote CLI virtual appliances should you wish.
– The VA has everything needed in one environment (Perl and other libraries).
– The VA can run scripted commands that may take a long time to complete (instead of tying up your workstation console).
Getting the Remote CLI VA is straightforward. I will go through downloading it from within VirtualCenter 2.5. The first step is to select Import from the Virtual Appliance option of the File menu within the Virtual Infrastructure Client (VIC). The VIC then presents the Import Virtual Appliance Wizard; select to import the VA from the VMware Virtual Appliance Marketplace:
Select the Remote CLI appliance (119 MB at the time of this blog) to download, and then click Next to proceed. Note that when you download this VA, it is downloaded via your VIC – not the VirtualCenter server. You will be required to agree to the licensing, and then you can place the VA in your datacenter and cluster, choose a storage destination, change the name (the default is vicfg-rcli), and set the network configuration. Once the download is complete, the VA will be ready for some minimal configuration, and you will be ready to run remote command line commands against VirtualCenter.
Powering it on for the first time
Now that you have the virtual machine downloaded, you can check the basic hardware inventory of the system. Its default configuration is quite thin at 256 MB of RAM and 1.4 GB of disk, with no CPU limitation. The Remote CLI VA is a Debian Linux-based system; when you power it on, you are presented with the license agreement again, an opportunity to set a new root password and the time zone, and then a logon prompt. The base VA will come up similar to the following:
It is handy to place the VA on a vSwitch that has DHCP, as then you have zero post-download configuration to perform. Should you need to set an IP configuration manually, log into the VA as the user ‘network’. From there, you simply enter the basics of the network, and you will be presented with a screen like the following when you are complete:
From this point, the VA is configured for use in your environment. Any command that you run interactively (or script) will require authentication to the Virtual Center server. So, from here the base environment for the Remote CLI is ready for tasks.
Why would I want this?
The big reason is Storage VMotion, a new feature of ESX 3.5 and VirtualCenter 2.5. SVMotion can only be launched from the Remote CLI. From the Remote CLI VA, you are able to script multiple Storage VMotion tasks as well as run single iterations interactively.
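As a sketch of what scripting multiple migrations could look like, the loop below builds one svmotion invocation per VM. The VC URL, datacenter, datastore, and VM paths are all placeholders for your environment, and the commands are printed rather than executed so you can review them first:

```shell
#!/bin/bash
# Sketch: queueing several Storage VMotion migrations for the Remote CLI.
# Every name below is an example; substitute your own environment's values.
VC_URL="https://vcserver.example.com/sdk"
DATACENTER="Datacenter1"
TARGET_DS="newstorage"

# Each entry is a VM's current config path: "[datastore] folder/vm.vmx"
VMS=("[oldstorage] vm1/vm1.vmx" "[oldstorage] vm2/vm2.vmx")

CMDS=()
for VMX in "${VMS[@]}"
do
    # svmotion takes the VM's config path and the destination datastore
    # joined by a colon.
    CMDS+=("svmotion --url=$VC_URL --datacenter=$DATACENTER --vm='${VMX}:${TARGET_DS}'")
done

# Review the generated commands, then run them on the Remote CLI VA.
printf '%s\n' "${CMDS[@]}"
```

You will also be prompted for (or can supply) VirtualCenter credentials when the commands actually run, as noted above.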
So I use BackupPC for a lot of my SMB backup needs. For my money it is the best open-source backup program, hands-down. However, one of the problems I have run into is that when I back up my VM directories on VMware Server systems, the VMs sometimes go wonky because they do not like being backed up while they are running. So I need a way to suspend the VMs prior to a backup and resume them after a backup.
BackupPC implements what are called Pre and Post commands, which allow an IT administrator to supply commands to be run before and after a backup operation takes place. The two commands we are concerned with are DumpPreUserCmd and DumpPostUserCmd. The BackupPC documentation explains how to use the Pre and Post commands to send a command via SSH to the server that is about to be, or just was, backed up. Now obviously this works great for Linux, UNIX, and OS X servers, but for an OS that lacks certain fundamental necessities like an SSH server (and I am of course talking about Windows), the burden is on the IT administrator to install one (it is possible to obtain a free SSH server for Windows).
So now that we know how to issue commands to a server before and after it is to be backed up, we need to be able to tell that server to suspend and then resume its VMs. However, it is not enough to simply issue a suspend and resume command to all VMs, there needs to be some logic involved. For example:
– The VM should only be suspended if it is running.
– The VM should only be resumed if it is suspended and was suspended as part of the backup operation.
Lucky for you I have written two bash scripts (Linux/Unix/OS X compatible) that do just that.
suspend_vms – This script will suspend all of the VMs currently running that are listed in the server’s VM list file. A note is placed into a temporary file, /tmp/vms_to_be_resumed, recording that the VM was suspended as part of a backup operation. This temporary file is read by the start_vms script in order to determine which VMs to resume.

#!/bin/bash
# Split command substitutions on newlines only, so .vmx paths
# containing spaces survive the for loop intact.
IFS=$'\012'
for F in $(grep "config" /etc/vmware/vm-list | sed "s/\(config \)\(.*\)/\2/g")
do
    VMCFG=$(echo "$F" | tr -d '"')
    VMSTATE=$(/usr/bin/vmware-cmd "$VMCFG" getstate)
    echo -n "$VMCFG - $VMSTATE"
    # Only suspend VMs that are currently running ("on").
    if echo "$VMSTATE" | grep -q on
    then
        echo "$VMCFG" >> /tmp/vms_to_be_resumed
        echo " - suspending"
        /usr/bin/vmware-cmd "$VMCFG" suspend
    else
        echo
    fi
done
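To see what the grep/sed extraction at the top of suspend_vms does, here is a self-contained run against a made-up vm-list file (the .vmx paths are examples; the real file on VMware Server lives at /etc/vmware/vm-list):

```shell
#!/bin/bash
# Demo of the extraction step in suspend_vms, against a sample vm-list.
VMLIST=$(mktemp)
cat > "$VMLIST" <<'EOF'
config "/var/lib/vmware/vms/web01/web01.vmx"
config "/var/lib/vmware/vms/db01/db01.vmx"
EOF

IFS=$'\012'   # split command substitutions on newlines only
for F in $(grep "config" "$VMLIST" | sed "s/\(config \)\(.*\)/\2/g")
do
    # Strip the surrounding quotes, leaving the bare .vmx path.
    VMCFG=$(echo "$F" | tr -d '"')
    echo "$VMCFG"
done
rm -f "$VMLIST"
```

Each iteration leaves VMCFG holding a clean config path that vmware-cmd will accept.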
start_vms – This script parses the contents of /tmp/vms_to_be_resumed and resumes the VMs that are listed in the file and are currently suspended. The temporary file is deleted at the end of the script’s execution.

#!/bin/bash
# Split command substitutions on newlines only (paths may contain spaces).
IFS=$'\012'
for F in $(cat /tmp/vms_to_be_resumed)
do
    VMCFG=$(echo "$F" | tr -d '"')
    VMSTATE=$(/usr/bin/vmware-cmd "$VMCFG" getstate)
    # Only resume VMs that the backup operation actually suspended.
    if echo "$VMSTATE" | grep -q suspended
    then
        echo "$VMCFG - $VMSTATE - resuming"
        /usr/bin/vmware-cmd "$VMCFG" start
    fi
done
rm /tmp/vms_to_be_resumed
These scripts should allow you to successfully back up VMs running on VMware Server on a Linux host. Simply place them in an appropriate location, such as /usr/local/bin, on the server that hosts the VMs, and use BackupPC’s Pre and Post commands to call them via SSH. If someone would like to write the equivalent scripts for Windows, I would be happy to post those here as well (if I do not write them myself first).
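The hookup in a per-host BackupPC config file looks something like this ($sshPath and $host are BackupPC’s own substitution variables; the /usr/local/bin paths and the root login are assumptions for illustration):

```perl
# Suspend running VMs before the dump, resume them afterwards.
$Conf{DumpPreUserCmd}  = '$sshPath -q -x -l root $host /usr/local/bin/suspend_vms';
$Conf{DumpPostUserCmd} = '$sshPath -q -x -l root $host /usr/local/bin/start_vms';
```

This of course requires passwordless SSH key authentication from the BackupPC server to the VM host, just as BackupPC’s normal rsync-over-SSH transfers do.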
Hope this helps!
The latest version (1.5) of the formidable VI Perl Toolkit was released last week. What does the new download hold in store for eager developers? Well:
– Supports connecting to and managing VC 2.5, ESX 3.5, and ESX 3i servers
– Expanded platform support (more systems on which you can install the VI Perl Toolkit)
– The virtual appliance containing the VI Perl Toolkit is now in Open Virtual Machine Format (OVF)
– A whole new slew of documentation
– A new Web-services library that enables SMASH and CIM management of ESX servers (experimental)
The new version of the toolkit also fixes some pressing issues lingering from 1.0:
– The scripts finally prompt for passwords, so passwords are no longer visible in process lists.
– The toolkit can now handle special characters (such as spaces) in element names.
So I was going to write an article about my experiences upgrading from VI3 to VI3.5, but Mike over at RTFM-Education beat me to it. He has put out a great PDF talking about a lot of the steps and nuances involved in the upgrade process for both VC2.5 and ESX3.5. I highly recommend you take a look at Mike Laverick’s Upgrading to ESX 3.5 and VirtualCenter 2.5 Experiences.