As an instructor, Mike Laverick firmly believes that “pulling the plug” (that is, restarting the host) is not the answer every time you run into a problem with a virtual machine residing on that host. He is a believer in figuring out what’s causing the issue and fixing it. After all, why shoot ten VMs if only one is causing a problem? It’s like shooting all the innocent hostages just to get the bad guy.
So recently he published a “live blog” that chronicles his attempt to fix a frustrating (but, at least to me, highly amusing) problem: a VM whose hot migration failed, so it was powered off. But vCenter and other parts of the virtual infrastructure stubbornly insisted that the VM was, in fact, still on. And although the VM was listed as powered on, the options to control it were grayed out, so poor Mike couldn’t power it off a second time.
Not to be defeated by the rogue, defiant VM that decided to vacation somewhere in vSphere land without his permission, Mike researched, found some possibly suitable solutions (and many not suitable at all), and finally a lightbulb went on. Mike didn’t want to sacrifice the other VMs relying on their Mama (read: the host), so why not send them off to Grandma’s (read: hot migrate them to another host) for a little vacation while Mama gets a reboot? Problem solved, albeit Mike had to bend his own rule about not “pulling the plug” on the host to fix a problem.
The full post is worth a read, especially if you’ve experienced your own virtual frustrations. Just grab a cup of coffee, since it takes a bit of time to read in its entirety.
Readers: If you’ve got your own VM troubleshooting story, email it to me. If I get enough submissions, I’ll compile them into a podcast that you can listen to and giggle along with.