The companies I speak with about their virtualization projects always list the same reasons for going virtual: they don’t have enough space in their data center to add more physical servers; they can’t afford power and cooling bills; they want to consolidate physical machines; and they want to consolidate physical people.
That’s right. The majority of people I speak with – employers and employees alike – say they deploy virtual machines to avoid deploying more IT staff. While this is great for corporations, it isn’t so good for IT job seekers.
For example, I went to a VMware User Group meeting in Boston on March 27 where one user gave a presentation about the virtualization project he oversaw at Sappi, a Maine-based paper manufacturer. “One reason we wanted to virtualize is we needed to lower our IT headcount,” the systems engineer said. “We needed to get rid of high-end support and just keep desktop support.”
A company called Qualcomm Inc. has seen a similar side effect of virtualization. At the VMware Virtualization Seminar Series in Providence on Feb. 26, VMware presented a case study of the wireless technology company, which started with 1,200 servers and consolidated down to 100 physical servers (a 12:1 ratio), freeing up data center space and cutting back on power and cooling.
That’s great. And the cherry on top? They have not had to increase their IT staff at all in almost three years.
And at the growing Owen Bird Law Corp. in Vancouver, British Columbia, the firm's sole IT staffer, Stephen Bakerman, went with Virtual Iron virtualization to avoid adding more physical servers and hiring more staff to help him manage it all.
“The cost savings is probably $100,000, and the time savings for me are incredible. Once everything is virtualized, I can run everything from my desktop remotely from my office or at home. I don’t have to hire someone else, and I would have if we kept adding servers,” Bakerman said.
Sure, I get how cool virtualization is, and the benefits it brings from a savings and management standpoint, but is anyone else concerned about those IT college kids who dream of days spent engineering systems? Or those system administrators who may get consolidated from many to few along with their servers?
On March 28, 2008, VMware released the second beta of its Server 2.0 platform. Version 2.0 beta 1 introduced sweeping changes to the user interface that drew plenty of feedback from the beta team. VMware has also confirmed that a beta 3 of the free server virtualization product will be released soon.
The release of VMware Server 2.0 beta 2 continues VMware's push to position the product as the starter offering for companies that are new to virtualization. Among the core changes in beta 2 are:
- Auto start of virtual machines
- USB 2.0 support
- Additional guest OS support (Windows Server 2008 and Vista)
- Links to the VMware marketplace for virtual appliance downloads
Beta 2 is available now in both Linux and Windows distributions from VMware's website. VMware has made an effort to stay involved in the VMware Communities blogs for the beta, and the company has been very attentive to beta users' concerns.
Warning: The following blog post contains biting sarcasm and marginally humorous commentary that may offend sensitive VMware executives. Reader discretion is advised.
An open letter to VMware:
Hey VMware, it’s me again. I know you’re probably still mad at me for last week. Well, I’m going out on a very public limb here to apologize for something that I did.
I’m sorry that I forgot your version.
Yes, you let everyone know that your version was coming up, but I forgot to create a calendar reminder for it and I just plain forgot. You know how that goes, right?
Now I don’t mind owning up to my bad memory, but here’s the thing — you have sooo many versions! Most people have just one version per year; you have at least five. There’s the version for VMware Infrastructure (VI), currently at 3.5. ESX is already 3.5 versions old, and ESX 3i has its own version too. Then there are VirtualCenter and the VI Client at 2.5. VMware Consolidated Backup (VCB) is straggling behind at 1.1. I think the VI SDK is also 2.5 versions old, but with the VI Perl Toolkit at version 1.5 and the VI Toolkit (for Windows) in beta, it is hard to keep up.
VMware, your enterprise portfolio has expanded far beyond just ESX, and barely any of the version numbers align. With so many products available, it is fast becoming impossible to tell which version works with which. You should release minor point releases between major revisions in order to keep a consistent major version number across your enterprise product line.
I know you’re a busy company, and it is hard to get everybody together on one day out of the year to celebrate your version, but I beg you, please try. Except for those closest to you, it is getting extremely difficult to remember your versions, or to figure out which version we actually mean. Here’s an idea: for the rest of the year, skip all of your versions and then start them all over at once on a single day. Maybe even at VMworld? It can be your special version day. I’ll even bring party hats and cake (if you invite me).
VMware Infrastructure 4 (VI4) can include:
– ESX 4
– ESX 4i
– VirtualCenter 4
– VI SDK 4
– VI Perl 4
– VI Toolkit (for Windows) 4
– VCB 4
I know it will throw people off at first; your customers might think they missed some of your versions. However, I think in the end you’ll have a lot of people thanking you.
I feel real bad about missing your version, and I don’t want to let the announcement pass me by again. Maybe I should use Outlook?
If one thing annoys me to no end, it is unused capacity.
That’s why I like virtualization. It’s also why I like grid computing. Heck, it’s why I like cheeseburgers (there’s no empty space in my stomach after a trip to In-N-Out). Virtualization makes efficient use of existing hardware to control costs. VMware environments often have host servers with more local disk than a small RAID 1 array because they were existing servers retasked as ESX hosts. Sometimes this space gets used for ISO file storage, sometimes for virtual machine toys (so-called gray boxes or black boxes), and sometimes for production VMs. Labs are often set up on that local storage, hosting machines with no production value, which makes great use of the space (though it may not leave the best use of CPU or memory for production machines).
A case study in space, storage and VMs
Then there are times when I walk into sites like the one I saw last week. They had a virtual-machine iSCSI SAN set up on each ESX host, homed to the local storage. This was in addition to their FC SAN, by the way. They even ran part of their production environment off of it, using the unused internal disk space on the ESX servers to host virtual appliances running iSCSI targets, similar to what I described in an earlier post. What a great use of space. Kudos to them!
I was concerned about how they would keep those virtual machines up in the event of a host failure. First I was told that they put in a poor man’s round robin, which is to say, host 1 has iSCSI SAN 1, host 2 has iSCSI SAN 2, 3 has 3, and 4 has 4, and they all replicated to each other; VMs hosted by ESX 1 were on SAN 2, those on ESX 2 were on SAN 3, those on ESX 3 were on SAN 4, and those on ESX 4 were on SAN 1. Then, anti-affinity rules were used to prevent VMotioned VMs from winding up on the same host as the SAN on which their files existed. The replication prevented any single SAN failure from becoming a nightmare.

They hadn’t done anything unusual on the network side, though, which bothered me (I would prefer dedicated physical NICs for the SAN VMs!), but their performance testing showed no need for extra NICs. This follow-the-leader arrangement is a little hard to trace, but it worked reasonably well. It was done with a variety of open-source packages, some of which I had never seen before. I recognized IET right away, but the SAN appliances were all custom builds, and it took me some time to figure out what was what and what was going on where. It was not an efficient use of that internal disk space, either, because all of the replication across servers amounted to mirrors of mirrors.
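To make that layout concrete, here's a minimal sketch (with hypothetical host names) of the follow-the-leader mapping described above: host N runs SAN appliance N, while the VMs running on host N keep their disks on the next SAN around the ring, so no host ever serves both a VM and that VM's storage.

```shell
# Four hosts, four local iSCSI SAN appliances. Host N runs SAN N;
# VMs on host N keep their disks on SAN (N mod HOSTS) + 1.
HOSTS=4
for host in $(seq 1 "$HOSTS"); do
  san=$(( host % HOSTS + 1 ))
  echo "esx$host runs iSCSI SAN $host; its VMs store disks on SAN $san"
done
```

Walking the ring this way, ESX 1's VMs land on SAN 2 and ESX 4's wrap around to SAN 1, matching the setup above; losing any one host takes out its SAN but not the storage its VMs depend on.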
Ingenious? Yep. Complicated to track? Yep. Functional? Yep. In need of a little less management overhead? Yep.
There are commercial products that do this; one of them is LeftHand Networks’ Virtual Storage Appliance. I’ll post more about my experiences with it very soon.
Using Storage VMotion (SVM) from the command line is fairly straightforward. However, a lot of admins may not be comfortable taking extra steps for a task that could be better handled by an integrated GUI tool.
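For reference, a non-interactive svmotion call from the Remote CLI looks something like the sketch below. The VirtualCenter URL, datacenter name, VM config path and target datastore are all hypothetical placeholders; the script only echoes the command so it can be reviewed before being run for real.

```shell
# Hypothetical values -- substitute your own VirtualCenter server,
# datacenter, VM config path, and destination datastore.
VC_URL="https://vcenter.example.com/sdk"
DATACENTER="MyDatacenter"
VM_CONFIG="[iscsi_ds1] web01/web01.vmx"   # current datastore path to the .vmx
TARGET_DS="iscsi_ds2"                     # datastore to migrate the disks to

# svmotion ships with the VI Remote CLI and prompts for credentials.
# Echoed here rather than executed so the command can be reviewed first.
echo svmotion --url="$VC_URL" --datacenter="$DATACENTER" \
  --vm="$VM_CONFIG:$TARGET_DS"
```

(svmotion also offers an --interactive mode that walks through the same parameters as prompts, which is friendlier for one-off moves.)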
It’s a feature that was clearly lacking in the VI 3.5 / VC 2.5 product release. It’s one that should have been addressed by the product team but wasn’t (I suspect for reasons related to release dates more than anything.) I looked at a few of the graphical tools for managing SVM, and found a couple of them to be pretty robust. One comes from fellow TechTarget blogger Andrew Kutz, while another comes from developer Alexander Gaiswinkler.
Gaiswinkler’s application is pretty neat and does a good job of presenting information. It takes the remote command-line interface (CLI) utility for managing a VMware environment and adds a graphical layer on top of it. Setup is relatively simple and quick, taking a few short steps. I would expect that if a firm were to build a product around it, an installation package could handle the process easily. Simply put, to install the SVM GUI, you need only do the following:
- Install VMware’s remote CLI
- Save the vms.pl script to %PROGRAMFILES%\VMware\VMware VI Remote CLI\bin\
- Save svmotiongui.exe to your PC (I put mine in %PROGRAMFILES%\VMware\SVMGUI\, a directory I created manually, but it really doesn’t matter where you put it)
- Launch said exe file
In testing, I was able to SVM several vmdk files across my iSCSI data stores without incident. My only complaints about the application are that it’s separate from the VI client interface and that it lacks documentation. But it’s a great tool even without that integration.
I also took Andrew Kutz’s plugin-based SVM GUI for a spin, and was equally happy with it. Because it’s an actual plugin for the VI console, integration was a given. The installation was smooth, just like installing any add-on. The steps are:
- Launch the server and client installers from your VC host and clients and follow the prompts
- Open your VI Client and go to the Plugins menu, then the Available tab
- Click Download and Install Plugin
- Follow the prompts again
- Go to the Installed tab and check the box to enable the plugin
I sent the same virtual machines flying around the iSCSI skies again without the slightest problem. While it’s not yet at a 1.0 release, I was so impressed that I’ve moved it into production. The one downside I’ve found is the two separate installers, but since that’s a relatively common practice for server- and client-side applications, it’s not much of a downside at all. I would personally like one installer that can detect the presence of the VC server, run the server installer if it is present, then run the client installer.
The separate GUI application is solid. On a scale of 1 to 10, 10 being terrific, I’d have to give it an 8. The VC-integrated client gets a 9. Both are great tools, but Andrew’s plugin-based model is tops in my book. Both could use more documentation and a more streamlined install process, but neither one is badly in need of either. Both do a great job.
Citrix Systems, Inc. today announced a development and distribution agreement with Hewlett-Packard Co. (HP) to integrate an HP-specific version of Citrix XenServer into 10 models of 64-bit HP ProLiant servers.
Citrix worked with HP to develop a version of XenServer called HP Select Edition, which is different from XenServer in that it is tied into HP management tools, like HP Insight Control and HP Integrated Lights-Out for remote server management, a Citrix spokesperson explained.
Existing ProLiant server users can upgrade and virtualize their servers with XenServer with a license key and USB drive, according to a Citrix spokesperson.
Citrix XenServer HP Select Edition will be available and supported by HP starting on March 31, 2008.
As previously reported on SearchServerVirtualization.com, HP started reselling XenServer back in October but hadn’t agreed to pre-install the virtualization technology on its servers until today. At that time, Dell announced plans to resell and pre-install the OEM edition of XenServer on its PowerEdge servers.
Earlier this month, Lenovo also announced its plans to use the OEM edition in its servers and distribute it in China.
Hyper-V, code-named Viridian, is hypervisor-based virtualization for x64 versions of Windows Server 2008. The Hyper-V hypervisor will also be available as a stand-alone offering, without the Windows Server functionality, as Microsoft Hyper-V Server.
Microsoft has been giving public users a taste of its virtualization offering with Beta releases of Hyper-V since December 2007. Microsoft also released a Community Technology Preview (CTP) of Hyper-V in September 2007.
This new release candidate includes three areas of improvement:
* An expanded list of tested and qualified guest operating systems including: Windows Server 2003 SP2, Novell SUSE Linux Enterprise Server 10 SP1, Windows Vista SP1, and Windows XP SP3.
* Host server and language support has been expanded to include the 64-bit (x64) versions of Windows Server 2008 Standard, Enterprise, and Datacenter editions in various language options.
* Improved performance and stability for better scalability and high throughput workloads.
Despite Microsoft’s image as a market dominatrix, the computing giant may have a tough time chipping away at VMware’s dominance in x86 virtualization, said Charles King, principal analyst with Hayward, Calif.-based Pund-IT Inc., in his weekly Pund-IT review today.
“The conventional wisdom around Microsoft’s market impact tends to follow a common theme; that the company’s sheer size makes it a serious competitor wherever it decides to play, but we see a number of obstacles in the way of Microsoft’s leadership goals,” King wrote. “First, though the x86 virtualization market is relatively small (Microsoft estimates that only 10% of servers are currently virtualized) VMware has found a remarkable number of Fortune 1000 customers who drive significant sales and revenues.
“In addition, those large companies tend to be among the most conservative of IT users; once they choose a reliable technology and vendor they tend to stick with them through thick and thin,” King said.
But considering its relatively low entry price of about $28 per Hyper-V Server, Microsoft could be the pathway to virtualization for a wider audience than higher-priced players like VMware have been able to reach, King reported. If purchased standalone for hard-drive installation, ESX Server 3i lists at $495 per two processors, according to VMware’s website. “If Hyper-V’s features prove to be as robust and beneficial as Microsoft claims, the company could become a significant virtualization player for years to come,” he said.
Bad luck sometimes follows you. Then again, I don’t believe in luck, or its evil twin Murphy’s Law.
Still, bad stuff happens, and in IT, you’re usually there to see it. This happened at an undisclosed location near a super secret military installation next to a classified roadside diner in an unidentified town west of an unnamed river. In other words, my office, the place where I’m both the head VMware and Citrix person and the IT Director (which may give me multiple personality disorder someday, but at least I won’t rust out!).
Arriving at 8:15 a.m., as I’m wont to do in order to get a jump start on the day, I get an angry mob assaulting me with torches and pitchforks before I’m even to my office. That usually means a big problem, and in this case, nothing’s running. No problem, probably just a power outage knocking things around again. We have frequent outages here, and since we’re not a 24×7 shop, sometimes the UPSes run out of juice and I have to restart things by hand. Normally everything shuts down gracefully, but on this occasion, the whole place was a mess.
A quick check of server health and I see that all of my physical boxes, including my SAN and VI hosts, are up and running and have been all night. I log into VC to see exactly what I had expected: lots of powered-off servers. Now had there been an event, these servers should have VMotioned off to other servers, and failing that, come back up automatically in a set order. They didn’t. That’s usually a SAN-related issue.
OK, I check the SAN and see that it’s fine now, but it’s showing uptime only since a little past midnight. It’s looking more like the power outage was long enough to take down the UPS that the SAN was on, but not the VMware servers, which shouldn’t be the case unless there are battery problems I’m not being alerted to by APC’s software. Guess what? Yep, that’s it. OK, let’s restart some machines: the Linux servers all come up wonderfully. The Windows servers, not so much on some of them. Blue screens of death are visible as far as the eye can see — so blue, I thought I was in the Caribbean, except for the stress and the lack of rum-based drinks.
Most of them are reporting disk- and kernel-related problems. Most of the error messages point to a missing %WINDIR%\SYSTEM32\CONFIG\SYSTEM file. Another common one reports that NTOSKRNL.EXE is missing. Great: that’s a huge part of the registry and the system kernel. This is gonna take days to fix if I have to pull backups and restore from them. Well, maybe not, if I’m lucky and it’s just some corrupted space on the virtual disks.
Treating them like physical machines, the next step is to boot the Recovery Console from CD (in this case an ISO file) and run chkdsk with the /p and /r switches as a first troubleshooting step. Except, of course, that there’s no hard disk to be found by the Windows installer, which caused a brief, although painful, heart flutter at the thought of pulling from backup. It’s one thing to do a quarterly test; it’s another when it’s real. Successful tests or not, massive documentation and how-tos or not, the worst starts to flash through your head. You question your methodology: no matter how thorough you tried to be, you begin to think that maybe the test methods were somehow flawed and the backups will fail. OK, the problem at hand is that I can’t get the disks to be seen by the Windows installer CD. Time to focus and forget doubt.
And this is where it becomes about virtualization, in case you thought I was going off-topic.
Fear pushed aside, it’s time to look at this from a hardware point of view, but also to remember that the hardware is all virtual. A common pattern emerges: the machines in trouble are all converted machines from an existing VMware Server 1.x install that we P2Ved some time ago. It wasn’t something I noticed right away, as a few of those machines were never P2Ved, some were “P2Ved” by hand (i.e., just rebuilt), and our Linux VMs from that same VMware Server box are all fine.
The common denominator is that all of the problem boxes had the BusLogic SCSI controller, and none of them would see their virtual hard drives. Switching over to the LSI Logic controller and accepting the change allowed me to run the Recovery Console, as the Windows installer now saw the disks. From there it was a quick fix: I ran the disk check and recovered with no further steps needed.
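Incidentally, the controller type lives in the VM's .vmx file, so the same swap can be scripted for a batch of converted VMs. A hedged sketch, assuming the stock scsi0.virtualDev key and a hypothetical datastore path; only do this with the VM powered off:

```shell
# Hypothetical path to the converted VM's config file -- substitute your own.
VMX="/vmfs/volumes/datastore1/web01/web01.vmx"

# With the VM powered off, flip the virtual SCSI controller from
# BusLogic to LSI Logic. Guarded so the sketch is safe to run as-is.
if [ -f "$VMX" ]; then
  sed -i 's/^scsi0.virtualDev = "buslogic"/scsi0.virtualDev = "lsilogic"/' "$VMX"
else
  echo "set VMX to your VM's .vmx path first"
fi
```

Accepting the controller change in the VI client does the same thing; the script is just handy if you have a pile of converted VMs to fix in one pass.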
So, I get the Windows boxes back up, curse Murphy for his Law, and vow that in all future conversions, I will change over the controller before the converted box goes into production. Oh, that and I will have some people in to look at the UPS situation. Now, where are those rum-based drinks?
Inspired by a great walkthrough on the LTB Blog about getting VI3 running on a Mac via VMware Fusion, I decided to go ahead and give VI 3.5 a try on my MacBook Pro using Parallels. However, I wound up disappointed in my effort. I love Parallels for my XP and Linux virtual machines, but ESX 3.5 was just too far out on the fringe for it to handle. Nevertheless, I will blog about the experience.
Here’s what I used in my setup:
- VMware’s VI3 installation set, obtainable in demo form.
- An Intel-powered Mac (the MacBook Pro 17 / 2GB is what I used for this demo) with at least 1GB of RAM, but preferably more.
- Parallels Desktop for Mac.
- Lots of disk space (I used a 250GB firewire external hd).
I built the base ESX server, ESXtest1, with only 768MB of RAM, as I am a bit RAM-shy on this machine. I wanted to have another machine, a second ESX box, for VMotion, Storage VMotion, etc. VirtualCenter will be hosted on an external box, since it’s going to sit on XP and we already know Parallels can do XP beautifully. It was a straightforward build with very few changes to the default settings:
- The default location of the virtual machine files
- The amount of RAM
- The boot media (I used an ISO)
- The network type (I used bridged)
I didn’t get far. As soon as I booted up, I received the dreaded error, “The installer was unable to find any supported network devices.” This means one thing: VMware doesn’t have drivers for the NIC that Parallels emulates (a Realtek 8029AS). And Parallels doesn’t offer any alternative network devices to choose, even though it has a drop-down box for them.
And thus endeth my travels into purely fun-testing land. Oh well.
Sun Microsystems, Inc. announced this week it has added new features to its Virtual Desktop Infrastructure software, originally released at VMworld in September 2007, including Sun’s Virtual Desktop Connector (VDC).
Sun’s VDI 2.0 provides interfaces to PCs, mobile devices and thin clients, including Sun’s own Sun Ray thin client line. With it, centralized desktops can be delivered over the LAN or WAN to Windows Vista, Windows XP, Mac OS X, Solaris or Linux on the desktop, which is fairly rare in the Windows-centric desktop market, said Chris Kawalek, product line manager for desktop and virtualization marketing at Sun Microsystems.
Sun’s VDC, meanwhile, is more or less a connection broker that interfaces with ESX 3.5 and 3.0.x and VirtualCenter Server 2.0.x and 2.5 (VMware Infrastructure 3) to create pools of virtual machines that can be defined based on templates.
With Sun’s updated VDI offering, administrators can statically or dynamically assign users to specific VMs, either for a set number of days or indefinitely. Another feature is the ability to ‘reset’ end users’ virtual machines (VMs) if problems arise. For instance, if the user contracts a virus while on the web, the VM can be reset to a date before the issue occurred and operate as it did on that date, Kawalek said.
The tight integration with VMware virtualization software can be attributed to the OEM agreement Sun signed with VMware Inc. in February. Thus, with VDI 2.0, users can actively manage VMware virtual machines, but VMs from other vendors like Virtual Iron can only be statically created and assigned, Kawalek said.
Kawalek said Sun moved into the VDI space last year because it embodies Sun’s ‘the network is the computer’ message. Another reason? It’s the popular thing to do. “Everyone is very interested in centralizing their desktop environment, which is why vendors like Hewlett-Packard and VMware are in this space,” he said.
Sun’s VDI Version 2.0 became available March 18 at $149 per user, including one year of support. Sun Ray thin clients start at $249. Directions on how to install VDI 2.0 are available online, and a free trial can be downloaded from Sun’s website.