The Virtualization Room


January 28, 2008  9:24 AM

Citrix XenServer gets VMLogix’s LabManager

Alex Barrett

Test-and-development shops that want to see how their software runs on Citrix Systems Inc.’s XenServer virtual machines can now do so, thanks to VMLogix, which has added Citrix XenServer to the list of platforms supported by its LabManager offering.

Citrix XenServer joins a comprehensive list of virtualization platforms supported by VMLogix, including VMware ESX Server, VMware Server, and Microsoft Virtual Server; support for Oracle VM and Sun xVM is also forthcoming, said CEO Sameer Dholakia. So far, Dholakia said, the company has seen “a fair bit of interest in testing VMLogix on Citrix XenServer.”

The VMLogix offering competes directly with VMware’s Lab Manager and conceivably with the newly announced Stage Manager. In fact, Dholakia claims that VMLogix’s offering already includes much of the functionality included in Stage Manager and said that VMware customers may not understand the distinction between the Lab and Stage Manager products. “It will be interesting to see how they manage the confusion factor: ‘When do I use [VMware] Lab Manager, when do I use Stage Manager?'” Dholakia said.

Pricing for VMLogix LabManager is $25,000, plus a $2,500 agent fee per two-CPU server.
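To put that in concrete terms: outfitting a hypothetical lab of ten two-CPU servers would come to $25,000 + (10 × $2,500) = $50,000 in license and agent fees.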

January 23, 2008  9:23 AM

Introducing ivi, Java’s universal virtualization management GUI

Andrew Kutz

ivi (pronounced eve-e) stands for Java Virtual Interface, and it is a project that aims to create a single graphical management interface for all the major virtualization products.

Implemented in Java+Swing, ivi is truly portable. Currently ivi uses the VMware Virtual Infrastructure 3 (VI3) software development kit (SDK) to communicate with VI servers and the XenAPI to talk to Xen servers. Future plans include adding support for libvirt to allow communication with KVM and OpenVZ and, eventually, support for the Common Information Model (CIM) as a way to talk to VMware, Xen, and Microsoft all through one interface.

The number one barrier to a properly utilized datacenter is the lack of a single management tool that can control a heterogeneous environment. The idea behind ivi is to create a single management application that can be used to control all of your datacenter’s virtualization solutions. With so many virtualization products available, using so many different architectures, it is more important than ever to have a management tool that gives IT professionals a single point of management.
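For a sense of what that single point of management looks like in practice, libvirt, one of ivi’s planned back ends, already works this way from the shell: the same command targets different hypervisors just by switching the connection URI. A minimal sketch (the host name is made up, and URI support depends on your libvirt build):

virsh -c xen:/// list                  # list domains on the local Xen hypervisor
virsh -c qemu:///system list           # the same command against local KVM/QEMU guests
virsh -c xen+ssh://root@xenhost/ list  # and against a remote Xen host over SSH

ivi aims to put that same idea behind a portable Swing GUI rather than a command line.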

You can read more about ivi at http://www.lostcreations.com/code/wiki/ivi.


January 22, 2008  10:00 PM

Putting VMware Server 1.0.4 on CentOS 5.1 x86_64

Joseph Foran

Since 2.0 is almost out, I imagine this will need a follow-up at some point. (Good…it’ll keep me focused on blogging.) In the meantime, I decided to give the 64-bit world a whirl, as we’re evaluating moving to Exchange 2k7.

With my box racked up and plugged in, I grabbed the ISO for CentOS 5.1 and got to work. On a side note, it installed perfectly on a Dell PowerEdge 1950 w/ SAS disks in a RAID 5 array on a PERC5 card. Despite some people having problems related to the RAID setup, mine went through flawlessly. (Apparently there is a known issue with multiple arrays on a single card and GRUB’s placement on the wrong array.)

After getting the OS up and running, I installed VMware Server by following a FAQ I found at Nixcraft.

I did it once on a fully updated install, complete with the updated kernel packages, and it bombed out. Going back and rerunning it under the updated kernel, it worked flawlessly. To sum up the process, here’s what to do:

  1. Install CentOS. Make sure you have your gcc packages installed (under the Development trees during setup).
  2. Grab the latest rpm from VMware and install: # rpm -ivh VMware-server-<version>.rpm
  3. Install some more needed libraries: # yum install libXtst-devel libXrender-devel
  4. Install xinetd: # yum install xinetd
  5. Run your config: # vmware-config.pl

Boom…done.
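For reference, here’s the whole process rolled into one hedged sketch, run as root. The rpm version stays a placeholder as above, and the kernel-devel line is my assumption: vmware-config.pl needs headers matching the running kernel to build its modules.

yum install gcc kernel-devel xinetd libXtst-devel libXrender-devel  # compiler, matching kernel headers, xinetd, X libs
rpm -ivh VMware-server-<version>.rpm                                # the rpm downloaded from VMware
vmware-config.pl                                                    # builds and loads the VMware kernel modules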


January 18, 2008  4:29 PM

Thinstall acquired, will PortableApps be next?

Joseph Foran

Considering the recent acquisition of Thinstall by VMware, I have to wonder: Is PortableApps next on the list of to-be-acquired companies? The two companies have one thing in common: They both have products designed to take entire applications and put them into a single container for portability and reduced complexity. I’ve kept a number of PortableApps applications on my USB stick for a long time — it’s nice to have a quick set of tools to use without having to use somebody else’s settings, leave traces of my work on their systems, etc. Throw in VMware Player and a stripped-down guest OS with PA’s software on it and you have a real winner. Take it to the next step and put PortableApps applications onto a server that distributes software via a thin approach (Citrix, 2X, etc.) and you have a hit in application virtualization.

One big beef I, and many otherwise-fans, have about Citrix is the all-too-real potential for winding up in DLL Hell. Applications to be served via server-based computing solutions like Citrix (or 2X, TS, etc.) often need to be isolated if they are mission-critical (would you run your PeopleSoft app on the same Citrix server as your Office 2007 suite?), which usually means adding more Citrix servers. This, in turn, means a heavier workload for staff and host servers (if you virtualize Citrix).

Enter application virtualization. There are a lot of good brands out there, notably Softricity, which was swallowed up by Microsoft already. Thinstall is another application virtualizer (albeit via an entirely different process). Then there is PortableApps, which does much of what Thinstall does, just not as much of it. Thinstall 3.3’s product description reads (in part) as follows:

Thinstall is an Application Virtualization Platform that enables complex software to be delivered as self-contained EXE files which can run instantly with zero installation from any data source. The core of Thinstall VS is the Virtual Operating System, a small light-weight component which is embedded with each “Thinstalled” application.

PortableApps’ reads like this:

A portable app is a computer program that you can carry around with you on a portable device and use on any Windows computer. When your USB flash drive, portable hard drive, iPod or other portable device is plugged in, you have access to your software and personal data just as you would on your own PC. And when you unplug the device, none of your personal data is left behind.

There are differences, of course, but the overall business models are very similar. PA is an open-source outfit, which makes it a bit more transparent than Thinstall, which has a mix of OSS and proprietary products. Since PA is purely open source, and relies a lot on the community to deliver portable-ized apps, its list of programs is smaller and limited to open-source and other freely licensed software. Still, with the recent focus on OSS in the enterprise, one can’t help but see the value of a PortableApps version of OpenOffice sitting on a Citrix server for thin-client and virtual desktop users to access.

Since Thinstall and PortableApps both provide OpenOffice (Thinstall does so as a demo), I took them for a spin. In my “everyman” test, which was certainly not scientific, I ran them both from a network share on the same NAS box over SMB to a Parallels virtualized XP machine with 512 MB of memory. The Thinstall demo downloads as a zip file that you extract to the desired location. The PA app downloads and installs (using an NSIS installer) to the desired location. Once extracted, I ran them three times each — the total time listed is until the app was usable. The Thinstall application (which uses v2.4 of OpenOffice) took 18 seconds on the first test to load up, 13 seconds on the second pass, and 16 seconds on the third. The PortableApps version, which uses v2.3 of OpenOffice, required setup information (your name, if you want to register, etc.). I decided to discount this from my scoring, but it took 17 seconds, in case anyone is interested, from launch to the registration screen. Once that was done, loading time until a usable screen appeared took 18, 22, and 17 seconds. The splash screens appeared at 10, six and eight seconds, respectively, for Thinstall; and six, eight and eight seconds for PortableApps (excluding the PA-promo splash, which took one second each time and is a separate splash from the OOO splash).

I moved them to the local drive and things got livelier. Thinstall loaded in six, three and three seconds (with splash at two, one and one seconds). PA loaded in six, two and two seconds (with splashes at one second each, but the PA-specific splash obscured them two of three times).

There are a lot of other interesting applications out there just ripe for the virtualization space; browser-based desktops like the one offered in alpha by g.ho.st come to mind. Expect a post on that sometime in the near future.


January 18, 2008  4:27 PM

GParted: Save yourself from virtual test screw-ups

Joseph Foran

Today’s episode is … screwing up your test environment and fixing it. As many know, I test management tools like some people switch socks. I have a simple theory on it: If what I’m using could be better, and there’s no business impact in switching, I’m going to switch. I went from Groundwork Open Source to Spiceworks 1.0 (see my prior post here) to Hyperic. Spiceworks 2.0 is out (and has been for some time) and it’s even spicier than the last version, although the Chili Peppers were missing for a while. So time to put it into the demo lab, right? Absolutely! A little short on time? Of course! And so, in my haste, I hosed that Spiceworks 2.0 demo box from sheer stupidity. And I mean sheer stupidity. I didn’t follow any best practices. The worst thing I did was putting the app on a default XP build image. It didn’t need to be on a server OS for the initial testing phase, and I had a blank XP image ready to go. To channel my Inner Yoda: Saved myself some setup time I did. Hosed myself in the long run I had.

The default image for an XP Virtual Desktop is 8 GB in size. This is just enough space for the applications to be installed and to have room for updates, patches and whatnot. It makes maximum use of disk space on our NAS box without leaving the XP machine’s disk too strapped for future expansion. Or so I thought.

As it turns out, Spiceworks eats up disk space, about 4 GB for the one subnet I have it scanning. I didn’t have the helpdesk module in use except for my own testing, so that space count doesn’t include tickets with attachments. Considering that this machine had just under 4 GB of free space, I soon found myself with a problem, one that caused my Spiceworks install to fail, and the box to slow to a crawl.

What to do … should I rebuild? I could, but then I’d lose the data this thing had accumulated over the few days’ time it took to fill the disk, and I liked the results of the test. It was performing on par with Hyperic in terms of general scanning and reporting, although the drill-down detail was somewhat less, and the ability to customize access to a given resource was less (Spiceworks supports configuring only one Windows domain account and one *nix account, and uses SSH and WMI, as opposed to local agents reporting back in the way that Hyperic does). The alerting system was solid. The inventories were solid. Everything about the software was flying high and showing happy, except, of course, the one problem I was having – keeping it from eating the “entire” disk drive. Also, it was running great on XP, and with VMware, there’s no real difference in managing disaster recovery or backup for an operating system. I’m still ticked that there’s no version to install on a Linux box … but that’s for a different forum than this.

Should I recover as much disk space as I can and hope for the best? Sure, why not give it a try? I’d already hosed myself here, in spite of being addicted to documentation and following templated procedures whenever possible. A short while later: no real luck. As much as I removed, Spiceworks continued to grow as it recorded scanning results and archived historical data.

Time to fix the problem at the disk level. I did this not too long ago on my XP machine in Parallels, based on this blog post, so I just modified the process to fit the VMware Server 1.0.4 environment. First off, to expand the disk, I went to the terminal on my MacBook, remoted into my VMware Server box via SSH, and executed the following command from within my virtual machine’s directory (on my boxes, this is /vmdir rather than the default):

vmware-vdiskmanager -x 14GB "My Disk Name Redacted.vmdk"
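If you’re following along, the full sequence looks something like this (the host name and paths are placeholders, the backup copy is my own precaution, and the VM should be powered off first):

ssh root@vmhost                                                   # the VMware Server box
cd /vmdir/myvm                                                    # your VM's directory; the default differs
cp "My Disk Name Redacted.vmdk" "My Disk Name Redacted.vmdk.bak"  # cheap insurance before growing the disk
vmware-vdiskmanager -x 14GB "My Disk Name Redacted.vmdk"          # grow the virtual disk to 14 GB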

After a short time, the disk was successfully expanded. Now, of course, just because the disk was expanded doesn’t mean the space is usable, which is where GParted comes in. Grab a copy from the wonderful folks who host it (SourceForge). GParted makes you go through some configuration when you boot from it, just so that you get the best display resolution and it boots properly, but most anyone with experience in IT can follow the process without me outlining it here (especially since it only involves pressing Enter twice, and has been outlined elsewhere ad infinitum). I switched the removable drive over to my GParted ISO file, booted, and configured. At that point I had a full view of my disks and could manipulate them to my heart’s content.

I simply right-clicked, selected Resize/Move, and slid the slider from the current position to the end of the drive.

Strangely enough, it errored out on me (and I didn’t get a cap … dangit). Turns out it wasn’t a show-stopper, but it was strange. When I rebooted, it loaded into Windows normally, which threw me for a loop — it is *supposed* to do a disk check. When the check completes, Windows should see the expanded drive size. Instead, there was a decided difference of opinion between what the drive’s capacity was supposed to be and what was actually available.

Undeterred, I manually scheduled chkdsk for the next reboot, whereupon nothing changed. Since I had an error before, I decided to try again. I resized the drive to add 1 MB of unallocated space at the end, applied the change (this time with no errors), and rebooted.

Windows came up, ran a quick scan of the disk, and voila! It found the extra space just fine.

Was the error just an anomaly? I checked Google but nothing really popped. I checked the GParted forums — nothing really popped. Chalk it up to a slight hiccup at the wrong time, perhaps a bug as yet unseen. I posted the info and moved on.

So, as you can see, GParted helped me pull my test bacon out of the test fire before it test-burned. This is definitely a utility I will be keeping in my toolbag from now on.


January 15, 2008  3:48 PM

VMware ESX 3.5 upgrade path: Where to start

Rick Vanover

VMware’s flagship products ESX 3.5 and VirtualCenter 2.5 have been available for a little over a month now. When the upgrades were made available, there was much excitement about the newly touted features. So many IT professionals hurried off, downloaded their product updates, and then came to a collective stopping point. How do we upgrade ESX while it’s in use? Sure, we upgraded from ESX 3.0.1 to 3.0.2 with very little impact. But the change from 3.0x to 3.5 seems worthy of more preparation, because the scope of the change is larger with some of the new features, like Storage VMotion. With that in mind, here is a simple upgrade strategy that many are adopting:

  • Allocate two ESX 3.0x systems as 3.5 candidates (not everyone will be able to do this, I realize).
  • Carve these two systems into their own cluster or data center. 
  • Make sure all existing VMware DRS rules would be okay with two systems removed.
  • Upgrade or fresh install one of the systems to ESX 3.5.
  • Test migration from ESX 3.0x to the new ESX 3.5 system.
  • Test VMware tools versioning and test any upgrade virtual machine tasks.

This strategy replicates what you will likely face in a real upgrade situation. Because you may only be able to take a limited number of systems out of service at any given time, it is good to reproduce that constraint in a test datacenter or cluster. In smaller implementations, this could be repeated with one host, where the migration test would be from a live ESX host. Evaluation licenses may also be a way to make the right number of hosts available to simulate the coexistence of ESX 3.0x and 3.5.

Keep it moving

Even if your configuration allows seamless migration between your ESX 3.0x and 3.5 hosts, that should not become a crutch for undefined periods of mixed versions. A good practice is to keep all hosts within a cluster on the same version of ESX. Larger environments may have difficulty moving all systems to the new version at once, so plan within your VirtualCenter configuration to determine the best arrangement for temporary mixed versioning. The goal should be to get all systems on the same version enterprise-wide, but only after you are completely comfortable with 3.5 in your infrastructure.

The horse’s mouth

VMware provides many quality resources online. I’ve saved you some work and collected some of the highlighted pieces here for review in relation to ESX 3.5 upgrades:

ESX 3.5 upgrade guide

ESX 3.5 compatibility matrix

What’s new for storage in ESX 3.5

VMware Consolidated Backup improvements in ESX 3.5

These resources are a good way to be well informed about what your ESX 3.5 plan will entail. Simply installing without preparation is surely a recipe for misconfiguration, or for applying your configurations other than as intended. And a test upgrade procedure that makes you familiar with a mixed environment will clear the way for an end-state configuration of a single version of ESX.


January 14, 2008  4:44 PM

SearchServerVirtualization.com Products of the Year – Not without their share of snubs

Chris Wolf

Fortunately for me, my job never requires me to determine vendor awards. However, Alex Barrett and the SearchServerVirtualization.com staff aren’t so lucky. While it’s great to have the power to name Products of the Year, it also means that you’re stuck hearing complaints from everyone that wasn’t named. In case you missed it, Alex recently published the SearchServerVirtualization 2007 Products of the Year.

I think that Alex and the editorial staff did a great job selecting products, but I thought I would take a moment to highlight some vendors with excellent products that did not make the list. After all, it’s just as much fun to debate the vendors that were not recognized as it is those that were.

VMware

Yes, VMware’s on the list, but at the same time they’re not on the list. If you didn’t notice, VMware ESX Server 3.5 is nowhere to be found in the article. The SearchServerVirtualization.com editors informed me that ESX 3.5 missed the cutoff date for award consideration (November 30th), and therefore wasn’t eligible. Editors do need time to work with a released product in order to make a fair judgment, so I understand the reasoning for the cutoff. Still, ESX 3.5 was a significant release from VMware, with features such as Storage VMotion adding significant value to VMware deployments.

Novell

Novell quietly had a great 2007 from a virtualization product perspective. Novell was right behind Citrix/XenSource in achieving Microsoft support for its Xen-based virtualization platform, and was pushing the innovation envelope throughout the year. Novell was the very first virtualization vendor to demonstrate N_Port ID Virtualization (NPIV) on its Xen platform, and was even showing its work with the Open Virtual Machine Format (OVF) last September at its booth at VMworld. When you factor in Novell’s work on its heterogeneous virtualization platform management tool, ZENworks Virtual Machine Manager, you’re left with a pretty nice virtualization package. The vendors mentioned in the virtualization platform category (VMware, Citrix/XenSource, SWsoft) are all worthy of recognition, and I think it’s equally fair to recognize Novell’s work in 2007 as well. Perhaps Novell’s heavy lifting in 2007 will result in recognition in 2008; however, it’s safe to say that Novell is going to have some stiff competition from VMware, Citrix/XenSource, Microsoft, Sun, Parallels, and Virtual Iron.

Symantec

I think it’s hard to leave Symantec Veritas NetBackup 6.5 out of the discussion. In fact, among backup products, I’d list it first, right alongside CommVault. Symantec was the first major backup vendor to announce support for Citrix XenServer backup, while all other backup products officially supported one virtualization platform – VMware ESX Server. The NetBackup team was also very innovative with VMware Consolidated Backup (VCB), as NetBackup 6.5 includes the capability to perform file-level recoveries from VCB image-level backups. Typically, a backup product performs two VCB backup jobs: an image-level backup for DR purposes and a file-level backup for day-to-day recovery tasks. NetBackup 6.5 provides the ability to do this in a single pass, which I found to be pretty innovative. Factor in data deduplication (extremely valuable considering the high degree of file redundancy on VM host systems), also available in NetBackup 6.5, and it’s hard to see how NetBackup could be ignored.

SteelEye

SteelEye is another vendor in the data protection category that I’m surprised did not make the list. VMware HA by itself will not detect an application failure and initiate a failover job as a result, as it’s primarily designed to monitor and react to hardware failures and some failures within the guest OS. SteelEye LifeKeeper, on the other hand, provides automated VM failover in response to application and service failures (in addition to guest OS and physical server failures). Many failures are software-specific, and products that can automate VM failover or restarts in response to software failures go far to improve the availability of VMs in production.

I’m limiting my comments only to the award categories, hence I’m only listing some of the products I’ve worked with in 2007 that fit into one of the SSV categories. I hope that for the 2008 awards, we’ll see a higher number of award categories, so all products in the virtualization ecosystem are represented.

Do you agree with editors’ choice of winners? Which deserving vendors do you feel were left off the list? I’d love to hear your thoughts.


January 9, 2008  10:26 AM

Parallels Server beta goes public; supports new Apple Mac Pro, Xserve

Alex Barrett

Looking for an alternative to VMware ESX, Xen, or Microsoft Hyper-V? SWsoft, soon to be renamed Parallels, announced the public beta of its Parallels Server hypervisor today, a few short weeks after the hypervisor entered private beta.

In addition, Apple and Parallels have worked out a deal to allow Parallels Server to run on the new Apple Xserve and Mac Pro hardware, which use the latest generation of Intel chips featuring Intel Virtualization Technology for Directed I/O (Intel VT-d). Parallels has promised experimental support for VT-d systems in beta and full support when Parallels Server becomes generally available.

This announcement makes Parallels Server the first bare-metal hypervisor to run on Apple hardware. Apple shops will be able to run Mac OS X Leopard as well as Windows, Sun Solaris and Linux virtual machines on their high-end Apple hardware. Parallels Server is also notable in that it can be installed either on bare metal à la VMware ESX Server or as a “lightweight hypervisor” running on top of a host operating system. At installation, users can choose how they want to deploy Parallels Server.

Last fall, Apple paved the way for this announcement when it altered its end-user license agreement (EULA) to enable the virtualization of Mac OS X. Of course, Parallels Server also runs on non-Apple x86 hardware, although Apple prohibits Mac OS X from running there.

According to Parallels, Parallels Server beta includes the following features:

  • support for more than 50 different varieties of x86 and x64 guest operating systems;
  • remote control of the virtual machines via the Parallels Management Console;
  • support for up to 64 GB of RAM on the host computer;
  • two-way symmetric multiprocessing (SMP) support in virtual machines, to go to four-way SMP in the final version;
  • multi-user access to the same virtual machine;
  • support for ACPI (Advanced Configuration and Power Interface) in virtual machines;
  • open, fully scriptable APIs for customized management; and
  • full support for Intel VT-x, plus experimental support for Intel VT-d.

For more information or to participate in the beta, visit the Parallels Web site.


January 7, 2008  10:11 AM

Is VMware underpriced?

Alex Barrett

Of late we’ve written a lot about VMware pricing (see VMware pricing draws large enterprises’ ire, and VMware costs inspire Virtual Iron purchase), which in turn prompted a conversation with VMware Certified Professional (VCP) and former customer Michael Tharp, who is now a server virtualization practice lead at Mainland Information Systems, a VMware Authorized Consultant (VAC) in Calgary, Alta.

Tharp has an interesting perspective on VMware: Given the value that most firms derive from it, VMware is “fairly priced, or perhaps even a bit underpriced.”

Back in the day when Tharp worked in IT operations, he was stunned by the low cost of VMware ESX Server. “I would have paid twice as much for VMware without blinking an eye,” he said.

Today, as a VAC, “The worst ROI I’ve ever seen was for a customer with just 14 servers that needed to buy a new SAN [storage area network]. They saw ROI in less than a year and saved $180,000 over three years,” he said. “That’s still pretty compelling in my book.” Shops with more servers and a pre-existing SAN tend to see dramatically better numbers.

But setting its pricing so low was “a mistake” on VMware’s part, Tharp argued. “It gave them no wiggle room,” a situation that manifests today as a “reluctance to negotiate any kind of discount.”

This intransigence may rub some potential customers the wrong way, but for the time being, Tharp sees no valid alternatives. When it comes to VMware competitors like Citrix Systems’ XenServer or Virtual Iron, “I don’t see the value and don’t think I could find enough customers to make it worth the effort,” he said.


January 4, 2008  10:04 PM

Virtualization and high availability: User ponders products, path

Jan Stafford

Systems admin Michael Gildersleeve wishes for 100% uptime and wonders whether virtualization will bring him closer or further away from that goal. It seems to him that virtualization options only cover one server at a time. “What if I need to do an OS update or patch, or what if some critical hardware fails?” he asks. In that case, he feels a bit more comfortable with a cluster than with virtualization.

Gildersleeve is evaluating high-availability options for virtual machines. VMware’s High Availability (VMware HA) is on his list, but he’s not sure whether that product will work well with his legacy software. He’s also not sure whether HA is as mature and robust as other products on the market.

I’m answering his call for more information. I hope that you will too, either by commenting on this post or emailing me at jstafford@techtarget.com.

Gildersleeve works for a company that has a Progress database running on a Unix server. Hundreds of Windows clients and Web applications are attached to that database and server through Progress Brokers via service file ports. “I need to provide 365 by 24 by 7 uptime,” Gildersleeve said. “With our new Web business, East and West Coast facilities, and vendors managing our stock and replenishment, we need to be available all of the time.”

He wants to run his database across at least two servers, in a setup like an Oracle Real Application Cluster. He continued:

This would allow me to upgrade the OS, reboot a server or take a server down for maintenance without affecting the database or the users. So far I have only found solutions that will give me a two- to five-minute downtime between switching from one server to another.

Yes, Gildersleeve has looked a little at server virtualization. He’s evaluating his options, including VMware HA, to see whether he can reduce the downtime to nil.

What I have seen so far is that if I upgrade my Progress app to v10 (Progress OpenEdge), and then move to two Integrity servers running High Availability, that if one server fails or if we need to do maintenance on a server, we can manually switch to the second server; but the problem with this is that my users will feel the switch because I will need to bring one server down. They will need to log out and in again to the app, or whatever needs to be done to bring the ready server into production mode.

Gildersleeve is willing to evaluate Sun Microsystems options, if they are truly viable for running Progress. Microsoft operating systems are out of the question.

In his evaluations, Gildersleeve has come up with a lot of questions, and he’s looking for advice from HA experts. Can you provide some advice and share your experiences by commenting on this post or emailing me at jstafford@techtarget.com?

