December 5, 2007 4:41 PM
Posted by: Alex Barrett
With all the hype and hoopla surrounding virtualization, picking the right platform can be a scary proposition. For those of you who haven't yet chosen VMware's, Citrix's or Microsoft's virtualization platform, IT consultant Steve Shah explores their respective virtualization approaches (rather than their technical differences) to help you decide which one is right for you.
If you do go down the VMware road, we have a bunch of tips to get you started. Expert Rick Vanover offers advice on ensuring a successful mass physical-to-virtual migration. The takeaway: Measure twice, cut once.
Meanwhile, for the cheapskates among us, Harley Stagner gives us a two-part tip on how to get failover and high availability for a file server running on ESX without buying VMotion or VMware High Availability. In the first part, Stagner explains how to set up Windows Server 2003 as a file server on an iSCSI storage array; in part two, he describes a nifty way to use the free VMware Converter 3.0.1 and p2vtool.exe to script the failover of that file server, without incurring downtime. Neat-o.
Finally, expert Andrew Kutz helps a VMware user struggling to script the shutdown and power-up of VMs outside VirtualCenter. The answer, Kutz says, lies in the FindByDnsName method. Kutz also explains how to create a library assembly (DLL) that can be referenced from other .NET applications.
December 5, 2007 11:23 AM
Posted by: Rick Vanover
Microsoft Virtual Server, Product announcements, Rick Vanover, Virtualization management, Virtualization platforms
When you think virtualization, VMware comes to mind as the leader, correct? Sure, ESX is the premium product for x86 virtualization right now, but there is a movement that deserves attention. Last week I mentioned that we should evaluate Citrix XenServer, and this week I will expand the scope of that recommendation. Base virtualization technology will soon be a commodity, and the basic elements are already free with VMware Server, XenServer Express Edition, Microsoft Virtual Server, and Microsoft Virtual PC. With the base technology now so readily available, the real distinguishing factors will be management of the virtual environment, high availability, cost, and ease of use.
Once shunned by many IT shops in favor of the "Windows Revolution," Novell now offers a virtualization management layer, Novell ZENworks Orchestrator. Now, before you blow off to some other post, consider this: most Novell products are really good at what they do. NetWare was a superior file server (sure, there were client issues and interoperability issues, but there still is no better rights-assignment model for file serving), and Novell jumped on the Linux boat early; you can see how Linux has clearly maintained its momentum. So, from the management standpoint, we will really need to evaluate this solution as well. Orchestrator is also going to embrace cross-platform management (Xen, VMware, and Microsoft). That alone should be enough to get your ears perked up. Remember, virtualization is relatively young in the x86 space, so anything we can do to avoid closing doors from the initial embrace of these technologies would be a good idea.
December 4, 2007 10:06 AM
Posted by: Alex Barrett
Virtualization management
Just as it said it would back in October, Sun Microsystems Inc. is releasing version 1.0 of its “new” xVM Ops Center management suite, which is really a merger of its Sun N1 System Manager (N1SM) and Sun Connection configuration management software packages that have been brought under the auspices of Sun’s nascent virtualization brand.
Nevertheless, xVM Ops Center Director of Marketing Oren Teich claims there’s more to Sun’s new management bundle than branding. There’s a nice new Web user interface, of course; but perhaps more to the point, xVM Ops Center can manage a far greater number of operating system instances than its prior incarnation. To wit: N1SM maxed out in the 100- to 250-node range; Sun Connection gave up around 500; but Teich claims xVM Ops Center is able to manage around 5,000 running OS instances.
“That’s a lot bigger than any of our customers have calls for using,” Teich said. That includes the Texas Advanced Computing Center, whose 3,936-node Sun-based supercomputer will be managed by xVM Ops Center.
That surge in scalability stems from a redesign of xVM Ops Center's network architecture. Taking its cues from Really Simple Syndication (RSS), the Web feed format we all use to read frequently updated content like blogs and news, xVM Ops Center shuns the old model of a central management server regularly polling its charges. Instead, it adopts a subscription model in which agents take the initiative to report up to the mother ship, the satellite server, via distributed proxies.
“The agents talk to proxies, and the proxies talk to the satellite servers,” Teich said, not the other way around.
Sun's xVM Ops Center agents also mimic RSS in their use of XML over HTTP. This approach, Teich said, helps Ops Center "avoid having to punch holes all over your firewalls, which has always been a nightmare."
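To make the inversion concrete, here is a minimal Python sketch of the push topology Teich describes. All class and field names here are hypothetical illustrations, not Sun's actual code, and the in-process method calls stand in for the real XML-over-HTTP transport: agents push small XML status reports to a proxy, which forwards them to the satellite, so the satellite never has to poll anything.

```python
# Conceptual sketch of a push-based management topology (hypothetical names):
# agents report up through proxies to a central satellite server, rather than
# the satellite polling every managed node.

class Satellite:
    """Central management server; passively receives reports."""
    def __init__(self):
        self.reports = []

    def receive(self, xml_report):
        self.reports.append(xml_report)

class Proxy:
    """Aggregation point: agents talk to proxies, proxies talk to the satellite."""
    def __init__(self, satellite):
        self.satellite = satellite

    def forward(self, xml_report):
        self.satellite.receive(xml_report)

class Agent:
    """Runs on each managed OS instance and takes the initiative to report."""
    def __init__(self, node_id, proxy):
        self.node_id = node_id
        self.proxy = proxy

    def report(self, status):
        # RSS-style: a small XML document, pushed over HTTP in the real design.
        xml = "<status node='%s'>%s</status>" % (self.node_id, status)
        self.proxy.forward(xml)

satellite = Satellite()
proxy = Proxy(satellite)
for n in range(3):
    Agent("node-%d" % n, proxy).report("ok")
print(len(satellite.reports))  # all three reports arrived without any polling
```

Because connections always originate from the agent side, only outbound HTTP needs to traverse the firewall, which is the "no punching holes" property Teich is pointing at.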
But as impressive as this 5,000 number may be, you’ll have to wait until the second quarter of 2008 and version 2.0 for Sun xVM to actually be able to manage xVM, the Xen-based hypervisor. “Version 1.0 doesn’t include Xen management; that’s because we don’t have it yet,” Teich said. For now, xVM Ops Center is limited to managing physical machines running Linux or Solaris [Windows support is forthcoming, although no date has been specified]. For tasks within a virtual environment, “customers today would probably use VMware [VirtualCenter].” That’s OK, though, because for now, about 75% of Sun’s “many hundreds of customers” for N1SM and Sun Connection are still running in a physical environment.
Sun has released pricing for xVM Ops Center: $10,000 for the central satellite server and $100 to $350 per managed instance, or guest.
November 29, 2007 12:20 PM
Posted by: Alex Barrett
Server consolidation via virtualization can save an organization so much money that people sometimes forget about ways to stretch a virtualization budget further. Not so for Rick Vanover, a tip writer and systems administrator, who offers his thoughts on the most cost-effective servers, network cards, and host bus adapters (HBAs) to buy for a virtualization host.
Speaking of saving money, a local credit union avoided having to spend tens of thousands of dollars on extra disk space for its NetApp storage by running A-SIS data deduplication on its VMware Virtual Desktop Infrastructure (VDI) volumes. A-SIS comes "free" with NetApp NearStore arrays, although of course, NearStore isn't free.
Got VMs running on SAN storage? In another tip, Rick Vanover details a technique for retrieving a logical unit number (LUN) when the presence of VMware native drivers prevents you from using normal techniques. He also describes the esxcfg-mpath command that you can use from the service console to reveal information about HBAs, multipathing, LUN configurations and the like. Read this; your storage administrator will love you for it.
November 28, 2007 4:38 PM
Posted by: Joe Foran
Joseph Foran, Linux and virtualization, Virtual machine
Today marks a host-oric day, as the first virtual desktops are ready in the lab for my most forward-thinking users (and, as temporary machines, for anyone who happens to suffer a hardware failure). As my company is a mid-sized firm, taking this plunge is a bit "bleeding edge" for us, but it's too promising to pass up. The early test environment was pretty basic: a few desktops with souped-up memory, CentOS 4, VMware Server, and our XP build. First, a side note on CentOS: I love CentOS because it's almost 100% binary compatible with Red Hat Enterprise Linux. In fact, it's compiled from Red Hat's SRPMs, with the copyrighted materials (the logo, some artwork, etc.) removed. On the client side, ThinStation or any of the many other thin-client Linux distros meant to communicate via RDP will work just as well (perhaps better). The roots of this initiative lie in my wanting my XP desktop available from wherever I was: my MacBook Pro, my Freespire 2 desktop, or my Vista desktop. All have desktop virtualization on them, but since they don't all have the "same" products, mounting a share somewhere wasn't going to work, and performance might be a bit... underperforming.
The best route was to make it available via RDP. I also wanted to build virtual desktops for users. The result, to dredge up an old commercial, is that VMware got their peanut butter in my chocolate, or I got my chocolate in VMware's peanut butter. Either way, I liked the results. It was simple enough to do, and it performed well even under limited circumstances. Best of all, it's not complicated to manage. ESX and VirtualCenter more than did the job (though I think a Fortune 500 company would need enhanced management tools, if only for filtering and tracking users to desktops).
After that worked out well for me, I started trimming it back from the IT-centric build to a more common user-centric desktop build, taking temporary desktop replacement as a starting point. Security was the first big concern, with limiting complexity a close second. Thanks to Active Directory's Group Policy handling of profiles and folder redirection, there's really no perceivable difference between the user's original computer and the server-hosted virtual desktop. When their PC is fixed, they get it back, and we move on to the next broken-box situation.
The virtual desktop solution proved its value there, beating our 2X application-server thin clients (which fared well, but less well than the virtual desktops because of the difference in user experience between a Linux desktop running a full-screen browser and an XP PC). The next step is to see if we can make this permanent. So, a few IT-savvy early-adopter types are going to get some very old PCs with some very new tricks. I can't wait…
November 27, 2007 7:07 PM
Posted by: Alex Barrett
There are a lot of vendors talking about virtualization management, but their pitches can sound frightfully similar. One exception is ManageIQ, whose co-founder Joe Fitzgerald lays out some concrete examples of what makes managing virtual machines so complex — and so important.
Curious about Microsoft’s forthcoming Hyper-V, formerly Viridian? So is Anil Desai, our resident Microsoft virtualization expert, who weighs in with a discussion of the Microsoft Hyper-V architecture. Among the key differences he finds between it and VMware ESX is how drivers are handled: “With Hyper-V, drivers are installed within the guest OS, not within the hypervisor layer,” Desai writes. “This allows vendors and administrators to use drivers that were designed for the server’s physical hardware, rather than the virtualized hardware.”
Is it safe to serve up both Internet and intranet content from VMs on the same ESX host? That depends on whether you trust VMware’s networking stack, writes site expert Andrew Kutz. If not, you’re better off segregating them, he writes.
Also, for the VMware value-added resellers (VARs) among you, I just noticed an interesting “VMware FAQ for Resellers” guide on our sister site SearchSystemsChannel.com, which caters to the channel set. Among other things, learn about VMware’s incentives for resellers, the value of the VCP (VMware Certified Professional) certification, and how VARs can sell services on top of the embedded ESX 3i. Good stuff.
November 27, 2007 2:13 PM
Posted by: Joe Foran
Virtual machine, Virtualization platforms
VirtualBox, built by a German company called Innotek, is the underdog of the virtualization market. It receives very little press and holds very little market share, yet it's a fairly robust, if young, platform.
Like VMware Server and Parallels Desktop, VirtualBox isn't a type-1 hypervisor, but rather a type-2 that sits on an existing operating system (OS). I've been running it on my Mac and my PC for a bit and have tried out a few virtual machines, some migrations, etc. It shows excellent performance, stability, and usability, but does have a few caveats. Like most open source projects aimed at business users, VirtualBox has two sides: a purely open source side that's beloved by the community, and some proprietary additions geared toward enterprise customers that put bread on the table for Innotek's staff.
Like most virtualization platforms, VirtualBox supports USB devices, audio, and most of the other features typically expected of a virtualized computer. There's no OpenGL/DirectX support as of yet, but I imagine that's not a long stretch to implement now that Parallels has released the modified source code it used to accomplish the task, and VMware has put 3D acceleration into Fusion. Snapshotting is there, which of course is key in any virtualization platform. I don't see much on the management side, making it similar to where Xen and its ilk were not too long ago: the hypervisors were in place, but the management apps were still in development and came to market later in the game, once the initial splash of "hey, there's a new hypervisor in town" flattened out a bit. This isn't a big deal for me, since I'm using it in a very raw test environment and don't need much in the way of managing my own personal playground, but a real lab might want better management tools. Still, the basics are there, and the product compares favorably to VMware Server and Workstation. I'd even go so far as to say that it has a slight edge in out-of-the-box usability, since guest machines can act as RDP servers, meaning you can use an RDP client to remotely view the desktop regardless of what's installed on the guest (no need for rdesktop packages or other client-side tools) and without having VirtualBox installed on your local machine.
One neat item I liked in the Mac version: they mapped the Apple key to release the keyboard and mouse from input capture. OK, that's about as important as what color they made the icon, but it was a neat touch and shows that they really are playing well in the Mac space. Personally, I hated Fusion so much that I wiped it off my system after about a week, going back with each subsequent beta until I got fed up with it for the last time after general release. I love Parallels, but it's nice to have a hand in all of the pies, even if it's only half a finger deep or so (mmm… pumpkin). Most of my testing was done on a Vista host, so there may be caveats in the Mac beta I don't know about yet.
Documentation is first-rate. The user manual is very clearly organized and stated, with excellent screen captures and a walkthrough approach meant to get the job done. The technical documentation is likewise well organized, with an eye toward how the system is meant to work. I didn't read the whole user manual; my familiarity with other products was enough to get me through everything until it came time to install the VirtualBox Additions software (like VMware Tools and Parallels Tools, these provide time sync, keyboard, mouse, and video performance improvements). I read through, did the installs, and moved on to using my guests.
Using the software was easy. If you can navigate Parallels Desktop, or VMware's Fusion, Workstation, or Server, you'll have little problem using this product. One caveat I noted right away is well documented in the VirtualBox wiki: when I tried to open a VMware VMDK machine containing XP Pro with VMware Tools, made under Server 1 (not the beta), it repeatedly bombed out trying to boot because of hardware changes. On attempting the same thing with a Linux install (DSL), it booted and was fully functional. I made a few native guests and found that the compatibility page for VirtualBox is quite accurate: as expected, my Ubuntu 7.10 failed miserably because VirtualBox doesn't support PAE and Ubuntu uses it without checking for support. All my CentOS installs (4 and 5) worked nicely, as did Fedora Core 7 and Windows XP. I didn't try Vista or FC8, but that's more a matter of time than any concern about them working.
Overall, it gets a solid 8 pokers out of 10 as a competitor to VMware Workstation/Fusion and Parallels Desktop.
November 27, 2007 11:35 AM
Posted by: Alex Barrett
Here’s one that slipped through the cracks last week: Dell has signed on to resell Virtual Iron, following in the footsteps of Hewlett-Packard and IBM. Virtual Iron’s new CEO, Ed Walsh, has been beating the channel drum, so this should come as no surprise.
In a far-reaching tip, contributor Anne Skamarock opines on how to avoid virtual sprawl. Hint: It involves doing a proper inventory of your environment before undertaking consolidation.
If backup is your bag, Burton Group analyst Chris Wolf has the lowdown on the issues affecting VM backup: CPU, disk I/O, and network I/O bottlenecks. He also offers an overview of where backup options such as image-level backups, VMware Consolidated Backup (VCB), file-level backups, and continuous data protection fit into the data protection continuum.
November 27, 2007 10:34 AM
Posted by: Akutz
A while back I reported on what was a sticky issue for many people: VMware Server 1.0.4 did not work with the latest Linux kernels (2.6.23+) because the VMware Server memory module used the dumpable bit, which had been removed in 2.6.23+ in favor of the GPL v3 exported set_dumpable and get_dumpable functions. Because the VMware Server memory module is not GPL v3 compliant (it does not use the MODULE_LICENSE macro to declare itself such), either a kernel recompilation was required to redact the GPL v3 changes to the set and get dumpable functions, or a vmmon module recompilation was required so the module could lie about its license type. Unlike the writers' strike, a compromise has been reached.
The Linux kernel development team has not removed set and get dumpable's GPL v3 requirements, and VMware has not made the vmmon memory module GPL v3 compliant (which in turn would require VMware Server to be licensed under GPL v3). VMware did not even future-proof itself by creating a kernel-module shim licensed under the LGPL. VMware now simply accesses the dumpable bit directly with set_bit and clear_bit. Line 1663 of the vmmon source file driver.c begins with:
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 23) || defined(MMF_DUMPABLE)
/* Dump core, readable by user. */
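The difference between the two approaches is easier to see in a small sketch. This is Python for brevity and illustration only (the real code is kernel C, and the names below are illustrative, not the kernel's actual structures): the set_dumpable/get_dumpable accessors wrap the flag behind a function call, while the workaround pokes the underlying bit directly, sidestepping the GPL-restricted exports.

```python
# Illustrative sketch (NOT kernel code): a per-process flags word in which one
# bit marks whether the process may be core-dumped ("dumpable").

MMF_DUMPABLE = 0  # illustrative bit position within the flags word

class MMContext:
    """Stand-in for a process memory descriptor with a flags field."""
    def __init__(self):
        self.flags = 0

# Accessor route: analogous to the exported set_dumpable/get_dumpable helpers.
def set_dumpable(mm, value):
    if value:
        mm.flags |= (1 << MMF_DUMPABLE)
    else:
        mm.flags &= ~(1 << MMF_DUMPABLE)

def get_dumpable(mm):
    return bool(mm.flags & (1 << MMF_DUMPABLE))

# Direct route: analogous to what vmmon does now, manipulating the bit itself
# with set_bit/clear_bit-style operations instead of calling the accessors.
def direct_set_dumpable(mm):
    mm.flags |= (1 << MMF_DUMPABLE)   # analogous to set_bit()

def direct_clear_dumpable(mm):
    mm.flags &= ~(1 << MMF_DUMPABLE)  # analogous to clear_bit()

mm = MMContext()
set_dumpable(mm, True)
assert get_dumpable(mm)
direct_clear_dumpable(mm)
assert not get_dumpable(mm)  # both routes touch the same underlying bit
```

Both routes end up flipping the same bit; the only thing the direct route avoids is the license-restricted function call, which is exactly why it works as a workaround and exactly why it feels like a side-step rather than a resolution.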
While some may hail this change as a good thing, I do not. What happens next time, when there is no workaround? Both VMware and the Linux kernel development team have a chance to showcase that closed source and open source can work together: that closed-source companies are open to listening to the reasons for things like GPL v3, and that proponents of GPL v3 are not just zealots whose blind actions damage the usefulness of their software to end users.
I think it is great that VMware listened (whether or not it was to me) and fixed this issue. I just wish that the opportunity for two communities to come together could have been embraced instead of side-stepped.
Until next time.