The Virtualization Room

A SearchServerVirtualization.com and SearchVMware.com blog


March 7, 2008  11:31 AM

VMware user on Oracle VM: Hell no, we won’t go!



Posted by: Alex Barrett
Virtualization

Oracle doesn’t officially support its products running on VMware, but it will happily support you if you virtualize on its Xen variant, Oracle VM. But at least one large Oracle PeopleSoft customer with which I spoke recently refuses to play along and will maintain its VMware status quo.

“We looked at Oracle VM, but it’s where VMware was two or three years ago,” said the systems administrator, who asked that he and his organization not be named.

Not only did this system administrator find Oracle VM to be technically inferior to ESX Server, but he also didn’t want the burden of having a second virtualization environment to run and manage. “We’d rather not do that,” he said.

The other alternative — to switch from PeopleSoft to a competing product that’s supported on VMware — isn’t an option. “Our investment in PeopleSoft is too great,” he said, and “in the grand scheme of things, running it on dedicated hardware is a drop in the bucket.”

It’s a shame, he said, because in the past six months, his group has begun actively virtualizing not only “the low-hanging fruit,” but increasingly more production workloads. “This whole Oracle-hating-VMware thing has really put a crimp in our style,” he said. Meanwhile, the organization’s CIO has approached Oracle and told the vendor, “We’d like to [virtualize Oracle applications], but with terms that don’t involve unilateral demands that we use only your software,” the administrator reported. As yet, there’s been no word back from Oracle, but as far as he’s concerned, there’s no deal to be made.

“[Oracle] can either change their mind, or we’ll keep buying physical hardware. We’re not moving to Oracle VM.”

March 3, 2008  5:30 PM

OEM + old machine + P2V migration = Murphy’s law



Posted by: Joe Foran
Joseph Foran, P2V, VMware

OK, I had a foul-up in my last post about my physical-to-virtual (P2V) migration journey. I used Vizioncore’s vConverter in my file server P2V migration, and it didn’t work. I then mashed PlateSpin’s product name together with Vizioncore’s (i.e., “PlateSpin vConverter”) as if there were some merger from the great beyond, rather than doing the simple thing and actually editing my own posts for accuracy. It’s all been edited correctly now, but for full disclosure purposes, there was indeed a company name and product name mix-up.

Now, that said, I’ve used both products on other OEM boxes and they went just fine, so take it for what it is: a singular experience and the nature of blogging (and working with editors… that “no thanks to” line was not mine).

I seem to have a hard time with company names… I previously used incorrect capitalization for eGenera (it’s actually Egenera) some time back, and I often refer to Openfiler as OpenFiler.

Back to the tale… The domain controller from hell – a Windows 2000 server with OEM disk drivers, OEM RAID controller management tools, OEM disk management tools, and OEM network management tools. A machine nearly as old as my oldest child, one that shipped with Windows 2000 before there were service packs. It has had patches, service packs, driver roll-ups, OEM driver updates, and (probably) chocolate sauce slathered onto it in its lifetime.

It has also had so many applications added and removed that I think I could actually hear grinding from worn spots in the registry and creaking from the drive platters. Needless to say, this DC was spiraling down into the vortex at the end of its usefulness. That’s not to say it wasn’t a great machine for a long time, but with full disks and a gurgling CPU, the poor thing was about done. It still doesn’t beat the teenagers I decommissioned early in 2007 – a pair of Novell 4.11 boxes that were old enough to drive (well, with a learner’s permit anyway). Gotta love longevity.

First order of business: a little cleanup. Backup. Backup again to a second location. Remove IIS (it’s not used anymore). Remove an extraneous antivirus management console (a competing product is in use, this one’s just dead weight). Dump the temp files. Wipe out randomly saved setup files for applications like Acrobat Reader. Compress what I can. Defragment.

Finally, there’s enough free space to support the VMware Converter agent. Let’s try that and see how it goes (often, it’s the only tool I need). Failure. Hours of waiting gone, even though the conversion hit 100% and claimed success. Turns out there’s an invisible OEM partition sitting out there that the OEM tools don’t show, and said partition has hosed the boot device order on the new virtual machine (VM). What do I see? The Blue Screen of Death (BSOD), pointing me to INACCESSIBLE_BOOT_DEVICE.
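For those who haven’t fought this before, the trouble lives in boot.ini: NTLDR finds Windows through an ARC path that counts partitions in order, and a hidden OEM utility partition shifts that count. A sketch of what a Windows 2000 boot.ini looks like on a box like this (the rdisk and partition numbers on yours will differ):

[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(2)\WINNT
[operating systems]
multi(0)disk(0)rdisk(0)partition(2)\WINNT="Microsoft Windows 2000 Server" /fastdetect

Drop the OEM partition from the conversion and that partition(2) has to become partition(1), or NTLDR goes looking for Windows in a partition that no longer exists and you get exactly this BSOD.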

Not a huge deal, right? Just edit the boot.ini, right? No need to worry about that missing partition, right? Nope. Sure, I try to repair it by mounting the disk to a good VM and editing the ini file there. No luck. OK, let’s get rid of the OEM driver so we can see the partition. Done. Now let’s try again.

Failure. Are we sensing a pattern here? Same BSOD. Just like the first time, Converter goes 100% and the box BSODs on boot again. So, now that the disk management tools are no longer hiding the OEM partition, I edit boot.ini to get rid of the partition, make sure that the partition is unchecked in Converter, and try again. It succeeds!

Kind of. It’s slower than molasses in the Minnesota winter, that kind of winter where all you want to do is sit inside by the fire and let the good folks out at — sorry, for a minute there I was channeling my inner Garrison Keillor. I’m back. It’s drivers.

The OEM RAID drivers are still in there, but they are easy to strip. And it even boots up again and runs. There’s no network, though. The OEM NIC drivers get the strip next, but still no network (as expected). Reinstalling VMware Tools to replace the drivers doesn’t help. The next step is to shut down the VM, remove the NIC, boot again, and then add in a new NIC.
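An aside for anyone stripping drivers the same way: the old OEM NIC usually lingers on as a hidden, non-present device that hangs onto its IP configuration. You can expose it and uninstall it cleanly from a command prompt inside the VM; the environment variable below is the standard Windows switch for showing non-present devices in Device Manager:

set devmgr_show_nonpresent_devices=1
start devmgmt.msc

Once Device Manager opens, turn on View > Show hidden devices and the ghosted NIC shows up, ready to be uninstalled.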

Hosed. Now the machine boots up, but it won’t let me log on. The OS is toast, and I’m not whipping out the recovery CD.

Time to pull out another product. Vizioncore’s vConverter, an acquisition from Invirtus that’s stable, robust, and more feature-rich than VMware’s offering. Redo the P2V with this tool. Same problems with the boot screen.

And there it sits for a day, in limbo, while I spend some time on Google, TechTarget, and VMware’s websites.

Finally, it all comes together… AD is corrupt. Somewhere in all that stripping of drivers, I’ve whacked Active Directory. OK, let’s fix that: Start over. Whack the VM. Re-backup. Run DCPROMO and demote the server so that it’s no longer a domain controller.

Time to P2V – I used vConverter again, but edited the VM before boot so that there’s no NIC. I boot it, remove all the OEM drivers, then add in the NIC. It boots. It runs. It flies. No need to robocopy. All the apps are in place and running. It just hums along happily and serves its purpose.

Murphy’s law: Whatever can go wrong, will.


March 3, 2008  1:22 PM

Chris Wolf: VMsafe is cool because …



Posted by: Bridget Botelho
Chris Wolf, Virtual appliances, Virtual machine, Virtualization, Virtualization security, VMworld

VMsafe, the new security technology created by VMware Inc., gives virtualization users the ability to monitor and secure virtual machine resources in ways never before possible.

After I wrote a short article on VMsafe last week, I received feedback from Burton Group analyst Chris Wolf, who was at the VMworld conference in Cannes, France. His comments weren’t included in the story, but they put things in perspective, so here they are:

“VMsafe is a very important technology in my opinion, as it changes how virtual environments are secured. Today, security appliance virtual machines (VMs) typically monitor other VMs by connecting to them over a virtual switch. The result is virtual network monitoring that resembles physical network monitoring,” Wolf said. “The current model is fine until VMs begin to dynamically move across a virtual infrastructure. Dependent security appliances always have to follow their associated VMs as a VM is relocated. This can complicate the live migration and restart processes.”

“With VMsafe, you would typically configure a security appliance per physical host, such as an [intrusion prevention system] virtual appliance. The security appliance vendor would leverage policies to determine what to monitor (such as by VM, virtual switch, subnet, etc). With VMsafe, the appliance can connect to any virtual switch by accessing it through the hypervisor; you no longer have to configure a special promiscuous port group for network monitoring,” Wolf said. “With security configured at the host level with no direct attachment to virtual networks, VMs can move freely about the virtual and physical infrastructure and still have their related security policies enforced.”


Wolf continued, “VMsafe also provides the framework for offloading many security activities to special-purpose security VMs, including roles such as antivirus monitoring. As we move to an automated or dynamic data center, having special-purpose security appliances that are capable of enforcing security policies at the hypervisor level can ease security management in an environment that will be constantly changing. Sure, it’s possible to enforce security policies with special purpose network-based appliances, but such configurations would be substantially more complex to deploy and manage than comparable solutions based on VMsafe technology.”


February 29, 2008  12:54 PM

Buying servers for virtual machines? Think blades



Posted by: Rick Vanover
Blade servers, Networking, Rick Vanover, Servers, Virtualization, Virtualization management, Virtualization strategies

You may be considering new blade servers for your virtual host environments, and you are not alone. Gone are the days of the perception that blade servers have less horsepower than their general-purpose counterparts. I recently attended a local virtualization user group meeting, and we talked at length about some new blade server products. Here are some takeaways on what virtualization administrators need to know about the new blade products:

Processor and memory inventory

The newest blade servers can run four sockets of quad-core processors in one blade, and one model in particular that was favorably discussed is the HP ProLiant BL680c series. This is great for virtualization implementations with an incredibly small footprint. With the BL680c, each blade can house up to 128 GB of RAM. ESX 3 and Microsoft Windows Server 2008 are supported operating systems for virtualization implementations on this series of blades. One important note on the HP blade series is the Virtual Connect product for network connectivity. Fellow TechTarget contributor Scott Lowe covers this well in a recent tip.

You have to love the small footprint

With the momentum of virtualization migrations not slowing, the small footprint is very welcome in crowded data centers. The BL680c can have 80 hosts of the spec above in one 40U rack with four enclosures! Using general-purpose servers would take at least double the space to get the same number of virtual hosts.

Limitations?

Given the very small footprint of the blade server, there are some limitations to connectivity. While the BL680c excels in most areas, it is limited to only three expansion interfaces for additional networking and Fibre Channel connectivity. Most implementations, however, will be able to meet their connectivity requirements from the available options.

A smaller issue may be power sources. Blade servers generally take different power feeds than standard general-purpose servers. The trade-off is that in feeding a blade server from an L15-30P outlet, you may not need a power distribution unit (PDU); the PDU may take the same L15-30P interface. So do some planning on your power sources and their availability to make sure the correct feeds are in place.

The verdict

The current generation of blade servers is a serious contender for virtualization hosts. The small footprint only makes the case more compelling. Now that blades offer performance specs comparable to their traditional server counterparts, we should consider them for the host hardware environment.


February 29, 2008  12:51 PM

Getting at internal disk space with Linux distro Openfiler



Posted by: Joe Foran
Joseph Foran, Storage, VMware

Based on the volume of media coverage, the explosive growth of iSCSI storage area network (SAN) companies and my own experiences, I think it’s safe to say that NFS and iSCSI SANs have begun to prove their worth as valid storage platforms with VMware’s Infrastructure 3, and both are also a lot cheaper than 2 Gbps and 4 Gbps Fibre Channel SANs. This has made them very interesting to those in the lower tier of purchasing power – namely, the many small and medium-sized companies that can’t afford the initial acquisition costs (the bulk of my clientele, by the way). That brings in the great equalizer – open source NFS and software iSCSI solutions – though given a choice I prefer iSCSI, because it can support VMFS partitions.

One of these OSS iSCSI packages is called IET, the iSCSI Enterprise Target, and it’s rolled into Openfiler, a Linux distribution focused on being a storage platform. Before we all rush out and use Openfiler for all our needs, though, a caveat: IET itself has had some problems with supporting VMotion, so it’s earned a bit of a bad rap. In fact, there are often wails of protesters screaming about how IET is unsupported by the good folks at VMware because of the way two SCSI commands are structured and how that can hose a VMotion move. Luckily, there’s some good news on that front.

Truth be told, the good news there is only half-good. IET’s release 0.4.5 addresses those problems. The better news is that this update is rolled into the latest patches for Openfiler (which has a self-update option, just like any other distro). There’s a nice link in a blog here. The reason it’s only half-good… IET is still not supported, patch or no patch. Still, I’ve been using it in the lab, and it works just fine for me. I’ve done 1, 2, and 10 VMotion moves by manipulating DRS, both manually and automatically, and haven’t hit a snag yet. That said, it isn’t supported.

Hitting an Openfiler target with VI3.5 is easy. First, install Openfiler, ESX and VirtualCenter and configure them. I will skip most of the details of basic setup, as it’s superfluous. Openfiler’s install guide is here, but the Petri IT Knowledgebase has a specific article that covers it soup-to-nuts here. I’ll post another below that’s a little shorter and to the point. You’ll need to create the virtual machine on the local disk of the ESX host (otherwise, what’s the point?) and configure it to take up just enough, but not too much, of that precious internal disk space. In my example, I built a virtual machine with two virtual disks (you can do it with one, if you want). The trip from an unallocated lump o’ disk to iSCSI-enabled volumes went like this:

Part 1: Disk Management

Set up your physical storage from the get-go. During setup, I chose NOT to do anything to the second drive – which is to say, I didn’t let Disk Druid touch it at all. When Openfiler boots and you have your basic settings done (things like NTP, the admin password, etc.), go to the Volumes button and click on Physical Storage Mgmt. From there, create the extended and logical partitions you will need to get your iSCSI volumes up. The steps, once Openfiler is up, look a little like this:

The basic setup. System, and nothing else:

Creating the Extended Partition:

Creating the Logical (Physical) Partition:

The Partitions on the Second Drive:

During creation, I ran into a problem on both machines I did this build on – in each case I had created the logical partition, but Openfiler didn’t see it. I had to go delete it and create it again.

Part 2: Creating the LUN

First up, enable the server to act as an iSCSI Target. This is done from the Services tab, under Enable/Disable. In this example, I’m only using iSCSI, but if you wanted to enable NFS, this is where you would do so. Mine looked like this:

Then, go back to Volumes, under Volume Group Mgmt, and create the iSCSI volume group. In this example, I’m using the whole disk, but you don’t have to.

The before and after looks like this:

 

Next, head over to Create New Volume (one tab left). Creating a volume is, like the rest of the process, relatively straightforward. I made one volume, comprising the entirety of my one volume group. Having chosen the name “san” (how creative of me) earlier in the process, I kept with the same simplicity for the volume I created.

 

After that, you are forwarded to the List of Existing Volumes tab. Congratulations, you have an iSCSI SAN. Now, about configuring that SAN…

Part 3: Security

There are two ways to lock down your SAN. One is via the network: allowing only certain hosts or networks access to the SAN. From the List of Existing Volumes tab, select the Edit link under the Properties column. You will then be presented with a screen called Volume’s Properties, giving you a number of options for securing the LUN. I won’t dwell too much on the network access restrictions, as my screen captures will be different from other networks. The long and short of it is that in the General tab there is a sub-tab called Local Networks. You can create individual hosts and subnets there, and then allow or deny access to these networks on the Volume’s Properties tab. The other way, using CHAP authentication to lock down who can connect to your target, is a much smarter plan.

Part 4: Getting VMware to See Your LUN

This is extremely straightforward and is no different from any other iSCSI implementation. The only thing I’ll mention here is something that is left out of many, many documents: open the ports for iSCSI on your ESX hosts! In 3.5, that’s handled from the host’s firewall settings (the Security Profile in the VI Client), and it looks like this:
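If you prefer the service console to the VI Client for that step, esxcfg-firewall does the same job. A sketch, assuming the stock ESX 3.5 name for the software iSCSI client service:

# Allow the software iSCSI client (outbound TCP 3260) through the ESX host firewall
esxcfg-firewall -e swISCSIClient
# Review what the firewall currently allows
esxcfg-firewall -q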

 

There are a lot of considerations that need to be addressed about what you keep there, chiefly around reliability and performance. For example, if you put virtual machines in an iSCSI SAN that lives on a physical host, then when that physical host dies, it takes the SAN with it, along with all those virtual machines. Another consideration is networking. The long and short of it is this – if you have a whole lot of unused disk space on your ESX hosts, you can use Openfiler to put stuff there. As to what stuff, that’s entirely up to you. I personally prefer ISO files and other junk that doesn’t need to go anywhere or do anything, and isn’t going to hurt me when something fails. Of course, you could also use NFS, which Openfiler supports.

The end result: Openfiler gets a solid 8 pokers from me. I give it a ten for being a soup-to-nuts, full-featured distro that I can use at almost any client site for almost any purpose. If this were a storage column, it would remain a ten. It drops back because they haven’t yet earned a nod from VMware, which is something I would have expected them to pursue, considering their growth and media attention lately. What makes it frustrating is that Xinit Systems, the company behind the project, makes storage appliances. Since they’ve been producing Openfiler since 2003, I think it’s about time they got on the move with this!


February 27, 2008  10:41 AM

VMware’s “rookie” Seminar too lightweight



Posted by: Bridget Botelho
Application virtualization, Desktop virtualization, VDI, Virtual machine, Virtualization, Virtualization management, Virtualization strategies, VMware, XenSource

With virtualization adoption teetering on the edge of mainstream, I am sure it is difficult for VMware to find the balance between what to explain about the technology and what is considered common knowledge.

Judging by a show of hands, a lot of what the 40 or so IT admins who attended VMware Inc.’s Virtualization Seminar Series at the Hilton Hotel in Providence, R.I., Tuesday morning heard was the latter. The seminar was a low-level look at VMware technologies on the market and those coming down the pipeline. It also had some case studies supporting virtualization, and a snore-inducing spiel from its sponsor, data networking company Brocade.

The case study that seemed to be of most interest to attendees was about the technology team at IntelliRisk Management Corporation (IRMC), a company with call centers and clients all over the world, deploying VMware’s Virtual Desktop Infrastructure (VDI).

Using VMware’s VDI, IRMC was able to centralize its global data center operations by giving its employees access to applications, operating systems, etc., via virtual desktops.

One unimpressed system administrator at the seminar asked, “And this is different from Citrix how?”

Peter Marcotte, VMware’s systems engineering manager, said the good old server-based computing (SBC) environments from Citrix Systems Inc., where each user connects to a remote desktop running on a Microsoft terminal server and/or a Citrix Presentation Server, don’t offer the kind of flexibility VDI does. He also said applications don’t run as well in SBC environments as they do in isolated virtual desktop machines.

Independent technology analyst and blogger Brian Madden wrote an analysis of VDI and SBC that weighs their pros and cons and explains when to use each.

Madden wrote that VDI offers better performance from the users’ standpoint, doesn’t have application compatibility issues, and offers better security than traditional SBC.

In the case of IRMC, the company deployed virtual desktops and can add a new PC image in less than 10 minutes. All of the virtual desktops can be managed from one location through VirtualCenter. After deploying VDI, IRMC saw an annual return on investment (ROI) of 73%, with a payback period of 1.37 years, the case study shows.

On the flip side, Madden wrote that SBC has the maturity advantage — it’s been around for a decade — and it is easy to manage.

“With SBC you can run 50 to 75 desktop sessions on a single terminal server or Citrix Presentation Server, and that server has one instance of Windows to manage. When you go VDI, your 50 to 75 users have 50 to 75 copies of Windows XP that you need to configure, manage, patch, clean, update, and disinfect. Bummer!” Madden blogged.

Of course, VMware’s Marcotte didn’t mention that Citrix announced its own VDI product, Citrix XenDesktop, back in October 2007 to compete with VMware’s VDI offering.

The seminar was helpful to some people, I am sure — there were questions here and there — but overall I am a bit annoyed because, by definition, seminars are supposed to teach us something, and I’m not sure this one accomplished that.

Additionally, unlike most attendees, who either left or used the time to catch up on email via BlackBerry, I sat through Brocade’s commercial for its Advanced Fabric Services, expecting a “Live Customer Testimonial” to follow as scheduled, but that part of the program never happened.

Hearing an actual user talk about their experiences with virtualization is far more helpful to other users than seeing vendor slide presentations. Users could have asked about snags during deployment and positive results, and gotten some good advice.

Hopefully other VMware seminars include the Live Customer Testimonials; it would make the time more worthwhile for attendees.


February 25, 2008  1:33 PM

VMware underwhelmed by Workstation security flaw



Posted by: Alex Barrett
Virtualization

Security is the VMware topic du jour, what with VMware releasing several security patches for ESX 3.0.2, and with Boston-based Core Security Technologies revealing a vulnerability in VMware Workstation, ACE and Player that exploits the use of shared folders.

On the latter front, it appears that the shared folders vulnerability hasn’t sent shivers down VMware’s spine. According to Core Security CTO Ivan Arce, VMware has known about this vulnerability since last fall.

“We contacted VMware about this and reported it to them on Oct. 16,” Arce said, referring to October 2007. Since then, VMware has told Core that it is working to release a fix for the flaw, which was originally scheduled for December. But VMware has delayed the fix multiple times and has now scheduled it for later this month.

However, Arce said that Core has received no confirmation that the coming release will actually fix the problem.

Rather than wait any longer for VMware to resolve the problem, Core Security decided to go ahead and alert users to the vulnerability and its simple fix, Arce said.

But even though it’s been more than four months since Core first alerted VMware to this vulnerability, VMware is by no means the most irresponsible independent software vendor Arce has worked with. “I think that they have been responsive, but they could have been more responsive,” Arce said. “There’s definitely room for improvement” in terms of “improving processes and getting things done faster. But there are companies that are far worse than VMware.”

Well, that’s something.


February 25, 2008  12:27 PM

Review: VMware’s Storage/SAN Compatibility Guide for ESX Server 3.5 and ESX Server 3i



Posted by: Joe Foran
Joseph Foran, VMware

VMware documentation has traditionally been great – it provides useful how-to and compatibility information, and the plain truth in quick-to-read documents with accurate charts, and the information is kept current. Usually the documentation is even written in such a way that you can get a couple of chuckles.

That said, I was sorely disappointed with the content in the latest Storage/SAN Compatibility Guide for 3.5. I’ll give credit where credit is due, however, and VMware is clearly telling the truth when they say the following right in the introduction:

You will note that this guide is sparsely populated at present. The reason for this is that the storage arrays require re-certification for ESX Server 3.5 and ESX Server 3i, and while many re-certifications are in process or planned, relatively few have been completed to date.

That’s the absolute truth. I suppose the reason I’m so disappointed with this document has little to do with the document itself. Having upgraded to 3.5 in my own shop, replacing 2.5, VMware Server 1.0, and even a GSX box, I’m a bit miffed in general about some of the smaller bugs, documentation omissions, and oddities of the product, but to see such a sparse storage compatibility document is a big disappointment.

Thankfully, my department isn’t off-list, but I have private-practice customers who are. Overall, the document gets six pokers. It’s an easy read, it’s informative and it’s truthful (good for around two pokers each).

What it lacks is content, and that, I have a feeling, was due more to pressure to get a big product release out during the IPO period and first year of trading to keep Wall Street happy. At the risk of getting onto my soapbox for a minute, the fact that VMware has to admit to sparsity in its documentation is brutal – it shows the potential beginnings of a corporate shift away from product focus and toward market focus. While some may argue that market focus is good for business, innovation and the economy, I’m not one of them – I’m all in favor of doing away with quarterly reporting, focusing on the long-term value of public companies and letting the day traders and short sellers eat their own cooking.

I sincerely hope that I’m wrong, and that there were other reasons for putting out a product without first recertifying the MOST important hardware involved in the underlying infrastructure. I’ve been to ex-parent and current majority stakeholder EMC’s lab – in my F500 days I got the grand tour because we bought some very expensive SAN gear, installation and support services. It’s huge. It has everything in it, from mainframes to micros, blades to whiteboxes. I know VMware’s own labs are no small affair either (though I’m still waiting on an invite to go there – which might make a good article if it ever happens).

I just don’t see why such a successful, independently minded, historically thorough company would, simply put, goof it up by not dedicating enough resources to recertifying products.

So six pokers and a soapbox admonition it is.

Can I get a little help getting down from my high horse, please? It’s a bit drafty up here.


February 25, 2008  12:18 PM

P2V migration success, thanks to Robocopy



Posted by: Joe Foran
Joseph Foran, P2V, Virtualization, VMware

…and no thanks to VMware Converter Enterprise or Vizioncore’s vConverter. 

The situation: a very successful physical-to-virtual (P2V) migration, with only two servers to go. Both original equipment manufacturer (OEM) boxes.

One is a Windows 2003 Server file/print/VMware Server box. One is a Windows 2000 domain controller with accounting and payroll software. The owners have been very reluctant to migrate from stable boxes, which have run reliably, backed up successfully, and (until recently) also performed decently.

However, disk space is at an all-time low and prompting alerts from the systems management console so often that the server has been put on the exclusion list, complete with a note taped to the ops board. There’s also a plan to upgrade to Exchange 2007 and thus get out of Windows 2000 Native mode in Active Directory.

The players: me, VMware VI3.5, VMware Converter Enterprise (of course), VMware tech support, and Vizioncore’s vConverter.

The end result: a less-than-stellar migration with both VMware Converter and Vizioncore’s vConverter. The file server went the easiest. After the first (Converter) P2V attempt failed and vConverter came up empty, I took a hint from the ITIL playbook and implemented a workaround (check the many writings on ITIL and change management for meatier details than I care to post here). That workaround? Robocopy, IP changes, and host name changes.

Robocopy

Robocopy is your friend. It is your dear, dear friend that loves you. It’s a tool of similar functionality to the *nix rsync command, in that it can mirror a directory structure, survive the occasional network interruption, etc. It has fundamental differences, but it comes from the same root – an improved version of the copy command that exists in every operating system ever designed. My favorite part? /SEC, which copies NTFS permissions from host to host (normally, these are destroyed by being replaced by inherited permissions at the target). So, it’s just a simple batch script. That’s right… batch. That old beast of burden, come back to ride high once more.

@ECHO OFF
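REM Mirror the shared tree from the old file server to the new VM.
REM /COPYALL   copy data, attributes, timestamps and NTFS security (owner, ACLs, auditing)
REM /TBD       wait for the target share name to be defined if it isn't up yet
REM /ZB        use restartable mode, falling back to backup mode on access-denied errors
REM /E         include subdirectories, even empty ones
REM /SEC       copy files with their NTFS security (equivalent to /COPY:DATS)
REM /R:20 /W:1 retry busy files 20 times, waiting one second between tries
REM /LOG       write the whole run to FSMIGRATE.log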

SETLOCAL

SET _rcsource=\\SOURCEHOST\d$\shared

SET _rctarget=\\TARGETHOST\d$\shared

SET _rcaction=/COPYALL /TBD /ZB /E /SEC

SET _rcopts=/R:20 /W:1 /LOG:FSMIGRATE.log

ROBOCOPY.EXE %_rcsource% %_rctarget% %_rcaction% %_rcopts%

The end result is a complete copy of all directories from the source to the target that can survive network outages, copies NTFS security, retries in-use files 20 times with a one-second delay, and logs it all.

I’ve long since lost the source for that batch, but I’ve used it on countless file servers. After that it was very simple to swap IP addresses and host names, remove the old shares on the source server, and share out the appropriate directories on the target server. World’s easiest P2V not done via P2V tool – mostly because a file server is simple.
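If you’d rather script the IP half of that swap than click through the network control panel, netsh handles it on Windows 2000 and 2003; the interface name and addresses below are placeholders:

REM Placeholder interface name and addresses; substitute the real ones
netsh interface ip set address name="Local Area Connection" source=static addr=192.168.1.20 mask=255.255.255.0 gateway=192.168.1.1 gwmetric=1
netsh interface ip set dns name="Local Area Connection" source=static addr=192.168.1.5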

Next post, the Windows 2000 Server domain controller, a.k.a. my private OEM hell.


February 25, 2008  11:39 AM

VMware Server guru discusses ESX, Microsoft Hyper-V



Posted by: Ryan Shopp
Microsoft Hyper-V, Virtualization, Virtualization platforms, VMware

Author and programmer Eric Hammersley launched into using VMware Server when he realized virtualization’s space- and cost-saving potential. At the time, he was installing multiple server and switch racks on a ship for the U.S. Navy. His book Professional VMware Server: Programmer to Programmer discusses installing and configuring VMware Server, tips for creating base images, image library organization best practices, integrating and leveraging VMware for your environment and more.

In this interview, Hammersley shares his thoughts on Microsoft Hyper-V and VMware ESX, explains basic virtualization architectures, and compares the three major VMware offerings: VMware Server, VMware ESX and VMware Workstation.

SearchServerVirtualization.com: Your book Professional VMware, Programmer to Programmer makes it clear that you’re a fan of VMware. Do you think Microsoft Hyper-V will affect VMware adoption?

Eric Hammersley: No, not yet. Virtualization is a disruptive technology (to use a rather brainy marketing term), and VMware leads the movement, though Microsoft has two advantages in this battle. First is the observation of trends in the technology. They have the benefit of VMware having been in the virtualization market for many years. No doubt, Microsoft has been following VMware’s technological advances and has been analyzing this information for years. If you dig deep into Hyper-V you’ll see that its virtualization layer and approach are different from VMware’s. Did Microsoft discover a better way? Only time will tell.

The second advantage Microsoft has is sheer market dominance in the products that a very large percentage of VMware customers want to virtualize. If you’ve ever tried to virtualize an Exchange Server, you’ll know what a treat it is. As a virtual server, if Exchange has any kind of load it doesn’t work well enough for production use. Microsoft has a clear and rather major advantage in that regard. It can develop its server platforms and enterprise applications to utilize Hyper-V in a way that no one else can.

Do I want Hyper-V to succeed? Yes, mainly because I’m out here on the front lines virtualizing domain controllers, Exchange Servers (or trying to), SQL and many other Microsoft products. Hyper-V done right will make my job easier; done wrong and I’ll be spending time explaining to people with checkbooks why their virtualization initiative didn’t work well.

We could also see some interesting developments with Citrix’s acquisition of XenSource. We should all keep an eye on them. In the end, we all benefit from a diverse and rich selection of products.

You’ve mentioned previously that you didn’t work with VMware ESX because you prefer hosted architecture. Can you explain hosted or operating system virtualization versus the hypervisor or bare-metal approach?

Hammersley: Hosted architecture can be seen in products such as VMware Workstation, Server, and Microsoft Virtual PC, to name a few. A hosted architecture relies on an underlying operating system to provide driver support for the machine’s hardware. This reduces the virtualization software’s job in a sense because it doesn’t need to know how to talk with X or Y piece of hardware, just how to interact with the hardware via the host operating system – whether it’s Windows, Linux, or something else.

A hypervisor is a different beast, more of a bare-metal approach, if you will. You can see a hypervisor approach in VMware ESX Server. While with a hosted architecture you rely on the host operating system to provide driver and hardware support, hypervisor architecture performs this on its own. The drawback of this is obviously hardware support. A typical hypervisor package would provide its own kernel and drivers for select pieces of supported hardware, removing the need for a host operating system to provide the driver support. This offers a big advantage of increased resources for your virtual machines, but it comes at an increased cost.

Has the release of ESX 3.5 prompted you to reconsider VMware’s ESX product suite?

Hammersley: My problems with ESX in the past have always been with the limited amount of approved hardware, along with the high cost. As an IT professional who has spent a large portion of his career in smaller business, the price has always kept me away. On the bright side, at least VMware doesn’t charge per core.

A few weeks ago, I was brought onboard for a new project that will implement ESX Server for thousands of users in many key data centers across the country. The specifics are not important, but my love for VMware has grown even more. ESX Server is a great product — VMware Server on steroids, if you will. It’s sound, stable, and flexible. The scalability and plug-in modules such as VMotion, High Availability and others make it a perfect addition to the bigger business data center looking to modernize, streamline its processes, and save some rack space and cash once you recoup the initial cost. I’m looking forward to really digging into ESX and plan on sharing my experiences with it, especially with the API.

What are the differences between VMware ESX, VMware Server and VMware Workstation?

Hammersley: Well, they all three squeeze out the virtualized goodness, but in slightly different ways. Right off, ESX Server is what’s called a bare-metal solution. While most applications today run atop the operating system, be it Windows, Linux, or something else, ESX Server is everything rolled into one. ESX provides the operating system and virtualization technology right out of the box. Since ESX provides the operating system it offers a very tight and robust virtualization layer between the hardware and the provided operating system.

In addition, the ESX operating system has a very small footprint, only about 32 MB, which leaves a greater amount of system resources for your virtual infrastructure to utilize. It is, however, pretty pricey; so ESX is more of the Cadillac, if you will, of the product line.

Next would be VMware Server, which is what my title is based on. VMware Server is really a child born from a previous product called GSX Server. Both ESX and GSX provided ultimately the same thing: server-based virtual machines served out via client applications installed on the desktop. GSX’s reduced price and greater flexibility in terms of hardware made it a popular substitute for ESX.

Now, in 2006, VMware announced the demise of GSX Server in favor of its new, and free I might add, VMware Server product. Why they made the shift away from GSX in favor of a new free product is a question for them; however, the move caused an extraordinary amount of excitement. With the new VMware Server we’d get the same server-based virtual machines, still served out via the VMware client application to the desktop, still with the APIs and automation capability, but with a reduced price tag. What more could you want?

Finally there’s VMware Workstation. The workstation flavor is just that: a workstation-only virtualization product. For many years it lacked the APIs required for automation, the ability to run virtual machines as services in the background, and the ability to perform centralized management. This made it rather crippled in terms of functionality. Many of the features I mentioned above, however, have made it into the newest VMware Workstation, including API automation, a long-awaited feature. Honestly, I spend more of my time in VMware Workstation than any other product due to its snapshot support and ease of use. This doesn’t mean it’s a better product than VMware Server; it just fits my needs a little better.

Do you prefer VMware on Linux or VMware on Windows?

Hammersley: It depends. I prefer VMware Server on Linux when I really need to squeeze out every piece of performance I can get. Linux, on average, has a greater amount of system resources available because the operating system footprint is considerably smaller than Windows. The catch, however, is that the configuration and management can be a bit of a trick sometimes, and if you have a mostly Windows shop you’ll have trouble getting staff buy-in.

VMware Server on Windows offers a simple install and configuration. The management of VMware Server on Windows is also pretty easy. The downfall, and this can be huge, is system resources. The memory and CPU available to virtual machines are drastically lower on Windows than on Linux. This means fewer virtual machines can run on a Windows install compared to Linux on similar hardware.

In the end, it’s really about what fits into your environment.

