The Virtualization Room


March 10, 2008  12:42 AM

VMware ESX 3i on HP ProLiant servers: Ballyhoo or big idea?

Keith Harrell Profile: SAS70ExPERT

IT pros are split on the potential impact of VMware ESX Server 3i and on the importance of new bells and whistles, such as Hewlett-Packard Co.’s plug-and-play deployment capabilities and support from other major hardware vendors.

Last week, VMware and HP announced that at the end of March, VMware ESX 3i will be packaged on 10 models of HP ProLiant servers. So do embedded hypervisors like ESX 3i represent the next stage of the virtualization evolution?

Of course VMware seems to think so, saying the integrated offering will provide “greater speed and simplicity for customers new to virtualization, as well as increased capacity expansion for customers who already use VMware’s data center virtualization and management suite, VMware Infrastructure 3 (VI3).”

Will this optimism translate into increased virtualization in the enterprise? VMware and virtualization expert Andrew Kutz, for one, argues that making plug-and-play deployment exclusive to 3i on HP is artificial:

Plug-and-play is another no-win for 3i. The plug-and-play functionality of 3i is as artificial as its simplified management. VMware asserts that independent hardware vendors (IHVs) will be able to ship servers with 3i directly to the customer, where the customer can simply plug the box into the network and storage, boot it, and presto: installation complete. That’s fantastic! But I can order a server from an IHV with ESX 3 pre-installed on it today. The difference is that VMware has added this data center plug-and-play functionality exclusively to its 3i product. There is no reason that it cannot work with 3.0 or 3.5 as well. This is just another example of a company trying to promote a new product with features that do not have to be exclusive; they are exclusive only because someone decided they should be.

While Kutz believes that 3i is a significant step up, he says on SearchVMware.com that “ESX 3i is simply an evolution, not a revolution.”

The biggest change between ESX 3i and its predecessors (ESX 3.0, 3.5) is that with 3i, agents cannot be installed on a host. Erik Josowitz, the vice president of product strategy at Austin, Texas-based Surgient Inc., a virtual lab management company, says that for independent software vendors, “VMware’s roadmap for virtualization management runs through VirtualCenter.” Putting 3i on solid state “sends a clear signal that VMware doesn’t want people installing on the host anymore,” according to Josowitz. He notes that “from a security standpoint, it’s a good thing,” since it locks down the partition that used to be available under the host, thus keeping out any applications that might weaken a system. But now, organizations that want to work with blended images will need to architect their tech support to talk through VirtualCenter rather than a host agent.

While the solid-state product promises plug-and-play deployment of VMware’s thin hypervisor on HP’s ProLiant servers, some analysts are still saying, “Don’t believe the hype about 3i.” Citing problems with monitoring and scaling, the ToutVirtual blog complains that 3i is “a complete disappointment” at general release. Combine this “weak infrastructure design issue” with the fact that “you can not get any realistic information out of the hardware state of a 3i server,” the blog argues, and VMware ESX 3i is “dead on arrival.”

But SearchServerVirtualization.com expert Rick Vanover begs to differ. Vanover holds ProLiant servers in high esteem, and if ESX 3i is good enough for HP, then it’s good enough for him:

I’ve worked on many different types of servers, and I think the ProLiant servers are superior. The big reason is that the ProLiant BL blade series do not have a competitor to the Insight Control Environment. Further, the Integrated Lights-Out 2 Standard Blade Edition (or iLO) is a better management interface compared to its competition. If VMware takes HP as a partner (or at least as their first partner) for an ESX 3i supported platform, I would choose it in a heartbeat.

But does it really matter that 3i is overhyped? Major vendors now put 3i inside their servers. This reduces the need for major evaluation and opens the door for IT shops to choose servers with “3i inside” and use it when and how they want.

What do you think? Leave us a comment below or send us your thoughts.

March 7, 2008  11:31 AM

VMware user on Oracle VM: Hell no, we won’t go!

Alex Barrett Profile: Alex Barrett

Abbie wouldn't have switched to Oracle VM either

Oracle doesn’t officially support its products running on VMware, but it will happily support you if you virtualize on its Xen variant, Oracle VM. But at least one large Oracle PeopleSoft customer I spoke with recently refuses to play along and will maintain its VMware status quo.

“We looked at Oracle VM, but it’s where VMware was two or three years ago,” said the systems administrator, who asked that neither he nor his organization be named.

Not only did this system administrator find Oracle VM technically inferior to ESX Server, but he also didn’t want the burden of a second virtualization environment to run and manage. “We’d rather not do that,” he said.

The other alternative — to switch from PeopleSoft to a competing product that’s supported on VMware — isn’t an option. “Our investment in PeopleSoft is too great,” he said, and “in the grand scheme of things, running it on dedicated hardware is a drop in the bucket.”

It’s a shame, he said, because in the past six months, his group has begun actively virtualizing not only “the low-hanging fruit” but increasingly more production workloads. “This whole Oracle-hating-VMware thing has really put a crimp in our style,” he said. Meanwhile, the organization’s CIO has approached Oracle and told the vendor, “We’d like to [virtualize Oracle applications], but with terms that don’t involve unilateral demands that we use only your software,” the administrator reported. So far there’s been no word back from Oracle, but as far as he’s concerned, there’s no deal to be made.

“[Oracle] can either change their mind, or we’ll keep buying physical hardware. We’re not moving to Oracle VM.”


March 3, 2008  5:30 PM

OEM + old machine + P2V migration = Murphy’s law

Joseph Foran Profile: Joe Foran

OK, I had a foul-up in my last post about my physical-to-virtual (P2V) migration journey. I used Vizioncore’s vConverter in my file server P2V migration, and it didn’t work. I then ran PlateSpin’s name together with Vizioncore’s product name (i.e., “PlateSpin vConverter”) as if there had been some merger from the great beyond, rather than doing the simple thing and actually editing my own posts for accuracy. It’s all been corrected now, but for full disclosure purposes, there was indeed a company-name and product-name mix-up.

Now, that said, I’ve used both products on other OEM boxes and they went just fine, so take it for what it is: a singular experience and the nature of blogging (and working with editors… that “no thanks to” line was not mine).

I seem to have a hard time with company names… I previously used incorrect capitalization for eGenera (it’s actually Egenera) some time back, and I often refer to Openfiler as OpenFiler.

Back to the tale… the domain controller from hell: a Windows 2000 server with OEM disk drivers, OEM RAID controller management tools, OEM disk management tools, and OEM network management tools. A machine nearly as old as my oldest child, one that shipped with Windows 2000 before there were service packs. It has had patches, service packs, driver roll-ups, OEM driver updates, and (probably) chocolate sauce slathered onto it in its lifetime.

It has also had so many applications added and removed that I think I could actually hear grinding from worn spots in the registry and creaking from the drive platters. Needless to say, this DC was spiraling down the vortex at the end of its usefulness. That’s not to say it wasn’t a great machine for a long time, but with full disks and a gurgling CPU, the poor thing was about done. It still doesn’t beat the teenagers I decommissioned early in 2007 – a pair of Novell NetWare 4.11 boxes that were old enough to drive (well, with a learner’s permit, anyway). Gotta love longevity.

First order of business: a little cleanup. Back up. Back up again, to a second location. Remove IIS (it’s not used anymore). Remove an extraneous antivirus management console (a competing product is in use; this one’s just dead weight). Dump the temp files. Wipe out randomly saved setup files for applications like Acrobat Reader. Compress what I can. Defragment.

Finally, there’s enough free space to support the VMware Converter agent. Let’s try that and see how it goes (often, it’s the only tool I need). Failure. Hours of waiting, gone, even though the conversion hit 100% and claimed success. Turns out there’s an invisible OEM partition sitting out there that the OEM tools don’t show, and said partition has hosed the boot device order on the new virtual machine (VM). What do I see? The Blue Screen of Death (BSOD), pointing me to INACCESSIBLE_BOOT_DEVICE.

Not a huge deal, right? Just edit the boot.ini, right? No need to worry about that missing partition, right? Nope. Sure, I try to repair it by mounting the disk to a good VM and going in to edit the ini file. No luck. OK, let’s get rid of the driver. Now we can see the partition. Done. Let’s try again.

Failure. Are we sensing a pattern here? Same BSOD. Just like the first time, Converter goes to 100% and the box BSODs on boot again. So, now that the disk management tools are no longer hiding the OEM partition, I edit boot.ini to get rid of the stale reference, make sure the partition is unchecked in Converter, and try again. It succeeds!
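(The boot.ini surgery, for anyone following along at home, is just repointing the ARC path. A minimal sketch, with paths assumed rather than copied from my box: with the hidden OEM partition included, Windows sat on the second partition and boot.ini pointed at partition(2); with the OEM partition excluded, the system partition becomes the first one, so the entries have to read partition(1):

[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINNT
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Microsoft Windows 2000 Server" /fastdetect

Get that number wrong in either direction and NTLDR hands you INACCESSIBLE_BOOT_DEVICE every time.)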

Kind of. It’s slower than molasses in a Minnesota winter, the kind of winter where all you want to do is sit inside by the fire and let the good folks out at — sorry, for a minute there I was channeling my inner Garrison Keillor. I’m back. It’s drivers.

The OEM RAID drivers are still in there, but they’re easy to strip. And the machine even boots up again and runs. There’s no network, though. The OEM NIC drivers get stripped next, but still no network (as expected). Reinstalling the VMware Tools to replace the drivers doesn’t help. Next step: shut down the VM, remove the NIC, boot again, and then add in a new NIC.
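(A standard bit of Windows folklore that helps at this stage – not P2V-specific, and hedged as such: after a conversion, the old OEM hardware lingers as hidden “ghost” devices that Device Manager won’t show by default. From a command prompt on the VM:

set devmgr_show_nonpresent_devices=1
start devmgmt.msc

Then turn on View > Show hidden devices and uninstall the grayed-out OEM NIC and storage devices before adding their virtual replacements.)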

Hosed. Now the machine boots up, but it won’t let me log on. The OS is toast, and I’m not whipping out the recovery CD.

Time to pull out another product: Vizioncore’s vConverter, an acquisition from Invirtus that’s stable, robust, and more feature-rich than VMware’s offering. Redo the P2V with this tool. Same problems at the boot screen.

And there it sits for a day, in limbo, while I spend some time on Google, TechTarget, and VMware’s websites.

Finally, it all comes together… AD is corrupt. Somewhere in all that stripping of drivers, I’ve whacked Active Directory. OK, let’s fix that: start over. Whack the VM. Back up again. Run DCPROMO and demote the server so that it’s no longer a domain controller.

Time to P2V – I used vConverter again, but edited the VM before boot so that there’s no NIC. I boot it, remove all the OEM drivers, then add in the NIC. It boots. It runs. It flies. No need to robocopy. All the apps are in place and running. It just hums along happily and serves its purpose.

Murphy’s law: Whatever can go wrong, will.


March 3, 2008  1:22 PM

Chris Wolf: VMsafe is cool because …

Bridget Botelho Profile: Bridget Botelho

VMsafe, the new security technology created by VMware Inc., gives virtualization users the ability to monitor and secure virtual machine resources in ways never before possible.

After I wrote a short article on VMsafe last week, I received feedback from Burton Group analyst Chris Wolf, who was at the VMworld conference in Cannes, France. His comments weren’t included in the story, but they put things in perspective, so here they are:

“VMsafe is a very important technology in my opinion, as it changes how virtual environments are secured. Today, security appliance virtual machines (VMs) typically monitor other VMs by connecting to them over a virtual switch. The result is virtual network monitoring that resembles physical network monitoring,” Wolf said. “The current model is fine until VMs begin to dynamically move across a virtual infrastructure. Dependent security appliances always have to follow their associated VMs as a VM is relocated. This can complicate the live migration and restart processes.”

“With VMsafe, you would typically configure a security appliance per physical host, such as an [intrusion prevention system] virtual appliance. The security appliance vendor would leverage policies to determine what to monitor (such as by VM, virtual switch, subnet, etc). With VMsafe, the appliance can connect to any virtual switch by accessing it through the hypervisor; you no longer have to configure a special promiscuous port group for network monitoring,” Wolf said. “With security configured at the host level with no direct attachment to virtual networks, VMs can move freely about the virtual and physical infrastructure and still have their related security policies enforced.”


Wolf continued, “VMsafe also provides the framework for offloading many security activities to special-purpose security VMs, including roles such as antivirus monitoring. As we move to an automated or dynamic data center, having special-purpose security appliances that are capable of enforcing security policies at the hypervisor level can ease security management in an environment that will be constantly changing. Sure, it’s possible to enforce security policies with special purpose network-based appliances, but such configurations would be substantially more complex to deploy and manage than comparable solutions based on VMsafe technology.”


February 29, 2008  12:54 PM

Buying servers for virtual machines? Think blades

Rick Vanover Profile: Rick Vanover

You may be considering new blade servers for your virtual host environments, and you are not alone. Gone are the days when blade servers were perceived as having less horsepower than their general-purpose counterparts. I recently attended a local virtualization user group meeting, and we talked at length about some new blade server products. Here are some takeaways on what virtualization administrators need to know about them:

Processor and memory inventory

The newest blade servers can run four sockets of quad-core processors in one blade, and one model that was favorably discussed is the HP ProLiant BL680c series. This is great for virtualization implementations with an incredibly small footprint. With the BL680c, each blade can house up to 128 GB of RAM. ESX 3 and Microsoft Windows Server 2008 are supported operating systems for virtualization implementations on this series of blades. One important note on the HP blade series is the Virtual Connect product for network connectivity. Fellow TechTarget contributor Scott Lowe covers this well in a recent tip.
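Some back-of-the-envelope math (my assumptions, not HP’s numbers): at 2 GB of RAM per VM, 128 GB leaves room for roughly 60 VMs on a single blade after setting aside a few gigabytes for the hypervisor – at which point the 16 cores, not the memory, are likely the practical ceiling.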

You have to love the small footprint

With the momentum of virtualization migrations not slowing, the small footprint is very welcome in crowded data centers. The BL680c can deliver 80 hosts of the spec above in one 40U rack with four enclosures! Using general-purpose servers would take at least double the space to house the same number of virtual hosts.
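The space claim is easy to sanity-check: built from 1U general-purpose servers, 80 hosts would consume 80U, or two full racks before you add a single switch; with the more typical 2U virtualization host, you’re at four racks against the blades’ one.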

Limitations?

Given the very small footprint of the blade server, there are some limitations on connectivity. While the BL680c excels in most areas, it is limited to only three expansion interfaces for additional networking and Fibre Channel connectivity. Most implementations, however, will be able to meet their connectivity requirements from the available options.

A smaller issue may be power sources. Blade servers generally take different power sources than standard general-purpose servers. The trade-off is that in feeding a blade server from an L15-30P outlet, you may not need a power distribution unit (PDU); the PDU may take the same L15-30P interface, so do some planning to make sure the correct power sources are available.
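As a rough sketch of what one such feed buys you (assuming 208V three-phase service and the standard 80% continuous-load derating – check with your electrician, not me): a NEMA L15-30 circuit delivers about √3 × 208 V × 24 A, or roughly 8.6 kW, for the enclosure it feeds.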

The verdict

The current generation of blade servers is a serious contender for virtualization hosts, and the small footprint only makes the case more compelling. Now that blades offer performance specs comparable to those of their traditional server counterparts, we should consider them for the host hardware environment.


February 29, 2008  12:51 PM

Getting at internal disk space with Linux distro Openfiler

Joseph Foran Profile: Joe Foran

Based on the volume of media coverage, the explosive growth of iSCSI storage area network (SAN) companies, and my own experiences, I think it’s safe to say that NFS and iSCSI SANs have begun to prove their worth as valid storage platforms for VMware’s Infrastructure 3, and both are also a lot cheaper than 2 Gb and 4 Gb Fibre Channel SANs. This has made them very interesting to those in the lower tier of purchasing power – namely, the many small and medium-sized companies that can’t afford the initial acquisition costs (the bulk of my clientele, by the way). That brings in the great equalizer: open source NFS and software iSCSI solutions. Given a choice, I prefer iSCSI, because it can support VMFS partitions.

One of these OSS iSCSI packages is IET, the iSCSI Enterprise Target, which is rolled into Openfiler, a Linux distribution focused on being a storage platform. Before we all rush out and use Openfiler for all our needs, though, a caveat: IET has had some problems supporting VMotion, so it’s earned a bit of a bad rap. In fact, there are often wails from protesters screaming about how IET is unsupported by the good folks at VMware because of the way two SCSI commands are structured and how that can hose a VMotion move. Luckily, there’s some good news on that front.

Truth be told, the good news is only half-good. IET’s release 0.4.5 addresses those problems, and the better news is that this update is rolled into the latest patches for Openfiler (which has a self-update option, just like any other distro). There’s a nice link in a blog here. The reason it’s only half-good: IET is still not supported, patch or no patch. Still, I’ve been using it in the lab, and it works just fine for me. I’ve done one, two, and ten VMotion moves at a time by manipulating DRS, both manually and automatically, and haven’t hit a snag yet. That said, it isn’t supported.

Hitting an Openfiler target with VI3.5 is easy. First, install Openfiler, ESX, and VirtualCenter and configure them. I will skip most of the details of the basic setup, as it’s superfluous: Openfiler’s install guide is here, and the Petri IT Knowledgebase has a specific article that covers it soup-to-nuts here. I’ll post another below that’s a little shorter and to the point. You’ll need to create the virtual machine on the local disk of the ESX host (otherwise, what’s the point?) and configure it to take up just enough, but not too much, of that precious internal disk space. In my example, I built a virtual machine with two virtual disks (you can do it with one, if you want). The trip from an unallocated lump o’ disk to iSCSI-enabled volumes went like this:

Part One: Disk Management

Set up your physical storage from the get-go. During setup, I chose NOT to do anything to the second drive – which is to say, I didn’t let Disk Druid touch it at all. When Openfiler boots and you have your basic settings done (things like NTP and the admin password), go to the Volumes button and click on Physical Storage Mgmt. From there, create the extended and logical partitions you will need to get your iSCSI volumes up. The steps, once Openfiler is up, look a little like this:

The basic setup. System, and nothing else:

Creating the Extended Partition:

Creating the Logical (Physical) Partition:

The Partitions on the Second Drive:

During creation, I ran into a problem on both machines I did this build on – in each case, I had created the logical partition, but Openfiler didn’t see it. I had to delete it and create it again.

Part 2: Creating the LUN

First up, enable the server to act as an iSCSI target. This is done from the Services tab, under Enable/Disable. In this example I’m only using iSCSI, but if you wanted to enable NFS, this is where you would do so. Mine looked like this:
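The screenshot is just the Services tab with the iSCSI target flipped to Enabled. If you prefer the console to the web GUI, the equivalent on Openfiler’s underlying Linux should look roughly like this – the init-script name is my assumption, so verify it against /etc/init.d on your install:

service iscsi-target start
chkconfig iscsi-target on

The first line starts the IET daemon now; the second makes it come back after a reboot.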

Then, go back to Volumes, under Volume Group Mgmt, and create the iSCSI volume group. In this example, I’m using the whole disk, but you don’t have to.

The before and after looks like this:

 

Next, head over to Create New Volume (one tab to the left). Creating a volume is, like the rest of the process, relatively straightforward. I made one volume comprising the entirety of my one volume group. Having chosen the name “san” (how creative of me) earlier in the process, I kept with the same simplicity for the volume I created.

 

After that, you are forwarded to the List of Existing Volumes tab. Congratulations, you have an iSCSI SAN. Now, about configuring that SAN…

Part 3: Security

There are two ways to lock down your SAN. One is via the network, choosing to allow only certain hosts or networks access to the SAN. From the List of Existing Volumes tab, select the Edit link under the Properties column. You will then be presented with a screen called Volume’s Properties, giving you a number of options for securing the LUN. I won’t dwell too much on the network access restrictions, as my screen captures will be different from other networks’. The long and short of it is that in the General tab there is a sub-tab called Local Networks; you can create individual hosts and subnets there, and then allow or deny them access on the Volume’s Properties tab. The other way – using CHAP to lock down who can connect to your target – is a much smarter plan.

Part 4: Getting VMware to See Your LUN

This is extremely straightforward and no different from any other iSCSI implementation. The only thing I’ll mention here is something that is left out of many, many documents: open the ports for iSCSI on your ESX hosts! In 3.5, it looks like this:
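The screen in question is the firewall configuration under the host’s Security Profile. If you’d rather script it from the service console, it should look roughly like this – the software iSCSI adapter name is an assumption on my part (check the Storage Adapters screen for yours):

esxcfg-firewall -e swISCSIClient
esxcfg-swiscsi -e
esxcfg-rescan vmhba32

The first line opens the firewall for the software iSCSI client, the second enables the software iSCSI initiator, and the third rescans the adapter so the new Openfiler target shows up.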

 

There are a lot of considerations to address about what you keep there, chiefly around reliability and performance. For example, if you put virtual machines in an iSCSI SAN that lives on the physical host, then when the physical host dies, it takes the SAN with it, along with all those virtual machines. Another consideration is networking. The long and short of it is this: if you have a whole lot of unused disk space on your ESX hosts, you can use Openfiler to put stuff there. As to what stuff, that’s entirely up to you. I personally prefer ISO files and other junk that doesn’t need to go anywhere or do anything, and isn’t going to hurt me when something fails. Of course, you could also use NFS, which Openfiler supports.

The end result: Openfiler gets a solid eight pokers from me. I’d give it a ten for being a soup-to-nuts, full-featured distro that I can use at almost any client site for almost any purpose – if this were a storage column, it would remain a ten. It drops back because the project hasn’t yet earned a nod from VMware, which is something I would have expected it to pursue, considering its growth and media attention lately. What makes it frustrating is that Xinit Systems, the company behind the project, makes storage appliances. Since they’ve been producing Openfiler since 2003, I think it’s about time they got on the move with this!


February 27, 2008  10:41 AM

VMware’s “rookie” seminar too lightweight

Bridget Botelho Profile: Bridget Botelho

With virtualization adoption teetering on the edge of mainstream, I am sure it is difficult for VMware to find the balance between what to explain about the technology and what to treat as common knowledge.

Judging by a show of hands, a lot of what the 40 or so IT admins who attended VMware Inc.’s Virtualization Seminar Series at the Hilton Hotel in Providence, R.I., on Tuesday morning heard was the latter. The seminar was a low-level look at VMware technologies on the market and those coming down the pipeline. It also had some case studies supporting virtualization and a snore-inducing spiel from the event’s sponsor, data networking company Brocade.

The case study that seemed to be of most interest to attendees was about the technology team at IntelliRisk Management Corporation (IRMC), a company with call centers and clients all over the world, deploying VMware’s Virtual Desktop Infrastructure (VDI).

Using VMware’s VDI, IRMC was able to centralize its global data center operations by giving its employees access to applications, operating systems and so on via virtual desktops.

One unimpressed system administrator at the seminar asked, “And this is different from Citrix how?”

Peter Marcotte, VMware’s systems engineering manager, said the good old server-based computing (SBC) environments from Citrix Systems Inc., where each user connects to a remote desktop running on a Microsoft terminal server and/or a Citrix Presentation Server, don’t offer the kind of flexibility VDI does. He also said applications don’t run as well in SBC environments as they do in isolated virtual desktop machines.

Independent technology analyst and blogger Brian Madden wrote an analysis of VDI and SBC that weighs their pros and cons and when to use each.

Madden wrote that VDI offers better performance from the user’s standpoint, doesn’t have application compatibility issues, and offers better security than traditional SBC.

In the case of IRMC, the company deployed virtual desktops and can now add a new PC image in less than 10 minutes, and all of the virtual desktops can be managed from one location through VirtualCenter. After deploying VDI, IRMC saw an annual return on investment (ROI) of 73%, with a payback period of 1.37 years, the case study shows.
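(Those two figures hang together, for what it’s worth: a payback period is roughly the inverse of annual ROI, and 1/0.73 ≈ 1.37 years.)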

On the flip side, Madden wrote that SBC has the maturity advantage — it’s been around for a decade — and is easy to manage.

“With SBC you can run 50 to 75 desktop sessions on a single terminal server or Citrix Presentation Server, and that server has one instance of Windows to manage. When you go VDI, your 50 to 75 users have 50 to 75 copies of Windows XP that you need to configure, manage, patch, clean, update, and disinfect. Bummer!” Madden blogged.

Of course, VMware’s Marcotte didn’t mention that Citrix announced its own VDI product, Citrix XenDesktop, back in October 2007 to compete with VMware’s VDI offering.

The seminar was helpful to some people, I’m sure – there were questions here and there – but overall I am a bit annoyed, because by definition seminars are supposed to teach us something, and I’m not sure this one accomplished that.

Additionally, unlike most attendees, who either left or used the time to catch up on email via BlackBerry, I sat through Brocade’s commercial for its Advanced Fabric Services expecting a “Live Customer Testimonial” to follow, as scheduled. That part of the program never happened.

Hearing an actual user talk about his or her experiences with virtualization is far more helpful to other users than watching vendor slide presentations. Attendees could have asked about snags during deployment and positive results, and gotten some good advice.

Hopefully, other VMware seminars will include the Live Customer Testimonials; it would make the time more worthwhile for attendees.


February 25, 2008  1:33 PM

VMware underwhelmed by Workstation security flaw

Alex Barrett Profile: Alex Barrett

Security is the VMware topic du jour, what with VMware releasing several security patches for ESX 3.0.2, and with Boston-based Core Security Technologies revealing a vulnerability in VMware Workstation, ACE and Player that exploits the use of shared folders.

On the latter front, it appears that the shared folders vulnerability hasn’t sent shivers down VMware’s spine. According to Core Security CTO Ivan Arce, VMware has known about this vulnerability since last fall.

“We contacted VMware about this and reported it to them on Oct. 16,” Arce said, referring to 2007. Since then, VMware has told Core that it was working to release a fix for the flaw, originally scheduled for December. But VMware has delayed the fix multiple times and now has it scheduled for later this month.

However, Arce said that Core has received no confirmation that the coming release will actually fix the problem.

Rather than wait any longer for VMware to resolve the problem, Core Security decided to go ahead and alert users to the vulnerability and its simple fix, Arce said.

But even though it’s been over four months since Core first alerted VMware to this vulnerability, VMware is by no means the most irresponsible independent software vendor Arce has worked with. “I think that they have been responsive, but they could have been more responsive,” Arce said. “There’s definitely room for improvement” in terms of “improving processes and getting things done faster. But there are companies that are far worse than VMware.”

Well, that’s something.


February 25, 2008  12:27 PM

Review: VMware’s Storage/SAN Compatibility Guide for ESX Server 3.5 and ESX Server 3i

Joseph Foran Profile: Joe Foran

VMware documentation has traditionally been great. It provides useful how-to and compatibility information, delivers the plain truth in quick-to-read documents with accurate charts, and is kept current. Usually the documentation is even written in such a way that you can get a couple of chuckles out of it.

That said, I was sorely disappointed with the content of the latest Storage/SAN Compatibility Guide for 3.5. I’ll give credit where credit is due, however, and VMware is clearly telling the truth when it says the following right in the introduction:

You will note that this guide is sparsely populated at present. The reason for this is that the storage arrays require re-certification for ESX Server 3.5 and ESX Server 3i, and while many re-certifications are in process or planned, relatively few have been completed to date.

That’s the absolute truth. I suppose the reason I’m so disappointed with this document has little to do with the document itself. Having upgraded to 3.5 in my own shop, replacing ESX 2.5, VMware Server 1.0, and even a GSX box, I’m a bit miffed in general about some of the product’s smaller bugs, documentation omissions, and oddities, but seeing such a sparse storage compatibility document is a big disappointment.

Thankfully, my department isn’t off-list, but I have private-practice customers who are. Overall, the document gets six pokers: it’s an easy read, it’s informative, and it’s truthful (good for around two pokers each).

What it lacks is content, and that, I have a feeling, is due more to pressure to get a big product release out during the IPO period and first year of trading to keep Wall Street happy. At the risk of getting onto my soapbox for a minute: the fact that VMware has to admit to sparsity in its documentation is brutal — it shows the potential beginnings of a corporate shift away from product focus and onto market focus. While some may argue that market focus is good for business, innovation and the economy, I’m not one of them — I’m all in favor of doing away with quarterly reporting, focusing on the long-term value of public companies, and letting the day traders and short sellers eat their own cooking.

I sincerely hope that I’m wrong, and that there were other reasons for putting out a product without first recertifying the MOST important hardware involved in the underlying infrastructure. I’ve been to ex-parent and current majority stakeholder EMC’s lab — in my F500 days I got the grand tour because we bought some very expensive SAN gear, installation and support services. It’s huge. It has everything in it, from mainframes to micros, blades to whiteboxes. I know VMware’s own labs are no small affair either (though I’m still waiting on an invite to go there — which might make a good article if it ever happens).

I just don’t see why such a successful, independently minded, historically thorough company would, simply put, goof it up by not dedicating enough resources to recertifying products.

So six pokers and a soapbox admonition it is.

Can I get a little help getting down from my high horse, please? It’s a bit drafty up here.


February 25, 2008  12:18 PM

P2V migration success, thanks to Robocopy

Joseph Foran Profile: Joe Foran

…and no thanks to VMware Converter Enterprise or Vizioncore’s vConverter. 

The situation: a very successful physical-to-virtual (P2V) migration, with only two servers to go. Both are original equipment manufacturer (OEM) boxes.

One is a Windows 2003 file/print/VMware Server box; one is a Windows 2000 domain controller with accounting and payroll software. The owners have been very reluctant to migrate from stable boxes that have run reliably, backed up successfully, and (until recently) performed decently.

However, disk space is at an all-time low, prompting alerts from the systems management console so often that the box has been put on the exclusion list, complete with a note taped to the ops board. There’s also a plan to upgrade to Exchange 2007, and thus to get out of Windows 2000 Native mode in Active Directory.

The players: me, VMware VI3.5, VMware Converter Enterprise (of course), VMware tech support, and Vizioncore’s vConverter.

The end result: a less-than-stellar migration with both VMware Converter and Vizioncore’s vConverter. The file server went the easiest: after the first (Converter) P2V attempt failed and vConverter came up empty, I took a hint from the ITIL playbook and implemented a workaround (check the many writings on ITIL and change management for more meaty details than I care to post here). That workaround? Robocopy, IP changes, and host name changes.

Robocopy

Robocopy is your friend. It is your dear, dear friend that loves you. It’s a tool of similar functionality to the *nix rsync command, in that it can mirror a directory structure, survive the occasional network interruption, etc. It has fundamental differences, but it comes from the same root: an improved version of the copy command that exists in every operating system ever designed. My favorite part? /SEC, which copies NTFS permissions from host to host (normally these are lost, replaced by inherited permissions at the target). So, it’s just a simple batch script. That’s right… batch. That old beast of burden, come back to ride high once more.

@ECHO OFF
SETLOCAL
REM Source and target: the same share path on each host, via the admin share
SET _rcsource=\\SOURCEHOST\d$\shared
SET _rctarget=\\TARGETHOST\d$\shared
REM /COPYALL = copy all file info (data, attributes, timestamps, security, owner, auditing)
REM /TBD = wait for share names to be defined; /ZB = restartable mode, falling back to backup mode
REM /E = include subdirectories, even empty ones (/SEC is already implied by /COPYALL, but it doesn't hurt)
SET _rcaction=/COPYALL /TBD /ZB /E /SEC
REM Retry in-use files 20 times, one second apart, and log the whole run
SET _rcopts=/R:20 /W:1 /LOG:FSMIGRATE.log
ROBOCOPY.EXE %_rcsource% %_rctarget% %_rcaction% %_rcopts%

The end result is a complete copy of all directories from source to target that survives network outages, copies NTFS security, retries in-use files 20 times with a one-second delay, and logs it all.

I’ve long since lost the source for that batch file, but I’ve used it on countless file servers. After that, it was very simple to swap IP addresses and host names, remove the old shares on the source server, and share out the appropriate directories on the target server. The world’s easiest P2V not done via a P2V tool – mostly because a file server is simple.
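The IP swap itself is scriptable too, which helps keep the cutover window short. On Windows 2003 it looks something like this – the connection name and addresses are placeholders, not the real ones:

netsh interface ip set address "Local Area Connection" static 192.168.1.25 255.255.255.0 192.168.1.1 1
netsh interface ip set dns "Local Area Connection" static 192.168.1.10

Rename the host, reboot, and the target server steps into the old box’s shoes.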

Next post, the Windows 2000 Server domain controller, a.k.a. my private OEM hell.

