The Virtualization Room


February 25, 2008  1:33 PM

VMware underwhelmed by Workstation security flaw

Alex Barrett

Security is the VMware topic du jour, what with VMware releasing several security patches for ESX 3.0.2, and with Boston-based Core Security Technologies revealing a vulnerability in VMware Workstation, ACE and Player that exploits the use of shared folders.

On the latter front, it appears that the shared folders vulnerability hasn’t sent shivers down VMware’s spine. According to Core Security CTO Ivan Arce, VMware has known about this vulnerability since last fall.

“We contacted VMware about this and reported it to them on Oct. 16,” Arce said of the 2007 disclosure. Since then, VMware has told Core that it is working on a fix for the flaw, which was originally scheduled for December. But VMware has delayed the fix multiple times and has now scheduled it for later this month.

However, Arce said that Core has received no confirmation that the coming release will actually fix the problem.

Rather than wait any longer for VMware to resolve the problem, Core Security decided to go ahead and alert users to the vulnerability and its simple fix, Arce said.

But even though it’s been more than four months since Core first alerted VMware to this vulnerability, Arce said VMware is by no means the most irresponsible independent software vendor he has worked with. “I think that they have been responsive, but they could have been more responsive,” Arce said. “There’s definitely room for improvement” in terms of “improving processes and getting things done faster. But there are companies that are far worse than VMware.”

Well, that’s something.

February 25, 2008  12:27 PM

Review: VMware’s Storage/SAN Compatibility Guide for ESX Server 3.5 and ESX Server 3i

Joseph Foran

VMware documentation has traditionally been great: useful how-to and compatibility information, the plain truth, quick-to-read documents with accurate charts, and content that is kept current. Usually the documentation is even written in such a way that you can get a couple of chuckles out of it.

That said, I was sorely disappointed with the content in the latest Storage/SAN Compatibility Guide for 3.5. I’ll give credit where credit is due, however, and VMware is clearly telling the truth when they say the following right in the introduction:

You will note that this guide is sparsely populated at present. The reason for this is that the storage arrays require re-certification for ESX Server 3.5 and ESX Server 3i, and while many re-certifications are in process or planned, relatively few have been completed to date.

That’s the absolute truth. I suppose the reason I’m so disappointed with this document has little to do with the document itself. Having upgraded to 3.5 in my own shop, replacing 2.5, VMware Server 1.0, and even a GSX box, I’m a bit miffed in general about some of the smaller bugs, documentation omissions, and oddities of the product, but to see such a sparse storage compatibility document is a big disappointment.

Thankfully, my department isn’t off-list, but I have private-practice customers who are. Overall, the document gets six pokers. It’s an easy read, it’s informative and it’s truthful (good for around two pokers each).

What it lacks is content, and that, I have a feeling, is due more to pressure to get a big product release out during the IPO period and first year of trading to keep Wall Street happy than to anything else. At the risk of getting onto my soapbox for a minute, the fact that VMware has to admit to sparsity in its documentation is brutal — it shows the potential beginnings of a corporate shift away from product focus and onto market focus. While some may argue that market focus is good for business, innovation and the economy, I’m not one of them — I’m all in favor of doing away with quarterly reporting, focusing on the long-term value of public companies and letting the day traders and short sellers eat their own cooking.

I sincerely hope that I’m wrong, and that there were other reasons for putting out a product without first recertifying the MOST important hardware involved in the underlying infrastructure. I’ve been to ex-parent and current majority stakeholder EMC’s lab — in my F500 days I got the grand tour because we bought some very expensive SAN gear, installation and support services. It’s huge. It has everything in it, from mainframes to micros, blades to whiteboxes. I know VMware’s own labs are no small affair either (though I’m still waiting on an invite to go there — which might make a good article if it ever happens).

I just don’t see why such a successful, independently minded, historically thorough company would, simply put, goof it up by not dedicating enough resources to recertifying products.

So six pokers and a soapbox admonition it is.

Can I get a little help getting down from my high horse, please? It’s a bit drafty up here.


February 25, 2008  12:18 PM

P2V migration success, thanks to Robocopy

Joseph Foran

…and no thanks to VMware Converter Enterprise or Vizioncore’s vConverter.

The situation: a very successful physical-to-virtual (P2V) migration, with only two servers to go. Both original equipment manufacturer (OEM) boxes.

One is a Windows 2003 Server file/print/VMware Server box. One is a Windows 2000 domain controller with accounting and payroll software. The owners have been very reluctant to migrate from stable boxes, which have run reliably, backed up successfully, and (until recently) performed decently.

However, disk space is at an all-time low and is prompting alerts from the systems management console so often that it’s been put on the exclusion list, complete with a note taped to the ops board. There’s also a plan to upgrade to Exchange 2007 and thus get out of Windows 2000 Native mode in Active Directory.

The players: me, VMware VI3.5, VMware Converter Enterprise (of course), VMware tech support, and Vizioncore’s vConverter.

The end result: a less-than-stellar migration with both VMware Converter and Vizioncore’s vConverter. The file server went the easiest. After the first (Converter) P2V attempt failed and vConverter came up empty, I took a hint from the ITIL playbook and implemented a workaround (check the many writings on ITIL and Change Management for more meaty details than I care to post here). That workaround? Robocopy, IP changes, and host name changes.

Robocopy

Robocopy is your friend. It is your dear, dear friend that loves you. It’s a tool of similar functionality to the *nix rsync command, in that it can mirror a directory structure, survive the occasional network interruption, etc. It has fundamental differences, but it comes from the same root – an improved version of the copy command that exists in every operating system ever designed. My favorite part? /SEC, which copies NTFS permissions from host to host (normally, these are destroyed by being replaced by inherited permissions at the target). So, it’s just a simple batch script. That’s right… batch. That old beast of burden, come back to ride high once more.

@ECHO OFF
SETLOCAL
REM UNC paths to the share being migrated (admin shares on source and target)
SET _rcsource=\\SOURCEHOST\d$\shared
SET _rctarget=\\TARGETHOST\d$\shared
REM /COPYALL copies data, attributes, timestamps and NTFS security (plus owner
REM and auditing info), /E includes subdirectories, /ZB uses restartable mode,
REM /TBD waits for share names to be defined
SET _rcaction=/COPYALL /TBD /ZB /E /SEC
REM Retry in-use files 20 times with a one-second wait, and log everything
SET _rcopts=/R:20 /W:1 /LOG:FSMIGRATE.log
ROBOCOPY.EXE %_rcsource% %_rctarget% %_rcaction% %_rcopts%

The end result is a complete copy of all directories from the source to the target that can survive network outages, copies NTFS security, retries in-use files 20 times with a one-second delay, and logs it all.
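One variation worth knowing (same variables as above, and strictly my own habit rather than part of the original script): if the source is still serving users during the first pass, a final delta run with /MIR during the cutover window copies only what has changed since then and purges anything that was deleted on the source in the meantime.

REM Hypothetical final cutover pass: /MIR mirrors the tree, so unchanged files
REM are skipped and files deleted on the source are removed from the target
ROBOCOPY.EXE %_rcsource% %_rctarget% /MIR /COPYALL /ZB /R:20 /W:1 /LOG+:FSMIGRATE.log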

I’ve long since lost the source for that batch, but I’ve used it on countless file servers. After that it was very simple to swap IP addresses and host names, remove the old shares on the source server, and share out the appropriate directories on the target server. World’s easiest P2V not done via P2V tool – mostly because a file server is simple.
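For the curious, the cutover itself was nothing exotic. A rough outline of the commands involved is below; the adapter name, addresses, host names and share path are placeholders, and netdom comes from the Windows Support Tools rather than the base OS, so treat this as a sketch of the steps rather than the exact script.

REM On the source server: remove the old share, then rename the box (or just
REM power it down) so its computer name and IP address are free for the target
NET SHARE shared /DELETE
NETDOM renamecomputer %COMPUTERNAME% /newname:OLDFILESRV-RETIRED /userd:DOMAIN\admin /passwordd:* /force /reboot:10

REM On the target VM: take over the source's old IP address and host name
NETSH interface ip set address "Local Area Connection" static 10.0.0.25 255.255.255.0 10.0.0.1 1
NETDOM renamecomputer %COMPUTERNAME% /newname:OLDFILESRV /userd:DOMAIN\admin /passwordd:* /force /reboot:10

REM After the reboot, share out the migrated data under the old share name
NET SHARE shared=D:\shared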

Next post, the Windows 2000 Server domain controller, a.k.a. my private OEM hell.


February 25, 2008  11:39 AM

VMware Server guru discusses ESX, Microsoft Hyper-V

Ryan Shopp

Author and programmer Eric Hammersley launched into using VMware Server when he realized virtualization’s space- and cost-saving potential. At the time, he was installing multiple server and switch racks on a ship for the U.S. Navy. His book Professional VMware Server: Programmer to Programmer discusses installing and configuring VMware Server, tips for creating base images, image library organization best practices, integrating and leveraging VMware for your environment and more.

In this interview, Hammersley shares his thoughts on Microsoft Hyper-V and VMware ESX, explains basic virtualization architectures, and compares the three major VMware offerings: VMware Server, VMware ESX and VMware Workstation.

SearchServerVirtualization.com: Your book Professional VMware Server: Programmer to Programmer makes it clear that you’re a fan of VMware. Do you think Microsoft Hyper-V will affect VMware adoption?

Eric Hammersley: No, not yet. Virtualization is a disruptive technology (to use a rather brainy marketing term), and VMware leads the movement, though Microsoft has two advantages in this battle. The first is the ability to observe trends in the technology. Microsoft has the benefit of VMware having been in the virtualization market for many years; no doubt it has been following VMware’s technological advances and analyzing that information for years. If you dig deep into Hyper-V you’ll see that its virtualization layer and approach are different from VMware’s. Did Microsoft discover a better way? Only time will tell.

The second advantage Microsoft has is sheer market dominance in the products that a very large percentage of VMware customers want to virtualize. If you’ve ever tried to virtualize an Exchange Server, you’ll know what a treat it is. As a virtual server, if Exchange has any kind of load it doesn’t work well enough for production use. Microsoft has a clear and rather major advantage in that regard. It can develop its server platforms and enterprise applications to utilize Hyper-V in a way that no one else can.

Do I want Hyper-V to succeed? Yes, mainly because I’m out here on the front lines virtualizing domain controllers, Exchange Servers (or trying to), SQL and many other Microsoft products. Hyper-V done right will make my job easier; done wrong and I’ll be spending time explaining to people with checkbooks why their virtualization initiative didn’t work well.

We could also see some interesting developments with Citrix’s acquisition of XenSource. We should all keep an eye on them. In the end, we all benefit from a diverse and rich selection of products.

You’ve mentioned previously that you didn’t work with VMware ESX because you prefer hosted architecture. Can you explain hosted or operating system virtualization versus the hypervisor or bare-metal approach?

Hammersley: Hosted architecture can be seen in products such as VMware Workstation, Server, and Microsoft Virtual PC, to name a few. A hosted architecture relies on an underlying operating system to provide driver support for the machine’s hardware. This reduces the virtualization software’s job in a sense because it doesn’t need to know how to talk with X or Y piece of hardware, just how to interact with the hardware via the host operating system – whether it’s Windows, Linux, or something else.

A hypervisor is a different beast, more of a bare-metal approach, if you will. You can see a hypervisor approach in VMware ESX Server. While with a hosted architecture you rely on the host operating system to provide driver and hardware support, hypervisor architecture performs this on its own. The drawback of this is obviously hardware support. A typical hypervisor package would provide its own kernel and drivers for select pieces of supported hardware, removing the need for a host operating system to provide the driver support. This offers a big advantage of increased resources for your virtual machines, but it comes at an increased cost.

Has the release of ESX 3.5 prompted you to reconsider VMware’s ESX product suite?

Hammersley: My problems with ESX in the past have always been with the limited amount of approved hardware, along with the high cost. As an IT professional who has spent a large portion of his career in smaller businesses, I’ve always been kept away by the price. On the bright side, at least VMware doesn’t charge per core.

A few weeks ago, I was brought onboard for a new project that will roll out ESX Server to thousands of users in many key data centers across the country. The specifics are not important, but my love for VMware has grown even more. ESX Server is a great product — VMware Server on steroids, if you will. It’s sound, stable, and flexible. Its scalability and add-on modules such as VMotion, High Availability and others make it a perfect addition to the bigger business data center looking to modernize, streamline its processes, save some rack space and, once you recoup the initial cost, save some cash. I’m looking forward to really digging into ESX and plan on sharing my experiences with it, especially with the API.

What are the differences between VMware ESX, VMware Server and VMware Workstation?

Hammersley: Well, all three of them squeeze out the virtualized goodness, but in slightly different ways. Right off, ESX Server is what’s called a bare-metal solution. While most applications today run atop an operating system, be it Windows, Linux, or something else, ESX Server is everything rolled into one: it provides the operating system and the virtualization technology right out of the box. Because ESX provides the operating system, it offers a very tight and robust virtualization layer between the hardware and the operating systems it hosts.

In addition, the ESX operating system has a very small footprint, only about 32 MB, which leaves greater amounts of system resources for your virtual infrastructure to utilize. It is, however, pretty pricey, so ESX is the Cadillac, if you will, of the product line.

Next would be VMware Server, which is what my title is based on. VMware Server is really a child born from a previous product called GSX Server. Both ESX and GSX ultimately provided the same thing: server-based virtual machines served out via client applications installed on the desktop. GSX’s reduced price and greater flexibility in terms of hardware made it a popular substitute for ESX.

Now, in 2006, VMware announced the demise of GSX Server in favor of its new, and free I might add, VMware Server product. Why they made the shift away from GSX in favor of a new free product is a question for them; however, the move caused an extraordinary amount of excitement. With the new VMware Server we’d get the same server-based virtual machines, still served out via the VMware client application to the desktop, still with the APIs and automation capability, but with a reduced price tag. What more could you want?

Finally there’s VMware Workstation. The workstation flavor is just that: a workstation-only virtualization product. For many years it lacked the APIs required for automation, the ability to run virtual machines as services in the background, and the ability to perform centralized management, which left it rather crippled in terms of functionality. Many of the features I mentioned above, however, have made it into the newest VMware Workstation, including the long-awaited automation APIs. Honestly, I spend more of my time in VMware Workstation than in any other product because of its snapshot support and ease of use. That doesn’t mean it’s a better product than VMware Server; it just fits my needs a little better.

Do you prefer VMware on Linux or VMware on Windows?

Hammersley: It depends. I prefer VMware Server on Linux when I really need to squeeze out every bit of performance I can get. Linux, on average, leaves a greater amount of system resources available because the operating system footprint is considerably smaller than Windows’. The catch, however, is that configuration and management can be a bit of a trick sometimes, and if you have a mostly Windows shop you’ll have trouble getting staff buy-in.

VMware Server on Windows offers a simple install and configuration, and managing it is also pretty easy. The downfall, and this can be huge, is system resources. The memory and CPU available to virtual machines are drastically lower on Windows than on Linux, which means fewer virtual machines can run on a Windows install than on comparable hardware running Linux.

In the end, it’s really about what fits into your environment.


February 22, 2008  5:59 PM

Upgrades, virtualization, and a tantalizing glimpse of the future

Mark Schlack

I’ve recently been through a number of operating system upgrade experiences on the small network I maintain to learn about new technologies, and it’s really made me hunger for an all-virtual future. I just installed Windows Server 2008 in a VM (VMware Server) and contrasting that with the pain of upgrading several machines to Vista, let me just say that I have seen the future, and it’s vastly superior to the present.

The ideal scenario, IMO:

Every machine ships with a virtualization layer, or at least that’s what you install on bare metal.

Every operating system comes as a VM, in a file, with a script to customize it as needed.

Every application comes in a virtual application machine — I’ll call it a virtual application component (VAC) to avoid confusion. Adding it into a VM CANNOT kill the base OS or paralyze other apps in the odd ways that apps sometimes do.

Every OS or app installation is instantly reversible and recoverable.

We’re not very far from that, eh? You can almost see Nirvana over the horizon. I’m sure I’m missing some key ingredient that will stop it all from happening (greed, maybe?). What do you think? Can we get there?

It does make me wonder what Microsoft’s Hyper-V strategy will be. Will Hyper-V be a candidate for that bare-metal layer? Or is that just going to be ESX or Xen? Not that there’s anything wrong with them, though ESX is a bit much for desktops. Will Microsoft, in a short-sighted attempt at making sure you buy at least one copy of WS08, force you to install Hyper-V and the OS together?

Why I say short-sighted, among other reasons, is that if today’s computers had a Hyper-V layer, Vista would be having a different life. Or at least the possibility of one. There are many reasons why Vista hasn’t taken off, but at least one is the difficulty of upgrading. Here are some I’ve experienced:

1. If you had partitioned your hard drive for XP and created a boot partition with, say, 20GB, you couldn’t directly upgrade to Vista. Vista wants something like 19GB of free space to do an upgrade even though it doesn’t take up nearly that much space. If you had unallocated space on the drive, you probably couldn’t expand the boot volume into it, because for reasons I don’t  understand, most XP boot volumes can’t be grown. Nor does the Vista install routine have any special mojo to accomplish that. So you had to blow away XP and do a clean install, and then reinstall all your apps. Such fun!

2. Then there are the strange upgrade policies for Vista itself. Now I admit that I haven’t tracked down all the ramifications of enterprise licensing, but at the retail level, it sure is screwy. Let’s say you have a machine with an OEM copy of a lower version of Vista and you discover you need some of the features of Ultimate (oh yeah, I can’t live without BitLocker). You might think that buying the retail version of Ultimate would allow you to upgrade. Not a chance! Only buying a special upgrade version will do that — the full retail version can only be used to install a clean copy, once again forcing a reinstall of all apps. Who thought that was a good idea?

In both cases, wouldn’t it have been so much nicer to snapshot your VACs to a network drive, install the new VM appliance, assign it disk resources from anywhere, reconnect your VACs and go? Maybe in the near future, your VACs would live on the network, either public or private, and the distinction between locally and remotely hosted apps would be somewhat transparent to the user.

Yes, I know there are all these nasty little issues of hardware compatibility and so on.  And then there’s the occasional “change in the driver model” that renders half your hardware obsolete. But if these problems are sorted and isolated into the right layers, I have to believe life will be easier. Call me a cockeyed optimist.


February 21, 2008  9:34 AM

Antivirus management issues in physical-to-virtual migrations

Rick Vanover

I am well into my company’s physical-to-virtual (P2V) migration for most general-purpose server systems, and it’s been pretty successful. But as our environment grew, we experienced a problem involving our virtual systems and our antivirus management system. In this blog post, I’ll explain the problem and tell you the solution so you can avoid a similar situation.

For most server systems, regardless of whether they are physical or virtual, maintaining a centrally managed antivirus package is a good strategy. This strategy includes regular definition updates, engine updates, policies for exclusions and scheduled full scans.

Let’s talk about the scheduled full scan. Historically, we regularly ran a full antivirus scan of the local file system on both the physical and virtual servers during off hours. This became a problem as the virtual environment became more populated.

We use the VKernel Capacity Analyzer and Chargeback virtual appliance to monitor the performance of our virtual environment. What I noticed is that during the off hours, we had an incredible spike in CPU utilization across all hosts and virtual machines. This spike was about 300% of our average CPU use for about two hours. We initially wanted to blame it on the full backup that happens close to this timeframe, but closer investigation led us elsewhere.

We had also noticed that the CPU spike occurred on guest systems that are in isolated networks for stage-configure or isolation-test roles. With the isolated systems, we determined that the spike could not have been caused by the full backup, as the isolated systems were not able to communicate with the backup mechanism.

Avoiding the CPU spike

Once we determined that the scheduled full antivirus scan of the local file system was the culprit, we decided that a staggered set of full scans was required to avoid this massive spike. On physical systems, where the processing stays local to each box, this is not a big issue, as they are generally idle. But applied to the virtual environment, the simultaneous load may cause unnecessary virtual machine migrations or performance alerts. So, in your migration strategies, be sure to consider any centrally scheduled activity like this and how it may affect your entire infrastructure — both physical and virtual.
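If your antivirus console can’t do the staggering for you (or can’t reach some guests, such as the isolated ones), a small wrapper around the scan job gets you most of the way there. This is only a sketch: the SCAN.EXE path and switches below are stand-ins for whatever command-line scanner your AV product actually provides.

@ECHO OFF
REM Hypothetical nightly full-scan wrapper: each guest sleeps a random
REM 1-120 minutes before scanning so the whole cluster doesn't spike at once
SET /A _delaymin=%RANDOM% %% 120 + 1
SET /A _delaysec=%_delaymin% * 60
ECHO Waiting %_delaymin% minutes before starting the full scan...
REM ping pause trick: roughly one second per echo request on stock Windows
PING -n %_delaysec% 127.0.0.1 > NUL
REM Placeholder command -- substitute your AV product's CLI scanner here
"C:\Program Files\AV\SCAN.EXE" /FULL /LOG="C:\AVLOGS\nightly.log"

The cleaner fix, if your management console supports it, is randomized or grouped scan windows in the central policy itself.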


February 20, 2008  3:36 PM

VMware VDI no bargain, analyst says

Alex Barrett

I just ran across an interesting article comparing the cost of VMware Virtual Desktop Infrastructure (VDI) with that of upgrading an existing “rich client” desktop to Vista. According to analyst Barb Goldworm, it ain’t all that pretty:

The simplified bottom-line pricing comparison (using the very rough example numbers given here) is this: Upgrading a physical desktop to Vista might cost $300 to $400 (per desktop) in hardware costs and $200 to $300 in software costs, totaling $500 to $700 per physical desktop. Delivering Vista through a virtual desktop architecture (VMware’s VDI in this example) and continuing to use existing PCs as rich clients accessing virtual desktops might cost $700 per VM desktop in infrastructure costs and $23 per VM desktop, if using VECD, totaling $723 per virtual desktop.

She goes into pretty extensive detail comparing the two models, and I wonder if anyone who’s evaluating VDI would care to comment on her assumptions or has done his or her own math. Alternatively, has anyone thought about using a virtualization platform other than ESX for VDI? If you’re serious about VDI, exploring alternative virtualization platforms seems like an obvious place to trim some costs.


February 20, 2008  9:28 AM

FastScale Composer lives to see Version 2.0

Alex Barrett

Last summer, SearchServerVirtualization.com wrote about a large VMware shop that uses an application virtualization and provisioning tool from FastScale Technology Inc. to keep its test lab under control. Now there’s news that FastScale will launch version 2.0 of its Composer product, a good sign for the Santa Clara, Calif., startup with just a handful of customers to its name (according to Lynn LeBlanc, the company’s CEO, “more than 10,” to be imprecise).

The company’s goal with FastScale Composer 2.0 is twofold, said LeBlanc: scalability and usability. “With our prior user interface, I felt we were scalable if you were dealing with hundreds of machines — but not if you were dealing with thousands of machines.” To scale up to these large environments, FastScale re-architected its user interface to represent server inventory and configuration details in a browseable, hierarchical format.

Other features include new support for Red Hat Enterprise Linux 5 (previously, FastScale supported only RHEL 4) and provisioning into VMware ESX 3.5 virtual machines.

As more customers get their hands on it, LeBlanc says the principal use cases and benefits of FastScale’s technology are beginning to crystallize. For one thing, the company’s “application blueprinting” technology results in application stacks that are significantly smaller than traditional ones (LeBlanc claims an average size reduction of more than 90% over a traditional image), making it a good way to optimize performance and resources. “The smaller the environment, the fewer resources it uses,” LeBlanc explained. For another, these smaller environments encourage dynamic provisioning — and reprovisioning — of IT environments. Finally, blueprinting automatically detects dependencies between an application and the underlying operating system, eliminating the time-consuming and manual process of creating static “golden images.”

Anyway, it’s interesting technology, somewhere between Scalent Systems’ provisioning and Ardence’s application virtualization, and it’s probably worth reading more about.


February 15, 2008  2:54 PM

Stressing the value of virtual test environments

Rick Vanover

We all know that test environments are an important part of the quality process within IT. Unfortunately, we may not always follow through and provide good test environments. Virtualization changes all of that. This tip will share some of the strategies that I have found to truly enable robust test procedures beyond the basic testing on virtual machines.

100% Representative environment

With physical to virtual (P2V) conversion tools, I have been successful in creating test environments that are exact duplicates of what I am testing. A good example is a Windows server functioning as an Active Directory Services domain controller. Generally, I consider it a bad practice to perform a P2V conversion on a domain controller for continued use. But, in the case of a test environment, a P2V conversion is a great way to get a “copy” of your Active Directory domain into your test environment. For those of you wondering about the connectivity, of course the networking needs to be isolated from the live network.

With a P2V performed on a domain controller, I have had a great environment to test top-level security configurations, major group policy changes, and Active Directory schema updates. Outside of this type of test environment, these types of changes are usually difficult to simulate well. Sure we can create a development Active Directory domain, but it would not be fully representative of the live environment.

Performance testing

For many people who are new to virtualized environments, there may be some skepticism about virtual system performance. Providing test environments on virtual systems is nothing new, but our challenge is to make the test environments fully equivalent to the planned configuration. One strategy that can be implemented with VMware Infrastructure 3 is to have a small number of ESX hosts that are fully licensed and configured like the rest of the servers in the environment and refer to that as a development cluster or quality environment. In the development cluster, you can showcase high-availability functionality, virtual machine migration, and fault tolerance to get the support of the business owners. Further, if the performance of the development environment is comparable to that of the live environment, confidence in the virtual systems increases.

Caution factor

With this added functionality, it is also a little easier to cause issues with the live systems. In the example of the fully functional Windows domain controller, serious issues could result if that system were accidentally connected to the live network. Because of this risk, a good practice is to make virtual networks that are completely isolated. This goes beyond simply creating a test network within the virtual environment; configure the virtual host system so that it does not permit any virtual machine connectivity to the live network. This makes file exchanges a little more difficult, but there are plenty of tools to assist in this space.
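On ESX 3.x, one way to do this from the service console is to create a vSwitch that never gets a physical uplink; virtual machines attached to its port group can talk to each other but have no path to the wire. A rough sketch follows — the switch and port group names are just examples.

# Internal-only switch for the test lab: no physical NIC is attached,
# so guests on it cannot reach the production network
esxcfg-vswitch -a vSwitchIsolated
# Add a port group for the isolated test VMs to use
esxcfg-vswitch -A "Isolated-Test" vSwitchIsolated
# Verify that no uplinks are bound to the new switch
esxcfg-vswitch -l

Just make sure nobody later binds a vmnic to that switch, or your “isolated” domain controller isn’t isolated anymore.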

Readers, I invite you to share your virtualization test environment strategies. What has helped you deliver a better test procedure by using a virtual test environment?


February 13, 2008  9:49 AM

Dilbert gets orders to virtualize!

Ryan Shopp

Scott Adams isn’t the first to create a cartoon about virtualization (see VirtualMan helps IT pros explain virtualization’s benefits). Even so, his short comics gracing yesterday’s and today’s Dilbert.com homepages highlight a simple truth: for IT managers, getting the green light to virtualize is a lot easier if the higher-ups have the idea first. Here’s a thought: If you want to virtualize and your C-levels aren’t quite paying attention, maybe you should slip a virtualization insert into one of their trade journals?

But, as today’s comic points out, even if your company approves a virtualization project, you still may not get to partake in the fun!

