The Virtualization Room

A SearchServerVirtualization.com and SearchVMware.com blog


February 21, 2008  9:34 AM

Antivirus management issues in physical-to-virtual migrations



Posted by: Rick Vanover
P2V, Servers, Virtualization

I am well into my company’s physical-to-virtual (P2V) migration for most general-purpose server systems, and it has been pretty successful. But as our virtual environment grew, we ran into a problem involving our virtual systems and our antivirus management system. In this blog post, I’ll explain the problem and the solution so you can avoid a similar situation.

For most server systems, regardless of whether they are physical or virtual, maintaining a centrally managed antivirus package is a good strategy. This strategy includes regular definition updates, engine updates, policies for exclusions and scheduled full scans.

Let’s talk about the scheduled full scan. Historically, we regularly ran a full antivirus scan of the local file system on both the physical and virtual servers during off hours. This became a problem as the virtual environment became more populated.

We use the VKernel Capacity Analyzer and Chargeback virtual appliance to monitor the performance of our virtual environment. What I noticed is that during off hours, we had an incredible spike in CPU utilization across all hosts and virtual machines. The spike was about 300% of our average CPU use and lasted about two hours. We initially wanted to blame the full backup that runs close to this timeframe, but closer investigation led us elsewhere.

We noticed that the CPU spike also occurred on guest systems sitting in isolated networks used for staging or isolated-testing roles. Those isolated systems cannot communicate with the backup mechanism, so the full backup could not have been the cause of their spike.

Avoiding the CPU spike

Once we determined that the scheduled full antivirus scan of the local file system was the culprit, we decided that a staggered set of full scans was required to avoid this massive spike. On physical systems, the scan consumes only local processing and is not a big issue, as those servers are generally idle during off hours. Applied to the virtual environment, however, simultaneous scans can cause unnecessary virtual machine migrations or performance alerts. So, in your migration strategy, be sure to consider any centrally scheduled activity like this and how it may affect your entire infrastructure, both physical and virtual.
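As a rough illustration, the fix amounts to computing a distinct start time per guest instead of using one global scan window. The sketch below is a minimal Python example; the guest names, the four-hour window and the 1 a.m. start are hypothetical, and how you actually push the schedule depends entirely on your antivirus management console.

```python
from datetime import datetime, timedelta

def staggered_scan_times(vm_names, window_start, window_hours=4):
    """Spread full-scan start times evenly across an off-hours window."""
    step = timedelta(hours=window_hours) / max(len(vm_names), 1)
    return {name: window_start + i * step for i, name in enumerate(sorted(vm_names))}

# Hypothetical guest list; in practice it would come from the AV console or vCenter.
guests = ["file01", "print01", "web01", "web02", "sql-dev01", "dc-test01"]
schedule = staggered_scan_times(guests, datetime(2008, 2, 22, 1, 0), window_hours=4)
for vm, start in schedule.items():
    print(f"{vm}: full scan starts at {start:%H:%M}")
```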

February 20, 2008  3:36 PM

VMware VDI no bargain, analyst says



Posted by: Alex Barrett
Virtualization

I just ran across an interesting article comparing the cost of VMware Virtual Desktop Infrastructure (VDI) with that of upgrading an existing “rich client” desktop to Vista. According to analyst Barb Goldworm, it ain’t all that pretty:

The simplified bottom-line pricing comparison (using the very rough example numbers given here) is this: Upgrading a physical desktop to Vista might cost $300 to $400 (per desktop) in hardware costs and $200 to $300 in software costs, totaling $500 to $700 per physical desktop. Delivering Vista through a virtual desktop architecture (VMware’s VDI in this example) and continuing to use existing PCs as rich clients accessing virtual desktops might cost $700 per VM desktop in infrastructure costs and $23 per VM desktop, if using VECD, totaling $723 per virtual desktop.

She goes into pretty extensive detail comparing the two models, and I wonder whether anyone who’s evaluating VDI would care to comment on her assumptions or has done his or her own math. Alternatively, has anyone thought about using a virtualization platform other than ESX for VDI? If you’re serious about VDI, exploring alternative virtualization platforms seems like an obvious place to trim some costs.
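For quick reference, here is the bottom-line arithmetic from the excerpt above, using only the example numbers Goldworm gives (a sketch, not a model of your environment):

```python
# Per-desktop figures (USD) from Goldworm's rough example.
physical_hw = (300, 400)   # Vista-capable hardware upgrade, per physical desktop
physical_sw = (200, 300)   # Vista software costs, per physical desktop
physical_total = (physical_hw[0] + physical_sw[0],
                  physical_hw[1] + physical_sw[1])       # (500, 700)

vdi_infrastructure = 700   # VDI infrastructure cost, per VM desktop
vdi_vecd = 23              # VECD licensing, per VM desktop
vdi_total = vdi_infrastructure + vdi_vecd                # 723

print(f"Physical Vista upgrade: ${physical_total[0]}-{physical_total[1]} per desktop")
print(f"VMware VDI:             ${vdi_total} per virtual desktop")
```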


February 20, 2008  9:28 AM

FastScale Composer lives to see Version 2.0



Posted by: Alex Barrett
Virtualization

This summer, SearchServerVirtualization.com wrote about a large VMware shop that uses an application virtualization and provisioning tool from FastScale Technology Inc. to keep its test lab under control. Now there’s news that FastScale will launch version 2.0 of its Composer product, a good sign for the Santa Clara, Calif., startup with just a handful of customers to its name (according to Lynn LeBlanc, the company’s CEO, “more than 10,” to be imprecise).

The company’s goal with FastScale Composer 2.0 is twofold, said LeBlanc: scalability and usability. “With our prior user interface, I felt we were scalable if you were dealing with hundreds of machines — but not if you were dealing with thousands of machines.” To scale up to these large environments, FastScale re-architected its user interface to represent server inventory and configuration details in a browsable, hierarchical format.

Other features include new support for Red Hat Enterprise Linux 5 (previously, FastScale supported only RHEL 4) and provisioning into VMware ESX 3.5 virtual machines.

As more customers get their hands on it, LeBlanc says the principal use cases and benefits of FastScale’s technology are beginning to crystallize. For one thing, the company’s “application blueprinting” technology results in application stacks that are significantly smaller than traditional ones (LeBlanc claims an average size reduction of more than 90% over a traditional image), making it a good way to optimize performance and resources. “The smaller the environment, the fewer resources it uses,” LeBlanc explained. For another, these smaller environments encourage dynamic provisioning — and reprovisioning — of IT environments. Finally, blueprinting automatically detects dependencies between an application and the underlying operating system, eliminating the time-consuming and manual process of creating static “golden images.”
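FastScale hasn’t published how blueprinting works internally, so the following is only a generic sketch of the underlying idea, i.e., discovering which OS components an application actually depends on. It uses the standard Linux ldd tool against a hypothetical binary path and is not FastScale’s method.

```python
import subprocess

def shared_library_deps(binary_path):
    """List the shared libraries a binary is dynamically linked against, per ldd."""
    result = subprocess.run(["ldd", binary_path], capture_output=True, text=True, check=True)
    deps = []
    for line in result.stdout.splitlines():
        # Typical line: "libssl.so.1.1 => /usr/lib/libssl.so.1.1 (0x00007f...)"
        parts = line.split("=>")
        if len(parts) == 2 and parts[1].strip().startswith("/"):
            deps.append(parts[1].split()[0])
    return sorted(set(deps))

if __name__ == "__main__":
    for lib in shared_library_deps("/usr/sbin/httpd"):   # hypothetical target binary
        print(lib)
```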

Anyway, it’s interesting technology, somewhere between Scalent Systems’ provisioning and Ardence’s application virtualization, and it’s probably worth reading more about.


February 15, 2008  2:54 PM

Stressing the value of virtual test environments



Posted by: Rick Vanover
Lab management, Rick Vanover, Virtualization, Virtualization management, Virtualization strategies

We all know that test environments are an important part of the quality process within IT. Unfortunately, we don’t always follow through and provide good ones. Virtualization changes all of that. In this post, I’ll share some of the strategies that I have found truly enable robust test procedures, beyond basic testing on virtual machines.

100% Representative environment

With physical-to-virtual (P2V) conversion tools, I have been successful in creating test environments that are exact duplicates of what I am testing. A good example is a Windows server functioning as an Active Directory domain controller. Generally, I consider it bad practice to perform a P2V conversion on a domain controller for continued production use. But in the case of a test environment, a P2V conversion is a great way to get a “copy” of your Active Directory domain into your test environment. For those of you wondering about connectivity: of course, the networking needs to be isolated from the live network.

With a P2V performed on a domain controller, I have had a great environment for testing top-level security configurations, major Group Policy changes and Active Directory schema updates. Outside of this type of test environment, these changes are usually difficult to simulate well. Sure, we could build a separate development Active Directory domain, but it would not be fully representative of the live environment.

Performance testing

For many people who are new to virtualized environments, there may be some skepticism about virtual system performance. Providing test environments on virtual systems is nothing new, but our challenge is to make the test environments fully equivalent to the planned configuration. One strategy that can be implemented with VMware Infrastructure 3 is to have a small number of ESX hosts that are fully licensed and configured like the rest of the servers in the environment and refer to that as a development cluster or quality environment. In the development cluster, you can showcase high availability, virtual machine migration and fault tolerance to get the support of the business owners. Further, if the performance of the development environment is comparable to that of the live environment, confidence in the virtual platform increases.

Caution factor

With this added functionality, it is also a little easier to cause issues with the live systems. In the example of a fully functional Windows domain controller, serious issues could result if that system were accidentally connected to the live network. Because of this risk, a good practice is to make the test virtual networks completely isolated. That goes beyond simply creating a test network within the virtual environment: configure the virtual host itself so that no virtual machine on the test network has any connectivity to the live network. This makes file exchanges a little more difficult, but there are plenty of tools to assist in this space.
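As a concrete (and hedged) illustration of host-level isolation, the sketch below uses the pyVmomi library against a modern vSphere host to build a vSwitch with no physical uplinks and a port group on it; any VM attached to that port group has no path to the live network. The hostname, credentials and object names are placeholders, and VI3-era hosts like the ones discussed here predate this particular API, so treat it as a pattern rather than a recipe.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for a lab host with a self-signed certificate.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="esx-lab.example.com", user="root", pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]                      # first host found; adjust the selection as needed
    net = host.configManager.networkSystem

    # A vSwitch with no physical NICs attached has no path to the live network.
    net.AddVirtualSwitch(vswitchName="vSwitchIsolated",
                         spec=vim.host.VirtualSwitch.Specification(numPorts=64))

    # Port group that the isolated test VMs (e.g., the P2V'd domain controller) connect to.
    net.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name="Isolated-Test",
        vlanId=0,
        vswitchName="vSwitchIsolated",
        policy=vim.host.NetworkPolicy()))

    # Sanity check: confirm the new switch really has no uplinks.
    for vsw in net.networkInfo.vswitch:
        if vsw.name == "vSwitchIsolated":
            assert not vsw.pnic, "Isolated vSwitch unexpectedly has physical uplinks"
finally:
    Disconnect(si)
```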

Readers, I invite you to share your virtualization test environment strategies. What has helped you deliver a better test procedure by using a virtual test environment?


February 13, 2008  9:49 AM

Dilbert gets orders to virtualize!



Posted by: Ryan Shopp
Links we like, Virtualization, Why choose server virtualization?

Scott Adams isn’t the first to create a cartoon about virtualization (see VirtualMan helps IT pros explain virtualization’s benefits). Even so, his short comics gracing yesterday’s and today’s Dilbert.com homepages highlight a simple truth: for IT managers, getting the green light to virtualize is a lot easier if the higher-ups have the idea first. Here’s a thought: if you want to virtualize and your C-levels aren’t quite paying attention, maybe you should slip a virtualization insert into one of their trade journals?

Yesterday’s Dilbert.com comic strip: [comic embedded in the original post]

But, as today’s comic points out, even if your company approves a virtualization project, you still may not get to partake in the fun!


February 11, 2008  5:11 PM

VirtualMan helps IT pros explain virtualization’s benefits



Posted by: Ryan Shopp
Links we like, Servers, Virtualization, Virtualization strategies, Why choose server virtualization?

VirtualMan blog posts co-authored by Hannah Drake and Matt McDonough.

Trying to grasp the basics of server virtualization? Or, do you face the even more challenging task of explaining and/or pitching server virtualization projects to non-IT execs? Definitions from WhatIs.com or Wikipedia may help, or you could call in VirtualMan.

AccessFlow created an amusing and informative virtualization-based comic series to explain virtualization as a technology. In the first installment, superhero VirtualMan helps frustrated data center manager Ivy Green explain the complicated technology’s benefits to a resistant executive in layman’s terms, saving her from trying to fit yet another physical server into her data center.

Check out this week’s comic and learn how to defeat execs who harbor “a hardware-centric view of the world” with VirtualMan Powers Down.

VirtualMan is not only an amusing diversion that IT professionals will appreciate for its tongue-in-cheek look at the problems inherent in today’s data center; it’s also a valuable educational tool for those who aren’t as familiar with virtualization as they’d like to be. So whether you’re looking for a quick laugh at your desk or want to learn more about virtualization with cartoon accompaniment, AccessFlow’s VirtualMan is definitely worth a peek. Stay tuned; we’ll review more episodes in the coming weeks.


February 8, 2008  1:40 PM

Linus: wake up and smell the coffee



Posted by: Ryan Shopp
grids and mainframes, Virtualization

This post is from Mark Schlack, VP/Editorial for TechTarget.

Linus Torvalds dismisses virtualization in a recent interview with the Linux Foundation:

“It’s been around for probably 50 years. I forget when IBM started offering virtualization on their big hardware. Maybe not 50 years, but it’s been all around for decades and it’s very interesting in niche markets – I think the people who expected to change things radically are just fooling themselves.”

For the record, virtualization is closer to 40 years old; you can read a fascinating article about the history of mainframe virtualization on Wikipedia. More to the point, the attitude that there’s nothing new under the sun (except the innovation being pumped by those saying that) has always puzzled me, and the notion that modern virtualization is just a replay of the mainframe has now started to bug me. It makes no more sense than saying that open source is nothing new because MIT and a fair number of people in the scientific community helped develop mainframe virtualization by sharing code with each other and IBM (which gave them software to pilot).

Of course there’s a link between mainframe and x86 virtualization; conceptually and practically, they have a lot in common. But it’s the differences that are compelling, and it’s the differences that will lead to the radical changes Torvalds discounts.

I’m no expert on mainframe partitioning, but from what I’ve gathered over the years (please, correct me if I’m wrong), here’s what sticks out for me:

  • Mainframes, circa 1970, cost around $100,000 a month to lease; you actually couldn’t buy one if you wanted to. You can virtualize a heck of a lot on a $20,000 box today. Actually, you can virtualize several systems on a $1,000 box. There wasn’t much you could buy for a mainframe that didn’t start with five figures.
  • As big an advance as the virtualization built into System/370 was, it only ever worked with IBM operating systems (and, I believe, for a time at least, only with IBM apps, before the government forced IBM to open up to third-party software companies). All of today’s x86 contenders can host multiple Linux distributions, multiple versions of Windows and Solaris, and some also handle Mac OS. Mainframes never even ran the other IBM platform operating systems.
  • The notion of an entire virtual machine contained in a file, portable from machine to machine regardless of hardware configuration, is new to the current wave. Also new: dynamically reassigning VM resources on the fly, moving VMs without restarting the hardware, and failover clustering of VMs.

I doubt that’s a comprehensive comparison, but the point is clear. IBM’s mainframe virtualization was certainly a niche feature, used by timesharing providers (the 1970s version of hosting companies). It was also used for niche applications: according to Wikipedia, mainly by scientists who needed a more interactive environment than the batch-oriented, general-purpose mainframe operating systems of the time were geared for. But the current crop is exactly the opposite: a generally useful tool that will affect all but niche applications.

Current virtualization’s main link to the mainframe, IMO, is that it is enabling mainframe-style utilization, reliability and, ultimately, process-oriented management on the very commodity platform that tore that world asunder. If that sounds like back to the future, it isn’t. It may well represent the final triumph of general-purpose, commodity-based computing over the highly specialized, batch-oriented world of the 1970s mainframe. It’s actually kind of cool to realize that the first microprocessors were being developed right around the time mainframe virtualization made its appearance, and now the two technologies are converging.

Torvalds goes on, in his interview, to talk about the truly radical developments on the horizon being new form factors. I don’t see it that way, but I’d be very surprised if new form factors don’t ultimately wind up using virtualization as a base technology. Think of a cell phone that can be completely upgraded with new capabilities because its software is a virtual appliance you download wirelessly. Now that’s a radical idea.


February 8, 2008  11:13 AM

Virtualization: Changing the OS game, or not?



Posted by: Ryan Shopp
Virtualization

Every morning, I sign onto my corporate email account and start plowing through emails. This morning, our media group had been alerted to an interesting blog post on The Linux Foundation blog. It’s a transcript of an interview with Linus Torvalds, developer of the Linux kernel.

Torvalds’ opinion on virtualization caught my interest:

Jim Zemlin: Let’s talk to conclude about the future. Where do you see Linux – and I know you don’t think too far ahead about this, but I’m going to prod you to say five years from now.

Is the world Windows and Linux? Does the operating system become irrelevant because everything’s in a browser? Is everything through a mobile device? Is there a new form factor that comes out of mobile tab? Where do you see things going?

Linus Torvalds: I actually think technology in the details may be improving hugely, but if you look at what the big picture is, things don’t really change that quickly. We don’t drive flying cars. And five years from now we still won’t be driving flying cars and I don’t think the desktop market or the OS market in general is going to move very much at all.

I think you will have hugely better hardware and I suspect things will be about the same speed because the software will have grown and you’ll have more bling to just slow the hardware down and it will hopefully be a lot more portable and that may be one reason why performance may not be that much better just because you can’t afford to have a battery pack that is that big.

But I don’t think the OS market will really change.

Jim Zemlin: Virtualization. Game-changer? Not that big of a deal?

Linus Torvalds: Not that big of a deal.

Jim Zemlin: Why do you say that?

Linus Torvalds: It’s been around for probably 50 years. I forget when IBM started offering virtualization on their big hardware. Maybe not 50 years, but it’s been all around for decades and it’s very interesting in niche markets – I think the people who expected to change things radically are just fooling themselves.

I’d say that the real change comes from new uses, completely new uses of computers and that might just happen because computers get pushed down and become cheaper and that might change the whole picture of operating systems.

But also, I’d actually expect that new form factor is in new input and output devices. If we actually end up getting projection displays on cell phones, that might actually change how people start thinking of hardware and that, in turn, might change how we interact and how we use operating systems. But virtualization will not be it.

Apparently, Torvalds has the exact opposite opinion from one of our writers. Jeff Byrne, senior analyst and consultant at Taneja Group, recently wrote about exactly how virtualization is going to change the operating system game.

Byrne writes:

As its uses continue to grow, server virtualization will pose a major threat to the strategic position that the general-purpose operating system has long held in the x86 software stack. In this situation, Microsoft in particular has a lot to lose. So do Linux and Unix vendors, but these OSes do have advantages in a virtualization setting.

He goes on to suggest that Linux and Unix OSes will likely have increased adoption rates as virtualization puts a large dent in the one-operating-system-to-one-server modus operandi, and because Windows users are becoming frustrated with licensing costs, technical issues in new releases that commonly aren’t resolved until the second or third release, and security vulnerabilities.

IT pros, I’m turning it over to you. Let the debate begin. Has your shop seen increased Linux or Unix adoption with virtualization? Do you think virtualization will change the OS market?


February 7, 2008  1:13 PM

Installing the VMware Server MUI on Centos 5.1 x86_64



Posted by: Joe Foran
Joseph Foran, Linux and virtualization, Servers, Virtualization, Virtualization platforms, VMware

As a follow-up to my prior post on getting CentOS 5.1 (x64) to host VMware Server, here is a short set of instructions for getting the MUI installed on your 64-bit CentOS box. I didn’t mention it last time because it’s a separate download and install, and I don’t personally install the MUI with VMware Server unless I have a compelling reason (on Windows or Linux hosts). This is documented on VMware’s website as well, but it bears some simplification from a two-pager to a six-liner.

  1. Download the management interface (VMware-mui-1.0.4-56528.tar.gz) and extract (tar zxf VMware-mui-<xxxx>.tar.gz)
  2. Update your compat-db package (yum update compat-db.x86_64 for the 64-bit package, or just yum update compat-db to get ‘em all) if you haven’t already.
  3. Browse to the directory you extracted your MUI setup files to and run the installer (./vmware-install.pl)
  4. Accept / Change options as needed.
  5. Start the http daemon (/etc/init.d/httpd.vmware start)
  6. Browse to https://YOUR.HOSTNAME.HERE:8333
  7. Enjoy.

This how-to (and my prior post) should also work on other RHEL clones (like Whitebox, when they get WBEL 5 out). It should also work on RHEL itself.


February 7, 2008  1:10 PM

Virtualization strategies for the SMB



Posted by: Rick Vanover
Microsoft Virtual Server, Rick Vanover, Virtualization, Virtualization management, Virtualization strategies, VMware, Xen

Small and medium-sized businesses (SMBs) are unique in their approach to virtualization. At a recent VMware User Group (VMUG) meeting, it became clear that virtualization is much easier to embrace for the large enterprise, whereas smaller IT shops have an entirely different dynamic. Here are three strategies SMBs currently employ when approaching virtualization:

Free tools

An SMB may not have the money to jump into an enterprise virtualization management suite, so free tools work nicely in its environment. Free tools such as VMware Server, Microsoft Virtual Server and Citrix XenServer Express Edition offer virtualization at various levels that will generally meet requirements. The free virtualization packages have their benefits, but there are also limitations, which revolve around storage management, high availability and redundancy.

Cite disaster recovery

The SMBs present at the VMUG frequently used disaster recovery to justify the upfront expense of decent virtualization management software. Disaster recovery tends to present a better argument for funding than simply stating that paid virtualization management software is superior to the free equivalents.

All or none

SMBs tend to shoot for an all-or-none approach to virtualization. Once virtualization is adopted, serious reasoning is required to explain why any server cannot or should not be a virtual system. This carries over to hardware purchases: all new server purchases must be capable of the virtual host role, regardless of whether the desired end state of the virtual environment is immediately in place.

It is tougher for the smaller IT shops

Smaller organizations have a particular challenge taking the virtualization plunge because of smaller budgets. The challenge for SMBs is finding a way to justify the expense, not in using virtualization itself; the benefits are obvious. The most difficult step, however, is getting business buy-in for the potential added expenses, and this is where SMBs should focus their efforts when trying to convince executives to adopt virtualization.

