The Virtualization Room


February 15, 2008  2:54 PM

Stressing the value of virtual test environments

Rick Vanover Profile: Rick Vanover

We all know that test environments are an important part of the quality process within IT. Unfortunately, we may not always follow through and provide good test environments. Virtualization changes all of that. This tip shares some of the strategies I have found that truly enable robust test procedures, beyond basic testing on virtual machines.

100% Representative environment

With physical to virtual (P2V) conversion tools, I have been successful in creating test environments that are exact duplicates of what I am testing. A good example is a Windows server functioning as an Active Directory domain controller. Generally, I consider it a bad practice to perform a P2V conversion on a domain controller for continued use. But in the case of a test environment, a P2V conversion is a great way to get a “copy” of your Active Directory domain into your test environment. For those of you wondering about the connectivity, of course the networking needs to be isolated from the live network.

With a P2V performed on a domain controller, I have had a great environment to test top-level security configurations, major group policy changes, and Active Directory schema updates. Outside of this type of test environment, these types of changes are usually difficult to simulate well. Sure we can create a development Active Directory domain, but it would not be fully representative of the live environment.

Performance testing

For many people who are new to virtualized environments, there may be some skepticism about virtual system performance. Providing test environments on virtual systems is nothing new, but our challenge is to make the test environments fully equivalent to the planned configuration. One strategy that can be implemented with VMware Infrastructure 3 is to keep a small number of ESX hosts that are fully licensed and configured like the rest of the servers in the environment, and refer to that as a development cluster or quality environment. In the development cluster, you can showcase high availability, virtual machine migration, and fault tolerance to win the support of the business owners. Further, if the performance of the development environment is comparable to that of the live environment, confidence in the virtual system increases.

Caution factor

With this added functionality, it is also a little easier to cause issues with the live systems. In the example of a fully functional Windows domain controller, serious issues could result if that system were accidentally connected to the live network. Because of this risk, a good practice is to create virtual networks that are completely isolated. This goes beyond simply creating a test network within the virtual environment: configure the virtual host itself so that no test virtual machine can connect to the live network. This makes file exchanges a little more difficult, but there are plenty of tools to assist in this space.
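As a rough sketch of that kind of isolation on an ESX 3.x host, the service console commands below create an internal-only virtual switch. The vSwitch and port group names are my own examples; the key point is that no physical NIC is ever linked to the switch.

# Create a virtual switch with no physical uplink (internal-only)
esxcfg-vswitch -a vSwitchTest
# Add a port group for the isolated test virtual machines
esxcfg-vswitch -A "IsolatedTestLAN" vSwitchTest
# List the switches and confirm vSwitchTest shows no uplink adapter
esxcfg-vswitch -l

Virtual machines attached to the "IsolatedTestLAN" port group can talk to each other, but because no vmnic is ever linked to the switch (the -L option is never run against it), they have no path to the live network.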

Readers, I invite you to share your virtualization test environment strategies. What has helped you deliver a better test procedure by using a virtual test environment?

February 13, 2008  9:49 AM

Dilbert gets orders to virtualize!

Ryan Shopp Profile: Ryan Shopp

Scott Adams isn’t the first to create a cartoon about virtualization (see VirtualMan helps IT pros explain virtualization’s benefits). Even so, his short comics gracing yesterday’s and today’s Dilbert.com homepage highlight a simple truth: for IT managers, getting the green light to virtualize is a lot easier if the higher-ups have the idea first. Here’s a thought: if you want to virtualize and your C-levels aren’t quite paying attention, maybe you should slip a virtualization insert into one of their trade journals?

But, as today’s comic points out, even if your company approves a virtualization project, you still may not get to partake in the fun!


February 11, 2008  5:11 PM

VirtualMan helps IT pros explain virtualization’s benefits

Ryan Shopp Profile: Ryan Shopp

VirtualMan blog posts co-authored by Hannah Drake and Matt McDonough.

Trying to grasp the basics of server virtualization? Or, do you face the even more challenging task of explaining and/or pitching server virtualization projects to non-IT execs? Definitions from WhatIs.com or Wikipedia may help, or you could call in VirtualMan.

AccessFlow has created an amusing and informative comic series to explain virtualization as a technology. In the first installment, superhero VirtualMan helps frustrated data center manager Ivy Green explain the complicated technology’s benefits to a resistant executive in layman’s terms, saving her from trying to fit yet another physical server into her data center.

Check out this week’s comic and learn how to defeat execs who harbor “a hardware-centric view of the world” with VirtualMan Powers Down.

VirtualMan is not only an amusing diversion that IT professionals will appreciate for its tongue-in-cheek look at the problems inherent in today’s data center, but also a valuable educational tool for those who aren’t as familiar with virtualization as they would like to be. So whether you’re looking for a quick laugh at your desk or want to learn more about virtualization with cartoon-art accompaniment, AccessFlow’s VirtualMan is definitely worth a peek. Stay tuned; we’ll review more episodes in the coming weeks.


February 8, 2008  1:40 PM

Linus: wake up and smell the coffee

Ryan Shopp Profile: Ryan Shopp

This post is from Mark Schlack, VP/Editorial for TechTarget.

Linus Torvalds dismisses virtualization in a recent interview with the Linux Foundation:

“It’s been around for probably 50 years. I forget when IBM started offering virtualization on their big hardware. Maybe not 50 years, but it’s been all around for decades and it’s very interesting in niche markets – I think the people who expected to change things radically are just fooling themselves.”

For the record, virtualization is closer to 40 years old — you can read a fascinating article about the history of mainframe virtualization on Wikipedia. More to the point, the attitude that there’s nothing new under the sun (except the innovation being pumped by those saying that) has always puzzled me, and the notion that modern virtualization is just a replay of the mainframe has now started to bug me. It makes no more sense than saying that open source is nothing new because MIT and a fair number of people in the scientific community helped develop mainframe virtualization by sharing code with each other and with IBM (which gave them software to pilot).

Of course there’s a link between mainframe and x86 virtualization. Conceptually and practically they have a lot in common, and so on. But it’s the differences that are compelling and that will lead to the radical changes Torvalds discounts.

I’m no expert on mainframe partitioning, but from what I’ve gathered over the years (please, correct me if I’m wrong), here’s what sticks out for me:

  • Mainframes, circa 1970, cost around $100,000 a month to lease. You actually couldn’t buy one if you wanted to. You can virtualize a heck of a lot on a $20,000 box today. Actually, you can virtualize several systems on a $1,000 box. There wasn’t much you could buy for a mainframe that didn’t start with five figures.
  • As big an advance as the virtualization built into System/370 was, it only ever worked with IBM operating systems (and, I believe, for a time at least, only with IBM apps, before the government forced IBM to open up to third-party software companies). All of today’s x86 contenders can host multiple Linux distributions, multiple versions of Windows, and Solaris, and some also handle the Mac OS. Mainframes never even ran the other IBM platform operating systems.
  • The notion of an entire virtual machine contained in a file, portable from machine to machine regardless of hardware configuration, is new to the current wave. Also new: dynamically reassigning VM resources on the fly, moving VMs without restarting the hardware, and failover clustering of VMs.

I doubt that’s a comprehensive comparison, but the point is clear. IBM’s mainframe virtualization was certainly a niche feature, used by timesharing providers (the 1970s version of hosting companies). It was also used for niche applications — according to Wikipedia, mainly by scientists who needed a more interactive environment than the batch-oriented, general-purpose mainframe operating systems of the time were geared for. But the current crop is exactly the opposite — a generally useful tool that will impact all but niche applications.

Current virtualization’s main link to the mainframe, IMO, is that it is enabling mainframe-style utilization, reliability and, ultimately, process-oriented management on the very commodity platform that tore that world asunder. If that sounds like back to the future, it isn’t. It may well represent the final triumph of general-purpose, commodity-based computing over the highly specialized, batch-oriented world of the 1970s mainframe. It’s actually kind of cool to realize that the first microprocessors were being developed right around the time mainframe virtualization made its appearance, and now the two technologies are converging.

Torvalds goes on, in his interview, to talk about the truly radical developments on the horizon being new form factors. I don’t see it that way, but I’d be very surprised if new form factors don’t ultimately wind up using virtualization as a base technology. Think of a cell phone that can be completely upgraded with new capabilities because its software is a virtual appliance you download wirelessly. Now that’s a radical idea.


February 8, 2008  11:13 AM

Virtualization: Changing the OS game, or not?

Ryan Shopp Profile: Ryan Shopp

Every morning, I sign onto my corporate email account and start plowing through emails. This morning, our media group had been alerted to an interesting blog post on The Linux Foundation blog. It’s a transcript of an interview with Linus Torvalds, developer of the Linux kernel.

Torvalds’ opinion on virtualization caught my interest:

Jim Zemlin: Let’s talk to conclude about the future. Where do you see Linux – and I know you don’t think too far ahead about this, but I’m going to prod you to say five years from now.

Is the world Windows and Linux? Does the operating system become irrelevant because everything’s in a browser? Is everything through a mobile device? Is there a new form factor that comes out of mobile tab? Where do you see things going?

Linus Torvalds: I actually think technology in the details may be improving hugely, but if you look at what the big picture is, things don’t really change that quickly. We don’t drive flying cars. And five years from now we still won’t be driving flying cars and I don’t think the desktop market or the OS market in general is going to move very much at all.

I think you will have hugely better hardware and I suspect things will be about the same speed because the software will have grown and you’ll have more bling to just slow the hardware down and it will hopefully be a lot more portable and that may be one reason why performance may not be that much better just because you can’t afford to have a battery pack that is that big.

But I don’t think the OS market will really change.

Jim Zemlin: Virtualization. Game-changer? Not that big of a deal?

Linus Torvalds: Not that big of a deal.

Jim Zemlin: Why do you say that?

Linus Torvalds: It’s been around for probably 50 years. I forget when IBM started offering virtualization on their big hardware. Maybe not 50 years, but it’s been all around for decades and it’s very interesting in niche markets – I think the people who expected to change things radically are just fooling themselves.

I’d say that the real change comes from new uses, completely new uses of computers and that might just happen because computers get pushed down and become cheaper and that might change the whole picture of operating systems.

But also, I’d actually expect that new form factor is in new input and output devices. If we actually end up getting projection displays on cell phones, that might actually change how people start thinking of hardware and that, in turn, might change how we interact and how we use operating systems. But virtualization will not be it.

Apparently, Torvalds has the exact opposite opinion from one of our writers. Jeff Byrne, senior analyst and consultant at Taneja Group, recently wrote about exactly how virtualization is going to change the operating system game.

Byrne writes:

As its uses continue to grow, server virtualization will pose a major threat to the strategic position that the general-purpose operating system has long held in the x86 software stack. In this situation, Microsoft in particular has a lot to lose. So do Linux and Unix vendors, but these OSes do have advantages in a virtualization setting.

He goes on to suggest that Linux and Unix OSes will likely have increased adoption rates as virtualization puts a large dent in the one-operating-system-to-one-server modus operandi, and because Windows users are becoming frustrated with licensing costs, technical issues in new releases that commonly aren’t resolved until the second or third release, and security vulnerabilities.

IT pros, I’m turning it over to you. Let the debate begin. Has your shop seen increased Linux or Unix adoption with virtualization? Do you think virtualization will change the OS market?


February 7, 2008  1:13 PM

Installing the VMware Server MUI on Centos 5.1 x86_64

Joseph Foran Profile: Joe Foran

As a follow-up to my prior post on getting CentOS 5.1 (x64) to host VMware Server, this is a short set of instructions for getting the MUI installed on your 64-bit CentOS box. I didn’t mention it last time because it’s a separate download and install, and I don’t personally install the MUI in VMware Server unless I have a compelling reason (on Windows or Linux hosts). This is documented on VMware’s website as well, but it bears some simplification from a two-pager to a six-liner (condensed into a single shell session after the list below).

  1. Download the management interface (VMware-mui-1.0.4-56528.tar.gz) and extract (tar zxf VMware-mui-<xxxx>.tar.gz)
  2. Update your compat-db (yum update compat-db.x86_64 0:4.2.52-5.1 or just yum update compat-db to get ’em all) if you haven’t already.
  3. Browse to the directory you extracted your MUI setup files to and run the installer (./vmware-install.pl)
  4. Accept / Change options as needed.
  5. Start the http daemon (/etc/init.d/httpd.vmware start)
  6. Browse to https://YOUR.HOSTNAME.HERE:8333
  7. Enjoy.
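Condensed into a single shell session, and assuming the tarball sits in the current directory and extracts to a vmware-mui-distrib directory (that directory name is my assumption, based on how the VMware Server tarball unpacks; adjust to whatever directory yours creates), the steps look roughly like this:

# 1. Extract the management interface tarball
tar zxf VMware-mui-1.0.4-56528.tar.gz
# 2. Update compat-db if you haven't already
yum update compat-db
# 3-4. Run the installer, accepting or changing options as prompted
cd vmware-mui-distrib
./vmware-install.pl
# 5. Start the MUI http daemon
/etc/init.d/httpd.vmware start
# 6. Then browse to https://YOUR.HOSTNAME.HERE:8333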

This how-to (and my prior post) should also work on other RHEL clones (like Whitebox, when they get WBEL 5 out), and on RHEL itself.


February 7, 2008  1:10 PM

Virtualization strategies for the SMB

Rick Vanover Profile: Rick Vanover

Small and medium businesses (SMBs) are unique in their approach to virtualization. At a recent VMware Users Group (VMUG) meeting, it became clear that virtualization is much easier for the large enterprise to embrace, whereas smaller IT shops have an entirely different dynamic. Here are three strategies SMBs currently employ when approaching virtualization:

Free tools

An SMB may not have the money to jump into an enterprise virtualization management suite, so free tools work nicely in its environment. Free tools such as VMware Server, Microsoft Virtual Server, and Citrix XenServer Express Edition offer virtualization at various levels that will generally meet requirements. The free virtualization packages have their benefits, but they also have limitations, which revolve around storage management, high availability and redundancy.

Cite disaster recovery

The SMBs present at the VMUG frequently cited disaster recovery to justify the upfront expense of decent virtualization management software. Disaster recovery tends to be a better argument for funding than simply stating that paid virtualization management software is superior to the free equivalents.

All or none

SMBs tend to shoot for an all-or-none approach to virtualization. Once they commit to virtualization, serious reasoning is required to explain why any server cannot or should not be a virtual system. This carries over to hardware purchases: all new server purchases must be capable of the virtual host role, regardless of how soon the desired end state of the virtual environment will actually arrive.

It is tougher for the smaller IT shops

Smaller organizations face a particular challenge in taking the virtualization plunge because of their smaller budgets. The challenge for SMBs is finding a way to justify the expense, not in using virtualization itself; the benefits are obvious. The most difficult step, however, is getting business buy-in for the potential added expenses, and this is where SMBs should focus their efforts when trying to convince executives to adopt a virtualized system.


February 6, 2008  1:04 PM

Fixing the network in Dom-Us for Ubuntu hosts running Xen

Kutz Profile: Akutz

I ran into a problem recently where the network would not come up in a Dom-U after the initial domain creation. All subsequent reboots would result in an unprivileged domain without network connectivity. Here is an example of the error that would occur if I attempted to restart the network manually:

root@odin:~# /etc/init.d/networking restart
* Reconfiguring network interfaces...
eth0: ERROR while getting interface flags: No such device
SIOCSIFADDR: No such device
eth0: ERROR while getting interface flags: No such device
SIOCSIFNETMASK: No such device
eth0: ERROR while getting interface flags: No such device
Failed to bring up eth0.

So what is going on? Well, it turns out that udev was binding the eth0 name to the initial MAC address of the domain’s virtual NIC (which, by default, changes on every reboot). Hence, when the domain comes online a second (and third, and so forth) time with a new MAC address, it does not recognize the NIC.

The solution is quite simple: edit the udev rules that bind NICs to MAC addresses. In Ubuntu this is in two places. You need to edit “/etc/udev/rules.d/70-persistent-net.rules” and comment out any lines that look like this:

SUBSYSTEM=="net", DRIVERS=="?*", ATTRS{address}=="00:16:3e:2b:2e:7b", NAME="eth0"

And because the aforementioned file is automatically generated, you also need to edit “/etc/udev/rules.d/75-persistent-net-generator.rules” and comment out any lines that look like this:

SUBSYSTEMS=="xen", ENV{COMMENT}="Xen virtual device"

I also commented out the following line for good measure:

ACTION=="add", SUBSYSTEM=="net", KERNEL=="eth*|ath*|wlan*|ra*|sta*", NAME!="?*", DRIVERS=="?*", GOTO="persistent_net_generator_do"

This will prevent *any* generation. Anyway, once you make these changes and restart the Dom-U, the network will come up every time without a hitch.
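Another way to sidestep the problem entirely, assuming the Dom-U is defined by a config file under /etc/xen (the file name below is just an example), is to pin a fixed MAC address on the vif line so that the address, and therefore the persistent udev rule, stays the same across reboots. MACs in the 00:16:3e range are reserved for Xen guests; the bridge name will depend on your setup.

# /etc/xen/odin-domu.cfg -- example file name
# Assign a fixed MAC so the Dom-U sees the same NIC (and keeps eth0) on every boot
vif = [ 'mac=00:16:3e:2b:2e:7b, bridge=xenbr0' ]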

Hope this helps!


February 6, 2008  12:49 PM

Virtuozzo 4.0: Worth considering

Joseph Foran Profile: Joe Foran

If your company is buying into the virtualization game, you may want to consider Virtuozzo 4.0.

I work in a VMware shop (one of these days I’ll post on my 3.5 experiences), but I follow the virtualization market, and since SWsoft/Parallels released Virtuozzo 4.0, I think there’s room for Parallels in the market.

Right now, SWsoft Parallels/Virtuozzo owns virtualization in the Web-hosting provider space. Their other products have a lot of traction in that space too (think Plesk and SiteBuilder). And of course, there’s the Mac virtualization product to beat – Parallels Desktop, the gold-medal standard that runs a few laps around poor Fusion, and the forthcoming Parallels Server which will let people virtualize OS X Server (as long as it’s all on Mac hardware!).

From their recent market moves, the company seems to be trying to take on Citrix’s Xen and Microsoft’s Virtual Server, and perhaps even make a run at some of VMware’s market share. Once again, they are touting their OS-encapsulation variation on virtualization with Virtuozzo, which just hit version 4.0.

From the Parallels blog:

Parallels Virtuozzo Containers is different because it virtualizes the OPERATING SYSTEM, not the hardware. This means that you can install an OS (Windows or Linux) and then run workloads off that single kernel in isolated execution environments that we call “containers.” Because all of the containers are working in direct contact with real hardware and are all working off that one OS install, performance is exceptional…about 97-99% of native, regardless of how many containers are running. And, container footprints are tiny – only 10 MB of RAM and 45 MB of disk space required at the bare minimum.
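To make the container model concrete, here is roughly what creating and starting a container looks like with vzctl, the command-line tool Virtuozzo shares with OpenVZ, its open-source sibling. The container ID, template name and IP address are examples only, and Virtuozzo 4.0’s own management interface wraps most of this anyway.

# Create a container from an OS template, then give it an IP and hostname
vzctl create 101 --ostemplate centos-5-x86_64
vzctl set 101 --ipadd 192.168.1.101 --hostname test01 --save
# Start the container and run a command inside it -- note it reports the host's kernel
vzctl start 101
vzctl exec 101 uname -r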

From a product feature view, you get many of the same features that one finds in other high-end products like VMware and XenServer:

  1. Management Interface – Groups virtualized systems logically, making them easier to manage. You can also assign role-based administrative and reporting rights to users.
  2. P2V Tool – Allows you to migrate from your old virtualization platform of choice to Virtuozzo, and even to upgrade Win2K servers to Win2K3 as part of the migration!
  3. Backup – Allows you to take a virtual machine and make a backup while the machine is running, and then stores the backup on another host.
  4. Templates – Allows you to install a virtual machine, make it into a template, and deploy new machines based on that template.
  5. CPU restriction – Since this is not a true virtual machine, the guests typically see all of the CPUs. This can now be restricted on Windows systems so that guests see only a set number of processors.

Some problematic areas in the past were with OS-level clustering and network load balancing in the virtual machines (now called “virtual environments,” since they really aren’t separate machines).

The new version appears to address these issues and improves upon the handling of multi-NIC hosts and how particular virtual environments see and use those NICs (as well as other devices such as USB external drives, USB product key fobs, etc.). Virtuozzo containers, like most virtualization products, support both 32-bit and 64-bit virtual environments.

In a Web-hosting environment, this is a great tool because of the massive number of sites that can be provided to clients. Considering that the average corporate data center is not entirely different from a hosting provider (especially when you start talking about chargeback), Virtuozzo may work out well, but the pros and cons must both be considered.

Virtuozzo won’t do much in file/print or in Terminal Services, but in putting out Web-based applications to users, or even standard client-server apps, Virtuozzo has a lot of the same advantages as VMware, Xen, Virtual Server, VirtualBox and so on in regards to server consolidation and controlling hardware growth.

I wouldn’t count on the thousands-to-one ratio often touted in the Web-hosting space, which rests on the very small footprint required per Web site, but there is undoubtedly a much higher container-per-host ratio than with traditional hypervisor-based virtualization. There is a risk of failure if something hoses the operating system (kernel panic on a Linux box, BSOD on a Windows box, driver dropout on either, etc.) because that OS runs the entire show – but that’s the same on any platform: lose the host OS, lose all the guests.

The risk may be higher on Virtuozzo hosts because of the difficulties that come with a single OS underpinning every container – things like “DLL hell” and missing dependencies, which are less pervasive on hypervisor hosts (but that remains to be seen).

Also, a Virtuozzo server running a given operating system can run only guests of that same operating system (or, in the case of Linux, of that exact same kernel). Lastly, there is more risk involved if your hardware isn’t redundant, and this is where the business models differ:

  • Traditional Hypervisor: Cheaper (commodity) hardware front-end, expensive storage, smaller ratio of virtualized machines.
  • Container Virtualization: Expensive (clustered commodity or other redundant HW) front end, expensive storage, higher ratio of virtualized machines.

A good analysis of a virtualization project proposal will include Virtuozzo as a candidate, not only for the features but because it is important to review the overall costs. A simplified example (one that I deliberately make come out equal so as not to show any bias), with no licensing, networking, operations, or soft costs included, might look like this:

Proposal 1: Virtuozzo – Virtualize 100 servers

  • 2 x (Passive-Active) Clustered Server: $50,000
  • 1 x New SAN: $100,000
  • Total: $150,000

Proposal 2: VMware / Xen / Etc. – Virtualize 100 servers

  • 10 x Generic Server: $50,000
  • 1 x New SAN: $100,000
  • Total: $150,000

Bearing all of this in mind, it’s time to add Virtuozzo to the watchlists when virtualization comes up.


February 6, 2008  12:36 PM

Staffing and virtualization, a response

Joseph Foran Profile: Joe Foran

I read an article here on SSV entitled “Virtualization and Staffing” the other day and decided I had to add my two cents. I was going to sound off in a comment on the actual page, but I decided this may be a better forum given the length of my response. It’s a great article, and it hits the major point of staffing and the virtualized data center well, although I partially disagree with the statement that:

“Virtualization does not reduce the number of logical servers; it only changes their location and nature. Staff is still needed to manage every virtual machine instance, associated application and database. So while the skills required to manage the servers change, the actual workload doesn’t necessarily decline.”

I agree with the last sentence 50%: while I believe that the skills do change, I have also seen that the workload goes down, quite dramatically. The workload is reduced, particularly in large data centers, for hardware technicians. There are fewer JBODs and DADs to fail, fewer processors to burn out, fewer RAM sticks to go bad, fewer motherboards to fail, and so on. Where these are handled in-house with onsite staff, the need for this kind of work is reduced (not eliminated). When these hardware support services are handled by an outside agency (as is often the case), the reduced staff required to manage hardware failures translates differently – contracts are often summed up as “client pays X dollars per group of Y machines.” This price reflects the vendor’s estimates of having to staff for the average failure rates of the hardware being supported. Thus, as the hardware count goes down, the contracts are renewed for fewer machines, the client pays less, and the vendor needs fewer staff. Somewhere, there will be a cost reduction through workforce reduction.

Then there is the often-overlooked area of lights-on indirect support. I remember well the change control meetings where every meeting started off with a rubber stamp on “Clean data center – vacuum server spaces, vacuum rack tops, vacuum floor.” There were also the frequent “Wire new racks xxx, yyy, zzz for Ethernet, power, cooling, fibre, KVM” items. With less need to rack up new servers, there’s less need for electricians, cablers, HVAC techs, cleaning staff, etc.

Overall, it’s a great article – it gets the point across when it comes to programmers, server admins, and other computer-focused jobs that aren’t going away and may even grow as sprawl increases. That said, we can’t overlook the savings on the other side – the operations behind the operating systems.

