The Virtualization Room

February 8, 2008  11:13 AM

Virtualization: Changing the OS game, or not?

Ryan Shopp Profile: Ryan Shopp

Every morning, I sign onto my corporate email account and start plowing through emails. This morning, our media group had been alerted to an interesting blog post on The Linux Foundation blog. It’s a transcript of an interview with Linus Torvalds, developer of the Linux kernel.

Torvalds’ opinion on virtualization caught my interest:

Jim Zemlin: Let’s talk to conclude about the future. Where do you see Linux – and I know you don’t think too far ahead about this, but I’m going to prod you to say five years from now.

Is the world Windows and Linux? Does the operating system become irrelevant because everything’s in a browser? Is everything through a mobile device? Is there a new form factor that comes out of mobile tab? Where do you see things going?

Linus Torvalds: I actually think technology in the details may be improving hugely, but if you look at what the big picture is, things don’t really change that quickly. We don’t drive flying cars. And five years from now we still won’t be driving flying cars and I don’t think the desktop market or the OS market in general is going to move very much at all.

I think you will have hugely better hardware and I suspect things will be about the same speed because the software will have grown and you’ll have more bling to just slow the hardware down and it will hopefully be a lot more portable and that may be one reason why performance may not be that much better just because you can’t afford to have a battery pack that is that big.

But I don’t think the OS market will really change.

Jim Zemlin: Virtualization. Game-changer? Not that big of a deal?

Linus Torvalds: Not that big of a deal.

Jim Zemlin: Why do you say that?

Linus Torvalds: It’s been around for probably 50 years. I forget when IBM started offering virtualization on their big hardware. Maybe not 50 years, but it’s been all around for decades and it’s very interesting in niche markets – I think the people who expected to change things radically are just fooling themselves.

I’d say that the real change comes from new uses, completely new uses of computers and that might just happen because computers get pushed down and become cheaper and that might change the whole picture of operating systems.

But also, I’d actually expect that new form factor is in new input and output devices. If we actually end up getting projection displays on cell phones, that might actually change how people start thinking of hardware and that, in turn, might change how we interact and how we use operating systems. But virtualization will not be it.

Apparently, Torvalds has the exact opposite opinion from one of our writers. Jeff Byrne, senior analyst and consultant at Taneja Group, recently wrote about exactly how virtualization is going to change the operating system game.

Byrne writes:

As its uses continue to grow, server virtualization will pose a major threat to the strategic position that the general-purpose operating system has long held in the x86 software stack. In this situation, Microsoft in particular has a lot to lose. So do Linux and Unix vendors, but these OSes do have advantages in a virtualization setting.

He goes on to suggest that Linux and Unix OSes will likely have increased adoption rates as virtualization puts a large dent in the one-operating-system-to-one-server modus operandi, and because Windows users are becoming frustrated with licensing costs, technical issues in new releases that commonly aren’t resolved until the second or third release, and security vulnerabilities.

IT pros, I’m turning it over to you. Let the debate begin. Has your shop seen increased Linux or Unix adoption with virtualization? Do you think virtualization will change the OS market?

February 7, 2008  1:13 PM

Installing the VMware Server MUI on Centos 5.1 x86_64

Joseph Foran Profile: Joe Foran

As a follow-up to my prior post on getting CentOS 5.1 (x64) to host VMware server, this is a short instruction on what to do to get the MUI installed on your 64-bit CentOS box. I didn’t mention it last time because it’s a separate download and install, and I don’t personally install the MUI in VMware server unless I have a compelling reason (on Windows or Linux hosts). This is documented on VMware’s website as well, but it bears some simplification from a two-pager to a six-liner.

  1. Download the management interface (VMware-mui-1.0.4-56528.tar.gz) and extract (tar zxf VMware-mui-<xxxx>.tar.gz)
  2. Update your compat-db (yum update compat-db.x86_64 0:4.2.52-5.1, or just yum update compat-db to get ’em all) if you haven’t already.
  3. Browse to the directory you extracted your MUI setup files to and run the installer script.
  4. Accept / Change options as needed.
  5. Start the http daemon (/etc/init.d/httpd.vmware start)
  6. Browse to https://YOUR.HOSTNAME.HERE:8333
  7. Enjoy.
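For reference, the steps above might be scripted roughly as follows. The tarball name is the one from step 1; the extracted directory name (vmware-mui-distrib) and installer script name (vmware-install.pl) follow the usual VMware tarball conventions and are assumptions here, so check your download:

```shell
# Rough sketch of the steps above -- run as root on the CentOS host.
# Assumptions (not from the post): the tarball unpacks to
# vmware-mui-distrib and the installer is vmware-install.pl.
tar zxf VMware-mui-1.0.4-56528.tar.gz   # step 1: extract the download
yum -y update compat-db                 # step 2: update compat-db
cd vmware-mui-distrib                   # step 3: enter the extracted dir
./vmware-install.pl                     #         run the installer
# step 4: accept/change options as prompted
/etc/init.d/httpd.vmware start          # step 5: start the MUI web daemon
# step 6: browse to https://YOUR.HOSTNAME.HERE:8333
```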

This how-to (and my prior post) should also work on other RHEL clones (like Whitebox, when they get WBEL 5 out). It should also work on RHEL.

February 7, 2008  1:10 PM

Virtualization strategies for the SMB

Rick Vanover Profile: Rick Vanover

Small and medium-sized businesses (SMBs) are unique in their approach to virtualization. At a recent VMware Users Group (VMUG) meeting, it became clear that virtualization is much easier to embrace for the large enterprise, whereas smaller IT shops have an entirely different dynamic. Here are three strategies currently employed by SMBs approaching virtualization:

Free tools

The SMB may not have the money to jump into an enterprise virtualization management suite, so free tools work nicely in their environment. Free tools such as VMware Server, Microsoft Virtual Server, and Citrix XenServer Express Edition offer virtualization at various levels that will generally meet requirements. The free virtualization packages have their benefits, but there are also limitations, chiefly around storage management, high availability and redundancy.

Cite disaster recovery

The SMBs present at the VMUG frequently used disaster recovery to justify the upfront expense of decent virtualization management software. Disaster recovery tends to present a better argument for funding than simply stating that management software for virtualization technologies is superior to the free equivalents.

All or none

SMBs tend to shoot for an all-or-none approach to virtualization. If a shop is utilizing virtualization, serious reasoning is required to explain why a server cannot or should not be a virtual system. This carries over to hardware purchases: all new server purchases must be capable of the virtual host role, regardless of how soon the desired end-state of the virtual environment will arrive.

It is tougher for the smaller IT shops

Smaller organizations have a particular challenge taking the virtualization plunge due to smaller budgets. The challenge for SMBs is to find a way to justify the expense, not to justify virtualization itself – the benefits are obvious. The most difficult step, however, is getting business buy-in for the potential added expenses, and this is where SMBs should focus their efforts when trying to convince executives to adopt a virtualized system.

February 6, 2008  1:04 PM

Fixing the network in Dom-Us for Ubuntu hosts running Xen

Kutz Profile: Akutz

I ran into a problem recently where the network would not come up in a Dom-U after the initial domain creation. All subsequent reboots would result in an unprivileged domain without network connectivity. Here is an example of the error that would occur when I attempted to restart the network manually:

root@odin:~# /etc/init.d/networking restart
* Reconfiguring network interfaces...
eth0: ERROR while getting interface flags: No such device
SIOCSIFADDR: No such device
eth0: ERROR while getting interface flags: No such device
SIOCSIFNETMASK: No such device
eth0: ERROR while getting interface flags: No such device
Failed to bring up eth0.

So what is going on? Well, it turns out that udev was binding the domain’s NIC to the initial MAC address of the domain’s virtual interface (which changes on reboot by default). Hence, when the domain comes online a second (and third, and so forth) time with a new MAC address, the domain does not recognize the NIC.

The solution is quite simple: edit the udev rules that bind NICs to MAC addresses. In Ubuntu this is in two places. You need to edit “/etc/udev/rules.d/70-persistent-net.rules” and comment out any lines that look like this:

SUBSYSTEM=="net", DRIVERS=="?*", ATTRS{address}=="00:16:3e:2b:2e:7b", NAME="eth0"

And because the aforementioned file is automatically generated, you also need to edit “/etc/udev/rules.d/75-persistent-net-generator.rules” and comment out any lines that look like this:

SUBSYSTEMS=="xen", ENV{COMMENT}="Xen virtual device"

I also commented out the following line for good measure:

ACTION=="add", SUBSYSTEM=="net", KERNEL=="eth*|ath*|wlan*|ra*|sta*" NAME!="?*", DRIVERS=="?*", GOTO="persistent_net_generator_do"

This will prevent *any* rule generation. Anyway, once you make these changes and restart the Dom-U, the network will come up every time without a hitch.
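The edits above can also be applied with sed. A minimal sketch – the match patterns assume the rule lines look exactly like the examples above, so back up the files first:

```shell
# Comment out the MAC-pinned NIC rules and the Xen generator rule
# described above. Run as root inside the Dom-U; back up first.
cd /etc/udev/rules.d
cp 70-persistent-net.rules 70-persistent-net.rules.bak
cp 75-persistent-net-generator.rules 75-persistent-net-generator.rules.bak

# 70-persistent-net.rules: comment out lines that pin a MAC to ethN
sed -i '/^SUBSYSTEM=="net".*ATTRS{address}/s/^/#/' 70-persistent-net.rules

# 75-persistent-net-generator.rules: stop re-generating rules for Xen NICs
sed -i '/^SUBSYSTEMS=="xen"/s/^/#/' 75-persistent-net-generator.rules
```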

Hope this helps!

February 6, 2008  12:49 PM

Virtuozzo 4.0: Worth considering

Joseph Foran Profile: Joe Foran

If your company is buying into the virtualization game, you may want to consider Virtuozzo 4.0.

I work in a VMware shop (one of these days I’ll post on my 3.5 experiences), but I follow the virtualization market, and since SWsoft/Parallels released Virtuozzo 4.0, I think there’s room for Parallels in the market.

Right now, SWsoft/Parallels’ Virtuozzo owns virtualization in the Web-hosting provider space. Their other products have a lot of traction in that space too (think Plesk and SiteBuilder). And of course, there’s the Mac virtualization product to beat – Parallels Desktop, the gold-medal standard that runs a few laps around poor Fusion – and the forthcoming Parallels Server, which will let people virtualize OS X Server (as long as it’s all on Mac hardware!).

Judging from its recent market moves, the company seems to be trying to take on Citrix’s Xen and Microsoft’s Virtual Server, and perhaps even make a run at some of VMware’s market share. Once again, it is touting its OS-encapsulation variation on virtualization with Virtuozzo, which just released version 4.0.

From the Parallels blog:

Parallels Virtuozzo Containers is different because it virtualizes the OPERATING SYSTEM, not the hardware. This means that you can install an OS (Windows or Linux) and then run workloads off that single kernel in isolated execution environments that we call “containers.” Because all of the containers are working in direct contact with real hardware and are all working off that one OS install, performance is exceptional…about 97-99% of native, regardless of how many containers are running. And, container footprints are tiny – only 10 MB of RAM and 45 MB of disk space required at the bare minimum.

From a product feature view, you get many of the same features that one finds in other high-end products like VMware and XenServer:

  1. Management Interface – Groups virtualized systems logically making them easier to manage. You can also assign role-based administrative and reporting rights to users.
  2. P2V Tool – Allows you to migrate from your old virtualization platform of choice to Virtuozzo. Allows for upgrading of Win2K servers to Win2K3 as part of the migration!
  3. Backup – Allows you to take a virtual machine and make a backup while the machine is running, and then stores the backup on another host.
  4. Templates – Allows you to install a virtual machine, make it into a template, and deploy new machines based on that template.
  5. CPU restriction – Since this is not a true virtual machine, the guests typically see all of the CPUs. This can now be restricted on Windows systems so that guests see only a set number of processors.

Some problematic areas in the past were OS-level clustering and network load balancing in the virtual machines (now called “virtual environments,” since they really aren’t separate machines).

The new version appears to address these issues and improves upon the handling of multi-NIC hosts and how particular virtual environments see and use those NICs (as well as other devices such as USB external drives, USB product key fobs, etc.). Virtuozzo containers, like most virtualization products, support both 32-bit and 64-bit virtual environments.

In a Web-hosting environment, this is a great tool because of the massive number of sites that can be provided to clients. Considering that the average corporate data center is not entirely different from a hosting provider (especially when you start talking about chargeback), Virtuozzo may work out well, but the pros and cons must both be considered.

Virtuozzo won’t do much in file/print or Terminal Services, but in putting out Web-based applications to users, or even standard client-server apps, Virtuozzo has a lot of the same advantages as VMware, Xen, Virtual Server, VirtualBox and so on in regards to server consolidation and controlling hardware growth.

I wouldn’t count on the thousands-to-one ratio often touted in the Web-hosting space, which stems from the very small footprint required per Web site, but there is undoubtedly a much higher container-per-host ratio than with traditional hypervisor-based virtualization. There is a risk of failure if something hoses the operating system (kernel panic on a Linux box, BSOD on a Windows box, driver dropout on either, etc.) because that OS runs the entire show – but that’s the same on any platform: lose the host OS, lose all the guests.

The risk may be higher on Virtuozzo hosts because of the difficulties of a single OS serving every container – things like “DLL hell,” missing dependencies, etc., that are less pervasive on hypervisor hosts (but that remains to be seen).

Also, one Virtuozzo server of a given operating system can host only environments of that same operating system (or, in the case of Linux, of that exact same kernel). Lastly, there is more risk involved if your hardware isn’t redundant, and this is where the business models differ:

  • Traditional Hypervisor: Cheaper (commodity) hardware front-end, expensive storage, smaller ratio of virtualized machines.
  • Container Virtualization: Expensive (clustered commodity or other redundant HW) front end, expensive storage, higher ratio of virtualized machines.

A good analysis of a virtualization project proposal will include Virtuozzo as a candidate, not only for the features but because it is important to review the overall costs. A simplified example (one that I deliberately made come out equal so as not to show any bias), with licensing, networking, operations and soft costs excluded, may look like this:

Proposal 1: Virtuozzo – Virtualize 100 servers

  • 2 x (Passive-Active) Clustered Server: $50,000
  • 1 x New SAN: $100,000
  • Total: $150,000

Proposal 2: VMware / Xen / Etc. – Virtualize 100 servers

  • 10 x Generic Server: $50,000
  • 1 x New SAN: $100,000
  • Total: $150,000
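Expressed as quick arithmetic, using only the illustrative figures from the two proposals above (not vendor quotes), both designs land at the same cost per virtualized server:

```shell
# Per-server cost for the two (deliberately equal) proposals above.
servers=100
virtuozzo_total=$((50000 + 100000))    # clustered pair + SAN
hypervisor_total=$((50000 + 100000))   # 10 generic servers + SAN
echo "Virtuozzo:  \$$((virtuozzo_total / servers)) per virtualized server"
echo "Hypervisor: \$$((hypervisor_total / servers)) per virtualized server"
```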

Bearing all of this in mind, it’s time to add Virtuozzo to the watchlists when virtualization comes up.

February 6, 2008  12:36 PM

Staffing and virtualization, a response

Joseph Foran Profile: Joe Foran

I read an article here on SSV entitled “Virtualization and Staffing” the other day and decided I had to add my two cents. I was going to sound off in a comment on the actual page, but I decided that this may be a better forum for the length of my response. It’s a great article, and it hits the major points of staffing and the virtualized data center well, although I partially disagree with the statement that:

“Virtualization does not reduce the number of logical servers; it only changes their location and nature. Staff is still needed to manage every virtual machine instance, associated application and database. So while the skills required to manage the servers change, the actual workload doesn’t necessarily decline.”

I agree with the last sentence 50%: while I believe that the skills do change, I have also seen that the workload goes down, quite dramatically. The workload is reduced, particularly in large data centers, for hardware technicians. There are fewer JBODs and DADs to fail, fewer processors to burn out, fewer RAM sticks to go bad, fewer motherboards to fail, and so on. Where these are handled in-house with onsite staff, the needs for this kind of work are reduced (not eliminated). When these hardware support services are handled by an outside agency (as is often the case), the reduced staffing required to manage hardware failures is often translated differently – contracts are often summed up as “client pays X dollars per group of Y machines.” This price reflects the vendor’s estimates of having to staff for the average failure rates of the hardware being supported. Thus, as the hardware count goes down, the contracts are renewed for fewer machines, the client pays less, and the vendor needs fewer staff. Somewhere, there will be a cost reduction through workforce reduction.

Then there is the often-overlooked area of lights-on indirect support. I remember well the change control meetings where every meeting started off with a rubber stamp on “Clean data center – vacuum server spaces, vacuum rack tops, vacuum floor.” There were also the frequent “Wire new racks xxx, yyy, zzz for Ethernet, power, cooling, fibre, KVM” items. With less need to rack up new servers, there’s less need for electricians, cablers, HVAC techs, cleaning staff, etc.

Overall, it’s a great article – it gets the point when it comes to programmers, server admins and other computer-focused jobs that aren’t going away, and may even grow as sprawl increases. That said, we can’t overlook the savings on the other side – the operations behind the operating systems.

February 6, 2008  12:34 PM

The new virtual desktop: G.HO.ST in the Shell

Joseph Foran Profile: Joe Foran

There is a product called the G.HO.ST Desktop that has been in alpha for some time. What this is, in a nutshell, is a desktop operating system (OS) that loads in the browser using Flash. It gives you 3 GB of space that is accessible not only through the GUI, but also via FTP. It includes a groupware application (Zimbra-based) that lets you access your POP email, a Web browser, and an IM client. It integrates with ZoHo and other tools for office-suite functionality. It can play your MP3s. It can do Flickr. It can… well… make soup for you! No, not really.

Granted, it’s one of many competing projects out there (check Wikipedia and Mashable for others), but it’s by far the most advanced and most integrated product so far, and it bears watching to see where this interesting turn in the desktop will take IT over the next few years. Coupled with the upcoming AIR from Adobe and decent application virtualization, there could be some real winning opportunities here. As an alpha, it could use a performance tweak and more integration (Citrix and/or 2X client, anyone?), but for a first view, it’s great.

Another Web-desktop that I was playing with is ZK Desktop (aka Zero Kelvin Desktop). This one’s interesting because it’s locally hostable, meaning that you can load it inside your network and run it on your servers without traversing the firewall to the big-bad Internet. The demo is literally a click-to-run app that I loaded on my XP Pro machine. That said, I already had the Java SDK installed and path environment variables defined, so don’t forget to download the Java SDK if you don’t have it before you launch the demo (the readme file will tell you this!). Once it loaded up (in nine seconds), I used their browser to check my mail, do some more work, and edit a short document that became this post. Easy as pie. Still no shell though… although one could be added without much difficulty.

In the same vein is the popular EyeOS, also a locally hostable Web-desktop, and one with quite a bit of contributor support. The photo below is me editing this blog, looking at the config page of my OpenFiler NAS/SAN box, and using SSHTerm to get into a remote Linux box.

In each case, I was able to log in, work via Web apps or native apps, and maintain a decent level of productivity. For the typical knowledge worker – say, a marketing or finance person – this sort of Web-desktop can be ideal, provided that the right applications can be run. In fact, if I may be so bold, this may be the actual future of desktop virtualization: it will have less to do with hosting a complete desktop on a virtualized piece of hardware and more to do with hosting a desktop on a Web server (perhaps on a virtual guest).

Hurdles? Lots: security, client-server apps, a Windows-centric world, etc. Opportunities? Lots: cross-platform application virtualization, Web-based application growth, etc.

January 30, 2008  10:29 PM

Technosium 2008: Virtualization and regulatory compliance, time to revisit?

Rick Vanover Profile: Rick Vanover

I am blogging from the first annual Technosium 2008 Global Conference and Expo in Santa Clara, Calif., where I have gotten some clarification on regulatory compliance, a topic that perplexes me and other virtualization managers and administrators I know.

Have we bent the rules of regulatory compliance to get virtualized systems online? With the configuration involved in putting virtualized systems on DMZ networks, Internet-facing networks, and customer or vendor networks that handle regulatory-sensitive material (governed by HIPAA, Sarbanes-Oxley and others), we may have created a compliance issue.

After talking to consultants and vendors, I came to the conclusion that it may be time to check and double-check the regulatory compliance aspects of any work on our virtualized implementations. There’s the possibility that, from a compliance perspective, we may not have segmented all of our regulatory-protected systems adequately.

Naturally, vendors have seen the problem rearing its head and are offering automated tools. At the show today, for instance, I met with Joao Ambra, security product manager for Modulo, which specializes in IT governance, risk and compliance management. Modulo’s Risk Manager Software product assesses regulatory compliance as well as risk assessment and audit services for organizations.

Besides describing Modulo’s product, Ambra gave advice on four key areas for determining if regulatory compliance is met:

Technology: This includes infrastructure components such as the network environment, databases, servers, computers and other physical elements.

Processes: Procedures such as backup, restore, disaster recovery, password policies and internal change control management make up a processes assessment.

People: Staff training levels on the technologies used and regulations applicable to the organization are important parts of the employee inventory.

Environment: The environment consists of physical access, facilities and risks associated with physical presence of computing resources (and protected data).

According to Ambra, the key strategy to collecting data for compliance-measurable items includes identifying where virtualization fits into the components. When a virtualized system hosts critical elements or regulatory-sensitive material such as databases, access to protected healthcare data, or fixed asset systems, the virtual host and all of its elements are subject to the same scrutiny as the underlying systems. This includes the hardware, database and security configurations for the virtual environment.

Virtualization, in principle, protects against server systems running too many roles while accessing protected data. However, this is all contingent upon the implementation of the virtual environment.

January 30, 2008  10:18 PM

Technosium 2008: FastScale Composer’s cool virtual server deployment

Rick Vanover Profile: Rick Vanover

Today at the Technosium Global Conference and Expo in Santa Clara, Calif., I saw a cool demonstration of FastScale Composer at work deploying virtual (and physical) servers.

FastScale Technologies Inc., maker of Composer, is a VMware alliance partner. FastScale Composer is a tool that facilitates building physical and virtual servers from bare metal with a configurable inventory of operating systems, applications and updates. FastScale Composer is suited for data centers with 250 servers or more.

I met with FastScale CEO Lynn LeBlanc and Richard Offer, vice president of engineering, who discussed FastScale Composer’s key feature:  a software component repository that contains operating system binaries, software packages, updates and user-configurable material available for systems.

Within the Composer interface, systems are allocated your configured inventory to use when they boot. The underlying technology for newly arriving systems is a pre-boot execution (PXE) environment that delivers the system’s configuration. Composer excels at this step because the package that arrives at the new system is just what’s needed. For example, in a demo I saw, a base Linux install for a Red Hat system arrived as an image of only 8 MB via PXE. While that is not an entire installation, the full inventory is made available to the servers via the repository.

What impressed me is this: should any element of the system need something from the repository, it is automatically retrieved. Also, servers can be built without the need to retrieve from the repository if you want everything available locally or if the repository is unavailable.

FastScale also has an interface into VMware. While you can perform traditional PXE builds on virtual systems as you would on physical systems, FastScale Composer’s Virtual Manager plug-in will populate new servers directly to VMware ESX. The Virtual Manager option to Composer will allow a virtual machine to be created as VMDK files and imported to ESX or VMware server. A small agent is required on at least one ESX server to receive the VMDK from Composer.

LeBlanc and Offer told me that a new version of FastScale Composer, coming soon, will incorporate Microsoft Windows support and an improved interface. For more information or to arrange a demo, visit the FastScale website.

January 30, 2008  2:27 PM

Technosium 2008: Virtualization takeaways for business continuity

Rick Vanover Profile: Rick Vanover

I am blogging from the inaugural Technosium Global Conference and Expo at the Santa Clara Convention Center. I’ll be signing on intermittently to provide you with everything I consider beneficial to the virtualization space.

First off is storage business continuity. I had an opportunity to attend a breakout hosted by Eric Herzog, the vice president of operations for Asempra Technologies. While business continuity is a topic we are all familiar with, attendees had the chance to look at the building blocks of a successful continuity strategy and how it applies to storage for virtualized systems. My takeaway from this breakout was that there need to be clearly defined goals, and that the business requirements should define the following within an organization’s service level agreement (SLA):

  • Ability to measure availability and uptime
  • Data loss tolerance for your business needs (financial and operational)
  • Disaster recovery time frames made specific to your environment
  • Solutions that reduce costs by combining technologies while reducing complexity
  • Ensuring that the SLA leverages the existing infrastructure (hardware, software, networks) as much as possible
  • Ensuring that there will be usable data on first recovery

Traditional approaches to data continuity for virtualized systems tend to respond with multiple technologies, various products, limited manageability and control, increased costs, and a cumbersome process that limits successful execution of the SLA. While each storage business continuity strategy has positives and negatives, the right solution will depend on your virtualization availability requirements. Among the newer storage technologies are drive snapshots, data deduplication and remote replication. Some solutions address the actual virtual machines, while others address the shared storage systems that host virtual environments. Remote replication provides the fastest recovery time for a virtualized storage system, but at the highest expense.

In summary, the business continuity strategy for virtualized systems needs to revolve primarily around the technology behind the storage systems in use. The other challenge for virtualization management is to define the goals of virtualization continuity via an SLA.
