The Virtualization Room


February 25, 2008  11:39 AM

VMware Server guru discusses ESX, Microsoft Hyper-V

Profile: Ryan Shopp

Author and programmer Eric Hammersley launched into using VMware Server when he realized virtualization’s space- and cost-saving potential. At the time, he was installing multiple server and switch racks on a ship for the U.S. Navy. His book Professional VMware Server: Programmer to Programmer discusses installing and configuring VMware Server, tips for creating base images, best practices for organizing an image library, integrating and leveraging VMware in your environment, and more.

In this interview, Hammersley shares his thoughts on Microsoft Hyper-V and VMware ESX, explains basic virtualization architectures, and compares the three major VMware offerings: VMware Server, VMware ESX and VMware Workstation.

SearchServerVirtualization.com: Your book Professional VMware Server: Programmer to Programmer makes it clear that you’re a fan of VMware. Do you think Microsoft Hyper-V will affect VMware adoption?

Eric Hammersley: No, not yet. Virtualization is a disruptive technology (to use a rather brainy marketing term), and VMware leads the movement, though Microsoft has two advantages in this battle. The first is the ability to observe trends in the technology. Microsoft has the benefit of VMware having been in the virtualization market for many years; no doubt it has been following VMware’s technological advances and analyzing that information all along. If you dig deep into Hyper-V you’ll see that its virtualization layer and approach are different from VMware’s. Did Microsoft discover a better way? Only time will tell.

The second advantage Microsoft has is sheer market dominance in the products that a very large percentage of VMware customers want to virtualize. If you’ve ever tried to virtualize an Exchange Server, you know what a treat it is: under any real load, a virtualized Exchange Server doesn’t perform well enough for production use. Microsoft has a clear and rather major advantage in that regard. It can develop its server platforms and enterprise applications to utilize Hyper-V in a way that no one else can.

Do I want Hyper-V to succeed? Yes, mainly because I’m out here on the front lines virtualizing domain controllers, Exchange Servers (or trying to), SQL and many other Microsoft products. Hyper-V done right will make my job easier; done wrong and I’ll be spending time explaining to people with checkbooks why their virtualization initiative didn’t work well.

We could also see some interesting developments from Citrix’s acquisition of XenSource; we should all keep an eye on them. In the end, we all benefit from a diverse and rich selection of products.

You’ve mentioned previously that you didn’t work with VMware ESX because you prefer hosted architecture. Can you explain hosted or operating system virtualization versus the hypervisor or bare-metal approach?

Hammersley: Hosted architecture can be seen in products such as VMware Workstation, VMware Server, and Microsoft Virtual PC, to name a few. A hosted architecture relies on an underlying operating system to provide driver support for the machine’s hardware. This reduces the virtualization software’s job in a sense, because it doesn’t need to know how to talk to X or Y piece of hardware, just how to interact with the hardware via the host operating system, whether that’s Windows, Linux, or something else.

A hypervisor is a different beast, more of a bare-metal approach, if you will. You can see the hypervisor approach in VMware ESX Server. While a hosted architecture relies on the host operating system to provide driver and hardware support, a hypervisor performs this on its own. The drawback is obviously hardware support: a typical hypervisor package provides its own kernel and drivers for select pieces of supported hardware, removing the need for a host operating system to provide the driver support. The big advantage is increased resources for your virtual machines, but it comes at an increased cost.

Has the release of ESX 3.5 prompted you to reconsider VMware’s ESX product suite?

Hammersley: My problems with ESX in the past have always been the limited amount of approved hardware, along with the high cost. As an IT professional who has spent a large portion of his career in smaller businesses, the price has always kept me away. On the bright side, at least VMware doesn’t charge per core.

A few weeks ago, I was brought onboard for a new project that will roll out ESX Server to thousands of users in many key data centers across the country. The specifics are not important, but my love for VMware has grown even more. ESX Server is a great product — VMware Server on steroids, if you will. It’s sound, stable, and flexible. Its scalability and plug-in modules such as VMotion and High Availability make it a perfect addition to the bigger-business data center looking to modernize, streamline its processes, and save some rack space and cash once the initial cost is recouped. I’m looking forward to really digging into ESX and plan on sharing my experiences with it, especially with the API.

What are the differences between VMware ESX, VMware Server and VMware Workstation?

Hammersley: Well, all three squeeze out the virtualized goodness, but in slightly different ways. Right off, ESX Server is what’s called a bare-metal solution. While most applications today run atop an operating system, be it Windows, Linux, or something else, ESX Server is everything rolled into one: it provides the operating system and the virtualization technology right out of the box. Because ESX provides the operating system, it offers a very tight and robust virtualization layer between the hardware and the guest operating systems.

In addition, the ESX operating system has a very small footprint, only about 32 MB, which leaves a greater amount of system resources for your virtual infrastructure to utilize. It is, however, pretty pricey, so ESX is more of the Cadillac, if you will, of the product line.

Next would be VMware Server, which is what my title is based on. VMware Server is really a child born from a previous product called GSX Server. Both ESX and GSX ultimately provided the same thing: server-based virtual machines served out via client applications installed on the desktop. GSX’s reduced price and greater flexibility in terms of hardware made it a popular substitute for ESX.

Then, in 2006, VMware announced the demise of GSX Server in favor of its new, and free I might add, VMware Server product. Why they made the shift away from GSX in favor of a new free product is a question for them; however, the move caused an extraordinary amount of excitement. With the new VMware Server we’d get the same server-based virtual machines, still served out via the VMware client application to the desktop, still with the APIs and automation capability, but with a reduced price tag. What more could you want?
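For the curious, that API and automation capability generally boils down to scripting around tools like the vmrun utility that ships with VMware’s VIX tooling. The sketch below is purely illustrative rather than anything from Hammersley’s book: the host URL, credentials and .vmx path are placeholders, and the exact flags differ between VMware Server releases, so treat it as a starting point rather than a recipe.

import subprocess

# Placeholder connection details; substitute your own host, credentials and .vmx path.
VMRUN_BIN = "vmrun"                      # vmrun ships with VMware's VIX tooling
HOST = "https://myserver:8333/sdk"       # VMware Server 2.x management URL (placeholder)
USER, PASSWORD = "admin", "secret"
VMX = "[standard] ubuntu-base/ubuntu-base.vmx"

def vmrun(*args):
    """Run a vmrun command against the VMware Server host and return its output."""
    cmd = [VMRUN_BIN, "-T", "server", "-h", HOST, "-u", USER, "-p", PASSWORD, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

print(vmrun("list"))     # list the virtual machines currently running on the host
vmrun("start", VMX)      # power on a registered virtual machine without touching the console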

Finally there’s VMware Workstation. The Workstation flavor is just that: a workstation-only virtualization product. For many years it lacked the APIs required for automation, the ability to run virtual machines as services in the background, and centralized management, which left it rather crippled in terms of functionality. Many of the features I mentioned above, however, have made it into the newest VMware Workstation, including API automation, a long-awaited feature. Honestly, I spend more of my time in VMware Workstation than in any other product because of its snapshot support and ease of use. That doesn’t mean it’s a better product than VMware Server; it just fits my needs a little better.
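That snapshot workflow can be scripted with the same vmrun utility against a local Workstation virtual machine. Again, this is only a hypothetical sketch with made-up paths and snapshot names, not a procedure from the interview.

import subprocess

VMX = "C:/VMs/dev-box/dev-box.vmx"   # placeholder path to a local Workstation VM

def vmrun_ws(*args):
    # "-T ws" targets a local VMware Workstation installation; no host or credentials needed.
    subprocess.run(["vmrun", "-T", "ws", *args], check=True)

vmrun_ws("snapshot", VMX, "before-patch")           # capture the VM's state before a risky change
# ... apply the patch, break things, experiment ...
vmrun_ws("revertToSnapshot", VMX, "before-patch")   # roll the VM back to the saved state
vmrun_ws("start", VMX)                              # a reverted VM typically comes back powered off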

Do you prefer VMware on Linux or VMware on Windows?

Hammersley: It depends. I prefer VMware Server on Linux when I really need to squeeze out every bit of performance I can get. Linux, on average, leaves a greater amount of system resources available because the operating system footprint is considerably smaller than Windows’. The catch, however, is that configuration and management can be a bit of a trick sometimes, and if you run a mostly Windows shop you’ll have trouble getting staff buy-in.
VMware Server on Windows offers a simple install and configuration, and management is also pretty easy. The downfall, and it can be huge, is system resources. The memory and CPU available to virtual machines are drastically lower on Windows than on Linux, which means fewer virtual machines can run on a Windows install than on a Linux install with comparable hardware.

In the end, it’s really about what fits into your environment.

February 22, 2008  5:59 PM

Upgrades, virtualization, and a tantalizing glimpse of the future

Profile: Mark Schlack

I’ve recently been through a number of operating system upgrade experiences on the small network I maintain to learn about new technologies, and it’s really made me hunger for an all-virtual future. I just installed Windows Server 2008 in a VM (on VMware Server), and contrasting that with the pain of upgrading several machines to Vista, let me just say that I have seen the future, and it’s vastly superior to the present.

The ideal scenario, IMO:

Every machine ships with a virtualization layer, or at least that’s what you install on bare metal.

Every operating system comes as a VM, in a file, with a script to customize it as needed.

Every application comes in a virtual application machine — I’ll call it a virtual application component (VAC) to avoid confusion. Adding it into a VM CANNOT kill the base OS or paralyze other apps in the odd ways that apps sometimes do.

Every OS or app installation is instantly reversible and recoverable.

We’re not very far from that, eh? You can almost see Nirvana over the horizon. I’m sure I’m missing some key ingredient that will stop it all from happening (greed, maybe?). What do you think? Can we get there?

It does make me wonder what Microsoft’s Hyper-V strategy will be. Will Hyper-V be a candidate for that bare-metal layer? Or is that role just going to go to ESX or Xen (not that there’s anything wrong with them, though ESX is a bit much for desktops)? Will Microsoft, in a short-sighted attempt at making sure you buy at least one copy of WS08, force you to install Hyper-V and the OS together?

Why I say short-sighted, among other reasons, is that if today’s computers had a Hyper-V layer, Vista would be having a different life. Or at least the possibility of one. There are many reasons why Vista hasn’t taken off, but at least one is the difficulties of upgrading. Here are some I’ve experienced:

1. If you had partitioned your hard drive for XP and created a boot partition with, say, 20 GB, you couldn’t directly upgrade to Vista. Vista wants something like 19 GB of free space to do an upgrade even though it doesn’t take up nearly that much space. If you had unallocated space on the drive, you probably couldn’t expand the boot volume into it, because for reasons I don’t understand, most XP boot volumes can’t be grown. Nor does the Vista install routine have any special mojo to accomplish that. So you had to blow away XP and do a clean install, and then reinstall all your apps. Such fun!

2. Then there are the strange upgrade policies for Vista itself. Now, I admit that I haven’t tracked down all the ramifications of enterprise licensing, but at the retail level, it sure is screwy. Let’s say you have a machine with an OEM copy of a lower edition of Vista and you discover you need some of the features of Ultimate (oh yeah, I can’t live without BitLocker). You might think that buying the retail version of Ultimate would allow you to upgrade. Not a chance! Only buying a special upgrade version will do that — the full retail version can only be used to install a clean copy, once again forcing a reinstall of all apps. Who thought that was a good idea?

In both cases, wouldn’t it have been so much nicer to snapshot your VACs to a network drive, install the new VM appliance, assign it disk resources from anywhere, reconnect your VACs and go? Maybe in the near future, your VACs would live on the network, either public or private, and the distinction between locally and remotely hosted apps would be largely transparent to the user.

Yes, I know there are all these nasty little issues of hardware compatibility and so on. And then there’s the occasional “change in the driver model” that renders half your hardware obsolete. But if these problems are sorted and isolated into the right layers, I have to believe life will be easier. Call me a cockeyed optimist.


February 21, 2008  9:34 AM

Antivirus management issues in physical-to-virtual migrations

Profile: Rick Vanover

I am well into my company’s physical-to-virtual (P2V) migration for most general-purpose server systems, and it’s been pretty successful. But as our environment grew, we experienced a problem involving our virtual systems and our antivirus management system. In this blog post, I’ll explain the problem and tell you the solution so you can avoid a similar situation.

For most server systems, regardless of whether they are physical or virtual, maintaining a centrally managed antivirus package is a good strategy. This strategy includes regular definition updates, engine updates, policies for exclusions and scheduled full scans.

Let’s talk about the scheduled full scan. Historically, we regularly ran a full antivirus scan of the local file system on both the physical and virtual servers during off hours. This became a problem as the virtual environment became more populated.

We use the VKernel Capacity Analyzer and Chargeback virtual appliance to monitor the performance of our virtual environment. What I noticed is that during the off hours, we had an incredible spike in CPU utilization across all hosts and virtual machines. The spike ran at about 300% of our average CPU use for about two hours. We initially wanted to blame it on the full backup that happens close to this timeframe, but closer investigation led us elsewhere.

We noticed that the CPU spike also occurred on guest systems that sit in isolated networks for stage-configure or isolation-test roles. Because those isolated systems cannot communicate with the backup mechanism, we determined that the spike could not have been caused by the full backup.

Avoiding the CPU spike

Once we determined that the scheduled full antivirus scan of the local file system was the culprit, we decided that a staggered set of full scans was required to avoid this massive spike. On physical systems with local processing this is not a big issue, as they are generally idle during off hours; but applied to the virtual environment, simultaneous scans can cause unnecessary virtual machine migrations or performance alerts. So, in your migration strategies, be sure to consider any centrally scheduled activity like this and how it may affect your entire infrastructure — both physical and virtual.
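As a rough illustration of what staggering means in practice (the guest names, window and limits below are invented, not our actual schedule), the idea is simply to spread scan start times across the maintenance window instead of firing them all at once:

from datetime import datetime, timedelta

# Hypothetical list of guests; in practice this would come from your AV console or inventory.
guests = ["web01", "web02", "sql01", "sql02", "file01", "ad01", "app01", "app02"]

WINDOW_START = datetime(2008, 2, 21, 1, 0)   # 1:00 AM maintenance window (example)
WINDOW_HOURS = 6                             # spread scans across six off-peak hours
MAX_CONCURRENT = 2                           # never scan more than two guests at once

slot = timedelta(hours=WINDOW_HOURS) / (len(guests) // MAX_CONCURRENT)

for i, guest in enumerate(guests):
    start = WINDOW_START + (i // MAX_CONCURRENT) * slot
    print(f"{guest}: full scan at {start:%H:%M}")

Each batch of two guests gets its own slot, so the scan load trickles through the window instead of spiking every host at the same moment.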


February 20, 2008  3:36 PM

VMware VDI no bargain, analyst says

Profile: Alex Barrett

I just ran across an interesting article comparing the cost of VMware Virtual Desktop Infrastructure (VDI) with that of upgrading an existing “rich client” desktop to Vista. According to analyst Barb Goldworm, it ain’t all that pretty:

The simplified bottom-line pricing comparison (using the very rough example numbers given here) is this: Upgrading a physical desktop to Vista might cost $300 to $400 (per desktop) in hardware costs and $200 to $300 in software costs, totaling $500 to $700 per physical desktop. Delivering Vista through a virtual desktop architecture (VMware’s VDI in this example) and continuing to use existing PCs as rich clients accessing virtual desktops might cost $700 per VM desktop in infrastructure costs and $23 per VM desktop, if using VECD, totaling $723 per virtual desktop.

She goes into pretty extensive detail comparing the two models, and I wonder if anyone who’s evaluating VDI would care to comment on her assumptions or has done his or her own math. Alternately, has anyone thought about using a virtualization platform other than ESX for VDI? If you’re serious about VDI, exploring alternate virtualization platforms seems like an obvious place to trim some costs.
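To make the bottom line easier to eyeball, here are Goldworm’s rough example numbers reduced to a few lines of arithmetic (her figures, nothing new):

# Per-desktop costs taken from Goldworm's rough example numbers.
physical_upgrade = {"hardware": (300, 400), "software": (200, 300)}
vdi = {"infrastructure": (700, 700), "vecd_license": (23, 23)}

def total(costs):
    low = sum(lo for lo, hi in costs.values())
    high = sum(hi for lo, hi in costs.values())
    return low, high

print("Vista on an upgraded physical PC: $%d-$%d per desktop" % total(physical_upgrade))
print("Vista via VMware VDI:             $%d-$%d per desktop" % total(vdi))

That reproduces her $500 to $700 per physical desktop against $723 per virtual desktop.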


February 20, 2008  9:28 AM

FastScale Composer lives to see Version 2.0

Profile: Alex Barrett

This summer, SearchServerVirtualization.com wrote about a large VMware shop that uses an application virtualization and provisioning tool from FastScale Technology Inc. to keep its test lab under control. Now there’s news that FastScale will launch version 2.0 of its Composer product, a good sign for the Santa Clara, Calif., startup with just a handful of customers to its name (according to Lynn LeBlanc, the company’s CEO, “more than 10,” to be imprecise).

The company’s goal with FastScale Composer 2.0 is twofold, said LeBlanc: scalability and usability. “With our prior user interface, I felt we were scalable if you were dealing with hundreds of machines — but not if you were dealing with thousands of machines.” To scale up to these large environments, FastScale re-architected its user interface to represent server inventory and configuration details in a browseable, hierarchical format.

Other features include new support for Red Hat Enterprise Linux 5 (previously, FastScale supported only RHEL 4) and provisioning into VMware ESX 3.5 virtual machines.

As more customers get their hands on it, LeBlanc says the principal use cases and benefits of FastScale’s technology are beginning to crystallize. For one thing, the company’s “application blueprinting” technology results in application stacks that are significantly smaller than traditional ones (LeBlanc claims an average size reduction of more than 90% over a traditional image), making it a good way to optimize performance and resources. “The smaller the environment, the fewer resources it uses,” LeBlanc explained. For another, these smaller environments encourage dynamic provisioning — and reprovisioning — of IT environments. Finally, blueprinting automatically detects dependencies between an application and the underlying operating system, eliminating the time-consuming and manual process of creating static “golden images.”

Anyway, it’s interesting technology, somewhere between Scalent Systems’ provisioning and Ardence’s application virtualization, and it’s probably worth reading more about.


February 15, 2008  2:54 PM

Stressing the value of virtual test environments

Profile: Rick Vanover

We all know that test environments are an important part of the quality process within IT. Unfortunately, we don’t always follow through and provide good test environments. Virtualization changes all of that. This post shares some of the strategies I have found that truly enable robust test procedures, beyond basic testing on virtual machines.

100% Representative environment

With physical-to-virtual (P2V) conversion tools, I have been successful in creating test environments that are exact duplicates of what I am testing. A good example is a Windows server functioning as an Active Directory domain controller. Generally, I consider it bad practice to perform a P2V conversion on a domain controller for continued production use. But in the case of a test environment, a P2V conversion is a great way to get a “copy” of your Active Directory domain into your test environment. For those of you wondering about connectivity, the networking of course needs to be isolated from the live network.

With a P2V performed on a domain controller, I have had a great environment to test top-level security configurations, major group policy changes, and Active Directory schema updates. Outside of this type of test environment, these types of changes are usually difficult to simulate well. Sure we can create a development Active Directory domain, but it would not be fully representative of the live environment.

Performance testing

For many people who are new to virtualized environments, there may be some skepticism about virtual system performance. Providing test environments on virtual systems is nothing new, but our challenge is to make the test environments fully equivalent to the planned configuration. One strategy that can be implemented with VMware Infrastructure 3 is to have a small number of ESX hosts that are fully licensed and configured like the rest of the servers in the environment and refer to that as a development cluster or quality environment. In the development cluster, you can showcase high availability functionality, virtual machine migration, and fault tolerance to win the support of the business owners. Further, if the performance of the development environment is comparable to that of the live environment, confidence in the virtual systems increases.

Caution factor

With this added functionality, it is also a little easier to cause issues with the live systems. In the example of a fully functional Windows domain controller, serious issues could result if that system were accidentally connected to the live network. Because of this risk, a good practice is to make virtual networks that are completely isolated. This goes beyond simply creating a test network within the virtual environment: configure the virtual host itself so that those virtual machines have no connectivity to the live network at all. That makes file exchanges a little more difficult, but there are plenty of tools to assist in this space.
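To give a flavor of that host-level isolation on ESX 3.x, the sketch below (illustrative only; the vSwitch and port group names are made up, and you should verify the esxcfg-vswitch syntax against your own build) creates a virtual switch with no physical uplink, so its port group can only ever carry traffic between virtual machines on that host:

import subprocess

# Hypothetical names for the isolated switch and port group.
VSWITCH = "vSwitchIsolated"
PORTGROUP = "TestLab-Isolated"

def run(cmd):
    print("+", " ".join(cmd))              # echo each command before running it
    subprocess.run(cmd, check=True)

# Create a vSwitch and deliberately attach no physical NIC (never run esxcfg-vswitch -L here),
# so traffic on it can only flow between virtual machines on this host.
run(["esxcfg-vswitch", "-a", VSWITCH])
run(["esxcfg-vswitch", "-A", PORTGROUP, VSWITCH])
run(["esxcfg-vswitch", "-l"])              # list the switches to confirm the uplink column is empty

Attach your test domain controller’s virtual NIC to that port group and it cannot reach the production LAN, no matter what gets misconfigured inside the guest.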

Readers, I invite you to share your virtualization test environment strategies. What has helped you deliver a better test procedure by using a virtual test environment?


February 13, 2008  9:49 AM

Dilbert gets orders to virtualize!

Profile: Ryan Shopp

Scott Adams isn’t the first to create a cartoon about virtualization (see VirtualMan helps IT pros explain virtualization’s benefits). Even so, his short comics that graced yesterday’s and today’s Dilbert.com homepage highlight a simple truth: for IT managers, getting the green light to virtualize is a lot easier if the higher-ups have the idea first. Here’s a thought: if you want to virtualize and your C-levels aren’t quite paying attention, maybe you should slip a virtualization insert into one of their trade journals?

Yesterday’s Dilbert.com comic strip:

But, as today’s comic points out, even if your company approves a virtualization project, you still may not get to partake in the fun!


February 11, 2008  5:11 PM

VirtualMan helps IT pros explain virtualization’s benefits

Profile: Ryan Shopp

VirtualMan blog posts co-authored by Hannah Drake and Matt McDonough.

Trying to grasp the basics of server virtualization? Or, do you face the even more challenging task of explaining and/or pitching server virtualization projects to non-IT execs? Definitions from WhatIs.com or Wikipedia may help, or you could call in VirtualMan.

AccessFlow has created an amusing and informative comic series to explain virtualization as a technology. In the first installment, superhero VirtualMan helps frustrated data center manager Ivy Green explain the complicated technology’s benefits to a resistant executive in layman’s terms, saving her from trying to fit yet another physical server into her data center.

Check out this week’s comic and learn how to defeat execs who harbor “a hardware-centric view of the world” with VirtualMan Powers Down.

VirtualMan is not only an amusing diversion that IT professionals will appreciate for its tongue-in-cheek look at the problems inherent in today’s data center; it’s also a valuable educational tool for those who aren’t as familiar with virtualization as they’d like to be. So whether you’re looking for a quick laugh at your desk or want to learn more about virtualization with cartoon art accompaniment, AccessFlow’s VirtualMan is definitely worth a peek. Stay tuned; we’ll review more episodes in the coming weeks.


February 8, 2008  1:40 PM

Linus: wake up and smell the coffee

Profile: Ryan Shopp

This post is from Mark Schlack, VP/Editorial for TechTarget.

Linus Torvalds dismisses virtualization in a recent interview with the Linux Foundation:

“It’s been around for probably 50 years. I forget when IBM started offering virtualization on their big hardware. Maybe not 50 years, but it’s been all around for decades and it’s very interesting in niche markets – I think the people who expected to change things radically are just fooling themselves.”

For the record, virtualization is closer to 40 years old – you can read a fascinating article about the history of mainframe virtualization on Wikipedia. More to the point, the attitude that there’s nothing new under the sun (except the innovation being pumped by those saying that) has always puzzled me, and the notion that modern virtualization is just a replay of the mainframe has now started to bug me. It makes no more sense than saying that open source is nothing new because MIT and a fair number of people in the scientific community helped develop mainframe virtualization by sharing code with one another and with IBM (which gave them software to pilot).

Of course there’s a link between mainframe and x86 virtualization. Conceptually and practically they have a lot in common, and so on. But it’s the differences that are compelling and that will lead to the radical changes Torvalds discounts.

I’m no expert on mainframe partitioning, but from what I’ve gathered over the years (please, correct me if I’m wrong), here’s what sticks out for me:

  • Mainframes, circa 1970, cost around $100,000 a month to lease; you actually couldn’t buy one if you wanted to. Today you can virtualize a heck of a lot on a $20,000 box; actually, you can virtualize several systems on a $1,000 box. Back then, there wasn’t much you could buy for a mainframe that didn’t start with five figures.
  • As big an advance as the virtualization built into System/370 was, it only ever worked with IBM operating systems (and, I believe, for a time at least, only with IBM apps, before the government forced IBM to open up to third-party software companies). All of today’s x86 contenders can host multiple Linux distributions, multiple versions of Windows, and Solaris, and some also handle the Mac OS. Mainframes never even ran the operating systems of IBM’s other platforms.
  • The notion of an entire virtual machine contained in a file, portable from machine to machine regardless of their hardware configurations, is new to the current wave. Also new: dynamically reassigning VM resources on the fly, moving VMs without restarting the hardware, and failover clustering of VMs.

I doubt that’s a comprehensive comparison, but the point is clear. IBM’s mainframe virtualization was certainly a niche feature, used by timesharing providers (the 1970s version of hosting companies). It was also used for niche applications — according to Wikipedia, mainly by scientists who needed a more interactive environment than the batch-oriented, general-purpose mainframe operating systems of the time were geared for. But the current crop is exactly the opposite: a generally useful tool that will impact all but niche applications.

Current virtualization’s main link to the mainframe, IMO, is that it is enabling mainframe-style utilization, reliability and, ultimately, process-oriented management on the very commodity platform that tore that world asunder. If that sounds like back to the future, it isn’t. It may well represent the final triumph of general-purpose, commodity-based computing over the highly specialized, batch-oriented world of the 1970s mainframe. It’s actually kind of cool to realize that the first microprocessors were being developed right around the time mainframe virtualization made its appearance, and now the two technologies are converging.

Torvalds goes on in his interview to say that the truly radical developments on the horizon are new form factors. I don’t see it that way, but I’d be very surprised if new form factors don’t ultimately wind up using virtualization as a base technology. Think of a cell phone that can be completely upgraded with new capabilities because its software is a virtual appliance you download wirelessly. Now that’s a radical idea.


February 8, 2008  11:13 AM

Virtualization: Changing the OS game, or not?

Profile: Ryan Shopp

Every morning, I sign onto my corporate email account and start plowing through emails. This morning, our media group was alerted to an interesting blog post on The Linux Foundation blog. It’s a transcript of an interview with Linus Torvalds, developer of the Linux kernel.

Torvalds’ opinion on virtualization caught my interest:

Jim Zemlin: Let’s talk to conclude about the future. Where do you see Linux – and I know you don’t think too far ahead about this, but I’m going to prod you to say five years from now.

Is the world Windows and Linux? Does the operating system become irrelevant because everything’s in a browser? Is everything through a mobile device? Is there a new form factor that comes out of mobile tab? Where do you see things going?

Linus Torvalds: I actually think technology in the details may be improving hugely, but if you look at what the big picture is, things don’t really change that quickly. We don’t drive flying cars. And five years from now we still won’t be driving flying cars and I don’t think the desktop market or the OS market in general is going to move very much at all.

I think you will have hugely better hardware and I suspect things will be about the same speed because the software will have grown and you’ll have more bling to just slow the hardware down and it will hopefully be a lot more portable and that may be one reason why performance may not be that much better just because you can’t afford to have a battery pack that is that big.

But I don’t think the OS market will really change.

Jim Zemlin: Virtualization. Game-changer? Not that big of a deal?

Linus Torvalds: Not that big of a deal.

Jim Zemlin: Why do you say that?

Linus Torvalds: It’s been around for probably 50 years. I forget when IBM started offering virtualization on their big hardware. Maybe not 50 years, but it’s been all around for decades and it’s very interesting in niche markets – I think the people who expected to change things radically are just fooling themselves.

I’d say that the real change comes from new uses, completely new uses of computers and that might just happen because computers get pushed down and become cheaper and that might change the whole picture of operating systems.

But also, I’d actually expect that new form factor is in new input and output devices. If we actually end up getting projection displays on cell phones, that might actually change how people start thinking of hardware and that, in turn, might change how we interact and how we use operating systems. But virtualization will not be it.

Apparently, Torvalds has the exact opposite opinion from one of our writers. Jeff Byrne, senior analyst and consultant at Taneja Group, recently wrote about exactly how virtualization is going to change the operating system game.

Byrne writes:

As its uses continue to grow, server virtualization will pose a major threat to the strategic position that the general-purpose operating system has long held in the x86 software stack. In this situation, Microsoft in particular has a lot to lose. So do Linux and Unix vendors, but these OSes do have advantages in a virtualization setting.

He goes on to suggest that Linux and Unix OSes will likely have increased adoption rates as virtualization puts a large dent in the one-operating-system-to-one-server modus operandi, and because Windows users are becoming frustrated with licensing costs, technical issues in new releases that commonly aren’t resolved until the second or third release, and security vulnerabilities.

IT pros, I’m turning it over to you. Let the debate begin. Has your shop seen increased Linux or Unix adoption with virtualization? Do you think virtualization will change the OS market?

