The Virtualization Room

January 18, 2008  4:27 PM

GParted: Save yourself from virtual test screw-ups

Joseph Foran Profile: Joe Foran

Today’s episode is … screwing up your test environment and fixing it. As many know, I test management tools like some people switch socks. I have a simple theory on it: If what I’m using could be better, and there’s no business impact in switching, I’m going to switch. I went from Groundwork Open Source to Spiceworks 1.0 (see my prior post here) to Hyperic. Spiceworks 2.0 is out (and has been for some time) and it’s even spicier than the last version, although the Chili Peppers were missing for a while. So time to put it into the demo lab, right? Absolutely! A little short on time? Of course! And in that rush, I hosed that Spiceworks 2.0 demo box out of sheer stupidity. And I mean sheer stupidity. I didn’t follow any best practices. The worst thing I did was putting the app on a default XP build image. It didn’t need to be on a server OS for the initial testing phase, and I had a blank XP image ready to go. To channel my Inner Yoda: Saved myself some setup time I did. Hosed myself in the long run I had.

The default image for an XP Virtual Desktop is 8 GB in size. This is just enough space for the applications to be installed and to have room for updates, patches and whatnot. It makes maximum use of disk space on our NAS box without leaving the XP machine’s disk too stripped for future expansion. Or so I thought.

As it turns out, Spiceworks eats up disk space, about 4 GB for the one subnet I have it scanning. I didn’t have the helpdesk module in use except for my own testing, so that space count doesn’t include tickets with attachments. Considering that this machine had just under 4 GB of free space, I soon found myself with a problem, one that caused my Spiceworks install to fail, and the box to slow to a crawl.

What to do … should I rebuild? I could, but then I’d lose the data this thing had accumulated over the few days’ time it took to fill the disk, and I liked the results of the test. It was performing on par with Hyperic in terms of general scanning and reporting, although the drill-down detail was somewhat less, and the ability to customize access to a given resource was less (Spiceworks supports configuring only one Windows domain account and one *nix account, and uses SSH and WMI, as opposed to local agents reporting back in the way that Hyperic does). The alerting system was solid. The inventories were solid. Everything about the software was flying high and showing happy, except, of course, the one problem I was having – keeping it from eating the “entire” disk drive. Also, it was running great on XP, and with VMware, there’s no real difference in managing disaster recovery or backup for an operating system. I’m still ticked that there’s no version to install on a Linux box … but that’s for a different forum than this.

Should I recover as much disk space as I can and hope for the best? Sure, why not give it a try? I’ve already hosed myself here, in spite of being addicted to documentation and following templated procedures whenever possible. A short while later, no real luck. As fast as I removed files, Spiceworks kept growing as it recorded scanning results and archived historical data.

Time to fix the problem at the disk level. I did this not too long ago on my XP machine in Parallels, based on this blog post, so I just modified the process to fit the VMware Server 1.0.4 environment. First, to expand the disk, I opened a terminal on my MacBook, remoted into my VMware Server box via SSH, and executed the following command from within my virtual machine’s directory (on my boxes, this is /vmdir rather than the default):

vmware-vdiskmanager -x 14GB "My Disk Name Redacted.vmdk"
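For anyone repeating this, here’s a minimal sketch of that session. The host name and VM directory below are made up, and I’ve added one step worth taking that I skipped: copying the .vmdk first, so a failed resize isn’t fatal. Note that the disk has to be grown while the VM is powered off, and it shouldn’t have any snapshots.

ssh admin@vmserver.example.local                                   # remote into the VMware Server box
cd /vmdir/spiceworks-demo                                          # the VM's directory (non-default path on my boxes)
cp "My Disk Name Redacted.vmdk" "My Disk Name Redacted.vmdk.bak"   # safety copy before touching the disk
vmware-vdiskmanager -x 14GB "My Disk Name Redacted.vmdk"           # grow the virtual disk to 14 GB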

After a short time the disk was successfully expanded. Now, of course, just because the disk was expanded doesn’t mean the space is usable, which is where GParted comes in. Grab a copy from the wonderful folks who host it on SourceForge. GParted makes you walk through some configuration when you boot from it, just so you get the best display resolution and it boots properly, but most anyone with experience in IT can follow the process without me outlining it here (especially since it only involves pressing Enter twice, and it has been outlined elsewhere ad infinitum). I switched the removable drive over to my GParted ISO file, booted and configured. At this point, as you can see from the images, I had a full view of my disks and could manipulate them to my heart’s content:

I simply right-clicked, selected Resize/Move, and slid the slider from the current position to the end of the drive. The steps, graphically, are as follows:

Strangely enough, it errored out on me (and I didn’t get a cap … dangit). It turned out not to be a show-stopper, but it was strange. When I rebooted, the VM loaded straight into Windows, which threw me for a loop, since it is *supposed* to run a disk check first. Once that check completes, Windows should see the expanded drive. As this pic shows, there was a decided difference of opinion between what Windows reported as the drive’s capacity and what was actually available.

Not deterred, I manually scheduled a chkdsk for the next reboot, whereupon nothing changed. Since I had gotten an error before, I decided to try the resize again. This time I resized the partition to leave 1 MB of unallocated space at the end, applied the change (no errors this time), and rebooted.
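(In case it helps anyone following along, scheduling that check manually looks like this from a command prompt inside the VM; the drive letter is an assumption. Because the volume is in use, chkdsk offers to schedule the check for the next restart; answer Y.)

chkdsk C: /f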

Windows came up, ran a quick scan of the disk, and voila! It found the extra space just fine.

Was the error just an anomaly? I checked Google but nothing really popped. I checked the GParted forums — nothing really popped. Chalk it up to a slight hiccup at the wrong time, perhaps a bug as yet unseen. I posted the info and moved on.

So, as you can see, GParted helped me pull my test bacon out of the test fire before it test-burned. This is definitely a utility I will be keeping in my toolbag from now on.

January 15, 2008  3:48 PM

VMware ESX 3.5 upgrade path: Where to start

Rick Vanover Rick Vanover Profile: Rick Vanover

VMware’s flagship products, ESX 3.5 and VirtualCenter 2.5, have been available for a little over a month now. When the upgrades were made available, there was much excitement about the newly touted features. Many IT professionals hurried off to download the product updates, and then came to a collective stopping point: How do we upgrade ESX while it’s in use? Sure, we upgraded from ESX 3.0.1 to 3.0.2 with very little impact, but the change from 3.0.x to 3.5 seems worthy of more preparation, because the scope of the change is larger with some of the new features, like Storage VMotion. With that in mind, here is a simple upgrade strategy that many are adopting:

  • Allocate two ESX 3.0x systems as 3.5 candidates (not everyone will be able to do this, I realize).
  • Carve these two systems into their own cluster or data center.
  • Make sure all existing VMware DRS rules would be okay with two systems removed.
  • Upgrade or fresh install one of the systems to ESX 3.5.
  • Test migration from ESX 3.0x to the new ESX 3.5 system.
  • Test VMware Tools versioning and test any virtual machine upgrade tasks (see the sketch after this list).

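As a quick sanity check during the mixed-version window, the service console will confirm exactly what each host is running. A minimal sketch, assuming console access to each host:

vmware -v       # prints the ESX version and build, e.g. "VMware ESX Server 3.5.0 build-nnnnn"
vmware-cmd -l   # lists the VMs registered on the host, handy before and after migration tests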
This strategy replicates what you will likely face in a real upgrade, where you may only be able to take a limited number of systems into maintenance at any given time, so it is good to replicate that constraint in a test datacenter or cluster. In smaller implementations, this could be repeated with one host, where the migration test would be from a live ESX host. Evaluation software may also be worth considering to make the correct number of hosts available to simulate the coexistence of ESX 3.0.x and 3.5.

Keep it moving

Even if your configuration allows seamless migration between your ESX 3.0.x and 3.5 hosts, that should not become a crutch for an undefined period of mixed versions. A good practice is to keep all hosts within a cluster on the same version of ESX. Larger environments may have difficulty moving all systems to the new version at once, so strategize within your VirtualCenter configuration to determine the best arrangement for temporary mixed versioning. The goal should be to get all systems on the same version enterprise-wide, but only after you are completely comfortable with 3.5 in your infrastructure.

The horse’s mouth

VMware provides many quality online resources. I’ve saved you some work and collected some of the highlights here for review in relation to ESX 3.5 upgrades:

ESX 3.5 upgrade guide

ESX 3.5 compatibility matrix

What’s new for storage in ESX 3.5

VMware Consolidated Backup improvements in ESX 3.5

These resources are a good start toward being well informed about what your ESX 3.5 plan will entail; simply installing without preparation is surely a recipe for misconfiguration. And the test upgrade procedure above, by familiarizing you with a mixed environment, will clear the way for an end-state configuration of a single version of ESX.

January 14, 2008  4:44 PM

Products of the Year – Not without their share of snubs

cwolf Profile: cwolf

Fortunately for me, my job never requires me to determine vendor awards. However, Alex Barrett and the SearchServerVirtualization staff aren’t so lucky. While it’s great to have the power to name Products of the Year, it also means you’re stuck hearing complaints from everyone who wasn’t named. In case you missed it, Alex recently published the SearchServerVirtualization 2007 Products of the Year.

I think that Alex and the editorial staff did a great job selecting products, but I thought I would take a moment to highlight some vendors with excellent products that did not make the list. After all, it’s just as much fun to debate the vendors that were not recognized as those that were.


Yes, VMware’s on the list, but at the same time they’re not on the list. If you didn’t notice, VMware ESX Server 3.5 is nowhere to be found in the article. The editors informed me that ESX 3.5 missed the cutoff date for award consideration (November 30th), and therefore wasn’t eligible. Editors do need time to work with a released product in order to make a fair judgment, so I understand the reasoning for the cutoff. Still, ESX 3.5 was a significant release from VMware, with features such as Storage VMotion adding significant value to VMware deployments.


Novell quietly had a great 2007 from a virtualization product perspective. Novell was right behind Citrix/XenSource in achieving Microsoft support for its Xen-based virtualization platform, and it pushed the innovation envelope throughout the year. Novell was the very first virtualization vendor to demonstrate N_Port ID Virtualization (NPIV) on its Xen platform, and it was showing its work with the Open Virtual Machine Format (OVF) last September at its VMworld booth. When you factor in Novell’s heterogeneous virtualization platform management tool, ZENworks Virtual Machine Manager, you’re left with a pretty nice virtualization package. The vendors mentioned in the virtualization platform category (VMware, Citrix/XenSource, SWsoft) are all worthy of recognition, and I think it’s equally fair to recognize Novell’s work in 2007 as well. Perhaps Novell’s heavy lifting in 2007 will result in recognition in 2008; however, it’s safe to say that Novell will have some stiff competition from VMware, Citrix/XenSource, Microsoft, Sun, Parallels, and Virtual Iron.


I think it’s hard to leave Symantec Veritas NetBackup 6.5 out of the discussion. In fact, among backup products, I’d list it first, right alongside CommVault. Symantec was the first major backup vendor to announce support for Citrix XenServer backup, while all other backup products officially supported one virtualization platform: VMware ESX Server. The NetBackup team was also very innovative with VMware Consolidated Backup (VCB), as NetBackup 6.5 can perform file-level recoveries from VCB image-level backups. Typically, a backup product performs two VCB backup jobs: an image-level backup for DR purposes and a file-level backup for day-to-day recovery tasks. NetBackup 6.5 can do this in a single pass, which I found pretty innovative. Factor in data deduplication (extremely valuable considering the high degree of file redundancy on VM host systems), also available in NetBackup 6.5, and it’s hard to see how NetBackup could be ignored.


SteelEye is another vendor in the data protection category that I’m surprised did not make the list. VMware HA by itself will not detect an application failure and initiate a failover in response; it’s primarily designed to monitor and react to hardware failures and some failures within the guest OS. SteelEye LifeKeeper, on the other hand, provides automated VM failover in response to application and service failures (in addition to guest OS and physical server failures). Many failures are software-specific, and products that can automate VM failover or restarts in response to software failures go far to improve the availability of VMs in production.

I’m limiting my comments to the award categories, hence I’m only listing some of the products I worked with in 2007 that fit one of the SSV categories. I hope that the 2008 awards will have a higher number of categories, so that all products in the virtualization ecosystem are represented.

Do you agree with the editors’ choice of winners? Which deserving vendors do you feel were left off the list? I’d love to hear your thoughts.

January 9, 2008  10:26 AM

Parallels Server beta goes public; supports new Apple Mac Pro, Xserve

Alex Barrett Alex Barrett Profile: Alex Barrett

Looking for an alternative to VMware ESX, Xen, or Microsoft Hyper-V? SWsoft, soon to be renamed Parallels, announced the public beta of its Parallels Server hypervisor today, a few short weeks after the hypervisor entered private beta.

In addition, Apple and Parallels have worked out a deal to allow Parallels Server to run on the new Apple Xserve and Mac Pro hardware, which use the latest generation of Intel chips featuring Intel Virtualization Technology for Directed I/O (Intel VT-d). Parallels has promised experimental support for VT-d systems in beta and full support when Parallels Server becomes generally available.

This announcement makes Parallels Server the first bare-metal hypervisor to run on Apple hardware. Apple shops will be able to run Mac OS X Leopard as well as Windows, Sun Solaris and Linux virtual machines on their high-end Apple hardware. Parallels Server is also notable in that it can be installed either on bare metal à la VMware ESX Server or as a “lightweight hypervisor” running on top of a host operating system. At installation, users can choose how they want to deploy Parallels Server.

Last fall, Apple paved the way for this announcement when it altered its end-user license agreement (EULA) to enable the virtualization of Mac OS X. Of course, Parallels Server also runs on non-Apple x86 hardware, although Apple prohibits Mac OS X from running there.

According to Parallels, Parallels Server beta includes the following features:

  • support for more than 50 different varieties of x86 and x64 guest operating systems;
  • remote control of the virtual machines via the Parallels Management Console;
  • support for up to 64 GB of RAM on the host computer;
  • two-way symmetric multiprocessing (SMP) support in virtual machines, with four-way SMP to come in the final version;
  • multi-user access to the same virtual machine;
  • support for ACPI (Advanced Configuration and Power Interface) in virtual machines;
  • open, fully scriptable APIs for customized management; and
  • full support for Intel VT-x, plus experimental support for Intel VT-d.

For more information or to participate in the beta, visit the Parallels Web site.

January 7, 2008  10:11 AM

Is VMware underpriced?

Alex Barrett Alex Barrett Profile: Alex Barrett

Of late we’ve written a lot about VMware pricing (see VMware pricing draws large enterprises’ ire, and VMware costs inspire Virtual Iron purchase), which in turn prompted a conversation with VMware Certified Professional (VCP) and former customer Michael Tharp, who is now a server virtualization practice lead at Mainland Information Systems, a VMware Authorized Consultant (VAC) in Calgary, Alta.

Tharp has an interesting perspective on VMware: Given the value that most firms derive from it, VMware is “fairly priced, or perhaps even a bit underpriced.”

Back in the day when Tharp worked in IT operations, he was stunned by the low cost of VMware ESX Server. “I would have paid twice as much for VMware without blinking an eye,” he said.

Today, as a VAC, “The worst ROI I’ve ever seen was for a customer with just 14 servers that needed to buy a new SAN [storage area network]. They saw ROI in less than a year and saved $180,000 over three years,” he said. “That’s still pretty compelling in my book.” Shops with more servers and a pre-existing SAN tend to see dramatically better numbers.

But setting its pricing so low was “a mistake” on VMware’s part, Tharp argued. “It gave them no wiggle room,” a situation that manifests itself today as a “reluctance to negotiate any kind of discount.”

This intransigence may rub some potential customers the wrong way; but for the time being, Tharp sees no valid alternatives. When it comes to VMware competitors like Citrix Systems’ XenServer or Virtual Iron, “I don’t see the value and don’t think I could find enough customers to make it worth the effort,” he said.

January 4, 2008  10:04 PM

Virtualization and high availability: User ponders products, path

Jan Stafford Jan Stafford Profile: Jan Stafford

Systems admin Michael Gildersleeve wishes for 100% uptime and wonders whether virtualization will bring him closer or further away from that goal. It seems to him that virtualization options only cover one server at a time. “What if I need to do an OS update or patch, or what if some critical hardware fails?” he asks. In that case, he feels a bit more comfortable with a cluster than with virtualization.

Gildersleeve is evaluating high-availability options for virtual machines. VMware’s High Availability (VMware HA) is on his list, but he’s not sure whether that product will work well with his legacy software. He’s also not sure whether HA is as mature and robust as other products on the market.

I’m answering his call for more information, and I hope that you will too, either by commenting on this post or emailing me.

Gildersleeve works for a company that has a Progress database running on a Unix server. Hundreds of Windows clients and Web applications are attached to that database and server through Progress Brokers via service file ports. “I need to provide 365 by 24 by 7 uptime,” Gildersleeve said. “With our new Web business, East and West Coast facilities, and vendors managing our stock and replenishment, we need to be available all of the time.”

He wants to run his database across at least two servers, in a setup like an Oracle Real Application Cluster. He continued:

This would allow me to upgrade the OS, reboot a server or take a server down for maintenance without affecting the database or the users. So far I have only found solutions that will give me a two- to five-minute downtime between switching from one server to another.

Yes, Gildersleeve has looked a little at server virtualization. He’s evaluating server virtualization options and VMware HA to see whether he can reduce the downtime to nil.

What I have seen so far is that if I upgrade my Progress app to v10 (Progress OpenEdge), and then move to two Integrity servers running High Availability, that if one server fails or if we need to do maintenance on a server, we can manually switch to the second server; but the problem with this is that my users will feel the switch because I will need to bring one server down. They will need to log out and in again to the app, or whatever needs to be done to bring the ready server into production mode.

Gildersleeve is willing to evaluate Sun Microsystems options, if they are truly viable for running Progress. Microsoft operating systems are out of the question.

In his evaluations, Gildersleeve has come up with a lot of questions, and he’s looking for advice from HA experts. Can you share your advice and experiences by commenting on this post or emailing me?

January 2, 2008  4:49 PM

VMware goes on the offensive

cwolf Profile: cwolf

Note: Reposted with the author’s permission from Burton Group’s Data Center Strategies blog.

If you haven’t seen Mike DiPetrillo’s latest blog post, “VMware Patch Tuesday,” it’s definitely worth a few minutes of your time. Mike’s post contrasts patch management on the ESX hypervisor with that of competing platforms. I think the picture DiPetrillo paints is much darker than reality (at least for Windows hosts), since a given Windows Server 2003 host will not require every available patch (many are service-specific) and not all updates require a reboot. Patch reboot requirements will diminish further in Windows Server 2008, thanks to hot-patching support.

That said, Mike’s latest post is about much more than VMware’s patch management strategy. Instead, consider it the start of the VMware Offensive. In 2007, VMware for the most part smiled and waved at its competition. That’s not going to be the case in 2008. Citrix, Microsoft, Novell, SWsoft, Sun, Oracle, and Virtual Iron all have plans to chip away at VMware’s market share, and rather than ignoring its competitors, I expect VMware to be much more aggressive about highlighting what makes its approach to virtualization different from the competition.

Read the rest of this post at Burton Group’s Data Center Strategies blog.

December 26, 2007  12:15 PM

VMware ESX 3.5, VirtualCenter 2.5: To upgrade or not to upgrade?

Rick Vanover Rick Vanover Profile: Rick Vanover

The virtualization world was very excited about last week’s release of VMware ESX 3.5 and VirtualCenter 2.5. But should everyone jump onto the new platforms quickly? I say no. To be fair, I have already performed a limited set of upgrades this week, and more are planned over the next few weeks. Yes, there was a beta process. Yes, VMware generally publishes good software. Yes, I know these are not Microsoft products. But here is why I say no to jumping onto an upgrade immediately for virtualization products:

Larger Scope

Virtualization is inherently larger in scope than the single systems we have historically managed. With one ESX server hosting upwards of 30 virtual machines, the magnitude of any problem is amplified. So, as with anything critical, a test environment is a must. The development environment may be a small number of ESX servers that hold non-critical virtual machines, so you can accept any risks that arise in your upgrade.

Cover Your Bases

Be sure you are able to execute all scenarios with great confidence before proceeding with the upgrades. One example I will deal with soon: I have a large number of critical virtual machines hosted on ESX 3.0.2. If I take one server into maintenance mode and upgrade it to ESX 3.5, can I migrate virtual machines from the 3.0.2 systems to the 3.5 system online and without issue? I upgraded VirtualCenter to 2.5 first, so that was a good starting point for my 3.0.2-to-3.5 upgrades. VMware has published release notes listing known issues with ESX 3.5 and VirtualCenter 2.5, which are a good starting point for planning your migration upward through the ESX and VirtualCenter versions.

Upgrade or New Install? You Decide

ESX is released both as a full/new install (a CD ISO) and as an upgrade (a tar file). I will personally go with the new install rather than the upgrade, because I find the ESX install quite straightforward, and an ESX host can be rebuilt in very little time. With the rebuild process that quick, and with most management and configuration elements handled from VirtualCenter, ESX is unusual in how little build time it requires.

Old School Wait and See

Many people offer the old adage “Wait six months before upgrading,” or some other variable time frame, when core updates or service packs become available. The idea is to let other people “work out the bugs” in the software before you have to deal with them. There is little basis in virtualization for this logic, but many people have adopted it as an update policy. VMware’s release is unusual in that new core functionality, like Storage VMotion, arrives with ESX 3.5, and I am very excited about it. Ultimately this is your call, but the best advice, before anything else, is to get informed about the product releases and the known issues instead of starting a blind installation.

December 19, 2007  5:23 PM

‘Twas the week before Christmas: Virtualization Log

Alex Barrett Alex Barrett Profile: Alex Barrett

Despite the impending holiday and our stubbornly long shopping lists, we keep pumping out the virtualization content. Highlights of the week include the following:

One last thing: A story finds that, despite growing industry concern about saving power, no one actually shuts down servers over the holidays. Or do they? If you’re planning on using VMware’s new Distributed Power Management (DPM) next week and shutting down some systems, let us know.

December 19, 2007  10:15 AM

Thoughts on the ‘top five’ trends in virtualization

Ryan Shopp Ryan Shopp Profile: Ryan Shopp

I recently received a press release from London-based TechNavio, the creator of a Web-based information and research tool, that outlines the top five virtualization trends. Here they are, along with my own thoughts on these trends:

1. Business process automation.
TechNavio’s take. “Virtualization is expected to speed up the wider movement toward business process automation and remote collaboration. The TechNavio findings appear to indicate that the market in general is expecting a major investment in this area within the next two to three years.”
My thoughts. On the subject of business process automation, if TechNavio means “scripting,” I can agree with this trend. Contributor Andrew Kutz has received a few questions from readers about automation, which suggests that plenty of other IT pros have similar questions. He also increasingly writes tips about scripting for one task or another, often concerning disaster recovery or hot backups. Most recently I’ve seen questions about scripting virtual machines (VMs) to power on and off at a certain time (see the sketch below).
Food for thought. If scripting VMs advances, what will happen to the number of system admins and data center managers needed to run a data center? Perhaps all you IT programmers should slow down the scripting process before you script yourself right out of a job!
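To make the scheduling idea concrete, here is a hypothetical sketch of what it might look like on a VMware Server or ESX host, using cron and the vmware-cmd utility; the config-file path and the “trysoft” power mode are assumptions on my part:

# stop the VM nightly at 10 p.m. (soft shutdown if VMware Tools responds, hard if not)
0 22 * * * /usr/bin/vmware-cmd /vmdir/test-vm/test-vm.vmx stop trysoft
# power it back on at 6:30 each morning
30 6 * * * /usr/bin/vmware-cmd /vmdir/test-vm/test-vm.vmx start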

On the subject of remote collaboration, I definitely agree with TechNavio. I wrote an article on emerging client-side desktop virtualization technologies, and in response I received comments from readers who said they had found a surprising number of companies exploring client-side virtual desktop infrastructure (VDI) technologies for implementation in 2008. I think it’s high time for VDI; just consider the number of stolen or misplaced laptops, or the CDs containing personal information that have gone missing in the mail. I don’t know about you, but identity theft certainly isn’t on my holiday wish list, and I would certainly appreciate company investment in this kind of technology, considering the increasing mobility of technology.

2. Network-delivered computing.
TechNavio’s take. “Virtualization is also expected to boost the move toward network delivered computing or what is being termed PC-over-IP. This in turn will place vendors such as Cisco, NEC and Sun at the heart of the market, but interestingly leaves the door open for a host of innovative start-ups.”
My thoughts. I would agree here as well. My aforementioned article discusses vThere, which focuses primarily on providing client-side virtual desktops via its own (i.e., third-party) servers that a client notebook connects to when opening the virtual desktop. During interviews, my subjects all mentioned the trend of software vendors moving toward providing their software via virtual machine. We have already seen a few virtualization companies provide beta versions of new software via VM. As virtualization adoption continues to grow, I can easily see all kinds of independent software vendors providing their products as virtual machine downloads.

3. Legacy applications and virtualization.
TechNavio’s take. “As application virtualization speeds up, applications development and maintenance or ADM, vendors have a real opportunity to grow into a new market defined as optimizing legacy applications for virtualization.”
My thoughts. We haven’t focused much on application virtualization here, so I don’t have an informed opinion on this subject. Readers, do you?

4. Small and midsized businesses (SMBs).
TechNavio’s take. “The biggest long-term opportunity for virtualization vendors lies in the SMB space, specifically end-to-end solutions that allow SMBs to outsource and virtualize their entire network.”
My thoughts. I disagree here. Clearly there is opportunity and room for virtualization in the SMB market, but to say it’s the biggest long-term opportunity? That’s a stretch. I doubt that larger businesses, once virtualized, will stop virtualizing. A more accurate statement would be that virtualization vendors should target SMBs to further extend virtualization’s reach.

5. Labor market and skills.
TechNavio’s take. “As the market for server virtualization heats up, finding people with the right skills is set to get harder. With this environment TechNavio predicts that there will be increased opportunities for IT services companies as well as for IT staffing solutions providers.”
My thoughts. I don’t know if I agree that finding people with the right skills will become more difficult; it depends on IT workers and their drive to stay on top of certifications that prove their worth. (Cough, the VMware Certified Professional (VCP) exam, cough, cough.) And whenever technology advances, desired skill sets change, so this prediction isn’t all that impressive. As for increased opportunities for IT services companies: yes. It’s easier to go to a business and say, “Get me a sys admin with a VCP stamp of approval!” than it is to shuffle through résumés looking for VCPs. And I definitely think that those with the right credentials will find themselves in increasing demand, so stay on top of what you’re worth salary-wise, given the move toward virtualizing mission-critical servers. Just because your current company doesn’t realize your worth, it doesn’t mean that Company Y, which has more virtualized servers and a greater need for people with virtual environment management experience, doesn’t.

TechNavio’s press release also included a quote following these “top five trends.” S. Chand, co-founder of Chicago-based Infiniti Research, who conducted the research for this report, said, “Currently the biggest beneficiaries of server virtualization are the enterprise users whose businesses tend to be dependent on running compute-heavy, high-availability, application-intensive data centers. These include ISPs, hosting and managed service providers, banks’ trading divisions, gaming, online retailers and the like.”

So if you are looking to get the most (read: more money) from your virtualization experience, check job offers with companies that deal with these types of services.
