The Virtualization Room


April 28, 2008  2:16 PM

Is hypervisor-based virtualization doomed?

Keith Harrell Profile: SAS70ExPERT

The following is a guest blog written by Schorschi Decker, an IT professional specializing in virtualization and enterprise-level management with over 25 years of experience in the industry.

Operating system isolation, or hypervisor-based virtualization, remains popular, but are we settling for less than we should? Hypervisor-based virtualization persists by masking its limitations behind modest incremental gains, and it continues to hide an ugly secret: poor-quality code.

Many who have worked with hypervisor-based virtualization may already know this, but anyone who has attempted to implement application instancing has undoubtedly seen where hypervisors fail. Replicating the operating system within every virtual instance is waste, and that waste is driven by bad code. Faster cores, more cores per package, limited improvements in memory and device bus design, marginal gains in mechanical drive design and shared storage models have all helped mask how inefficiently hypervisors use processors.

If customer adoption rates are an indicator of success, past attempts at application instancing have not succeeded to any consistent degree (there is not even a buzzword for an application-instancing method). To be clear, homogeneous applications have benefited, such as Microsoft SQL Server and IIS, Oracle and even Citrix. In the case of Citrix, however, application instancing has remained environment-dependent to a degree.

Resource management within a common operating system instance has not changed significantly since the introduction of mainframe logical partitions (LPARs). Solaris Zones follow a container-based model, whereas AIX micro-partitions follow a truer application-instancing model. Even Apple Computer introduced simple memory partitioning in Macintosh System 7.x. DEC (yes, Digital Equipment Corporation) leveraged the Microsoft Job Engine API, effectively a processor-affinity layer, in a groundbreaking concept product that Compaq buried. Does anyone remember that product?

The hypervisor foundation grew out of failures in heterogeneous application partitioning. For Windows, application instancing has stalled at times or has otherwise been overshadowed by operating system instance isolation techniques. Windows SRM is a weak attempt to crack the hypervisor foundation, but it is so immature at this point that it is useless. Microsoft SoftGrid, now Microsoft Application Virtualization, has greater potential but is just not well accepted yet. Should Microsoft provide it for free to drive acceptance?

The technology industry has attempted some interesting implementations to offset the impact of operating system instance isolation, for example thin-disking and image-sharing, which work by eliminating underutilized space in disk partitions. Several attempts at addressing DLL and .NET conflicts (e.g., Microsoft SoftGrid as well as Citrix) have been implemented to support heterogeneous application instancing, but they have masked the true issue that has always existed: the lack of quality code.

Why do I make this point? Because the hypervisor is essentially a band-aid on the boo-boo of bad coding. Quality code makes for stable environments. With a stable and predictable environment, applications can be run without fear of crashing, and it is this fear that gives hypervisor virtualization its strength.

Did someone just say "operating system isolation"? Case in point: the recent Symantec Antivirus issue with VMware ESX. Code quality is going to become a green issue, just as watts per core and total power consumption have in the data center. Enterprise customers who purchase significant code-based products will demand better code as a way to reduce non-hardware-oriented costs. Just how many lines of executed code are redundant processing when hypervisor-based virtualization is leveraged? Billions? Wake up and smell the binary-generated ozone! Those cycles cost real money and expose a very large surface area for bug discovery.

Poor software quality makes hypervisor-based virtualization more expensive than it should be and the publishers of operating systems love it. After all, the total number of operating system licenses purchased has not gone down with hypervisor virtualization. The industry has known for years that poor quality software has been an issue. One radical solution is to hold software publishers to a higher standard, but that idea has not gained enough grassroots support – yet. When it does, the hypervisor will be history.

April 24, 2008  10:13 AM

Choosing your next virtualization project

Rick Vanover Profile: Rick Vanover

For organizations with an established server virtualization environment, future virtualization projects are looming on the horizon. Whether it is desktop or application virtualization, much deliberation will undoubtedly be given to the best product for the new virtualization endeavor — as it should be.

The next wave of virtualization projects should always be best of breed for the requirements and functionality you require for your particular environment. For example, say you’re an organization with a successful VMware-based server virtualization environment using VirtualCenter and ESX 3. Does this mean that VMware Virtual Desktop Infrastructure (VDI) is the default selection for a virtualized desktop project? Don’t be fooled into thinking that a single-vendor environment is going to translate into an efficient one.

Identify the best solution, even if you can't afford it. That also includes the host environment hardware for your next virtualization project, which may require a decision between blades and general-purpose servers for virtual hosts. Taking the time and effort to identify the best solution after making full comparisons of potential environments will also prepare you for any unforeseen element in post-implementation inquiry.

Make no mistake, there are plenty of advantages to going with what’s familiar: Price discounts, vendor relationships and non-disclosure access are all strong reasons to select the same vendor, but only after due diligence in your decision process should you make another commitment.


April 22, 2008  1:50 PM

Attend the best of VMworld (virtually)

Eric Siebert Profile: Eric Siebert

Didn’t attend last year’s VMworld? Don’t worry: You can download many of the sessions for free from the VMworld website.

VMware has been gradually releasing the sessions on the website. After last year's conference, VMware chose not to release all of the sessions to non-attendees, since many of them would be reused at VMworld Europe 2008. Currently, 133 of the 261 total sessions are available to watch online, with 20 more sessions being added each month until VMworld 2008 in Las Vegas. VMware enthusiasts can also purchase a "virtual conference pass" for this year's VMworld, which will provide full online access to sessions and labs.

The sessions reflect VMworld’s focus on enterprise virtualization, but several sessions are available on products like Workstation, Server, VDI and ACE. While some of the sessions are of technical benefit to system administrators, other sessions address topics such as business continuity, planning, business metrics and software lifecycle automation.

Of the free sessions available, I've noted those I would recommend to system administrators who want to expand their technical knowledge. The sessions below are very good resources for understanding, troubleshooting, securing and tuning ESX and VirtualCenter.

To access the sessions, simply go to the VMworld website and create a free account. Once you have registered, click on the sessions and labs link to access the free sessions. You can even access sessions from previous years; although they are dated, these sessions still contain some good, applicable information. Once you click on a session, you can download the audio as an .mp3 file, download the slides as a .pdf file or watch them together as a Flash video.


April 21, 2008  2:16 PM

VMware Certified Professionals command higher salaries, report shows

Joseph Foran Profile: Joe Foran

It’s been six months since I posted about the value of the VMware Certified Professional (VCP) certificate, and I thought I’d provide an update.

As the image shows, courtesy of indeed.com, the VCP is as hot as ever.


Since I last covered this topic, the following shifts occurred:

  • The VCP gained $3,000
  • The A+ climbed $6,000
  • Network+ declined $1,000
  • MCP gained $1,000
  • MCSE gained $2,000
  • CCA lost $2,000
  • CCEA picked up $2,000
  • RHCT picked up $3,000
  • RHCE picked up $2,000
  • RHCA lost $1,000

The big gain in VCP salaries over a period of less than six months shows that this skill set is still very much in demand and the certification is still one to show off. It's a new year and salaries did jump overall, which is reflected in the data. As before, the international trend is also continuing, as the next two images (from itjobswatch.co.uk) show in terms of salary and demand.

I intend to keep tracking these statistics every few quarters, so stay tuned. I'm also keeping my eye out for Citrix-sponsored Xen certifications and will bring an analysis of those to the blogosphere as soon as there's some quantifiable information available. And with VMware ramping up its certification programs, I expect to be adding second- and third-tier VMware certifications.

What other certifications do you think should be compared? I've included a broad list of non-developer certs, from entry-level system admin certs (MCP, A+) through top-tier certs (CCEA, RHCA), to show the range against which the VCP's placement as a hot technology can be compared. I've left off network, storage and many specialty certs because they may not be pervasive enough in the enterprise or may not be topically relevant. Since I'm one person with one view, I hope our readers will comment below and dictate to me what should be compared. So please fire away.


April 21, 2008  12:20 PM

VMware’s ESX 3i for free?

Joseph Foran Profile: Joe Foran

The Inquirer recently published a story on how Dell is considering giving away VMware VI 3i licensing on its PowerEdge servers. While I won't rehash the details of the rumor here, I'll add my opinion and analysis on why this bold move is being made, since it appears VMware is actively supporting the tactic, having said that hardware vendors are free to choose what fees, if any, to charge customers for 3i.

Hypervisors are destined to become a commodity item, even more so than other software, because everyone will be utilizing virtualization within the next few years. Dell and VMware aren’t reacting so much to competition from Virtual Iron, Hyper-Hype (oops, I mean Hyper-V) or Virtuozzo as they are to Phoenix’s Hyperspace and Xen’s Embedded offering. Hyperspace is the big target here, as it’s embedded virtualization from a BIOS manufacturer.

Putting a hypervisor at that level takes the old "I need to dual boot" requirement from power users who need access to Linux and Windows, or Mac and Windows, to another level, in addition to being another virtualization offering. Virtualization originally took that need away for most people; I for one stopped dual booting and ran Linux in Windows, Windows in Linux, and Windows and Linux on the Mac via virtualization products, from the minute I got my mitts on VMware Workstation all the way through Parallels Desktop. Now Phoenix is turning the tables, moving virtualization away from being built on top of the BIOS like a conventional OS and making a run to own the space from the board level up. Phoenix is virtually going from a niche, unthought-of product to an enterprise contender (no pun intended).

VMware saw this coming. It anticipated the inevitable embedded hypervisor, which is why 3i came out in the first place. It also knows that we, the computer-consuming public, don't really consider the BIOS when we buy a computer (be it a personal machine or a server). We don't even realize that we pay for the BIOS, because BIOS builders charge chip- and board-level makers a licensing fee. That per-machine licensing fee is passed on to us in the cost of the board, and it's minimal.

I am convinced this is where virtualization is headed: It will be a commodity, practically free for all, without much to install or configure after the fact. VMware is betting on this core hypervisor as a lead-in to its flagship products, and I expect VMware to focus on this strategy of transitioning customers from the base-level embedded hypervisor to high-end, paid management, replication, storage and so on.

Dell is also wise to this trend. They see the advantage of the embedded hypervisor as much as they saw the advantage of selling VMware's ESX product line pre-installed on their hardware, and they see that sooner or later everything they sell will have virtualization built in. I expect them to sell Hyperspace alongside 3i. I also expect that they will need to make the price points equivalent, lest there be howls from customers who bought Hyperspace and want to upgrade to a higher level of virtualization management.

Advanced features like VMotion, DRS, HA, VCB and so on are licensed at the license-server level, making 3i as good a choice for virtualization as ESX 3.5. This comes from VMware's own 3i announcement:

“VMware ESX Server 3i is the new architectural foundation for VMware Infrastructure 3, the most widely deployed virtualization software suite for optimizing and managing industry-standard IT environments. VMware customers will be able to easily implement the entire suite of VMware Infrastructure 3 products on top of this foundation, including VirtualCenter, VMotion, Distributed Resource Scheduler (DRS), High Availability (HA) and VMware Consolidated Backup (VCB).”

As it stands, 3i is cheap at around $500, so don’t expect this shift in pricing to impact VMware’s bottom line.


April 18, 2008  11:05 AM

Virtualization of Citrix Presentation Server in VMware calculations

Rick Vanover Profile: Rick Vanover

Following Joe Foran's recent blog post about virtualizing Citrix Presentation Server (PS) systems, I too have had success with this practice. My take is that certain PS configurations can make great virtualization candidates, depending on how you use Citrix. A Web Interface server for PS is a great candidate for a virtual system if it is on its own server, but additional criteria determine what can be configured for a virtualized Citrix environment.

Based on my experience, the deciding factor for virtualizing PS systems is how many concurrent sessions your published applications will carry. Published applications that are rarely used or will never have very many sessions are a good starting point for virtualized PS systems. An example would be a line-of-business published application that would not be expected to exceed four concurrent users. A few of these types of applications on a virtual machine in ESX can work very well.

The biggest question becomes virtual machine provisioning from the memory and processor standpoint. If you have a baseline of your current Citrix usage, that is a good starting point for estimating the concurrent session usage. Take the following observations of a Citrix environment:

  • Each PS session takes 16 MB of RAM
  • Each published application within that environment requires 11 MB of RAM
  • There are 4 published applications on the server, none of which has exceeded 5 concurrent sessions

Just under 3.5 GB of RAM is required to meet the same environment requirements from the Citrix session perspective. By adding the base server and Citrix PS memory requirements to this calculated amount, you arrive at the provisioning requirements of the Citrix server for its virtual role. From the processor standpoint, I generally set the frequency limit at the rate of the physical system's processor.
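Since the post doesn't spell out the arithmetic behind that figure, here is a minimal Python sketch of how I run this kind of sizing estimate. The per-session and per-application values come from the observations above; the formula itself and the base OS/Presentation Server allowance are illustrative assumptions, not the exact math behind the 3.5 GB number.

```python
# Rough sizing sketch for a virtualized Citrix Presentation Server VM.
# The formula and the base allowance are illustrative assumptions; this
# is not the exact calculation behind the post's 3.5 GB figure.

def citrix_vm_memory_mb(published_apps: int,
                        max_sessions_per_app: int,
                        session_mb: float = 16,          # RAM per PS session
                        app_per_session_mb: float = 11,  # RAM per published app, per session
                        base_os_and_ps_mb: float = 1024  # assumed OS + PS overhead
                        ) -> float:
    """Estimate RAM to provision for one Presentation Server VM."""
    sessions = published_apps * max_sessions_per_app
    session_memory = sessions * (session_mb + app_per_session_mb)
    return base_os_and_ps_mb + session_memory

if __name__ == "__main__":
    total = citrix_vm_memory_mb(published_apps=4, max_sessions_per_app=5)
    print(f"Provision roughly {total:.0f} MB ({total / 1024:.1f} GB) of RAM")
```

Swap in your own baseline numbers, and a base allowance that matches your Windows build, before trusting the output.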

The good news is that Citrix is licensed by client connection rather than by the number of servers. Therefore, distributing virtualized Citrix servers across a VMware environment is well poised to meet performance and availability requirements.


April 18, 2008  10:40 AM

VMware white paper review: “Comparison of storage protocol performance”

Joseph Foran Profile: Joe Foran

Since I just love to read white papers (n.b., sarcasm), I grabbed a copy of VMware’s Comparison of Storage Protocol Performance. Actually, I found it to be a good read. It’s short and to the point. This sums it up quite nicely:

“This paper demonstrates that the four network storage connection options available to ESX Server are all capable of reaching a level of performance limited only by the media and storage devices. And even with multiple virtual machines running concurrently on the same ESX Server host, the high performance is maintained. ”

The big four storage connections are:

  • Fibre Channel (2 Gb was tested)
  • Software iSCSI
  • Hardware iSCSI
  • NFS NAS

The paper implies that the Network File System (NFS) is perfectly valid for virtual machine (VM) storage, performing in all of the tests at a level comparable with software iSCSI, very close to hardware iSCSI and lagging behind only 2 Gb Fibre Channel (FC). This doesn't surprise me one bit: I like NFS network-attached storage (NAS) for VM storage. I prefer storage area network (SAN)-based storage because I prefer to store on the Virtual Machine File System, but for low-criticality VMs, NAS's price is right (well, as long as you don't count Openfiler, IET, etc.). Also, it's plausible to build out a virtual infrastructure storage architecture using nothing but Fedora Core and be supported.

I was particularly interested in the FC vs. iSCSI performance results presented in this VMware white paper. At the lowest end of the scale, iSCSI beat FC. Granted, the low end of the scale isn't what most production environments will see, but it is interesting data. What I liked most was that nowhere did 2 Gb FC truly outclass 1 Gb iSCSI. It was faster in most of the higher-I/O testing, but it never doubled the performance. 2 Gb FC did show a big performance improvement in the multiple-VM scalability test, but still not double (about 185 MB per second vs. about 117 MB per second).

On to what I didn’t like in this white paper:

  • No 4 Gb FC comparisons. 4 Gb FC is the sweet spot for high-performance enterprise SANs being put in place to support the big iron now being virtualized. It should have been covered, even if it is still a bit of a nascent technology (not in terms of maturity, but in terms of its market segment).
  • No 1 Gb FC connections. (There are still plenty out there.)
  • No NIC Teaming comparisons. I want to know how much additional CPU overhead is involved. I want to know how much performance is improved if you team NICs on your software iSCSI targets and initiators.
  • No multipathed comparisons. This should have been done. Multipathing is a way of life for anything as mission-critical as a server that hosts multiple servers.
  • No 10 Gb Ethernet iSCSI comparisons. VI 3.5 is out, and 10 Gb Ethernet support is built into it (see the HCL, page 29). Not testing this is a big oversight.
  • No internal-disk storage was tested. Ok, maybe it’s not reasonable for me to expect this to be tested. Maybe I’m just grouching now.

I was surprised to see that software iSCSI got its tail handed to it in the CPU workload testing. I've never done this testing myself, but I knew there was significant overhead involved; I just didn't expect it to be that big, especially compared to NAS, which I expected to be right there with iSCSI rather than much more CPU-efficient (FC was the 1.0 baseline, NAS scored roughly 1.8 to 1.9, and software iSCSI was about 2.75). This means one thing: While performance is great across all protocols, plan on extra CPU power for software iSCSI.

I was pleasantly surprised to see hardware iSCSI dead even with 2 Gb FC. I had expected some additional overhead even with dedicated hardware, but that wasn't the case. I would expect that in a dedicated iSCSI solution, unless you're using really cheap equipment (like hooking a couple of big drives up to that old desktop), you won't hit the CPU-use ceiling unless you fail badly at planning.
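To make the CPU-planning point concrete, here is a minimal sketch that scales a measured FC CPU cost by the relative ratios discussed above. The ratios are my rough readings of the paper's charts, and the 10% baseline workload is a made-up example, not a figure from the paper.

```python
# Rough CPU-headroom planning from the white paper's relative CPU-cost
# results (FC normalized to 1.0). Ratios are approximate readings of the
# charts; the 10% baseline is an invented example workload.

RELATIVE_CPU_COST = {
    "2Gb FC": 1.0,
    "hardware iSCSI": 1.0,   # roughly even with FC per the paper
    "NFS": 1.85,             # eyeballed from the charts
    "software iSCSI": 2.75,  # eyeballed from the charts
}

def projected_cpu_pct(fc_baseline_pct: float, protocol: str) -> float:
    """Scale an FC-measured CPU cost to another storage protocol."""
    return fc_baseline_pct * RELATIVE_CPU_COST[protocol]

if __name__ == "__main__":
    baseline = 10.0  # example: storage I/O costs 10% of a core over FC
    for proto in RELATIVE_CPU_COST:
        print(f"{proto:>15}: ~{projected_cpu_pct(baseline, proto):.1f}% CPU")
```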

All of these protocols are perfectly valid. There could have been more meat in the paper, but it did a good job of accurately testing four of the most common storage architectures used with VMware’s products.

Overall, I give this white paper seven "pokers." Why pokers? Because stars and 1-10 ratings are common; pokers are mine. And because a fireplace poker can jar you into action when it bites you, seven pokers means you should read this paper if you have any responsibility for virtualization.


April 16, 2008  1:48 PM

Saving money by using virtualization

Eric Siebert Profile: Eric Siebert

As part of a business case to justify our server consolidation/virtualization project, I had to show the benefits the project would provide. Virtualization provides a lot of "soft" benefits like reduced administration, maintenance costs, head count, etc., but one of the "hard" benefits is reduced power and cooling costs. I put together a little spreadsheet of all my servers and the wattage of their power supplies to help calculate how much money we would save in that area. The end result was real numbers I could take to management to show them the ROI that virtualization provided.

In today's world, the cost of just about everything has been on the rise. Fuel costs in particular have a ripple effect on just about everything we buy, including computers, which is why virtualization is a great way to offset those increased costs. Providing power and cooling to a data center can be a very big expense, and virtualizing servers can dramatically reduce it. PlateSpin provides a nice power-savings calculator on its website. If we plug in the following numbers:

  • 200 physical servers
  • average usage of 750 watts per server
  • average processor utilization of 10% before virtualization
  • target processor utilization of 60% after virtualization

The average power and cooling savings come out to $219,000 a year, with a consolidation ratio of 5:1 and a cost per kilowatt-hour of 10 cents. As the cost of power increases, the savings become even greater: at 12 cents the savings become $262,800 per year, and at 15 cents they reach $328,500 per year.
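If you want to sanity-check figures like these without the online calculator, here is a minimal Python sketch of the same back-of-the-envelope math. The assumptions that power draw scales with server count and that cooling roughly doubles the electrical cost are my simplifications, so the output lands near, but not exactly on, the numbers above.

```python
# Back-of-the-envelope power-and-cooling savings from consolidation.
# The 2x cooling multiplier and the linear scaling of power draw with
# server count are simplifying assumptions; treat the result as a
# ballpark, not a reproduction of PlateSpin's calculator.

HOURS_PER_YEAR = 8760

def annual_savings_usd(physical_servers: int,
                       watts_per_server: float,
                       consolidation_ratio: float,
                       cost_per_kwh: float,
                       cooling_multiplier: float = 2.0) -> float:
    """Estimate yearly power + cooling savings from consolidating servers."""
    before_kw = physical_servers * watts_per_server / 1000
    after_kw = before_kw / consolidation_ratio
    saved_kwh = (before_kw - after_kw) * HOURS_PER_YEAR
    return saved_kwh * cooling_multiplier * cost_per_kwh

if __name__ == "__main__":
    for rate in (0.10, 0.12, 0.15):
        savings = annual_savings_usd(200, 750, 5, rate)
        print(f"${savings:,.0f} per year at {rate * 100:.0f} cents per kWh")
```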

Of course, savings will vary based on a number of factors, including how well utilized your physical servers are before virtualization, your consolidation ratio (which can sometimes be as high as 15:1) and your location. Different parts of the country average different costs per kilowatt-hour; California and New York tend to be the highest at 12 to 15 cents per kilowatt-hour, while Idaho and Wyoming are the cheapest at about 5 cents per kilowatt-hour. Power costs tend to rise a lot more than they fall, so the cost argument for virtualization becomes much easier when you factor in the potential savings.

Some power companies, like PG&E, even offer incentives and rebates for virtualizing your data center and reducing power consumption. A greener data center benefits everyone: besides reducing costs, it also helps the environment. Virtualization is one of the key technologies making this possible.


April 16, 2008  10:50 AM

Citrix Presentation Server 4.5 and VMware VI3.5: A happy cohabitation

Joseph Foran Profile: Joe Foran


I have a confession to make: When it comes to Citrix's XenServer/Presentation Server/MetaFrame/WinFrame product line, I've always been biased. I simply love it. I remember delivering JDE software to users at a midsize manufacturing company (one that has since been swallowed by a large imperial juggernaut). When I was a server admin there, I had only to deploy the Citrix client and some configs, and the desktop admins loved me for it. At my prior company, I remember going desktop to desktop putting nasty, frequently-updated-due-to-crappy-design applications on hundreds of clients' desktops. Thanks to Citrix, I was the office hero because staff could work from home when they needed to.

Then I discovered VMware and fell in love all over again. Now I could deploy rich desktops without granting server access to the desktop, consolidate hundreds of servers, roll out emergency desktops in half the time, deploy servers from cloned templates with ease and back up entire systems without any agents. The only problem was that I could never get Citrix to run well on ESX 2 and 2.5, even though I was being told that Citrix and VMware go together like PB and J. If I were to anthropomorphize the whole thing (and I will), I'd say the two were jealous of each other and vied for my love and affection.

Putting Citrix on VMware
I had been advising people against using Citrix and VMware together, and should one insist, I always recommended serious testing first. Then one day I broke down: After reading the VMware Performance Study and a great VMTN post, I figured it was about time I did my own testing. And like the aforementioned references, I got great results.

I officially rolled out Citrix Presentation Server on VI3.5 and the performance has been stellar. I don’t have a lot of users on the Presentation Servers, but I run them alongside other production servers hosting the server side of some medical applications (billing applications, etc.), effectively putting the client and server on the same hosts. I’ve done this for my own office and for a couple of clients now. You could say that I am officially backpedaling now and embracing Citrix on VMware.

Here are my suggestions if you decide to try this for yourself:

Distributed Resource Scheduler (DRS) – Use anti-affinity rules to keep your Citrix servers from bunching up together if you allow automated placement. While it's unlikely that a large farm will wind up with all of its Citrix eggs in a few baskets and then lose all of those baskets, it's a possibility that should be planned for.

Storage – Use the fastest storage you can. Citrix directly affects the user experience and shouldn't be skimped on. Slow Citrix equals unhappy masses, which equals poor perception of IT, which equals job trouble for you. If you have multiple storage area networks (SANs) to connect to, or even multiple logical unit numbers (LUNs) on the same SAN in different RAID groups, spread the virtual machine disk files across your storage infrastructure to limit the impact of the disk I/O that Citrix boxes can generate (this is a good thing to do in any Citrix environment, not just a virtualized one). Granted, I'm talking out of the side of my head here, since I run one of my Citrix farms on an iSCSI SAN and it performs very well, but scalability may be an issue I don't have to address to the same degree as the largest enterprises.

Benefits of a Citrix on VMware system
The net result I'm seeing is an average of 18 to 20 users on each Citrix box before performance starts to tank, the same as I was getting on my physical boxes. I don't need to schedule reboots as frequently due to memory leaks (though we also redid the base Win2K3 install for R2, so I can't definitively point to virtualization as the benefactor here), and when I do reboot, the reboot time is, like any virtual system's, much faster than a hardware reboot.

Since I can now put my Citrix disks on the SAN, I can do block-level backup of data stored on Citrix servers (which, as any Citrix admin can tell you, ends up there no matter what you do, since users always find a way). Having templates makes it easy to roll out new Citrix boxes as well, especially since PS4.5 makes adding a new box to your farm a breeze. Then there's my favorite: snapshots. I'd accept a lower user/server ratio if I had to, just to have this feature. Luckily, I don't have to. I can take snapshots just before and after every new application is installed for publishing; before and after every app is patched; before and after updating Windows; and before and after updating Citrix. Being able to roll back with such ease is what makes me truly, deeply happy with Citrix on VMware.

So, same user/server ratio, shorter downtime periods, quicker deployment and snapshots: I call this a win-win.


April 15, 2008  12:37 PM

VKernel Capacity Bottleneck Analyzer for ESX virtualization available

Bridget Botelho Profile: Bridget Botelho

Portsmouth, NH-based VKernel announced availability of its Capacity Bottleneck Analyzer Virtual Appliance, which allows system administrators to see capacity issues in VMware ESX Server-based environments so they can make necessary changes for optimum performance.

Network bottlenecks are an issue in virtual environments because consolidation concentrates the traffic of many virtual machines onto shared hosts and links. A number of networking vendors have developed network products specifically for virtual environments to alleviate these issues.

A newer vendor, Altor Networks Inc., introduced a Virtual Network Security Analyzer last month that also lets IT see what is happening in virtual environments.

VKernel’s software monitors CPU, memory and storage utilization trends in VMware ESX environments across hosts, clusters and resource pools. The virtual appliance gives users a single-screen management dashboard that displays all of the details on capacity to help plan for new hosts, clusters and resource pools. Users can also receive alerts via email and SNMP.

The VKernel Capacity Bottleneck Analyzer Virtual Appliance is currently available with pricing starting at $199 per CPU socket.

