The Virtualization Room


April 18, 2008  11:05 AM

Virtualization of Citrix Presentation Server in VMware calculations

Rick Vanover

Following Joe Foran’s recent blog about virtualizing Citrix Presentation Server (PS) systems, I too have had success with this practice. My take is that certain PS configurations can be great virtualization candidates, depending on how you use Citrix. A Web Interface server for PS is a great candidate for virtualization if it runs on its own server, but additional criteria determine what can be configured for a virtualized Citrix environment.

Based on my experience, the deciding factor for virtualizing PS systems is how many concurrent sessions your published applications will carry. Published applications that are rarely used or will not have many sessions are a good starting point for virtualized PS systems. An example would be a line-of-business published application that would not expect more than four concurrent users. A few applications of this type on a virtual machine in ESX can work very well.

The biggest question becomes virtual machine provisioning from the memory and processor standpoint. If you have a baseline of your current Citrix usage, that is a good starting point for estimating the concurrent session usage. Take the following observations of a Citrix environment:

  • Each PS session takes 16 MB of RAM
  • Each published application within that environment requires 11 MB of RAM
  • There are 4 published applications on the server, which have not exceeded 5 concurrent sessions

From the Citrix session perspective, just under 3.5 GB of RAM is required to meet the same environment’s requirements. Adding the base server and Citrix PS memory requirements to this calculated amount gives you the provisioning requirement for the Citrix server in its virtual role. From the processor standpoint, I generally set the frequency limit at the rate of the physical system’s processor.
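
To make the arithmetic concrete, here is a minimal sketch in Python of how observations like these roll up into a provisioning figure. The per-session and per-application numbers come from the list above; how the per-application figure is charged (per application per session here) and the base OS and Citrix PS overheads are my assumptions, so the inputs below are illustrative rather than a reproduction of the 3.5 GB figure.

    def citrix_vm_memory_mb(concurrent_sessions, published_apps,
                            session_mb=16, app_mb=11,
                            base_os_mb=1024, citrix_base_mb=256):
        """Estimate RAM to provision for a virtualized Presentation Server.

        session_mb and app_mb are the observed figures above; base_os_mb and
        citrix_base_mb are placeholder overheads for the server OS and the
        Citrix PS installation, so substitute numbers from your own baseline.
        """
        session_memory = concurrent_sessions * session_mb
        app_memory = concurrent_sessions * published_apps * app_mb
        return session_memory + app_memory + base_os_mb + citrix_base_mb

    # Example: 4 published applications, none exceeding 5 concurrent sessions
    print(citrix_vm_memory_mb(concurrent_sessions=20, published_apps=4))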

The good news is that Citrix is licensed by client connection, not by the number of servers. Distributing virtualized Citrix servers across a VMware environment therefore leaves you well positioned to meet performance and availability requirements.

April 18, 2008  10:40 AM

VMware white paper review: “Comparison of storage protocol performance”

Joe Foran

Since I just love to read white papers (n.b., sarcasm), I grabbed a copy of VMware’s Comparison of Storage Protocol Performance. Actually, I found it to be a good read: short and to the point. This passage sums it up nicely:

“This paper demonstrates that the four network storage connection options available to ESX Server are all capable of reaching a level of performance limited only by the media and storage devices. And even with multiple virtual machines running concurrently on the same ESX Server host, the high performance is maintained.”

The big four storage connections are:

  • Fibre Channel (2 Gb was tested)
  • Software iSCSI
  • Hardware iSCSI
  • NFS NAS

The paper makes the case that network file system (NFS) storage is perfectly valid for virtual machine (VM) storage, performing in all of the tests at a level comparable to software iSCSI, very close to hardware iSCSI and lagging behind 2 Gb Fibre Channel (FC). This doesn’t surprise me one bit: I like NFS network-attached storage (NAS) for VM storage. I prefer storage area network (SAN)-based storage because I prefer to store on the virtual machine file system, but for low-criticality VMs, NAS’s price is right (well, as long as you don’t count Openfiler, IET, etc.). It’s also plausible to build out a virtual infrastructure storage architecture using nothing but Fedora Core and still be supported.

I was particularly interested in the FC vs. iSCSI performance results presented in this VMware white paper. At the lowest end of the scale, iSCSI beat FC. Granted, the low end of the scale isn’t what you’ll see in most production environments, but it is interesting data. What I liked most was that nowhere did 2 Gb FC truly outclass 1 Gb iSCSI. FC was faster in most of the higher-I/O testing, but it never doubled iSCSI’s performance. 2 Gb FC did show a big improvement in the multiple-VM scalability test, but still not double (about 185 MB per second vs. about 117 MB per second).

On to what I didn’t like in this white paper:

  • No 4 Gb FC comparisons. 4 Gb FC is the sweet spot for high-performance enterprise SANs being put in place to support the big iron now being virtualized. It should have been covered, even if it is still a somewhat nascent technology (not in terms of maturity, but in terms of its market segment).
  • No 1 Gb FC connections. (There are still plenty out there.)
  • No NIC teaming comparisons. I want to know how much additional CPU overhead is involved, and how much performance improves if you team NICs on your software iSCSI targets and initiators.
  • No multipathed comparisons. This should have been done. Multipathing is a way of life for anything as mission-critical as a server that hosts multiple servers.
  • No 10 Gb Ethernet iSCSI comparisons. VI 3.5 is out, and 10 Gb Ethernet support is built into it (see the HCL, page 29). Not testing this is a big oversight.
  • No internal-disk storage was tested. OK, maybe it’s not reasonable for me to expect this to be tested. Maybe I’m just grouching now.

I was surprised to see software iSCSI get its tail handed to it in the CPU workload testing. I’ve never run this test myself, but I knew there was a big overhead involved; I just didn’t expect it to be that big, especially compared with NAS, which I expected to be right there with iSCSI rather than much more CPU-efficient (FC was the 1.0 baseline, NAS scored roughly 1.8 to 1.9, and software iSCSI came in around 2.75). This means one thing: while performance is great across all protocols, plan on extra CPU power for software iSCSI.
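
As a rough planning aid, here is a small Python sketch that scales an FC-based CPU figure by those relative costs. The ratios are my approximations of the paper’s results, and the 8% baseline in the example is purely hypothetical.

    # Approximate CPU cost per unit of storage I/O relative to FC (1.0),
    # as read off the white paper's results; treat these as rough figures.
    RELATIVE_CPU_COST = {
        "fc": 1.0,
        "hw_iscsi": 1.0,   # roughly even with FC
        "nas": 1.85,
        "sw_iscsi": 2.75,
    }

    def storage_cpu_estimate(fc_cpu_percent, protocol):
        """Scale the CPU spent on storage I/O under FC to another protocol."""
        return fc_cpu_percent * RELATIVE_CPU_COST[protocol]

    # Example: if storage I/O costs about 8% CPU on FC, budget roughly 22%
    # for the same workload on software iSCSI.
    print(storage_cpu_estimate(8.0, "sw_iscsi"))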

I was pleasantly surprised to see hardware iSCSI dead even with 2 Gb FC. I had expected some additional overhead even with dedicated hardware, but that wasn’t the case. In a dedicated iSCSI solution (unless you’re using really cheap equipment, like hooking a couple of big drives up to that old desktop), you won’t hit the CPU-use ceiling unless you fail badly at planning.

All of these protocols are perfectly valid. There could have been more meat in the paper, but it did a good job of accurately testing four of the most common storage architectures used with VMware’s products.

Overall, I give this white paper seven “pokers.” Why pokers? Because stars and 1-to-10 ratings are common, and pokers are mine. And because a fireplace poker can jar you into action when you get jabbed with one, seven pokers means you should read this paper if you have any responsibility for virtualization.


April 16, 2008  1:48 PM

Saving money by using virtualization

Eric Siebert

As part of a business case to justify our server consolidation/virtualization project, I had to show the benefits the project would provide. Virtualization provides a lot of “soft” benefits, like reduced administration, maintenance costs and head count, but one of the “hard” benefits is the reduction in power and cooling costs. I put together a little spreadsheet of all my servers and the wattage of their power supplies to help calculate how much money we would save in that area. The end result was real numbers I could take to management to show them the ROI that virtualization provided.

In today’s world, the cost of just about everything is on the rise. Fuel costs in particular have a ripple effect on just about everything we buy, which also affects computers. That’s why virtualization is a great way to offset those increased costs. Providing power and cooling to a data center can be a very big expense, and virtualizing servers can dramatically reduce that cost. PlateSpin provides a nice power savings calculator on its website. If we plug in the following numbers:

  • 200 physical servers
  • average usage of 750 watts per server
  • average processor utilization of 10% before virtualization
  • target processor utilization of 60% after virtualization

The average power and cooling savings comes out to $219,000 per year with a consolidation ratio of 5:1, based on a cost of 10 cents per kilowatt-hour. As the cost of power increases, the savings become even greater: at 12 cents the savings are $262,800 per year, and at 15 cents they are $328,500 per year.
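
Here is a minimal Python sketch of the arithmetic behind numbers like these. The cooling multiplier and hours of operation are my assumptions, so the result lands in the same ballpark as the calculator’s figure rather than matching it exactly.

    def annual_power_savings(servers, watts_each, consolidation_ratio,
                             dollars_per_kwh, cooling_factor=2.0,
                             hours_per_year=8760):
        """Estimate yearly power-and-cooling savings from consolidation.

        cooling_factor doubles the electrical load to approximate cooling;
        it is an assumption, not a figure from the PlateSpin calculator.
        """
        servers_removed = servers - servers / consolidation_ratio
        kw_removed = servers_removed * watts_each / 1000.0
        kwh_saved = kw_removed * hours_per_year * cooling_factor
        return kwh_saved * dollars_per_kwh

    # 200 servers at 750 watts, consolidated 5:1, at 10 cents per kilowatt-hour
    print(round(annual_power_savings(200, 750, 5, 0.10)))   # about $210,000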

Of course, savings will vary based on a number of factors, including how well utilized your physical servers are before virtualization, your consolidation ratio (which can sometimes be as high as 15:1) and your location. Different parts of the country average different costs per kilowatt-hour: California and New York tend to be the highest at 12 to 15 cents, while Idaho and Wyoming are the cheapest at about 5 cents. Power costs tend to rise a lot more than they go down, so the cost argument for virtualization becomes much easier when you factor in the potential savings.

Some power companies, like PG&E, even offer incentives and rebates for virtualizing your data center and reducing power consumption. A greener data center benefits everyone: besides reducing costs, it also helps the environment. Virtualization is one of the key technologies making this possible.


April 16, 2008  10:50 AM

Citrix Presentation Server 4.5 and VMware VI3.5: A happy cohabitation

Joe Foran


I have a confession to make: when it comes to Citrix’s XenServer/Presentation Server/MetaFrame/WinFrame product line, I’ve always been biased. I simply love it. I remember delivering JDE software to users at a midsize manufacturing company (one that has since been swallowed by a large imperial juggernaut). As a server admin there, I had only to deploy the Citrix client and some configuration, and the desktop admins loved me for it. At my prior company, I remember going desktop to desktop installing nasty, frequently-updated-due-to-crappy-design applications on hundreds of clients’ desktops. Thanks to Citrix, that chore went away, and I was the office hero because staff could work at home when they needed to.

Then I discovered VMware and fell in love all over again. Now I could deploy rich desktops without granting server access to the desktop, consolidate hundreds of servers, roll out emergency desktops in half the time, deploy servers from cloned templates with ease and back up entire systems without any agents. The only problem was that I could never get Citrix to run well on ESX 2 and 2.5, even though I was told that Citrix and VMware go together like PB and J. If I were to anthropomorphize the whole thing (and I will), I’d say the two were jealous of each other and vied for my love and affection.

Putting Citrix on VMware
I had been advising people against using Citrix and VMware together; should someone insist, I always recommended serious testing first. Then one day I broke down: after reading the VMware Performance Study and a great VMTN post, I figured it was about time I did my own testing. And like the aforementioned references, I got great results.

I officially rolled out Citrix Presentation Server on VI3.5 and the performance has been stellar. I don’t have a lot of users on the Presentation Servers, but I run them alongside other production servers hosting the server side of some medical applications (billing applications, etc.), effectively putting the client and server on the same hosts. I’ve done this for my own office and for a couple of clients now. You could say that I am officially backpedaling now and embracing Citrix on VMware.

Here are my suggestions if you decide to try this for yourself:

Distributed Resource Scheduler (DRS) – Use anti-affinity rules to keep your Citrix servers from bunching up together if you allow automatic placement. While it’s unlikely that a large farm will wind up with all of its Citrix eggs in a few baskets and then lose all of those baskets, it’s a possibility that should be planned for.
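
If you want a quick sanity check that your Citrix guests really are spread out, a few lines of Python will flag hosts carrying more than their share. The inventory mapping here is hypothetical; in practice you would pull it from VirtualCenter.

    from collections import defaultdict

    def hosts_violating_anti_affinity(vm_to_host, max_citrix_vms_per_host=1):
        """Return hosts running more Citrix VMs than the rule intends."""
        per_host = defaultdict(list)
        for vm, host in vm_to_host.items():
            per_host[host].append(vm)
        return {h: vms for h, vms in per_host.items()
                if len(vms) > max_citrix_vms_per_host}

    # Hypothetical inventory of Citrix VMs and their current hosts
    inventory = {"ctx-ps-01": "esx01", "ctx-ps-02": "esx01", "ctx-ps-03": "esx02"}
    print(hosts_violating_anti_affinity(inventory))   # {'esx01': [...]}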

Storage – Use the fastest storage you can. Citrix directly affects the user experience and shouldn’t be skimped on. Slow Citrix equals unhappy masses, which equals a poor perception of IT, which equals job trouble for you. If you have multiple storage area networks (SANs) to connect to, or even multiple logical unit numbers (LUNs) on the same SAN in different RAID groups, spread the virtual machine disk files across your storage infrastructure to minimize the disk I/O contention that Citrix boxes can generate (this is a good practice in any Citrix environment, not just a virtualized one). Granted, I’m talking out of the side of my head here, since I run one of my Citrix farms on an iSCSI SAN and it performs very well, but scalability may be an issue I don’t have to address to the same degree as the largest enterprises.

Benefits of a Citrix on VMware system
The net result I’m seeing is an average of 18 to 20 users on each Citrix box before performance starts to tank, which is what I was getting on my physical boxes. I don’t need to schedule reboots as frequently due to memory leaks (though we also redid the base Win2K3 install for R2, so I can’t definitively credit virtualization here), and when I do reboot, the reboot time is, like any virtual system’s, much faster than a hardware reboot.

Since I can now put my Citrix disks on the SAN, I can do block-level backups of data stored on Citrix servers (which, as any Citrix admin can tell you, accumulates no matter what you do, since users always find a way). Having templates makes it easy to roll out new Citrix boxes as well, especially since PS 4.5 makes adding a new box to your farm a breeze. Then there’s my favorite: snapshots. I’d accept a lower user/server ratio if I had to just to have this feature. Luckily, I don’t have to. I can take snapshots just before and after every new application is installed for publishing; before and after every app is patched; before and after updating Windows; and before and after updating Citrix. Being able to roll back with such ease is what makes me truly, deeply happy with Citrix on VMware.
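
The before-and-after snapshot habit is also easy to script. The sketch below uses the open source pyVmomi bindings to the VMware API; the connection details and VM name are placeholders, and any SDK with snapshot support would work just as well.

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def snapshot_vm(si, vm_name, label):
        """Take a named snapshot of a VM before or after a change."""
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == vm_name)
        # Skip the memory image to keep the snapshot small; quiesce the guest.
        return vm.CreateSnapshot_Task(name=label,
                                      description="Citrix change-window snapshot",
                                      memory=False, quiesce=True)

    # Placeholder connection details and VM name
    si = SmartConnect(host="virtualcenter.example.com", user="admin", pwd="secret")
    snapshot_vm(si, "ctx-ps-01", "pre-hotfix")
    Disconnect(si)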

So, same user/server ratio, shorter downtime periods, quicker deployment and snapshots: I call this a win-win.


April 15, 2008  12:37 PM

VKernel Capacity Bottleneck Analyzer for ESX virtualization available

Bridget Botelho

Portsmouth, NH-based VKernel announced the availability of its Capacity Bottleneck Analyzer Virtual Appliance, which lets system administrators see capacity issues in VMware ESX Server-based environments so they can make the changes necessary for optimum performance.

Network bottlenecks are an issue in virtual environments because of the increased demand virtual machines place on the network. A number of networking vendors have developed network products specifically for virtual environments to alleviate these issues.

A newer vendor, Altor Networks Inc., introduced a Virtual Network Security Analyzer last month that also lets IT see what is happening in the virtual environment.

VKernel’s software monitors CPU, memory and storage utilization trends in VMware ESX environments across hosts, clusters and resource pools. The virtual appliance gives users a single-screen management dashboard that displays all of the details on capacity to help plan for new hosts, clusters and resource pools. Users can also receive alerts via email and SNMP.

The VKernel Capacity Bottleneck Analyzer Virtual Appliance is currently available with pricing starting at $199 per CPU socket.


April 8, 2008  1:46 AM

VKernel virtual appliance identifies issues with new dashboards

Rick Vanover

Portsmouth, NH-based VKernel is deep into the beta stages for its new capacity bottleneck analyzer virtual appliance (VA). I had a chance to preview a pre-release candidate this week and was very happy with the product. The capacity analyzer, a follow-up to VKernel’s chargeback product that launched last year, plugs into a VMware VirtualCenter server or ESX host and gathers storage, networking, processor and memory usage data for elements of the virtual environment. The capacity analyzer also includes new features that work well with a growing virtualization environment.

Categorized dashboards
Downloading the virtual appliance, assigning an IP address and pointing it to the VirtualCenter server was a breeze, and I was looking at a base dashboard in less than 20 minutes. Probably the most notable feature of this VA is the main dashboard, which provides a snapshot view of the environment. Everyone can learn something about their virtual environment by looking at the dashboard’s cumulative view. Upon first look, I learned some new things about my own environment; for starters, I have one virtual machine with an abnormally large storage footprint, and I did not know it had grown as big as it has.

The dashboard is element-sensitive, so if I have multiple hosts, datacenters or VMware ESX clusters, the bottleneck dashboard can display the status relevant to that object. For example, the figure below shows the dashboard summary for one particular datacenter:

Dashboard

One thing I find very beneficial to the virtualization administrator is the storage detail. With the storage dashboards, you can get very detailed within the VA. For example, when looking at the dashboard’s datastore statistics, important information about the environment is displayed. Storage is usually one of the virtualization administrator’s biggest pain points, and any tool that increases visibility into storage usage and trending is welcome. I mentioned a large virtual machine earlier; this figure shows that virtual machine at the top of the storage resource consumers list, with 81 GB allocated for its storage:

Storage

Note also that the LUNs are enumerated and their serial numbers are presented. Determining the LUN serial number is not possible through the Virtual Infrastructure Client, so I have frequently had to use the esxcfg-mpath command to get it. Depending on your storage environment, you may need to reconcile LUNs by serial number; when many LUNs share the same geometry, the serial number is the only true way to match a drive to the virtual environment. This matters when returning a LUN, as you do not want the wrong LUN removed from the storage system while it is in use.
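
When a serial number comes over from the storage team, a trivial Python filter over the command output saves some squinting at the service console. The host name and serial fragment below are placeholders, and the script does nothing beyond a substring match because the esxcfg-mpath output format varies between ESX releases.

    import subprocess

    def lines_matching_serial(mpath_output, serial_fragment):
        """Return the lines of esxcfg-mpath output that mention a serial fragment."""
        return [line for line in mpath_output.splitlines()
                if serial_fragment in line]

    # Collect the listing from the ESX service console, then filter it locally.
    output = subprocess.run(["ssh", "root@esx01", "esxcfg-mpath", "-l"],
                            capture_output=True, text=True, check=True).stdout
    for line in lines_matching_serial(output, "600508B4"):
        print(line)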

Release candidate coming soon
The capacity bottleneck analyzer VA will soon move into release candidate mode, so be sure to check the VKernel evaluation site for the standard edition evaluation download.


April 7, 2008  10:35 AM

User interface management tools may jolt virtualization ecosystem

Andrew Kutz

Hypervisors and VMs are becoming commoditized, shifting the emphasis toward user interface and management tools. In other words, anyone can make a virtualization platform; the platform that survives Hyper-V’s entry into the virtualization space will be the one that develops stand-out management features.

Virtualization pro Andrew Kutz discusses the components of an evolving virtualization ecosystem.


April 3, 2008  10:16 AM

Multiple OS images potentially hinder desktop virtualization

Keith Harrell

System administrators can’t seem to stop gushing about virtualization benefits. Data center folks reduce hardware footprints and lower power and cooling costs by consolidating servers. When IT pros take virtualization to the next level, such as implementing desktop and application virtualization, the benefits seem to expand exponentially.

However, system admins beware: some management issues cannot be glossed over, such as what to do about multiple OS images. In this video, virtualization expert Barb Goldworm discusses some potential risks when extending virtualization to the desktop and how to avoid them.


April 3, 2008  7:33 AM

The pros and cons of Virtuozzo: A user review

Keith Harrell

Atlanta-based virtualization pro Mark Dean shares his thoughts in a guest blog for SearchServerVirtualization.com.

One of the more popular products in the growing virtualization market is Parallels Virtuozzo Containers. Virtuozzo Containers provides a stable, high-performance virtualization platform. However, the same technology also has drawbacks, restricting the operating systems that can be used for both hosts and VMs.

I deployed Virtuozzo for a customer that wanted to leverage virtualization but was uneasy about the performance of its database server. I suggested Virtuozzo because it straddles the physical-virtual line with OS containers; it is similar to the technology that Sun Solaris uses.

The strength of Virtuozzo Containers is really in that blurred line between the physical and virtual platforms. Instead of putting a hypervisor between the OS and the hardware, Virtuozzo Containers virtualizes the OS by sharing OS code, files, memory and cache from the root OS, which in Virtuozzo terms is called the Hardware Node (and is represented as Container ID 0). This means the VMs use the hardware directly rather than having calls to the hardware trapped by a hypervisor and then executed, which translates into better performance for I/O workloads. This increase in performance is one of the main reasons companies deploy Virtuozzo; high-I/O workloads such as heavy transaction-based databases benefit from the shared-code nature of the containers.

But as hardware advances with the option of CPU hardware assistance from the chip manufacturers (AMD-V and Intel VT), I see Virtuozzo’s technology becoming irrelevant. Since I can now run unmodified VMs using Xen or KVM on Linux and no longer take the (over-exaggerated) performance hit of a binary-translation hypervisor (as in ESX), why go with a limiting technology?

Virtuozzo imposes some limitations on what you can run in your farm. Since the VMs are basically sharing code from the root container OS, you can only run that type of VM on the host. Virtuozzo Containers currently supports only Windows Server 2003 and the main Linux distributions; you cannot run Solaris, BSD or NetWare. For some IT shops that may not be a problem, but just about everywhere I’ve seen, there is a mix of Unix and Windows, and, for many government shops, NetWare.

If a virtualization vendor does not enable live migration and host/VM isolation or embrace the concept of a farm, it is not good for production workloads. Virtuozzo has some of these, but its isolation is not as good as I like to see. I find that right now only VMware VI3 with ESX Server 3.5 has all of those concepts down. Xen Enterprise, when coupled with CPU-assisted virtualization, is the best contender to challenge VMware right now.

Mark Dean is a VMware Professional Partner and a Microsoft Certified Partner with certifications from VMware, Microsoft, Novell, Citrix, IBM and HP, along with HP and IBM hardware certifications. Dean runs his own virtualization consulting company, VM Computing.


April 1, 2008  10:59 AM

A first look at Hyper-V’s Virtual Machine Manager

Rick Vanover

Continuing our review of Hyper-V, Microsoft’s recently released virtualization product for Windows Server 2008, we focus on the management aspect of the hypervisor. In two other recent blogs, I took a quick look at the Hyper-V Manager and the simple creation of a virtual machine. Also on SearchServerVirtualization.com, fellow contributor Anil Desai gave advice on using the Hyper-V Manager. Now we’ll take a closer look at System Center Virtual Machine Manager.

I installed System Center Virtual Machine Manager (VMM) to manage virtual machines within Windows Server 2008. Installing VMM is fairly straightforward, but it is worth noting the following prerequisites:

  • .NET framework 2.0 to start the install, which is automatically upgraded to version 3.0
  • PowerShell from the Windows Server 2008 release environment
  • A SQL Server database instance (can be a local SQL Server 2005 Express edition or hosted database)
  • Domain membership

After some quick iterations of the VMM installer, you will have the console application available. From there, you can add a host to the console, as I have done below with the HYPERV-TEST-RWV host:

VMM Console

Inside the VMM console, I added the single host (the same server on which the console is installed) and spent most of my time afterwards navigating the help file. Surprisingly, the VMM help file (C:\Program Files\Microsoft System Center Virtual Machine Manager 2007\help\Help.chm) is quite comprehensive and provides a good starting point for most configuration tasks in a single resource. One of the best initial observations about the VMM console is the Windows integration for delegated security and rights, which VMM refers to as a self-service policy. These policies offer one option above the competition in that you can assign users to be local administrators through the self-service policy; in other products, this is typically addressed through group policy instead. Below is the configuration of a self-service policy:

Self-Service Policy Screen Shot

As with other virtualization platforms, the management layer is the key to an enterprise implementation. System Center VMM is no exception, and this initial release has support for migrating virtual machines and for shared storage. However, this part of my evaluation of VMM will not focus on those components just yet, so stay tuned for my next entry on Hyper-V.

Microsoft resources for VMM
System Center is a comprehensive set of products that includes Systems Management Server (SMS) and other pieces; VMM can function as a standalone product or alongside the rest of the suite. Microsoft has a nice video that showcases how VMM fits into the management space for virtual machines. If you are familiar with virtual machine management, it will be somewhat introductory, but toward the end of the video some quick tasks are performed on-screen to give a feel for managing virtual machines in the VMM environment.

