The Virtualization Room


May 5, 2008  7:51 AM

VMware now officially supports single CPU licensing

Eric Siebert

VMware has announced that it will begin supporting ESX running on a single physical, multi-core (up to four cores) processor.

Previously, the VMware end user license agreement was unclear about whether this was supported, creating much debate over this subject in the VMware forums (see ESX Pricing and VMware Planning). Customers were getting different responses from VMware with some representatives saying it was OK to do this and others saying it was not. One response from VMware on this issue was that it technically would work but that it was not officially supported.

Despite this newly announced support, ESX is still sold only in two-CPU increments, and it is supported only if the physical server is on the VMware-approved Hardware Compatibility List (HCL). This can be advantageous to customers who want to buy lower-cost servers with a single processor and use them for less intensive applications. It also allows smaller customers to buy two single-processor, multi-core servers and split an ESX license between them, taking advantage of redundant hardware and features like High Availability and VMotion.
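To make the arithmetic concrete, here is a minimal sketch of how the per-socket licensing works out under the policy described above. The function name and the simplifying assumption that cores are ignored (up to the four-core limit) are mine, not VMware's; this is illustrative only.

```python
import math

def esx_licenses_needed(server_socket_counts):
    """Rough count of two-CPU ESX licenses needed for a set of servers,
    assuming licensing is strictly per socket and cores per socket are
    ignored up to the four-core limit (an illustrative simplification)."""
    total_sockets = sum(server_socket_counts)
    return math.ceil(total_sockets / 2)

# One dual-socket host consumes one license; so do two single-socket,
# multi-core hosts, which is the split scenario described above.
print(esx_licenses_needed([2]))     # 1
print(esx_licenses_needed([1, 1]))  # 1
```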

It’s good to see VMware changing its licensing policies to better adapt to customers’ needs. Multi-core processors have caused many other vendors to make their licensing policies more restrictive, but VMware has stuck with its per-socket licensing model. VMware currently allows only up to four cores per processor, but with eight-core processors on the horizon, it’s probably inevitable that VMware will eventually change its licensing.

You can click on these links to read through VMware’s Multi-core and Single Processor licensing policies.

May 1, 2008  11:21 AM

Virtualization tools, advice focus on ROI

Bridget Botelho

The decision whether to adopt virtualization often comes down to the corporate bottom line. CFOs want to know how long it will be before they see return on investment from virtualization, and there are many considerations in determining ROI.

Yesterday, I spoke with Stephen Fink, senior infrastructure architect for the global IT consultancy Avanade, about a comprehensive tool he created that takes just about every aspect of a data center into consideration to determine what the ROI for virtualization will be.

Fink has 14 years of experience as a consultant and created the virtualization ROI model as a tool for his own clients, but it made its way around the company and is now the standard way Avanade consultants determine ROI, he said.

There are 125 inputs in the Microsoft Excel-based tool – such as power and cooling, cabling, network, CPU, servers, floor space, and staffing costs – and each helps determine the impact of implementing virtualization at a customer’s location, he said.

“There will never be a one-size-fits-all solution, and there has to be a business case for virtualization; I look at their environment from a high-level approach and assess the inventory. We look at their apps, their network, the annual power costs, licensing costs for software, etc., to see what they pay for their environment, and we can now give a really good idea of the ROI with Microsoft Hyper-V and VMware,” Fink said.

Avanade, which is partially owned by Microsoft, has benchmark information on Hyper-V from the most recent release candidates and uses that to determine Hyper-V ROI. Hyper-V is scheduled for release in August.

“We look at the net costs of the environment without virtualization versus what they would pay if they virtualized, with specific server types, running ESX or Hyper-V. We can tell you how many systems can be virtualized, and you can see the cost of your virtual servers, the cost per OS and the cost of your virtual hosts, to determine your annual cost reduction from virtualized guests,” Fink explained.
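Fink’s actual model is proprietary to Avanade, but the kind of before-and-after comparison he describes can be sketched in a few lines. Every name and figure below is a placeholder for illustration; this is not his tool.

```python
# Illustrative before/after cost comparison in the spirit of the tool
# described above; not Avanade's model, and every figure is a placeholder.
physical_servers = 40
annual_cost_per_physical = 4500   # power, cooling, space, support per box
consolidation_ratio = 8           # guest VMs per virtualization host
annual_cost_per_host = 9000       # larger host, shared storage, hypervisor license

hosts_needed = -(-physical_servers // consolidation_ratio)  # ceiling division
cost_without = physical_servers * annual_cost_per_physical
cost_with = hosts_needed * annual_cost_per_host

print(f"Hosts needed: {hosts_needed}")
print(f"Annual cost, all physical: ${cost_without:,}")
print(f"Annual cost, virtualized: ${cost_with:,}")
print(f"Cost per OS instance after virtualizing: ${cost_with / physical_servers:,.0f}")
print(f"Annual cost reduction: ${cost_without - cost_with:,}")
```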

Fink said consultants like him are often used to determine whether virtualization is worth the initial acquisition and licensing costs, which depends on businesses’ expectations when it comes to ROI. “If a company already operates efficiently and has a portfolio of apps that make them a poor candidate for virtualization – like very high CPU- and memory-consuming apps or database servers – virtualization may not be the answer for them,” Fink said.

Avanade uses the tool as part of its consultancy, and it is only available through Avanade consultants – which, of course, comes at a cost to businesses.

Other virtualization calculator tools are available for free, like the one from VMware, but these aren’t as precise as Fink’s tool from what I can tell.

There are also plenty of experts offering advice on determining virtualization ROI that won’t cost you anything.

According to IT security and virtualization technology analyst Alessandro Perilli, to calculate ROI, “you need to apply simple math to the costs your company could mitigate or eliminate by adopting virtualization.”

He reported that virtualization can reduce some of the following direct costs (a simple payback sketch follows the list):

* Cost of space (leased or owned) for physical servers
* Energy to power physical servers
* Air conditioning to cool the server room
* Hardware cost of physical servers
* Hardware cost of networking devices (including expensive gear like switches and Fibre Channel host bus adapters)
* Software cost for operating system licenses
* Annual support contract costs for purchased hardware and software
* Hardware parts for expected failures
* Downtime cost for expected hardware failures
* Service hours of maintenance cost for every physical server and networking device
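That “simple math” amounts to summing the annual costs you expect to mitigate and dividing the up-front investment by that figure to get a rough payback period. The sketch below is a minimal illustration under assumed numbers; the category names and every dollar value are placeholders, not data from Perilli or anyone else.

```python
# Rough virtualization payback sketch based on the "simple math" idea above.
# All cost figures are placeholders; substitute your own measured numbers.
annual_costs_avoided = {
    "floor_space": 12000,
    "server_power": 18000,
    "cooling": 9000,
    "server_hardware_refresh": 30000,
    "network_gear": 8000,
    "os_licenses": 10000,
    "support_contracts": 15000,
    "spare_parts_and_downtime": 5000,
    "maintenance_hours": 20000,
}
upfront_investment = 120000  # hypervisor licenses, shared storage, migration work

annual_savings = sum(annual_costs_avoided.values())
payback_years = upfront_investment / annual_savings

print(f"Estimated annual savings: ${annual_savings:,}")
print(f"Simple payback period: {payback_years:.1f} years")
```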

Scott Feuless, a senior consultant with Compass, based in Texas, wrote about how to quantify virtualization ROI recently, and IT consultant John Hayes of Avnet Technology Solutions also had some advice on figuring out the cost of virtualization that could help make the case for virtualization.


April 29, 2008  12:49 PM

No virtualization-specific requirements for PCI audits

Eric Siebert

If your company deals with credit cards, you are required to follow the Payment Card Industry’s data security standards (PCI DSS). The major credit card players — Visa, Mastercard, American Express and Discover — set forth these requirements in order to protect credit card data. If audits reveal that these regulations are not followed, fines or revocation of credit card processing privileges can result. Often, these audits force companies to implement basic security practices that should have already been in place; however, no virtualization-specific requirements have yet been put into practice.

Having just survived another annual PCI compliance audit, I was again surprised that the strict standards for securing servers that must be followed contain nothing specific concerning virtual hosts and networks. Our auditor focused on guest virtual machines (VMs), ensuring they had up-to-date patches, locked-down security settings and current anti-virus definitions. But ironically, the host server that the virtual machines were running on went completely ignored. If the host server was compromised, it wouldn’t matter how secure the VMs were because they could be easily accessed. Host servers should always be securely locked down to protect the VMs which are running on them.

It seems that much of the IT industry has yet to react to the virtualization trend, having been slow in changing procedures to adjust to some of the unconventional concepts that virtualization introduces. When I told our auditor that the servers were virtual, the only thing he wanted to see was some documentation stating that the remote console sessions to the VMs were secure. It’s probably just a matter of time before specific requirements for virtual servers are introduced. In fact, a recent webinar takes up this issue of whether or not virtualized servers can be considered compliant, addressing section 2.2.1 of the PCI DSS which states, “Implement only one primary function per server”; that is to say, web servers, database servers and DNS should be implemented on separate servers. Virtual servers typically have many functions running on a single physical server, which would make them noncompliant.

Looking at the PCI Knowledgebase, it seems many companies are confused about this, and some are holding off on virtualization until it is cleared up. We’ll have to wait and see what develops and how the specification is modified to allow for virtual servers. It would be in the best interest of companies like VMware and Microsoft to work with the PCI Security Standards Council to get this sorted out as soon as possible.

You can read the current PCI DSS 1.1 specification here.


April 29, 2008  12:20 PM

Using ISO files with virtual machines

Eric Siebert

ISO files offer an advantage to virtual machines (VMs), chiefly as a means of loading operating systems and applications on virtual servers without the hassle of using physical media. Many tools for creating, editing and mounting ISOs are readily available and if you haven’t been creating ISOs already, keep reading.

An ISO file is an archive file format (ISO 9660), typically an image of a CD-ROM or DVD-ROM, similar to a .ZIP file but without file compression. An ISO can be any size, from a few megabytes to several gigabytes. Reading an ISO file is much faster than reading from physical media like CD-ROMs. Free from physical imperfections, ISO files are easy to mount on VMs and don’t require looking for a CD when it is needed.

I’ve created dozens of ISO files for different operating systems and applications. For my Windows servers, I no longer copy the I386 directory to the server since I can easily mount it as an ISO file on my virtual machines as needed, saving disk space on the VM. I also create ISO files with troubleshooting tools like the Sysinternals utilities, so I can mount them quickly to troubleshoot problems on my VMs. Once an ISO library is created, a central repository on a host datastore or remote server can be made using NFS or Samba to provide access to all VMs.

A number of applications are available to mount ISO files on a physical system by creating a virtual CD-ROM drive. Once mounted, the contents of an ISO file can be accessed just like a physical CD-ROM drive. Linux and ESX systems can use the mount command to do this, while Microsoft provides a little-known virtual CD-ROM driver that can be downloaded for free. ISO files can also be created and edited with other tools. Linux and ESX systems come with a command called dd that creates an ISO file from an input device like a CD-ROM or DVD-ROM. Microsoft provides a tool called cdburn in its downloadable Resource Kits. For your convenience, I’ve created a short list of some of the many tools available for creating, editing and mounting ISO files.
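For the Linux/ESX side of the workflow above, here is a minimal sketch that shells out to dd to image an optical disc and to mount to loop-mount the resulting ISO. It assumes a Linux host (or the ESX service console) with root privileges and a /dev/cdrom device; the paths and mount point are placeholders.

```python
import subprocess

def create_iso_from_disc(device="/dev/cdrom", iso_path="/tmp/disc.iso"):
    """Image an optical disc into an ISO file using dd."""
    subprocess.run(["dd", f"if={device}", f"of={iso_path}", "bs=2048"], check=True)
    return iso_path

def mount_iso(iso_path, mount_point="/mnt/iso"):
    """Loop-mount an ISO so its contents read like a physical CD-ROM."""
    subprocess.run(["mkdir", "-p", mount_point], check=True)
    subprocess.run(["mount", "-o", "loop,ro", iso_path, mount_point], check=True)
    return mount_point

if __name__ == "__main__":
    iso = create_iso_from_disc()
    print("ISO contents available under", mount_iso(iso))
```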

Tools to create and edit ISO files:

Free

  • cdburn.exe (available in Windows XP and Server 2000/2003 Resource Kits)
  • dd (built into Linux and ESX; Windows ports such as dd.exe also exist)
  • ISO Recorder
  • ImgBurn

Commercial

Tools to mount ISO files:

Free

Commercial

A more complete list of ISO resources can be found here.


April 28, 2008  2:16 PM

Is hypervisor-based virtualization doomed?

Keith Harrell

The following is a guest blog written by Schorschi Decker, an IT professional specializing in virtualization and enterprise-level management with over 25 years of experience in the industry.

Operating system isolation, or hypervisor-based virtualization, remains popular, but are we settling for less than we should? Its limitations masked by modest incremental gains in effectiveness, hypervisor-based virtualization persists because it continues to hide an ugly secret: poor-quality code.

Many who have worked with hypervisor-based virtualization may already know this, but anyone who has attempted an implementation of application instancing has undoubtedly seen where hypervisors fail. Replication of the operating system within a virtual instance is waste, waste driven by bad code. Faster cores, more cores per package, limited improvement in memory and device bus design, marginal increases in mechanical drive design and shared storage models have all helped mask how inefficiently hypervisors utilize processors.

If customer adoption rates are an indicator of success, past attempts at application instancing have not been successful to any consistent degree (there are no buzzwords for an application instancing method). To be clear, homogeneous applications have benefited, such as Microsoft SQL Server and IIS, Oracle and even Citrix. However, in the case of Citrix, application instancing has been environment-dependent to a degree.

Resource management within a common operating instance has not significantly changed since the introduction of mainframe logical partitions (LPARs). Solaris Zones use a container-based model, whereas AIX micro-partitions follow a truer application instancing model. Even Apple Computer introduced simple memory partitioning in Macintosh Operating System 7.x. DEC (yes, Digital Equipment Corporation) leveraged the Microsoft Job Engine API, effectively a processor affinity layer, in a groundbreaking concept product that Compaq buried. Does anyone remember that product?

The hypervisor foundation resulted from heterogeneous application partitioning failures. For Windows, application instancing has stalled at times or has otherwise been overshadowed by operating system instance isolation techniques. Windows SRM is a weak attempt to crack the hypervisor foundation, but it is so immature at this point that it is useless. Microsoft SoftGrid, now Microsoft Application Virtualization, has greater potential but is just not well accepted at this point. Should Microsoft provide it for free to drive acceptance?

The technology industry has attempted some rather interesting implementations to offset the impact of operating system instance isolation, for example thin-disking and image-sharing, which are based on eliminating underutilized space in disk partitions. Several attempts at addressing the DLL and .NET issues (e.g., Microsoft SoftGrid as well as Citrix) have been made to support heterogeneous application instancing, but they have masked the true issue that has always existed: the lack of quality code.

Why do I make this point? Because the hypervisor is essentially a band-aid on the boo-boo of bad coding. Quality code makes for stable environments. With a stable and predictable environment, applications can be run without fear of crashing, and it is this fear that gives hypervisor virtualization its strength.

Did someone just say “operating system isolation”? Case in point: the recent Symantec Antivirus issue with VMware ESX. Code quality is going to become a green issue, just as watts per core and total power consumption have in the data center. Enterprise customers who purchase significant code-based products will demand better code as a way to reduce non-hardware-oriented costs. Just how many lines of executed code are redundant processing when hypervisor-based virtualization is leveraged? Billions? Wake up and smell the binary-generated ozone! Those cycles cost real money and introduce a very big surface area for bug discovery.

Poor software quality makes hypervisor-based virtualization more expensive than it should be and the publishers of operating systems love it. After all, the total number of operating system licenses purchased has not gone down with hypervisor virtualization. The industry has known for years that poor quality software has been an issue. One radical solution is to hold software publishers to a higher standard, but that idea has not gained enough grassroots support – yet. When it does, the hypervisor will be history.


April 24, 2008  10:13 AM

Choosing your next virtualization project

Rick Vanover

For organizations with an established server virtualization environment, future virtualization projects are looming on the horizon. Whether it is desktop or application virtualization, much deliberation will undoubtedly go into choosing the best product for the new virtualization endeavor — as it should.

The next wave of virtualization projects should always use the best-of-breed product for the requirements and functionality of your particular environment. For example, say you’re an organization with a successful VMware-based server virtualization environment using VirtualCenter and ESX 3. Does this mean that VMware Virtual Desktop Infrastructure (VDI) is the default selection for a virtualized desktop project? Don’t be fooled into thinking that a single-vendor environment is going to translate into an efficient one.

Identify the best solution, even if you can’t afford it. That also includes the host environment hardware for your next virtualization project, which may require a decision between blades and general-purpose servers for virtual hosts. Taking the time and effort to identify the best solution, after making full comparisons of the potential environments, also prepares you for any unforeseen element in a post-implementation inquiry.

Make no mistake, there are plenty of advantages to going with what’s familiar: Price discounts, vendor relationships and non-disclosure access are all strong reasons to select the same vendor, but only after due diligence in your decision process should you make another commitment.


April 22, 2008  1:50 PM

Attend the best of VMworld (virtually)

Eric Siebert

Didn’t attend last year’s VMworld? Don’t worry: You can download many of the sessions for free from the VMworld website.

VMware has been gradually releasing the sessions on the website. After last year’s conference, VMware chose not to release all of the sessions to non-attendees, since many of them would be reused at VMworld Europe 2008. Currently, 133 of the 261 total sessions are available to watch online, with 20 more being offered each month up until VMworld 2008 in Las Vegas. VMware enthusiasts can purchase a “virtual conference pass” for this year’s VMworld, which will provide full online access to sessions and labs.

The sessions reflect VMworld’s focus on enterprise virtualization, but several sessions are available on products like Workstation, Server, VDI and ACE. While some of the sessions are of technical benefit to system administrators, other sessions address topics such as business continuity, planning, business metrics and software lifecycle automation.

Of the free sessions available I’ve noted those I would recommend to system administrators who want to expand their technical knowledge. The sessions below are very good resources on understanding, troubleshooting, securing and tuning ESX and VirtualCenter.

To access the sessions, simply go to the VMworld website and create a free account. Once you have registered, click on the sessions and labs link to access the free sessions. You can even access sessions from previous years. Although they are dated, these sessions still have some good, applicable information. Once you click on a session, you can download the audio as an .mp3 file, the slides as a .pdf file or you can watch them together as a flash video.


April 21, 2008  2:16 PM

VMware Certified Professionals command higher salaries, report shows

Joseph Foran

It’s been six months since I posted about the value of the VMware Certified Professional (VCP) certificate, and I thought I’d provide an update.

As the image shows, courtesy of indeed.com, the VCP is as hot as ever.


Since I last covered this topic, the following shifts occurred:

  • The VCP gained $3,000
  • The A+ climbed $6,000
  • Network+ declined $1,000
  • MCP gained $1,000
  • MCSE gained $2,000
  • CCA lost $2,000
  • CCEA picked up $2,000
  • RHCT picked up $3,000
  • RHCE picked up $2,000
  • RHCA lost $1,000

The big gain in VCP salaries over a period of less than six months shows that this technology is still very much an in-demand skill set and a hot certification to show off. It’s a new year and salaries did jump overall, so this is reflected in the data. As before, the international trend is also continuing, as the next two images (from itjobswatch.co.uk) show, in terms of salary and demand.

I intend to keep tracking these statistics every few quarters, so stay tuned. I’m also keeping my eye out for Citrix-sponsored Xen certifications and will be bringing an analysis of those to the blogosphere as soon as there’s some quantifiable information available.  And with VMware ramping up its certification programs, I expect to be adding second and third-tier VMware certifications.

What other certifications do you think should be compared? I’ve included a broad list of non-developer certs, from entry-level (MCP, A+) system admin certs through top-tier (CCEA, RHCA) certs, to show the variety and range and to compare the VCP’s placement as a hot technology. I’ve left off network, storage and many specialty certs because they may not be pervasive enough in the enterprise or may not be relevant topically. Since I’m one person with one view, I hope our readers will comment below and dictate to me what should be compared. So please fire away.


April 21, 2008  12:20 PM

VMware’s ESX 3i for free?

Joseph Foran

The Inquirer recently published a story on how Dell is considering giving away VMware VI 3i licensing on its PowerEdge servers. While I won’t rehash the details of the rumor here, I’ll add my opinion and analysis on why this bold move is being made, since it appears VMware is actively supporting this tactic after having said that hardware vendors will be free to choose what fees to charge customers for 3i, if any.

Hypervisors are destined to become a commodity item, even more so than other software, because everyone will be utilizing virtualization within the next few years. Dell and VMware aren’t reacting so much to competition from Virtual Iron, Hyper-Hype (oops, I mean Hyper-V) or Virtuozzo as they are to Phoenix’s Hyperspace and Xen’s Embedded offering. Hyperspace is the big target here, as it’s embedded virtualization from a BIOS manufacturer.

Putting a hypervisor at that level takes the old “I need to dual boot” equation from power users who need access to Linux and Windows, or Mac and Windows, to another level, in addition to being another virtualization offering. Virtualization originally took that need away for most people. I, for one, stopped dual booting and ran Linux in Windows, Windows in Linux, and Windows and Linux on the Mac via virtualization products, from the minute I got my mitts on VMware Workstation all the way through Parallels Desktop. Now Phoenix is turning the tables, taking virtualization away from being built on top of the BIOS like a conventional OS and making a run to own the space from the board level up. Phoenix is virtually going from a niche, unthought-of product to an enterprise contender (no pun intended).

VMware saw this coming. It anticipated the inevitable embedded hypervisor, which is why 3i came out in the first place. It also knows that we, the computer-consuming public, don’t really consider the BIOS when we buy a computer (be it a personal machine or a server). We don’t even realize that we pay for the BIOS, because BIOS builders charge chip- and board-level makers a licensing fee. That per-machine licensing fee is passed on to us in the cost of the board, and it’s minimal.

I am convinced this is where virtualization is headed: It will be a commodity, practically free for all, without needing much installation or configuration after the fact. VMware is betting on this core hypervisor as a lead-in to its flagship products. I expect VMware’s new strategy to focus on transitioning customers from the base-level embedded hypervisor to higher-end, paid offerings for management, replication, storage and so on.

Dell is also wise to this trend. They see the advantage of the embedded hypervisor as much as they saw the advantage of selling VMware’s ESX product line pre-installed on their hardware. They see that sooner or later everything they sell will have virtualization built in. I expect them to sell Hyperspace alongside 3i. I also expect they will need to make the price points equivalent, lest there be howls from customers who bought Hyperspace and want to upgrade to a higher level of virtualization management.

Advanced features like VMotion, DRS, HA, VCB, etc. are licensed at the license server level, making 3i as good a choice for virtualization as ESX 3.5. This comes from VMware’s own 3i announcement:

“VMware ESX Server 3i is the new architectural foundation for VMware Infrastructure 3, the most widely deployed virtualization software suite for optimizing and managing industry-standard IT environments. VMware customers will be able to easily implement the entire suite of VMware Infrastructure 3 products on top of this foundation, including VirtualCenter, VMotion, Distributed Resource Scheduler (DRS), High Availability (HA) and VMware Consolidated Backup (VCB).”

As it stands, 3i is cheap at around $500, so don’t expect this shift in pricing to impact VMware’s bottom line.


April 18, 2008  11:05 AM

Virtualization of Citrix Presentation Server in VMware calculations

Rick Vanover

Following Joe Foran’s recent blog post about virtualizing Citrix Presentation Server (PS) systems, I too have had success with this practice. In my experience, certain PS configurations can make great virtualization candidates, depending on how you use Citrix. A web interface for PS is a great candidate for a virtual system if it is on its own server, but additional criteria determine what can be configured for a virtualized Citrix environment.

Based on my experience, the deciding factor for virtualizing PS systems is how many concurrent sessions your published applications will carry. Published applications that are rarely used or will not have very many sessions are a good starting point for virtualized PS systems. An example would be a line-of-business published application that would not expect more than four concurrent users. A few of these types of applications on a virtual machine in ESX can work very well.

The biggest question becomes virtual machine provisioning from the memory and processor standpoint. If you have a baseline of your current Citrix usage, that is a good starting point for estimating the concurrent session usage. Take the following observations of a Citrix environment:

  • Each PS session takes 16 MB of RAM
  • Each published application within that environment requires 11 MB of RAM
  • There are four published applications on the server, none of which has exceeded five concurrent sessions

Just under 3.5 GB of RAM is required to meet the same environment requirements from the Citrix session perspective. By adding the base server and Citrix PS memory requirements to this calculated amount, you have identified the provisioning requirements of the Citrix server for the virtual role. From the processor standpoint, I generally provision the frequency limit at the rate of the physical system processor.
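As a rough illustration of this kind of provisioning estimate, the sketch below multiplies out the per-session and per-application figures and adds a base allowance for the operating system and Presentation Server itself. The base figure and the way per-application memory is counted per session are my assumptions, not the exact model behind the estimate above; plug in your own measured baseline instead.

```python
def citrix_vm_memory_mb(published_apps, peak_sessions_per_app,
                        session_mb=16, app_mb=11, base_os_and_ps_mb=1024):
    """Rough RAM estimate for a virtualized Presentation Server.

    Assumes each concurrent session costs session_mb and each session of a
    published application adds app_mb, on top of a base allowance for the OS
    and Presentation Server (the base figure is an assumption, not a quote).
    """
    sessions = published_apps * peak_sessions_per_app
    return base_os_and_ps_mb + sessions * (session_mb + app_mb)

# Example: four published apps, each peaking at five concurrent sessions.
print(citrix_vm_memory_mb(4, 5), "MB")  # 1564 MB under these assumptions
```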

The good news is that Citrix is licensed by client connection and not the number of servers. Therefore, distributing virtualized Citrix servers in a VMware environment is well poised to meet performance and availability requirements.

