The Virtualization Room

A SearchServerVirtualization.com and SearchVMware.com blog


May 8, 2008  9:10 AM

Citrix XenServer now shipping in Dell PowerEdge servers



Posted by: Bridget Botelho
Citrix XenServer, Embedded Virtualization, Servers, Virtual machine, Virtualization, Xen

Citrix Systems, Inc.'s XenServer hypervisor is now shipping in Dell PowerEdge servers, following the partnership announcement in October 2007.

With Dell, the initial products available worldwide include Citrix XenServer Dell Express Edition and Citrix XenServer Dell Enterprise Edition, both of which include Dell's management software, Dell OpenManage System Management. The Express Edition is a free download that can be upgraded to the Enterprise Edition.

By factory-integrating the Citrix XenServer hypervisor into Dell PowerEdge platforms, users can deploy virtual machines (VMs) as soon as they start their systems for the first time. The XenServer Dell Enterprise Edition also requires no additional management licenses or hardware, and upgrades for features like live migration on Dell's MD3000 direct-attached storage arrays can be made easily by entering a license key.

In March, Hewlett-Packard began shipping XenServer embedded in ProLiant servers. HP's servers also carry a specific version of XenServer called HP Select Edition, which differs from the standard XenServer in that it is tied into HP management tools such as HP Insight Control and HP Integrated Lights-Out for remote server management, according to a Citrix spokesperson.

In light of its partnerships with HP and Dell, Citrix simplified its licensing model recently to per-server, instead of per core, as reported on SearchServerVirtualization.com. This way, users can deploy an unlimited number of virtual machines or guest operating systems on each physical server for a single price, regardless of whether it has one, two or four CPU sockets.
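
As a rough illustration of what that change means, here is a minimal sketch (all prices are invented placeholders, not Citrix list prices) comparing what an old per-core model and the new per-server model would charge for the same hardware:

```python
# Hypothetical comparison of per-core vs. per-server licensing.
# Prices are invented placeholders; only the structure mirrors the change
# described above (one flat price per physical server, any socket count).

PER_CORE_PRICE = 500      # hypothetical cost per licensed core
PER_SERVER_PRICE = 2500   # hypothetical flat cost per physical server

CORES_PER_SOCKET = 4      # quad-core CPUs assumed for the example

for sockets in (1, 2, 4):
    per_core_total = sockets * CORES_PER_SOCKET * PER_CORE_PRICE
    print(f"{sockets}-socket server: per-core model ${per_core_total:,} "
          f"vs. per-server model ${PER_SERVER_PRICE:,}")
```

Under the flat model, the price stays the same whether the box has one, two or four sockets, which is exactly the simplification described above.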

May 6, 2008  8:11 AM

ClearCube spin-off focusing on desktop virtualization



Posted by: Bridget Botelho
ClearCube, Desktop virtualization, VDI, Virtualization, VMware

Austin, Texas-based ClearCube announced today that its desktop virtualization software business is being spun off into its own company, VDIworks.

VDIworks will provide the VDIworks Sentral Virtual Desktop Platform for desktop computing and virtual desktop management, which includes connection brokering, virtual machine, host and thin client management, load balancing, health and asset monitoring, inventory management, disaster recovery and support for back-end hardware and user access devices.

ClearCube will continue providing desktop computing products, including desktop virtualization software, PC Blades and thin client terminal servers.

VDIworks and ClearCube will operate separately but under an OEM agreement whereby ClearCube will continue to market and promote the VDIworks software under the Sentral VDI Management Software brand, and the Sentral management software will still be part of ClearCube's centralized desktop computing offerings. ClearCube customers will still get support under their current license agreements with ClearCube, and VDIworks will add OEM relationships with third-party vendors, said Rick Hoffman, former president of ClearCube and now president of VDIworks.

“Users should not notice any changes, because the support, features, benefits, etc. will all be the same,” said Hoffman.

VDIworks will receive seed funding from current ClearCube investors and will seek additional funding to support growth. About 35 research and development employees in the U.S. and Pakistan will also move to VDIworks.

With Hoffman moving over to run VDIworks, ClearCube's chief operating officer, Randy Printz, has been promoted to president and CEO. Hoffman will be joined on the VDIworks side by Chief Technology Officer Amir Husain.

Desktop virtualization is a popular vendor offering right now, with companies such as Sun Microsystems Inc., Citrix Systems Inc., Pano Logic Inc. and VMware Inc. all offering a flavor of it, but users report hesitation about adopting the technology because of cost.


May 5, 2008  8:15 AM

Ericom desktop virtualization now available on Oracle VM



Posted by: Rick Vanover
Desktop virtualization, Oracle VM, Rick Vanover, Virtual machine, Virtualization, Virtualization platforms

Today, Ericom Software announced the availability of Ericom PowerTerm WebConnect for Oracle VM, a desktop virtualization (VDI) offering, as a free download. The announcement extends Oracle VM's footprint into the VDI space with an Ericom product line that has excelled over the years in terminal services-based computing.

Ericom currently offers support for the 14 largest hypervisors, including Oracle VM, through products such as WebConnect. In this configuration, the Oracle VM virtual host is managed by Ericom's WebConnect rather than Oracle VM Manager, and Oracle VM itself runs as the unmodified base product. WebConnect is given the address and credentials of the Oracle VM virtual host to begin the configuration and management process.

I had an opportunity to hear from Oracle and Ericom about this release. Eran Heyman, CEO of Ericom, said that his company "wants to remove the barrier of entry for a VDI solution," as many organizations are considering implementing VDI but do not know where to start in the selection process. "The cost is minimal, licenses will be zero and the equipment can be reused if another solution is chosen" when choosing Oracle VM, according to Heyman.

The Oracle VM hypervisor and the Oracle VM Manager suite deliver template virtual machines, which model a virtual appliance for database products such as Oracle Database 11g.


May 5, 2008  7:54 AM

Microsoft extends virtualization management footprint with enhancements



Posted by: Rick Vanover
Microsoft, Microsoft Hyper-V, Microsoft Virtual Server, Rick Vanover, Virtualization

Microsoft announced that the beta release of Virtual Machine Manager 2008 (VMM 2008) will now provide the ability to manage Microsoft Virtual Server, Windows Server 2008 Hyper-V and VMware ESX platforms as part of the expanding Microsoft System Center family of products.

In this beta release, VMM 2008 can interface with VMware Virtual Infrastructure to perform migrations and use a new feature called Intelligent Placement. This feature identifies the best host for a virtual machine using processor, memory and network usage information, and it interacts with a pre-defined set of business rules configured within VMM 2008.
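
Microsoft has not published the scoring details in this beta, so the following is only a minimal, hypothetical sketch (the host names, weights and business rule are all invented for illustration) of how a placement score might combine processor, memory and network usage with a simple rule:

```python
# Hypothetical sketch of an Intelligent Placement-style host ranking.
# Host data, weights and the business rule are invented; the real VMM 2008
# feature uses its own rules engine and metrics.

hosts = [
    # name, cpu_used_pct, mem_free_gb, net_used_pct
    ("host-a", 35, 12, 20),
    ("host-b", 70,  4, 55),
    ("host-c", 20, 24, 10),
]

VM_MEM_GB = 4          # memory required by the VM being placed
WEIGHTS = {"cpu": 0.5, "mem": 0.3, "net": 0.2}

def score(cpu_used, mem_free, net_used):
    """Higher score = better candidate: favor idle CPU/network and free memory."""
    return (WEIGHTS["cpu"] * (100 - cpu_used)
            + WEIGHTS["mem"] * mem_free
            + WEIGHTS["net"] * (100 - net_used))

# Example business rule: never place a VM on a host that cannot fit its memory.
candidates = [(name, score(c, m, n)) for name, c, m, n in hosts if m >= VM_MEM_GB]
best = max(candidates, key=lambda pair: pair[1])
print("Best placement:", best[0])
```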

This beta is available for download now from Microsoft. The release is a welcome addition to the growing management space for virtualization platforms, including cross-platform solutions. A summary of the new features in VMM 2008 is available in a downloadable PDF document and on the System Center VMM website.


May 5, 2008  7:51 AM

VMware now officially supports single CPU licensing



Posted by: Eric Siebert
Eric Siebert, VMware

VMware announced that it will begin supporting ESX running on a single physical, multi-core (up to four cores) processor.

Previously, the VMware end user license agreement was unclear about whether this was supported, creating much debate over this subject in the VMware forums (see ESX Pricing and VMware Planning). Customers were getting different responses from VMware with some representatives saying it was OK to do this and others saying it was not. One response from VMware on this issue was that it technically would work but that it was not officially supported.

Despite the new support, ESX is still sold only in two-CPU increments and is supported only if the physical server is on the VMware-approved Hardware Compatibility List (HCL). This can be advantageous to customers who want to buy lower-cost servers with a single processor and use them for less intensive applications. It also allows smaller customers to buy two single-processor, multi-core servers and split an ESX license between them, taking advantage of redundant hardware and features like High Availability and vMotion.
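
To make the arithmetic concrete, here is a minimal sketch (the price is a made-up placeholder; the two-CPU increment is the real sales unit described above) of how two single-socket servers end up sharing one license:

```python
import math

# Hypothetical illustration of per-socket ESX licensing as described above:
# ESX is sold in two-CPU increments, so two single-socket servers can share
# one license. The price below is an invented placeholder.

TWO_CPU_LICENSE_PRICE = 5000   # hypothetical cost of one two-CPU ESX license

def licenses_needed(total_sockets: int) -> int:
    """Each license covers two physical sockets, regardless of cores (up to four per socket)."""
    return math.ceil(total_sockets / 2)

layouts = {
    "one dual-socket server": 2,
    "two single-socket servers (redundant hardware, HA/vMotion)": 2,
}

for layout, sockets in layouts.items():
    n = licenses_needed(sockets)
    print(f"{layout}: {n} license(s) = ${n * TWO_CPU_LICENSE_PRICE:,}")
```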

It's good to see VMware changing its licensing policies to better adapt to customers' needs. Multi-core processors have caused many other vendors to make their licensing more restrictive, but VMware has stuck with its per-socket licensing model. VMware currently allows only up to four cores per processor, but with eight-core processors on the horizon, it's probably inevitable that VMware will eventually change its licensing.

You can click on these links to read through VMware’s Multi-core and Single Processor licensing policies.


May 1, 2008  11:21 AM

Virtualization tools, advice focus on ROI



Posted by: Bridget Botelho
Microsoft Hyper-V, Virtualization, VMware, Why choose server virtualization?

The decision whether to adopt virtualization often comes down to the corporate bottom line. CFOs want to know how long it will be before they see return on investment from virtualization, and there are many considerations in determining ROI.

Yesterday, I spoke with Stephen Fink, senior infrastructure architect for the global IT consultancy Avanade, about a comprehensive tool he created that takes just about every aspect of a data center into consideration to determine what the ROI for virtualization will be.

Fink has 14 years of experience as a consultant and created the virtualization ROI model as a tool for his own clients, but it made its way around the company and is now the standard way Avanade consultants determine ROI, he said.

There are 125 inputs in the Microsoft Excel-based tool – such as power and cooling, cabling, network, CPU, servers, floor space, and staffing costs – and each helps determine the impact of implementing virtualization at a customer’s location, he said.

"There will never be a one-size-fits-all solution, and there has to be a business case for virtualization; I look at their environment from a high-level approach and assess the inventory. We look at their apps, their network, the annual power costs, licensing costs for software, etc., to see what they pay for their environment, and we can now give a really good idea of the ROI with Microsoft Hyper-V and VMware," Fink said.

Avanade, which is partially owned by Microsoft, has benchmark information on Hyper-V from the most recent release candidates and uses that to determine Hyper-V ROI. Hyper-V is scheduled for release in August.

“We look at the net costs of the environment without virtualization versus what they would pay if they virtualized, with specific server types, running ESX or Hyper-V. We can tell you how many systems can be virtualized, and you can see the cost of your virtual servers, the cost per OS and the cost of your virtual hosts, to determine your annual cost reduction from virtualized guests,” Fink explained.

Fink said consultants like him are often used to determine whether virtualization is worth the initial acquisition and licensing costs, which depends on businesses' expectations when it comes to ROI. "If a company already operates efficiently and has a portfolio of apps that make them a poor candidate for virtualization – like very high-CPU and high-memory-consuming apps or database servers – virtualization may not be the answer for them," Fink said.

Avanade uses the tool as part of its consultancy, and it is only available through Avanade consultants – which, of course, comes at a cost to businesses.

Other virtualization calculator tools are available for free, like the one from VMware, but these aren’t as precise as Fink’s tool from what I can tell.

There are also plenty of experts offering advice on determining virtualization ROI that won’t cost you anything.

According to IT security and virtualization technology analyst Alessandro Perilli, to calculate ROI, "you need to apply simple math to the costs your company could mitigate or eliminate by adopting virtualization."

He reported that virtualization can reduce some of the following direct costs (a simple worked sketch of this math follows the list):

* Cost of space (leased or owned) for physical servers
* Energy to power physical servers
* Air conditioning to cool the server room
* Hardware cost of physical servers
* Hardware cost of networking devices (including expensive gear like switches and Fibre Channel host bus adapters)
* Software cost for operating system licenses
* Annual support contract costs for purchased hardware and software
* Hardware parts for expected failures
* Downtime cost for expected hardware failures
* Service hours of maintenance cost for every physical server and networking device
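
As a minimal worked sketch of that "simple math" (every figure below is an invented placeholder, not an Avanade or VMware benchmark), annual savings are roughly the avoided direct costs minus what the virtual infrastructure itself adds, and payback is the up-front spend divided by those savings:

```python
# Hypothetical worked example of the "simple math" described above.
# Every figure is a made-up placeholder; plug in your own annual costs.

avoided_costs = {               # annual direct costs eliminated or reduced
    "floor_space": 8_000,
    "power": 12_000,
    "cooling": 9_000,
    "server_hardware": 30_000,
    "network_gear": 6_000,
    "os_licenses": 10_000,
    "support_contracts": 7_000,
    "spare_parts_and_downtime": 5_000,
    "maintenance_hours": 11_000,
}

virtualization_costs = {        # annual costs the virtual environment adds
    "hypervisor_licenses": 20_000,
    "shared_storage": 15_000,
    "host_hardware": 18_000,
}

annual_savings = sum(avoided_costs.values()) - sum(virtualization_costs.values())
initial_investment = 60_000     # up-front acquisition and migration cost (placeholder)

print(f"Annual savings: ${annual_savings:,}")
print(f"Payback period: {initial_investment / annual_savings:.1f} years")
```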

Scott Feuless, a senior consultant with Texas-based Compass, recently wrote about how to quantify virtualization ROI, and IT consultant John Hayes of Avnet Technology Solutions also had some advice on figuring out the cost of virtualization that could help make the case for it.


April 29, 2008  12:49 PM

No virtualization-specific requirements for PCI audits



Posted by: Eric Siebert
Eric Siebert, Virtualization security

If your company deals with credit cards, you are required to follow the Payment Card Industry's Data Security Standard (PCI DSS). The major credit card players – Visa, MasterCard, American Express and Discover – set forth these requirements to protect credit card data. If audits reveal that the regulations are not being followed, fines or revocation of credit card processing privileges can result. Often, these audits force companies to implement basic security practices that should have already been in place; however, no virtualization-specific requirements have yet been put into practice.

Having just survived another annual PCI compliance audit, I was again surprised that the strict standards for securing servers contain nothing specific concerning virtual hosts and networks. Our auditor focused on the guest virtual machines (VMs), ensuring they had up-to-date patches, locked-down security settings and current anti-virus definitions. Ironically, though, the host server that the virtual machines were running on went completely ignored. If the host server were compromised, it wouldn't matter how secure the VMs were, because they could be easily accessed. Host servers should always be securely locked down to protect the VMs running on them.

It seems that much of the IT industry has yet to react to the virtualization trend and has been slow to change procedures to accommodate some of the unconventional concepts that virtualization introduces. When I told our auditor that the servers were virtual, the only thing he wanted to see was documentation stating that the remote console sessions to the VMs were secure. It's probably just a matter of time before specific requirements for virtual servers are introduced. In fact, a recent webinar takes up the issue of whether virtualized servers can be considered compliant at all, addressing section 2.2.1 of the PCI DSS, which states, "Implement only one primary function per server"; that is, web servers, database servers and DNS servers should run on separate machines. Virtualized hosts typically have many functions running on a single physical server, which would make them noncompliant.

Looking at the PCI Knowledgebase, it seems many companies are confused about this, and some are holding off on virtualization until it is cleared up. We'll have to wait and see what develops and how the specification is modified to allow for virtual servers. It would be in the best interest of companies like VMware and Microsoft to work with the PCI council to get this sorted out as soon as possible.

You can read the current PCI Compliance 1.1 specification here.


April 29, 2008  12:20 PM

Using ISO files with virtual machines



Posted by: Eric Siebert
Eric Siebert, Virtual machine

ISO files offer an advantage to virtual machines (VMs), chiefly as a means of loading operating systems and applications on virtual servers without the hassle of using physical media. Many tools for creating, editing and mounting ISOs are readily available, and if you haven't been creating ISOs already, keep reading.

An ISO file is an archive file format (ISO 9660), typically an image of a CD-ROM or DVD-ROM, similar to a .ZIP file but without compression. An ISO can be any size, from a few megabytes to several gigabytes. Reading from an ISO file is much faster than reading from physical media like CD-ROMs, and ISOs are free from physical imperfections. They are easy to mount on VMs, and you never have to hunt for a CD when one is needed.

I've created dozens of ISO files for different operating systems and applications. For my Windows servers, I no longer copy the I386 directory to the server, since I can mount it as an ISO file on my virtual machines as needed, saving disk space on the VM. I also create ISO files with troubleshooting tools like the Sysinternals utilities, so I can mount them quickly to troubleshoot problems on my VMs. Once an ISO library is created, it can be kept in a central repository on a host datastore or a remote server shared via NFS or Samba, giving all VMs access to it.

A number of applications are available to mount ISO files on a physical system by creating a virtual CD-ROM drive; once mounted, the contents of an ISO file can be accessed just like a physical CD-ROM. Linux and ESX systems can use the mount command to do this, while Microsoft provides a little-known virtual CD-ROM driver that can be downloaded for free. ISO files can also be created and edited with other tools: Linux and ESX systems ship with a command called dd that creates an ISO file from an input device like a CD-ROM or DVD-ROM drive, and Microsoft provides a tool called cdburn in its downloadable Resource Kits. For your convenience, I've created a short list of some of the many tools available for creating, editing and mounting ISO files.
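
For readers who want to script this, here is a minimal sketch for a Linux or similar system. The dd and mount commands are the ones mentioned above; genisoimage (a mkisofs descendant, used here to build an ISO from a directory) and all of the paths are assumptions for the example:

```python
# Minimal sketch of scripting ISO creation and mounting on Linux.
# Paths are illustrative; dd and mount are discussed above, while
# genisoimage/mkisofs (to build an ISO from a directory) is an extra
# assumption -- it is not part of every default install.

import subprocess

def iso_from_device(device="/dev/cdrom", iso="/isos/disc.iso"):
    """Image a physical CD/DVD into an ISO file with dd."""
    subprocess.run(["dd", f"if={device}", f"of={iso}", "bs=2048"], check=True)

def iso_from_directory(src_dir="/isos/tools", iso="/isos/tools.iso"):
    """Build an ISO 9660 image from a directory (e.g. Sysinternals utilities)."""
    subprocess.run(["genisoimage", "-o", iso, "-J", "-r", src_dir], check=True)

def mount_iso(iso="/isos/tools.iso", mountpoint="/mnt/iso"):
    """Loop-mount an ISO read-only (typically requires root privileges)."""
    subprocess.run(["mount", "-o", "loop,ro", iso, mountpoint], check=True)

if __name__ == "__main__":
    iso_from_directory()
    mount_iso()
```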

Tools to create and edit ISO files:

Free

  • cdburn.exe (available in Windows XP and Server 2000/2003 Resource Kits)
  • dd (Linux utility)
  • ISO recorder
  • ImgBurn

Commercial

Tools to mount ISO files:

Free

Commercial

A more complete list of ISO resources can be found here.


April 28, 2008  2:16 PM

Is hypervisor-based virtualization doomed?



Posted by: SAS70ExPERT
Application virtualization, Citrix XenServer, Microsoft, Virtualization

The following is a guest blog written by Schorschi Decker, an IT professional specializing in virtualization and enterprise-level management with over 25 years of experience in the industry.

Operating system isolation, or hypervisor-based virtualization, remains popular, but are we settling for less than we should? Its limitations masked by modest incremental gains in effectiveness, hypervisor-based virtualization persists because it continues to hide an ugly secret: poor-quality code.

Many who have worked with hypervisor-based virtualization may already know this, but anyone who has attempted to implement application instancing has undoubtedly seen where hypervisors fail. Replication of the operating system within every virtual instance is waste, waste driven by bad code. Faster cores, more cores per package, limited improvements in memory and device bus design, marginal increases in mechanical drive performance and shared storage models have all helped mask how inefficiently hypervisors utilize processors.

If customer adoption rates are an indicator of success, past attempts at application instancing have not been consistently successful (there is not even a buzzword for an application-instancing method). To be clear, homogeneous applications have benefited, such as Microsoft SQL Server and IIS, Oracle and even Citrix. However, in the case of Citrix, application instancing has been environment-dependent to a degree.

Resource management within a common operating system instance has not significantly changed since the introduction of mainframe logical partitions (LPARs). Solaris Zones follow a container-based model, whereas AIX micro-partitions follow a truer application-instancing model. Even Apple Computer introduced simple memory partitioning in the Macintosh Operating System 7.x. DEC (yes, Digital Equipment Corporation) leveraged Microsoft's Job Engine API, effectively a processor-affinity layer, in a groundbreaking concept product that Compaq buried. Does anyone remember that product?

The hypervisor foundation resulted from heterogeneous application-partitioning failures. For Windows, application instancing has stalled at times or has otherwise been overshadowed by operating system instance isolation techniques. Windows System Resource Manager (SRM) is a weak attempt to crack the hypervisor foundation, but it is so immature at this point that it is useless. Microsoft SoftGrid, now Microsoft Application Virtualization, has greater potential but is just not well accepted at this point. Should Microsoft provide it for free to drive acceptance?

The technology industry has attempted some rather interesting implementations to offset the impact of operating system instance isolation – for example, thin disking and image sharing, which are based on eliminating underutilized disk partition space. Several attempts at addressing the DLL and .NET issues (e.g., Microsoft SoftGrid, as well as Citrix) have been implemented to support heterogeneous application instancing, but they have masked the true issue that has always existed: the lack of quality code.

Why do I make this point? Because the hypervisor is essentially a band-aid on the boo-boo of bad coding. Quality code makes for stable environments. With a stable and predictable environment, applications can be run without fear of crashing, and it is this fear that gives hypervisor virtualization its strength.

Did someone just say "operating system isolation"? Case in point: the recent Symantec Antivirus issue with the VMware ESX OS. Code quality is going to become a green issue, just as watts per core and total power consumption have in the data center. Enterprise customers who purchase significant code-based products will demand better code as a way to reduce non-hardware-oriented costs. Just how many lines of executed code are redundant processing when hypervisor-based virtualization is leveraged? Billions? Wake up and smell the binary-generated ozone! Those cycles cost real money and introduce a very big surface area for bug discovery.

Poor software quality makes hypervisor-based virtualization more expensive than it should be, and the publishers of operating systems love it: after all, the total number of operating system licenses purchased has not gone down with hypervisor virtualization. The industry has known for years that poor-quality software is an issue. One radical solution is to hold software publishers to a higher standard, but that idea has not gained enough grassroots support – yet. When it does, the hypervisor will be history.


April 24, 2008  10:13 AM

Choosing your next virtualization project



Posted by: Rick Vanover
Application virtualization, Desktop virtualization, High availability and virtualization, Rick Vanover, Servers, Virtualization, Virtualization management

For organizations with an established server virtualization environment, future virtualization projects are looming on the horizon. Whether it is desktop or application virtualization, much deliberation will undoubtedly be given to choosing the best product for the new virtualization endeavor – as it should be.

The next wave of virtualization projects should always use the best-of-breed product for the requirements and functionality of your particular environment. For example, say you're an organization with a successful VMware-based server virtualization environment using VirtualCenter and ESX 3. Does this mean that VMware Virtual Desktop Infrastructure (VDI) is the default selection for a virtualized desktop project? Don't be fooled into thinking that a single-vendor environment will automatically be an efficient one.

Identify the best solution, even if you can't afford it. That also includes the host hardware for the next virtualization project, which may require a decision between blades and general-purpose servers for the virtual hosts. Taking the time and effort to identify the best solution after making full comparisons of the potential environments will also prepare you for any unforeseen elements in post-implementation inquiries.

Make no mistake, there are plenty of advantages to going with what's familiar: price discounts, vendor relationships and non-disclosure access are all strong reasons to select the same vendor. But only after due diligence in your decision process should you make another commitment.

