May 5, 2008 7:54 AM
Posted by: Rick Vanover
Tags: Microsoft Hyper-V, Microsoft Virtual Server, Rick Vanover
Microsoft announced that the beta release of Virtual Machine Manager 2008 (VMM 2008) will now provide the ability to manage Microsoft Virtual Server, Windows Server 2008 Hyper-V and VMware ESX platforms as part of the expanding Microsoft System Center family of products.
In this beta release, VMM 2008 can interface with VMware Virtual Infrastructure to perform migrations and use a new feature called Intelligent Placement. This feature identifies the best host for a virtual machine using memory, processor and network usage information. Intelligent Placement interacts with a pre-defined set of business rules configured in VMM 2008.
This beta is available for download now from Microsoft. The release is a welcome addition to the growing management space for virtualization platforms, including cross-platform solutions. A summary of the new features available with VMM 2008 is available in a downloadable PDF document and from the System Center VMM website.
May 5, 2008 7:51 AM
Posted by: Eric Siebert
VMware announced that it will begin supporting ESX running on a single physical, multi-core (up to four cores) processor.
Previously, the VMware end user license agreement was unclear about whether this was supported, creating much debate over this subject in the VMware forums (see ESX Pricing and VMware Planning). Customers were getting different responses from VMware with some representatives saying it was OK to do this and others saying it was not. One response from VMware on this issue was that it technically would work but that it was not officially supported.
Despite this new support, ESX is still being sold only in two-CPU increments, and it is supported only if the physical server is on the VMware-approved Hardware Compatibility List (HCL). This can be advantageous to customers who want to buy lower-cost servers with a single processor and use them for less intensive applications. It also allows smaller customers to buy two single-processor, multi-core servers and split an ESX license between them, taking advantage of redundant hardware and features like High Availability and vMotion.
It’s good to see VMware changing its licensing policies to better adapt to customers’ needs. Multi-core processors have caused many other vendors to make their licensing policies more restrictive, but VMware has stuck with its per-socket licensing model. VMware currently allows only up to four cores per processor, but with eight-core processors on the horizon, it’s probably inevitable that VMware will eventually change its licensing.
You can click on these links to read through VMware’s Multi-core and Single Processor licensing policies.
May 1, 2008 11:21 AM
Posted by: Bridget Botelho
Tags: Why choose server virtualization?
The decision whether to adopt virtualization often comes down to the corporate bottom line. CFOs want to know how long it will be before they see return on investment from virtualization, and there are many considerations in determining ROI.
Yesterday, I spoke with Stephen Fink, senior infrastructure architect for the global IT consultancy Avanade, about a comprehensive tool he created that takes just about every inch of a data center into consideration to determine what the ROI for virtualization will be.
Fink has 14 years of experience as a consultant and created the virtualization ROI model as a tool for his own clients, but it made its way around the company and is now used as the standard way to determine ROI by Avanade consultants, he said.
There are 125 inputs in the Microsoft Excel-based tool – such as power and cooling, cabling, network, CPU, servers, floor space, and staffing costs – and each helps determine the impact of implementing virtualization at a customer’s location, he said.
“There will never be a one-size-fits-all solution, and there has to be a business case for virtualization; I look at their environment from a high-level approach and assess the inventory. We look at their apps, their network, the annual power costs, licensing costs for software, etc., to see what they pay for their environment, and we can now give a really good idea of the ROI with Microsoft Hyper-V and VMware,” Fink said.
Avanade, which is partially owned by Microsoft, has benchmark information on Hyper-V from the most recent release candidates and uses that to determine Hyper-V ROI. Hyper-V is scheduled for release in August.
“We look at the net costs of the environment without virtualization versus what they would pay if they virtualized, with specific server types, running ESX or Hyper-V. We can tell you how many systems can be virtualized, and you can see the cost of your virtual servers, the cost per OS and the cost of your virtual hosts, to determine your annual cost reduction from virtualized guests,” Fink explained.
Fink said consultants like him are often used to determine whether virtualization is worth the initial acquisition and licensing costs, which depends on a business’s expectations when it comes to ROI. “If a company already operates efficiently and has a portfolio of apps that make it a poor candidate for virtualization – like very CPU- and memory-intensive apps or database servers – virtualization may not be the answer,” Fink said.
Avanade uses the tool as part of its consultancy, and it is only available through Avanade consultants – which, of course, comes at a cost to businesses.
Other virtualization calculator tools are available for free, like the one from VMware, but these aren’t as precise as Fink’s tool from what I can tell.
There are also plenty of experts offering advice on determining virtualization ROI that won’t cost you anything.
According to IT security and virtualization technology analyst Alessandro Perilli, to calculate ROI, “you need to apply simple math to the costs your company could mitigate or eliminate by adopting virtualization.”
He reported that virtualization can reduce some of the following direct costs:
* Cost of space (leased or owned) for physical servers
* Energy to power physical servers
* Air conditioning to cool the server room
* Hardware cost of physical servers
* Hardware cost of networking devices (including expensive gear like switches and Fibre Channel host bus adapters)
* Software cost for operating system licenses
* Annual support contracts costs for purchased hardware and software
* Hardware parts for expected failures
* Downtime cost for expected hardware failures
* Maintenance service hours for every physical server and networking device
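As a toy illustration of that simple math, the direct-cost savings above can be totaled and weighed against the up-front investment. Every figure below is a hypothetical example, not a benchmark, and the categories simply mirror the list:

```shell
#!/bin/sh
# Hypothetical annual direct costs eliminated by virtualization (example figures)
space=12000        # leased floor space for retired physical servers
power=18000        # energy to run them
cooling=9000       # air conditioning for the server room
hardware=30000     # physical servers no longer purchased
support=8000       # annual hardware/software support contracts
maintenance=15000  # service hours across physical servers and network devices

savings=$((space + power + cooling + hardware + support + maintenance))
investment=150000  # up-front host hardware plus hypervisor licensing (example)

echo "Annual savings: \$${savings}"
# Simple payback period in years, to one decimal place
awk -v i="$investment" -v s="$savings" 'BEGIN { printf "Payback: %.1f years\n", i / s }'
```

With these example figures, the script reports annual savings of $92,000 and a payback period of roughly 1.6 years; a real assessment would, as Fink notes, use far more inputs.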
Scott Feuless, a senior consultant with Compass, based in Texas, wrote about how to quantify virtualization ROI recently, and IT consultant John Hayes of Avnet Technology Solutions also had some advice on figuring out the cost of virtualization that could help make the case for virtualization.
April 29, 2008 12:49 PM
Posted by: Eric Siebert
Tags: Virtualization security
If your company deals with credit cards, you are required to follow the Payment Card Industry’s data security standards (PCI DSS). The major credit card players – Visa, Mastercard, American Express and Discover — set forth these requirements in order to protect credit card data. If audits reveal that these regulations are not followed, fines or revocation of credit card processing privileges can result. Often, these audits force companies to implement basic security practices that should have already been in place; however, no virtualization-specific requirements have yet been put into practice.
Having just survived another annual PCI compliance audit, I was again surprised that the strict standards for securing servers that must be followed contain nothing specific concerning virtual hosts and networks. Our auditor focused on guest virtual machines (VMs), ensuring they had up-to-date patches, locked-down security settings and current anti-virus definitions. But ironically, the host server that the virtual machines were running on went completely ignored. If the host server was compromised, it wouldn’t matter how secure the VMs were because they could be easily accessed. Host servers should always be securely locked down to protect the VMs which are running on them.
It seems that much of the IT industry has yet to react to the virtualization trend, having been slow in changing procedures to adjust to some of the unconventional concepts that virtualization introduces. When I told our auditor that the servers were virtual, the only thing he wanted to see was some documentation stating that the remote console sessions to the VMs were secure. It’s probably just a matter of time before specific requirements for virtual servers are introduced. In fact, a recent webinar takes up this issue of whether or not virtualized servers can be considered compliant, addressing section 2.2.1 of the PCI DSS which states, “Implement only one primary function per server”; that is to say, web servers, database servers and DNS should be implemented on separate servers. Virtual servers typically have many functions running on a single physical server, which would make them noncompliant.
Looking at the PCI Knowledgebase, it seems many companies are confused about this, and some are holding off on virtualization until it is cleared up. We’ll have to wait and see what develops and how the specification is modified to allow for virtual servers. It would be in the best interest of companies like VMware and Microsoft to work with the PCI council to get this sorted out as soon as possible.
You can read the current PCI Compliance 1.1 specification here.
April 29, 2008 12:20 PM
Posted by: Eric Siebert
Tags: Virtual machine
ISO files offer an advantage to virtual machines (VMs), chiefly as a means of loading operating systems and applications on virtual servers without the hassle of using physical media. Many tools for creating, editing and mounting ISOs are readily available, and if you haven’t been creating ISOs already, keep reading.
An ISO file is an archive file format (ISO 9660), typically an image of a CD-ROM or DVD-ROM, similar to a .ZIP file but without file compression. An ISO can be any size, from a few megabytes to several gigabytes. Reading an ISO file is much faster than reading from physical media like CD-ROMs. Free from physical imperfections, ISO files are easy to mount on VMs and eliminate hunting for a CD when one is needed.
I’ve created dozens of ISO files for different operating systems and applications. For my Windows servers, I no longer copy the I386 directory to the server since I can easily mount it as an ISO file on my virtual machines as needed, saving disk space on the VM. I also create ISO files with troubleshooting tools like the Sysinternals utilities, so I can mount them quickly to troubleshoot problems on my VMs. Once an ISO library is created, a central repository on a host datastore or remote server can be made using NFS or Samba to provide access to all VMs.
A number of applications are available to mount ISO files on a physical system by creating a virtual CD-ROM drive. Once mounted, the contents of an ISO file can be accessed just like a physical CD-ROM. Linux and ESX systems can use the mount command to do this, while Microsoft provides a little-known virtual CD-ROM driver that can be downloaded for free. ISO files can also be created and edited with other tools. Linux and ESX systems come with a command called dd that creates an ISO file from an input device like a CD-ROM or DVD-ROM drive. Microsoft provides a tool called cdburn in its downloadable Resource Kits. For your convenience, I’ve created a short list of some of the many tools available for creating, editing and mounting ISO files.
Tools to create and edit ISO files:
- cdburn.exe (available in Windows XP and Server 2000/2003 Resource Kits)
- dd (Linux/ESX utility)
- ISO recorder
Tools to mount ISO files:
A more complete list of ISO resources can be found here.
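To make the workflow concrete, here is a minimal sketch of creating an ISO with dd and mounting it on a Linux or ESX host. The device name and datastore path are example values (adjust them for your system), and the loopback mount requires root privileges:

```shell
#!/bin/sh
# Create an ISO image from a physical CD-ROM with dd
# (/dev/cdrom and the datastore path below are example names)
dd if=/dev/cdrom of=/vmfs/volumes/datastore1/isos/tools.iso

# Mount the ISO on the host to inspect its contents (loopback mount, needs root)
mkdir -p /mnt/iso
mount -o loop /vmfs/volumes/datastore1/isos/tools.iso /mnt/iso
ls /mnt/iso

# Unmount when finished
umount /mnt/iso
```

The same image file can then be attached to a VM's virtual CD-ROM drive through your virtualization platform's management interface.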
April 28, 2008 2:16 PM
Posted by: SAS70ExPERT
Tags: Citrix XenServer
The following is a guest blog written by Schorschi Decker, an IT professional specializing in virtualization and enterprise-level management with over 25 years of experience in the industry.
Operating system isolation, or hypervisor-based virtualization, remains popular, but are we settling for less than we should? Its limitations masked by modest incremental gains, hypervisor-based virtualization persists because it continues to hide an ugly secret: poor-quality code.
Many who have worked with hypervisor-based virtualization may already know this, but anyone who has attempted to implement application instancing has undoubtedly seen where hypervisors fail. Replication of the operating system within a virtual instance is waste, waste driven by bad code. Faster cores, more cores per package, limited improvements in memory and device bus design, marginal increases in mechanical drive performance and shared storage models have all helped mask how inefficiently hypervisors utilize processors.
If customer adoption rates are an indicator of success, past attempts at application instancing have not been successful to any consistent degree (there is not even a buzzword for an application instancing method). To be clear, homogeneous applications have benefited, such as Microsoft SQL Server and IIS, Oracle and even Citrix. However, in the case of Citrix, application instancing has been environment-dependent to a degree.
Resource management within a common operating system instance has not changed significantly since the introduction of mainframe logical partitions (LPARs). Solaris Zones follow a container-based model, whereas AIX micro-partitions follow a truer application instancing model. Even Apple introduced simple memory partitioning in Macintosh Operating System 7.x. DEC (yes, Digital Equipment Corporation) leveraged the Microsoft Job Engine API, effectively a processor affinity layer, in a groundbreaking concept product that Compaq buried. Does anyone remember that product?
The hypervisor foundation resulted from heterogeneous application partitioning failures. For Windows, application instancing has stalled at times or has otherwise been overshadowed by operating system instance isolation techniques. Windows System Resource Manager is a weak attempt to crack the hypervisor foundation, but it is so immature at this point that it is useless. Microsoft SoftGrid, now Microsoft Application Virtualization, has greater potential but is just not well accepted at this point. Should Microsoft provide it for free to drive acceptance?
The technology industry has attempted some rather interesting implementations to offset the impact of operating system instance isolation, for example thin-disking and image-sharing, which are based on eliminating underutilized disk partition space. Several attempts at addressing DLL and .NET issues (e.g., Microsoft SoftGrid as well as Citrix) have been implemented to support heterogeneous application instancing, but they have masked the true issue that has always existed: the lack of quality code.
Why do I make this point? Because the hypervisor is essentially a band-aid on the boo-boo of bad coding. Quality code makes for stable environments. With a stable and predictable environment, applications can run without fear of crashing, and it is this fear that gives hypervisor virtualization its strength.
Did someone just say “operating system isolation”? Case in point: the recent Symantec Antivirus issue with VMware ESX. Code quality is going to become a green issue, just as watts per core and total power consumption have in the data center. Enterprise customers who purchase significant code-based products will demand better code as a way to reduce non-hardware-oriented costs. Just how many lines of executed code are redundant processing when hypervisor-based virtualization is leveraged? Billions? Wake up and smell the binary-generated ozone! Those cycles cost real money and introduce a very large surface area for bug discovery.
Poor software quality makes hypervisor-based virtualization more expensive than it should be and the publishers of operating systems love it. After all, the total number of operating system licenses purchased has not gone down with hypervisor virtualization. The industry has known for years that poor quality software has been an issue. One radical solution is to hold software publishers to a higher standard, but that idea has not gained enough grassroots support – yet. When it does, the hypervisor will be history.
April 24, 2008 10:13 AM
Posted by: Rick Vanover
Tags: Desktop virtualization, High availability and virtualization, Rick Vanover, Virtualization management
For organizations with an established server virtualization environment, future virtualization projects are looming on the horizon. Whether it is desktop or application virtualization, much deliberation will undoubtedly be given to choosing the best product for the new virtualization endeavor, as it should be.
The next wave of virtualization projects should always be best of breed for the requirements and functionality you require for your particular environment. For example, say you’re an organization with a successful VMware-based server virtualization environment using VirtualCenter and ESX 3. Does this mean that VMware Virtual Desktop Infrastructure (VDI) is the default selection for a virtualized desktop project? Don’t be fooled into thinking that a single-vendor environment is going to translate into an efficient one.
Identify the best solution, even if you can’t afford it. That also includes the host environment hardware for the next virtualization project, which may require a decision between blades and general-purpose servers for virtual hosts. Taking the time and effort to identify the best solution after fully comparing the potential environments will also prepare you for any unforeseen element in a post-implementation inquiry.
Make no mistake, there are plenty of advantages to going with what’s familiar: Price discounts, vendor relationships and non-disclosure access are all strong reasons to select the same vendor, but only after due diligence in your decision process should you make another commitment.
April 22, 2008 1:50 PM
Posted by: Eric Siebert
Didn’t attend last year’s VMworld? Don’t worry: You can download many of the sessions for free from the VMworld website.
VMware has been gradually releasing the sessions on the website. After last year’s conference, VMware chose not to release all of the sessions to non-attendees, since many of them would be reused at VMworld Europe 2008. Currently 133 of the 261 total sessions are available to watch online, with 20 more being offered each month until VMworld 2008 in Las Vegas. VMware enthusiasts can purchase a “virtual conference pass” for this year’s VMworld, which will provide full online access to sessions and labs.
The sessions reflect VMworld’s focus on enterprise virtualization, but several sessions are available on products like Workstation, Server, VDI and ACE. While some of the sessions are of technical benefit to system administrators, other sessions address topics such as business continuity, planning, business metrics and software lifecycle automation.
Of the free sessions available I’ve noted those I would recommend to system administrators who want to expand their technical knowledge. The sessions below are very good resources on understanding, troubleshooting, securing and tuning ESX and VirtualCenter.
To access the sessions, simply go to the VMworld website and create a free account. Once you have registered, click on the sessions and labs link to access the free sessions. You can even access sessions from previous years. Although they are dated, these sessions still have some good, applicable information. Once you click on a session, you can download the audio as an .mp3 file, the slides as a .pdf file or you can watch them together as a flash video.
April 21, 2008 2:16 PM
Posted by: Joe Foran
It’s been six months since I posted about the value of the VMware Certified Professional (VCP) certificate, and I thought I’d provide an update.
As the image shows, courtesy of indeed.com, the VCP is as hot as ever.
Since I last covered this topic, the following shifts occurred:
- The VCP gained $3,000
- The A+ climbed $6,000
- Network+ declined $1,000
- MCP gained $1,000
- MCSE gained $2,000
- CCA lost $2,000
- CCEA picked up $2,000
- RHCT picked up $3,000
- RHCE picked up $2,000
- RHCA lost $1,000
The big gain in VCP salaries over a period of less than six months shows that this technology is still very much an in-demand skill set and a hot certification to show off. It’s a new year and salaries did jump overall, so this is reflected in the data. As before, the international trend is also continuing, as the next two images (from itjobswatch.co.uk) show, in terms of salary and demand.
I intend to keep tracking these statistics every few quarters, so stay tuned. I’m also keeping my eye out for Citrix-sponsored Xen certifications and will be bringing an analysis of those to the blogosphere as soon as there’s some quantifiable information available. And with VMware ramping up its certification programs, I expect to be adding second and third-tier VMware certifications.
What other certifications do you think should be compared? I’ve included a broad list of non-developer certs, ranging from entry-level system admin certs (MCP, A+) through top-tier certs (CCEA, RHCA), to show where the VCP ranks as a hot technology. I’ve left off network, storage and many specialty certs because they may not be pervasive enough in the enterprise or may not be topically relevant. Since I’m one person with one view, I hope our readers will comment below and tell me what should be compared. So please fire away.