Houston-based BMC Software today introduced several new virtualization management products, including nine new integrated offerings designed to reduce the risk and operational expense associated with managing virtualized data centers.
BMC’s new virtualization management products are fully integrated with virtualization products from Microsoft, Sun Microsystems, Inc. and VMware Inc. The new BMC software is based on an automated set of closed-loop change and configuration management (CLCCM) process workflows that reduce the latency, cost and risk associated with change management. All of the new offerings support both virtual and physical infrastructures.
The nine new offerings support goals for performance, compliance and enterprise visibility by addressing the challenges created by virtualization.
Some of the issues addressed include the following:
*Planning a virtualization/consolidation initiative: BMC Virtualization Capacity Management and Planning Service is a packaged services offering that helps customers accelerate their virtualization efforts.
*Simplifying management: BMC Performance Management provides comprehensive performance monitoring across virtual infrastructure and applications, with enhanced VMware Infrastructure 3 and VMotion support.
*Ensuring availability: BMC Application Performance and Analytics helps IT actively manage service levels in virtual infrastructures.
*Performance: BMC Capacity Management replaces educated guesses with automatic assessment, prioritization of server workloads, and ongoing capacity monitoring. The result is high performance while reducing capital and operational expenses and maximizing server consolidation.
*Server sprawl: Virtualization allows new servers to be created very rapidly, leading to virtual machine (VM) sprawl. BMC Discovery Solution helps customers keep virtualized environments under control by keeping tabs on virtual servers. Support for VMware, Solaris 9/10 containers and zones, and AIX LPARs, as well as z/VM dependencies on mainframe (z/OS), means that all types of virtual servers can be discovered and added to the BMC Atrium CMDB.
*VM security: BMC BladeLogic Virtualization Module for Servers adds security and strengthens licensing and regulatory compliance. It includes automatic provisioning and configuration of the entire software stack, including virtual infrastructure, guest VMs and applications, and enforces security best practices, including built-in virtual server hardening rules.
*Compliance: BMC BladeLogic Operations Management Suite establishes automated, closed-loop change and configuration governance over entire virtualized environments. BMC’s policy-driven configuration control prohibits noncompliant servers from being deployed or existing beyond the next audit scan. Automated compliance and remediation capabilities detect and correct any compliance violations.
*Administration costs: BMC Run Book Automation Platform and BMC Run Book Automation VMware Adapter exploit BMC’s CLCCM workflows to automate routine change management tasks.
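The closed-loop pattern these offerings share (audit configuration against policy, then remediate drift automatically) can be sketched in a few lines. The sketch is purely illustrative; the policy keys and settings below are invented, not BMC's:

```python
# Toy sketch of a closed-loop compliance check in the spirit of CLCCM.
# Policy names and values are invented for illustration: detect drift
# from policy, then remediate automatically.

policy = {"ssh_root_login": "no", "firewall": "on"}

def audit_and_remediate(server_config):
    """Return the violations found, and rewrite the config to match policy."""
    violations = {k: v for k, v in server_config.items()
                  if policy.get(k) is not None and policy[k] != v}
    server_config.update({k: policy[k] for k in violations})  # remediate
    return violations

cfg = {"ssh_root_login": "yes", "firewall": "on"}
print(audit_and_remediate(cfg))  # {'ssh_root_login': 'yes'}
print(cfg)                       # now matches policy
```

The loop closes because the audit output feeds the remediation step directly, rather than landing in a report for a human to act on later.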
Of course, BMC isn’t the only game in town when it comes to virtual infrastructure management. There are a number of vendors offering management products for various purposes, including Portsmouth, N.H.-based vKernel and San Francisco-based Hyperic, Inc.
In addition, Austin, Texas-based Surgient announced today its Virtual Automation Platform 6.0, which is designed with physical provisioning and Microsoft Windows Server 2008 Hyper-V support to manage virtual resources and eliminate physical server and virtual machine (VM) sprawl.
In addition to third-party VM management products, virtualization providers offer their own; VMware sells a proprietary management and automation suite, as does Microsoft for Hyper-V.
Successful physical-to-virtual (P2V) conversions revolve around getting the virtual environment correct before presenting the actual workload to the new VM. In this video blog, Rick Vanover discusses some conversion tasks that can help a VM function optimally as the workload goes virtual from a driver perspective.
Embotics has launched a new product, V-Scout, which provides administrators with an extra view into their VMware-based virtual environments. V-Scout is a free product that complements Embotics’ existing product, V-Commander. I have had a chance to work with the version 1.0 release of the product and will share some of the key features with you.
One of V-Scout’s main objectives is to give administrators a bridge between their business processes and the technology. How many times have you been frustrated by how little documentation fits in the built-in annotation notes field? Much of V-Scout’s functionality revolves around putting attributes in place for objects in the VI3 environment, allowing your requirements and information to be captured within the environment itself. V-Scout takes this one step further and provides a Web-based management interface that allows reporting, chargeback and various views of the VI3 environment in an intuitive fashion.

Because V-Scout uses the attributes field of various VI3 objects, don’t be surprised to see tasks appear in the VMware Infrastructure Client that reconfigure objects. Further, objects will start to accumulate various emboticsManager attributes as V-Scout interacts with VI3; these attributes are called fingerprints. For those of you familiar with V-Commander, V-Scout does not offer as many features. Here is a breakdown of the feature comparison between the two products:
Once inside V-Scout, one of its core features, custom attributes, can be applied to a VM. As shown above, up to 10 custom attributes can be assigned within V-Scout. There are also many built-in attributes that can be applied, such as an expiration date and approval status for a virtual machine. In my evaluation of the product, I set up a custom attribute for a cost center for tracking purposes. The figure below shows the cost center custom attribute being configured:
The other key feature of V-Scout is the built-in reporting, which offers six main categories revolving around the guest OS, the host environment, managed systems, virtual machines, population trends and an overall infrastructure summary. While most of the reports out of the box do not give you much information that you can't already get elsewhere, they do organize it, and the custom attributes become available as report criteria and as a way to present data based on your parameters. Below is a sample report for a particular virtual machine:
Overall, from my first pass on V-Scout, I was impressed with the free offering and will continue to use it primarily for the custom attributes features to track virtual machines. More information on V-Scout can be found on the Embotics website.
Xen.org today announced the release of a new version of the project’s open source hypervisor, Xen 3.3, with enhancements to security, performance and scalability.
The release is now available for download from the Xen.org community site and is the product of a distributed development effort by senior engineers from more than 50 hardware, software, and security vendors.
The new Xen 3.3 release provides users with new features including:
* Power management in the hypervisor
* Hardware Virtual Machine (HVM) emulation domains for better scalability, performance and security
* Shadow pagetable improvements for the best HVM performance ever
* Hardware Assisted Paging enhancements
* Device passthrough enhancements
* CPUID feature levelling that allows safe domain migration across systems with different CPU models (within the same vendor brand – Intel or AMD)
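The idea behind CPUID feature levelling is simple to illustrate: a pool advertises only the feature flags common to every host, so a guest started anywhere can later be migrated anywhere else in the pool. A minimal sketch, where the host names and flag lists are made up:

```python
# Toy illustration of CPUID feature levelling (not Xen's actual code):
# the pool exposes only the intersection of every host's feature flags,
# so a guest never depends on an instruction some host lacks.

def levelled_features(hosts):
    """Return the set of CPU feature flags every host in the pool shares."""
    feature_sets = [set(flags) for flags in hosts.values()]
    return set.intersection(*feature_sets)

pool = {
    "host-a": ["sse2", "sse3", "ssse3", "sse4_1", "vmx"],
    "host-b": ["sse2", "sse3", "ssse3", "vmx"],          # older stepping
}

print(sorted(levelled_features(pool)))
# → ['sse2', 'sse3', 'ssse3', 'vmx']
```

A guest restricted to that common set can move freely between host-a and host-b, which is why the feature only works within a single vendor brand: Intel and AMD flag sets diverge too much to level across.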
Xen 3.3 provides virtualization for x64, IA64 and ARM-based platforms, and through close links with CPU and chipset vendors in the Xen project, Xen 3.3 also supports the latest hardware virtualization enhancements, like Intel Virtualization Technology (Intel-VT).
With Xen’s memory ballooning feature, the hypervisor can reallocate memory between guest Virtual Machines (VMs) to guarantee performance and allow greater density of VMs per server. Xen 3.3 also offers CPU portability to allow live migration of VMs across different CPUs, active power optimization to reduce server power consumption, and significant security enhancements.
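Ballooning can be pictured as moving a fixed budget of memory between guests: the hypervisor inflates a balloon driver inside an idle guest, reclaiming pages it can hand to a busy one, while the host total stays constant. A toy model, not Xen's actual interface (the 256 MB floor is an assumed minimum reservation):

```python
# Toy model of memory ballooning, illustrative only: shift megabytes
# from a donor guest to a recipient without changing the host total.

def balloon_transfer(allocations, donor, recipient, mb):
    """Move `mb` megabytes of the donor's allocation to the recipient."""
    if allocations[donor] - mb < 256:   # assumed floor so the donor stays usable
        raise ValueError("donor would drop below its minimum reservation")
    allocations[donor] -= mb
    allocations[recipient] += mb
    return allocations

vms = {"idle-web": 2048, "busy-db": 2048}
balloon_transfer(vms, "idle-web", "busy-db", 1024)
print(vms)  # {'idle-web': 1024, 'busy-db': 3072}
```

Because reclaimed pages come from guests that were not using them, the same physical server can safely host more VMs than a static-allocation scheme would allow.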
Simon Crosby, CTO, Virtualization and Management Division, Citrix Systems, said in a statement, “In just two years, Xen has rapidly gained share in virtualization, much as Linux did in operating systems – and in the same period Xen has driven the price of competing hypervisors to zero, allowing any vendor to include virtualization for free.”
In addition to its growing development community, Xen hypervisor is the standard virtualization platform used by cloud computing providers like Amazon.com. It is also used in virtualization products from Citrix (XenServer), Fujitsu, Novell, Oracle (Oracle VM), Sun Microsystems (Sun xVM), and Virtual Iron, and is available as an embedded option in many x86 servers.
The VCP (VMware Certified Professional) certification I have blogged about twice before has gone through the roof. I have never seen a jump like this in all my years in IT. See for yourself.
On an interesting note, the graph is from a new feature for bloggers on Indeed.com’s site, although WordPress doesn’t like the display, so I still did my usual upload-a-screencap-to-Photobucket, but tried to respect Indeed.com’s format.
What do I think about this data? I’m betting lots of people with advanced degrees, deep experience and/or other high-end certifications have added the VCP to their career-building portfolios, along with more top-level managers and executives. Also of note are the missing Red Hat Certified Architect numbers. The VCP trend does not seem to have continued across the pond, where the prevailing charts show a flat value for the VCP.
The lesson here is watch the figures, but don’t be surprised when wild, near-impossible salaries turn out to be a case of too-good-to-be-true.
The mashup market is for more than just those out there making rich-media web apps: Proxmox has taken the concept to the virtualization market. The result is a mashup of two virtualization platforms, OpenVZ and KVM, combined into a delightful new offering that will run just about any operating system as a virtual machine. Virtual appliances are also included in the mashup.
Like VMware VI, VirtualIron and XenServer, Proxmox Virtual Environment (PVE) is a bare-metal, type-1 hypervisor that installs onto a fresh server and turns the machine into a dedicated virtualization host. It is an open source product based on open source products, making it transparent to developers, and thereby it has all the advantages and disadvantages associated with OSS development projects (I find few disadvantages myself, but I’m admittedly biased because I think the transparency of OSS is highly valuable). The goal, outlined in their vision page, is to create an enterprise-class virtualization platform that affords unparalleled flexibility (my words, not theirs).
The short-list of what PVE supports:
- Web-based Administration via SSL
- Bare-metal installation based on Debian Etch (64-bit)
- Your choice of container-based (OpenVZ) or fully virtualized (KVM) virtual machines, both on the same server, including unmodified guests such as Windows via AMD-V/Intel VT CPU extensions and KVM’s built-in ability to use them.
- Built-in backup and restore
- Live and offline migration
This is one of those will-be-great-if-it-lives products. It has a lot going for it, particularly the ability to manage multiple types of virtualization platform strategies. That said, there are still many drawbacks, as expected of a pre-1.0 release (currently at 0.9). As such, it has its share of issues to get through before it’s really ready.
PVE currently doesn’t seem to have much in the way of granular user management for the web interface (though the forums state that it is on the roadmap). Physical-to-virtual (P2V) capabilities are still a little raw, without any in-house tools to handle migrations; the PVE wiki does explain how to use existing tools such as vzdump, VMware Converter and others to migrate servers into formats that PVE can handle. There’s nothing in the way of DRS/HA equivalents, and while PVE does have tools for live migration, they don’t work due to a “kernel error,” according to the wiki. KVM backup is limited to using LVM2, whereas OpenVZ has that option as well as vzdump, though a backup tool for KVM is on the roadmap for 1.0. Guest SMP is described as unstable as well.
The cluster management feature looks a little like this image, from their website:
The more day-to-day function of creating a new virtual machine looks like this:
Because it’s a Debian operating system, storage choices are limited only by the availability of drivers for the hardware platform. iSCSI, NFS, and other remote storage file systems can be mounted and used to store virtual machines.
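For example, a hedged sketch of what an NFS-backed VM store might look like in /etc/fstab on the underlying Debian system. The server name and export path here are illustrative, and /var/lib/vz is assumed as PVE's default storage directory:

```
# /etc/fstab — example NFS mount for a PVE VM store (names are illustrative)
filer01:/export/vmstore  /var/lib/vz/images  nfs  rw,hard,intr  0  0
```

Anything the stock Debian kernel and userland can mount becomes a candidate VM store, which is exactly the flexibility the paragraph above describes.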
The product looks like it will shake up some thinking in the virtualization platform market and may get people thinking more about what it means to be limited to only one type of virtualization option. When it hits that magic 1.0 mark, and most of the major flaws above are fixed for the majority of users, this product could really shine. Overall, I rate this product a seven poker for stirring things up, down from nine because it’s still cooking.
Selecting a VDI environment is a daunting process. As I begin to evaluate technologies for VDI design and implementation for an upcoming project, the first step is often to identify the requirements from the end-user perspective.
Administrators frequently get so wrapped up in the server side of a technology that the experience end of the solution may be overlooked. Two specific pieces of functionality, screen resolution and dual-monitor support, can be incredibly important to the endpoint experience and may make an implementation fail if it does not meet the requirements of all applications involved. By comparison, other topics such as USB device support, printing and sound are more of a policy decision than a device-selection decision.
The strategy is to determine the device capabilities that match those requirements; at that point, we can ‘back into’ various back-end VDI solutions. Take, for example, the Sun Ray 2FS Virtual Display Client, which offers two DVI-I (Digital Visual Interface) ports that can drive one monitor at 1920 x 1200 or two monitors at a combined 3840 x 1200. Among VDI devices, however, the standard offering is 1600 x 1200, which will satisfy most resolution situations. The dual DVI-I monitor setup may seem like overkill for a VDI-based thin client, but for many systems that perform archival by scanning documents, the high resolution and dual-monitor functionality may be a requirement. Just ask any accounts payable clerk.
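The arithmetic behind matching a device to a requirement is trivial but worth making explicit. A quick sketch, where the Sun Ray figures come from the article and the scanning application's requirement is a made-up example:

```python
# Check an endpoint's display capability against what an application needs.
# Device figures are from the article; the app requirement is hypothetical.

def supports(device_w, device_h, need_w, need_h):
    """True if the device's desktop area covers the application's needs."""
    return device_w >= need_w and device_h >= need_h

# Sun Ray 2FS: one monitor at 1920x1200, or two side-by-side at 3840x1200
sun_ray_dual = (2 * 1920, 1200)   # 3840 x 1200 across two DVI-I heads
typical_client = (1600, 1200)     # common single-head VDI device

scanning_app = (3200, 1200)       # hypothetical dual-monitor archival app

print(supports(*sun_ray_dual, *scanning_app))    # True
print(supports(*typical_client, *scanning_app))  # False
```

Running the requirement against each candidate device up front is exactly the ‘back into the solution’ step: the endpoint that fails the check rules out every broker configuration behind it.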
Some of this functionality may be circumvented by the use of existing devices, specifically with VDI solutions that allow a Windows or other operating system PC to connect to the VDI broker. In this regard, if there are only a very limited number of systems with requirements that cannot be accommodated by standard endpoint devices, a typical full-install PC can be used to provide the VDI connection. While not ideal, it is a decent stop-gap measure and a way to make use of existing equipment.
Hewlett-Packard’s (HP) ProLiant DL585 G5 with AMD’s Opteron processor is the top performer for 16-core systems on the VMmark list. The Opteron is also used in HP’s 32-core, eight-socket ProLiant DL785 on the VMmark list, which achieved a score of 21.88@16 tiles, or 96 virtual machines.
These results from AMD based systems aren’t surprising, since AMD Opteron’s virtualization assist technology has received high praise from VMware. One VMware engineer called AMD’s Nested Page Table (NPT) technology the answer to virtualizing large workloads.
Rapid Virtualization Indexing (RVI), a feature of AMD’s third-generation Opteron that includes NPT, is designed to offer near-native performance for virtualized applications and allows fast switching between virtual machines (VMs).
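The reason hardware page-table support matters so much is the cost of a nested page walk done without it. A commonly cited back-of-the-envelope figure: with n guest page-table levels and m nested levels, a worst-case translation touches roughly n*m + n + m memory locations, because each guest-table pointer itself needs a nested walk. In rough Python:

```python
# Back-of-the-envelope cost of a two-dimensional (nested) page walk,
# the case hardware features like NPT/RVI are built to accelerate.

def nested_walk_refs(guest_levels, nested_levels):
    """Worst-case memory references for a nested page-table walk."""
    return guest_levels * nested_levels + guest_levels + nested_levels

print(nested_walk_refs(4, 4))  # 24, versus 4 for native 4-level paging
```

A sixfold blowup in worst-case translation cost is why memory-intensive workloads were the hardest to virtualize before NPT, and why VMware engineers singled the feature out.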
Intel Corp. has announced a technology similar to NPT, called Extended Page Tables (EPT), which will be available in its next-generation eight-core microarchitecture, code-named “Nehalem.” Nehalem is slated for production later this year.
RGH, a community-based teaching hospital, has an IT infrastructure supporting business applications and patient-critical systems as well as massive amounts of data storage that is growing exponentially.
“We started using virtualization to address power and space issues in our main datacenter. We quickly adopted VMware ESX as our standard platform for new projects and consolidated existing servers,” Tom Gibaud, an IT manager at RGH, said in an email. “It allowed us to continue business as usual and we experienced no delay in completing projects on time. Today we are way below our power threshold and gained about 50% of our floor space even after we doubled the amount of Windows servers.”
According to VMware’s statement, VMware Infrastructure has improved application performance and availability and strengthened the hospital’s disaster-recovery capabilities. “Before going virtual, our datacenter power supply was maxed out. We couldn’t plug in a toaster. Now, with less hardware, we have capacity to handle whatever comes our way,” Gibaud said.
The hospital now runs 50 virtual machine hosts running 400 guests with a mix of large and small workloads, including terminal services, Gibaud said. In all, RGH has virtualized about 95% of its Windows-based applications, including Exchange, SQL Server, the ClinicalCare portal that physicians and nurses use to access electronic medical records, and RGH’s billing system.
In the initial phase of the virtualization deployment, Gibaud said the hospital used IBM BladeCenter servers (HS20, HS21, LS20). “This allowed us to condense many servers in a small amount of space. With VMware and IBM BladeCenters we were able to consolidate over 150 servers into one rack,” he said. “Today we use IBM x3850 and HP DL580 G5 to handle larger server workloads.”
In addition, the hospital is running 200 Windows XP desktops using VMware’s Virtual Desktop Infrastructure on just two IBM x3850s.
A well-documented procedure for physical-to-virtual (P2V) conversions still lacks the valuable information learned by experience. In this video blog, Rick Vanover introduces the stage-configuration phase of a P2V conversion. When this phase and the other phases of a conversion are applied in a procedural manner, the success rate increases because most scenarios that arise in virtualization environments are accommodated.