The Virtualization Room


October 6, 2011  8:54 PM

Backup and storage problems plague virtual infrastructures

Alyssa Wood

New York — At his Interop keynote this week, Microsoft’s Robert Wahbe said IT shops in 2011 are running 10.7 million virtual servers, compared to 7.8 million physical servers. Sounds great for virtualization, right?

Not so fast. Wahbe also said only 20% of physical servers in data centers are virtualized. That suggests there are still major deployment roadblocks, and the two biggest are storage and backup, speakers said in an Interop session. Administrators aren’t taking advantage of the best storage options for virtualization, and they’re stuck on old practices such as thick disks and overprovisioning, panelists said.
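Do a little math on those figures and the modest state of adoption comes into focus. A back-of-the-envelope check in Python, assuming the 20% share applies to the 7.8 million physical servers Wahbe cited:

```python
# Back-of-the-envelope math from Wahbe's keynote figures. Assumption: the 20%
# virtualized share applies to the 7.8 million physical servers he cited.
virtual_servers = 10_700_000
physical_servers = 7_800_000
virtualized_share = 0.20

hosts = physical_servers * virtualized_share  # physical boxes acting as hypervisor hosts
print(f"hypervisor hosts: {hosts:,.0f}")                   # ~1,560,000
print(f"VMs per host:     {virtual_servers / hosts:.1f}")  # ~6.9
```

Roughly seven VMs per host leaves a lot of consolidation still on the table.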

“You’ve got to get storage right if you’re going to get virtualization right — and especially if you’re going to get cloud computing right,” said Michael Dortch, research director at analyst firm Focus.

What can we do about storage?

Getting storage right is no easy task. EMC senior vSpecialist Ed Walsh said virtualization admins should use solid-state drives (SSDs) to improve storage, because they deliver high I/O performance while reducing capacity requirements. As storage has evolved, admins can also turn to 10 Gigabit Ethernet (10 GbE) and auto-tiering to boost storage performance, he said.

Another common problem is head-butting between the storage and virtualization teams. In VMware shops, new vCenter Storage APIs and plug-ins can help storage admins better manage VMware storage and increase visibility, Walsh said.
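For a sense of the visibility in play, here is a minimal sketch that pulls basic datastore capacity data out of vCenter. It assumes the open-source pyVmomi bindings and placeholder credentials, and it only scratches the surface of what the vCenter storage plug-ins Walsh described can do:

```python
# A minimal sketch, assuming the open-source pyVmomi bindings and a reachable
# vCenter; the hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab convenience; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=context)
content = si.RetrieveContent()

# Walk every datastore vCenter can see and report free vs. total capacity.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
gb = 1024 ** 3
for ds in view.view:
    s = ds.summary
    print(f"{s.name}: {s.freeSpace / gb:.1f} GB free of {s.capacity / gb:.1f} GB")
view.Destroy()
Disconnect(si)
```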

Backup best practices

When it comes to backup, we’ve said it before and we’ll say it again: Virtualization admins need to get with the times. Many customers still use physical backup tools for virtual backup, but agent-based tools can’t back up all your virtual machines at once, said Doug Hazelman, Veeam vice president of product strategy. Image-level backups such as snapshots get better results, he said.
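The image-level alternative starts with a hypervisor snapshot rather than an in-guest agent. Here is a minimal sketch, again assuming pyVmomi and a vCenter connection `si` like the one in the earlier datastore example; the VM name is a placeholder, and commercial tools layer change tracking and data movement on top of this:

```python
# A minimal sketch: take a quiesced, image-level snapshot of one VM through
# vCenter instead of running an in-guest backup agent. Assumes pyVmomi and an
# existing connection `si`.
from pyVmomi import vim

def snapshot_vm(si, vm_name):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == vm_name)
    view.Destroy()
    # memory=False skips the RAM image; quiesce=True asks VMware Tools to
    # flush guest I/O so the snapshot is application-consistent.
    return vm.CreateSnapshot_Task(name="backup-point", description="image-level backup",
                                  memory=False, quiesce=True)
```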

Eric Burgener, Virsto Software vice president of product management, also touted the benefits of thin provisioning over using thick disks. Some people are wary of thin-provisioned disk performance and manageability, but thin provisioning can provide flexibility and more efficient capacity utilization.
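The underlying trick is easy to demonstrate at small scale. The toy sketch below uses a sparse file on an ordinary Linux or macOS filesystem rather than a hypervisor datastore, but the accounting is the same: the guest-visible size is provisioned up front, while real blocks are allocated only as data is written.

```python
# Toy demonstration of thin provisioning with a sparse file: the apparent size
# is what the "guest" sees; allocated blocks are what the storage actually spends.
import os

path = "thin_disk.img"
virtual_size = 10 * 1024 ** 3  # present a 10 GB disk

with open(path, "wb") as f:
    f.seek(virtual_size - 1)  # seeking past EOF leaves a hole, allocating almost nothing
    f.write(b"\0")

st = os.stat(path)
print(f"provisioned (apparent) size: {st.st_size / 1024 ** 3:.1f} GB")
print(f"actually allocated on disk:  {st.st_blocks * 512 / 1024:.0f} KB")  # st_blocks is in 512-byte units
os.remove(path)
```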

Virtualization admins should also watch out for storage overprovisioning. Burgener said lots of customers overprovision storage because vendors may offer a discount, but it’s often unnecessary.

Before you can build a private cloud, you have to master virtualization. And with the challenges that still remain, virtualization admins should consider changing some of their backup and storage practices.

October 4, 2011  7:59 PM

Red Hat’s Gluster buy has virtualization possibilities

Beth Pariseau

At first glance, Red Hat’s acquisition of clustered open-source storage player Gluster seems like a pure storage play, perhaps with a bit of open-source camaraderie mixed in. But before it was bought out, Gluster was positioning itself as a virtualization player, too.

Red Hat’s FAQ on the Gluster buy gives a nod to how the purchase fits into the company’s virtualization and cloud computing strategy, emphasizing the use of commodity hardware and scale-out capabilities.

We view Gluster to be a strong fit with Red Hat’s virtualization and cloud products and strategies by bringing to storage the capabilities that we bring to servers today. By implementing a single namespace, Gluster enables enterprises to combine large numbers of commodity storage and compute resources into a high-performance, virtualized and centrally managed pool. Both capacity and performance can scale independent of demand, from a few terabytes to multiple petabytes, using both on-premise commodity hardware and public cloud storage infrastructure. By combining commodity economics with a scale-out approach, customers can achieve better price and performance, in an easy-to-manage solution that can be configured for the most demanding workloads.

There’s also potentially an even more direct virtualization play here, based on Gluster’s history. The company came out of stealth in 2007 with GlusterFS, a scale-out file system for clustered NAS based on open-source code, and by late 2009 was offering support for running virtual machines (VMs) directly on its clustered NAS.

Customers who chose this option could enable the cluster’s internal replication, via a checkbox at installation time, to provide high availability (HA) failover for VMs running on the cluster. From there, the file system automatically handled replication using an underlying object-based storage system.

As managing the data center infrastructure becomes the next frontier for virtualization players, this purchase could be seen as an answer to VMware’s forthcoming vStorage APIs, which aim to bring server and storage virtualization closer together, as well as to new storage virtualization products such as Nutanix that share the same broad goal.


September 28, 2011  2:06 PM

Announcing our new advisory board member

Colin Steele

Please join me in welcoming Christian Mohn to the Server Virtualization Advisory Board!

Mohn, an infrastructure consultant based in Norway, is a VMware vExpert for 2011 and co-host of the vSoup podcast. He’s been working in IT since 1997, specializing in Windows, VMware, Citrix and more.

Check out his blog, vNinja.net, and make sure to follow him on Twitter @h0bbel.


September 23, 2011  8:24 PM

OVA working to boost KVM exposure

Alyssa Wood

The Open Virtualization Alliance, founded in May to promote KVM, has grown to more than 200 members since its launch, seeing specific interest from cloud-focused companies. But KVM isn’t exactly the first hypervisor people think of when they want to deploy a private cloud, so why the growth? 

Open Virtualization Alliance (OVA) board members credit the growth to increased awareness of the Kernel-based Virtual Machine (KVM) hypervisor, freedom of choice, and KVM’s features. Founding members of the OVA include BMC Software, Eucalyptus Systems, Hewlett-Packard, IBM, Intel, Red Hat and SUSE. Now, more than 50% of OVA members focus on cloud computing.

Despite this growth, many users are still unfamiliar with KVM. To increase understanding, the OVA is developing KVM best practices documentation. This fall, the alliance also plans to create forums for users to share best practices, as well as webinars, webcasts and learning events.

KVM deployments certainly face an uphill battle against the virtualization market leaders, and Red Hat’s KVM offerings still lag VMware in features. But for now, “[KVM] certainly will become more noticeable in the landscape,” said Inna Kuznetsova, OVA board member and vice president of IBM Systems and Technology Group.

KVM keys to success: Performance, security, management

OVA board members tout KVM’s results in SPECvirt benchmark tests, where it achieved the highest virtualization performance levels, but it is the hypervisor’s security features that appeal to some cloud providers. KVM uses Security-Enhanced Linux (SELinux), developed by the U.S. National Security Agency.

“Once you’re on a cloud, you have a multi-tenancy environment, so you want to have a high level of security,” Kuznetsova said. “And that’s what makes KVM attractive.”

Companies have eyed KVM for its price as well. With security features already built into the hypervisor, admins can spend less on virtualization security tools, Kuznetsova said.

Kuznetsova also pointed out KVM’s various virtualization management capabilities. The hypervisor relies on libvirt for basic management, and administrators can add advanced tools such as Red Hat Enterprise Virtualization or IBM Systems Director with VM Control. With these kinds of tools, you can manage multiple hypervisors, including VMware, Hyper-V and Xen.
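As a taste of that bottom layer, here is a minimal sketch that inventories KVM guests through libvirt’s Python bindings, assuming the bindings are installed and a local KVM host is running; the management suites named above sit on top of this same interface:

```python
# A minimal sketch, assuming the libvirt Python bindings on a local KVM host.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")  # read-only access is enough for inventory
for dom in conn.listAllDomains(0):             # 0 = no filter: running and defined guests
    state = "running" if dom.isActive() else "shut off"
    print(f"{dom.name():24s} {state}")
conn.close()
```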

The speed of innovation in open source development has also contributed to increased awareness of KVM. Because so many developers work on open source offerings, the technologies can advance very quickly, Kuznetsova said.

“The world of open source changes so fast, you always need to go back and see what’s changed,” she said.


September 22, 2011  1:32 PM

VMware networking R&D head jumps ship

Beth Pariseau

VMware’s director of networking research and development is now with a new company, Big Switch Networks.

Howie Xu was with VMware for nine years before making the move. He will lead the R&D unit at Big Switch, an OpenFlow company; OpenFlow is a software-defined networking protocol that separates the network’s control plane from its data plane, letting a centralized controller program sophisticated routing and forwarding behavior across the switches it manages.
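For readers new to OpenFlow, the toy sketch below reduces that match-action model to plain Python (no real controller framework): the controller installs flow entries, and the switch forwards each packet by matching header fields against them, punting misses back to the controller.

```python
# A toy model of OpenFlow's match-action pipeline; not a real controller framework.
flow_table = []  # (match, action) pairs, in priority order

def install_flow(match, action):
    """What a controller does over the OpenFlow protocol, reduced to a list append."""
    flow_table.append((match, action))

def forward(packet):
    """What the switch data plane does per packet: first matching entry wins."""
    for match, action in flow_table:
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return "send_to_controller"  # table miss: ask the controller what to do

install_flow({"dst_ip": "10.0.0.2"}, "output:port2")
install_flow({"dst_ip": "10.0.0.3"}, "drop")

print(forward({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2"}))  # output:port2
print(forward({"src_ip": "10.0.0.1", "dst_ip": "9.9.9.9"}))   # send_to_controller
```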

Xu was last publicly visible at VMworld 2010, discussing a network virtualization platform dubbed vChassis, which would have used plug-ins to a new software layer to manage networking elements of the infrastructure.



September 15, 2011  2:03 PM

Cisco Nexus 1000V to support Hyper-V 3.0

Beth Pariseau

Cisco Systems Inc. is preparing to support Windows Server 8 Hyper-V with its Nexus 1000V virtual switch, which previously supported only VMware.

A newly extensible virtual switch was among the new Hyper-V 3.0 features turning heads at Microsoft’s Build conference this week, but until now, specific partners had not been mentioned. Cisco has been previewing this support at Build, and has recently made a public post on its website about the Nexus 1000V and Hyper-V.

In the post, Cisco said it will also support Windows Server 8 Hyper-V with its Unified Computing System Virtual Machine Fabric Extender (VM-FEX) feature, which uses single-root I/O virtualization to connect virtual machines to the network through virtual functions carved out of physical network adapters.
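SR-IOV is worth a closer look: a capable NIC presents lightweight virtual functions (VFs) that can be assigned directly to VMs, bypassing the software switch. On a modern Linux host you can inspect this through standard sysfs attributes, as in the sketch below; availability depends on kernel and NIC support.

```python
# A minimal sketch, assuming a modern Linux host: list SR-IOV virtual function
# counts exposed through standard sysfs attributes.
from pathlib import Path

for dev in sorted(Path("/sys/class/net").iterdir()):
    total = dev / "device" / "sriov_totalvfs"
    if total.exists():  # only SR-IOV capable NICs expose these attributes
        enabled = (dev / "device" / "sriov_numvfs").read_text().strip()
        print(f"{dev.name}: {enabled} of {total.read_text().strip()} VFs enabled")
```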

As with the forthcoming Hyper-V itself, there’s no indication of when these features will become generally available. What’s being talked about at Build this week is a pre-beta preview meant for developers, not enterprise deployments.


September 8, 2011  2:28 PM

VMware teases vSphere roadmap at VMworld

Keith Kessinger

In future vSphere releases, VMware will focus on optimizing for the cloud and improving how clusters work.

These changes will further blur the line between a virtual data center and a cloud infrastructure, said Bogomil Balkansky, VMware’s vice president of product marketing. During an interview at VMworld 2011, he explained VMware’s vSphere roadmap in terms of three expanding circles, starting at the host, then moving out to the cluster and ending at the data center/cloud level:



September 7, 2011  2:13 PM

VMware MVP: A good idea, but…

Keith Kessinger

VMware highlighted its Mobile Virtualization Platform at VMworld 2011, but I left the show feeling like the technology is little more than a novelty.

The concept itself is a good one, but the lack of Apple iOS support, VMware’s reliance on Google’s fragmented Android hardware ecosystem and concerns about battery life will all be major obstacles to widespread Mobile Virtualization Platform (MVP) adoption.

Nowadays, more people are using their personal smartphones for work purposes — checking email, viewing documents, accessing corporate apps, etc. It’s convenient for users, but it creates security and management nightmares for IT departments.

With MVP, your IT department can run a virtual machine (VM) on your smartphone, complete with another operating system — in effect, giving you a personal phone and a work phone on the same device. Inside the VM, IT admins can use VMware’s new Horizon line of application-management tools to authorize specific applications and corporate email accounts.



September 6, 2011  8:47 PM

VMware previews future SRM features

Beth Pariseau

In the future, VMware Site Recovery Manager will offer policy-based, multi-tenant disaster recovery for vCloud Director.

That’s according to VMware officials who previewed the Site Recovery Manager (SRM) roadmap during the VMworld 2011 conference last week.

SRM operates at the virtual machine (VM) level today, but the next version will allow for application-level disaster recovery (DR) protection according to policies set by either vSphere administrators or organizational managers, said Ashwin Kotian, a senior product manager for VMware.

“We want to … enable DR similar to how you enable [High Availability],” Kotian said. “Associate a service level, and based on the service level, SRM is going to make sure that [an application] gets provisioned to the right data stores, that it’s properly replicated and that it’s going to be associated with the right recovery plan.”
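Kotian’s description boils down to a policy lookup: tag an application with a service level, and the tooling derives placement, replication and recovery-plan membership from the tag. A toy sketch of that model follows; every name below is invented for illustration, and this is not SRM’s actual interface.

```python
# A toy model of policy-based DR: a service level drives datastore placement,
# replication RPO and recovery-plan membership. All names are invented.
POLICIES = {
    "gold":   {"datastore": "replicated-ssd", "rpo_minutes": 15,   "recovery_plan": "tier1-failover"},
    "silver": {"datastore": "replicated-sas", "rpo_minutes": 60,   "recovery_plan": "tier2-failover"},
    "bronze": {"datastore": "local-sata",     "rpo_minutes": 1440, "recovery_plan": "best-effort"},
}

def protect(app, service_level):
    p = POLICIES[service_level]
    print(f"{app}: place on {p['datastore']}, replicate at {p['rpo_minutes']}-minute RPO, "
          f"attach to recovery plan '{p['recovery_plan']}'")

protect("payroll-db", "gold")
```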



September 6, 2011  5:51 PM

Wanted: Server Virtualization Advisory Board member

Colin Steele

We have an open spot on our Server Virtualization Advisory Board. Would you like to fill the void?

Our advisory board members are our go-to experts who keep us up to date on the latest news and trends in the server virtualization market. They share their insights with our readers by answering a topical question of the month. And they even have a fancy page that shows off their pictures and bios.

If you’re a server virtualization user or consultant — no vendor employees, please — and this sounds like your cup of tea, here’s how to throw your hat in the ring: email me by Sept. 26 with your bio and a few sentences about why you want to join the Server Virtualization Advisory Board.

We’ll choose the newest board member by the end of the month. Good luck!

