The Virtualization Room


November 29, 2011  8:20 PM

VMware gets in on Puppet show

Beth Pariseau

VMware has joined Google and Cisco as a new investor in Puppet Labs, according to a press release issued today.

VMware was one of the investors that contributed to an $8.5 million round of funding for the IT systems automation software maker, bringing Puppet’s total funding to $15.75 million since its founding in 2005.

Puppet’s software, available both as an open source offering and as a commercial version called Puppet Enterprise, automates provisioning of VMware and Amazon EC2 instances in enterprise IT environments. IT shops have also used the open source version of Puppet to increase server-to-admin ratios. Demand for experience with Puppet’s software has grown rapidly in the last year, according to a Wall Street Journal report.

VMware and Cisco “have hands-on, in-production-at-scale experience with Puppet – in some cases, going back several years,” wrote Puppet founder Luke Kanies in a blog post on the funding round.

November 28, 2011  4:58 PM

VMware patches iSCSI bug in vSphere 5

Beth Pariseau

VMware has issued the first patch for vSphere 5, a fix for a bug that caused long boot times for ESXi 5.0 hosts attached to iSCSI storage systems.

The issue affected vSphere 5 hosts connected through software-based iSCSI initiators, and occurred, according to VMware’s Knowledge Base, because “ESXi 5.0 attempts to connect to all configured or known targets from all configured software iSCSI portals. If a connection fails, ESXi 5.0 retries the connection 9 times.”

According to the Knowledge Base article, “VMware is delivering an ISO file for this patch release due to the nature of this issue. This is not common practice and is only done in special circumstances.”

In some cases, this bug led to boot times of up to 90 minutes. Bill Hill, infrastructure IT lead for a Portland-based logistics company, said it took some of his servers that long to boot with the buggy version of ESXi 5.0.
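
To see how that retry behavior could stretch a boot into the 90-minute range, here is a rough back-of-envelope sketch in Python. The 9-retry figure comes from VMware’s Knowledge Base quote above; the per-attempt timeout and the counts of unreachable targets and portals are assumptions chosen purely for illustration.

```python
# Back-of-envelope estimate of the boot delay described above.
# The 9-retry figure comes from VMware's KB quote; the per-attempt
# timeout and the target/portal counts are illustrative assumptions.

ATTEMPTS_PER_TARGET = 1 + 9      # initial attempt plus 9 retries (per the KB)
TIMEOUT_PER_ATTEMPT_SEC = 15     # assumed iSCSI login timeout per attempt

def estimated_boot_delay(unreachable_targets: int, portals: int) -> float:
    """Rough added boot time, in minutes, if every attempt times out."""
    attempts = unreachable_targets * portals * ATTEMPTS_PER_TARGET
    return attempts * TIMEOUT_PER_ATTEMPT_SEC / 60

# A host with 12 stale targets visible from 3 portals would wait roughly:
print(f"{estimated_boot_delay(12, 3):.0f} minutes")   # -> 90 minutes
```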

This would be an open-and-shut case of a simple bug fix, Hill said, but it remains a mystery to him, as well as to some VMware insiders, why the original buggy ISO file remains available for download on VMware’s website as of today.

“It’s a frustrating situation,” said Hill. “Why leave a time bomb out there?”

VMware’s PR representatives had not provided an official response as of this writing.


October 18, 2011  1:36 PM

VMware sees growth in management tools, plans updates

Beth Pariseau

Site Recovery Manager, vCloud Director and vCenter Operations are seeing strong sales, VMware executives said on the company’s earnings call last night.

VMware reported total revenue for the third quarter of $942 million, an increase of 32% from a year ago. CFO Mark Peek and CEO Paul Maritz said management tools were a strong part of those sales, although they didn’t attach any specific numbers to those tools.

“Much of the increased interest for our management tools is being driven by the build-out of private clouds within our customers’ data centers,” said Peek, according to a transcript of the call.



October 13, 2011  7:27 PM

Flash virtualization all the rage among startups

Beth Pariseau

As the virtualization market matures, functionality that used to live in the underlying infrastructure is steadily being absorbed into virtual machines.

One of the hotter areas for emerging companies of late is software that allows agents inside guest VMs to automate the use of host-based Flash storage as cache in order to boost application performance.

Last week, FlashSoft, a Flash caching company that came out of stealth at VMworld 2011, released a new version of its beta product. Fusion-io has been around longer than most with its solid-state drives, but it, too, is moving into the automated caching space following its acquisition of IO Turbine earlier this year.

This week another company, Nevex, came out of stealth, also claiming to have built a better Flash caching mousetrap. Like FlashSoft’s software, Nevex’s CacheWorks product uses agents inside guest VMs to automate the use of Flash as cache. Also like FlashSoft, Nevex plans to make its software work at the hypervisor level to eliminate the need for guest agents.

Where Nevex says it differs from other offerings is that its software caches at the file level rather than the block level. This means users can select which specific applications to accelerate using Flash, instead of letting an algorithm move hot blocks into the cache. Nevex also integrates with Windows to control which data to promote to DRAM for multi-level caching.
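
For a sense of what file-level, policy-driven caching means in practice, here is a minimal conceptual sketch in Python. It is not based on Nevex’s or FlashSoft’s actual code; the paths, capacity figure and eviction policy are invented for illustration.

```python
from collections import OrderedDict

# Conceptual sketch of file-level (policy-driven) cache admission versus
# block-level (purely heat-driven) caching. Nothing here reflects Nevex's
# or FlashSoft's actual code; the paths and capacity are invented.

ACCELERATED_PATHS = ("/var/lib/mysql/", "/data/exchange/")  # admin-selected apps
FLASH_CAPACITY_MB = 4096

class FileLevelCache:
    def __init__(self, capacity_mb):
        self.capacity = capacity_mb
        self.used = 0
        self.entries = OrderedDict()  # path -> size in MB, oldest first

    def admit(self, path, size_mb):
        # File-level policy: only cache files belonging to applications the
        # administrator chose to accelerate.
        if not path.startswith(ACCELERATED_PATHS):
            return False
        while self.used + size_mb > self.capacity and self.entries:
            _, evicted_mb = self.entries.popitem(last=False)  # evict oldest entry
            self.used -= evicted_mb
        self.entries[path] = size_mb
        self.used += size_mb
        return True

cache = FileLevelCache(FLASH_CAPACITY_MB)
print(cache.admit("/var/lib/mysql/ibdata1", 1024))  # True: selected application
print(cache.admit("/home/user/video.iso", 2048))    # False: not accelerated
```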

When SSDs first came on the enterprise IT scene, they were part of the storage area network (SAN), sitting behind a storage controller. These newer offerings instead envision Flash as cache on the host, rather than as persistent storage on a back-end array.

In the meantime, with the advent of new technologies, it’s becoming easier to picture the entire enterprise data center infrastructure, from the virtualized network to this type of virtualized storage, running as software inside x86 hosts. I’m reminded of the old Sun tagline, “the network is the computer.” Sun has since been gobbled up by Oracle, of course, but it feels like we’re finally seeing that concept come to fruition.


October 13, 2011  6:49 PM

Network virtualization coming for Hyper-V 3.0

Beth Pariseau

Microsoft let drop a new tidbit of information about Hyper-V 3.0 in a blog post this week.

Hyper-V Network Virtualization:

Allows you to keep your own internal IP addresses when moving to the cloud while providing isolation from other organizations’ VMs – even if those VMs use the same exact IP addresses

The post is otherwise a rehash of Hyper-V 3.0 features already previewed at the Microsoft Build conference last month, including support for a new virtual switch and scalability improvements.

It’s unclear at this point whether this technology is related to the NVGRE encapsulation standard recently proposed to the IETF by a Microsoft-led group. But what is clear is that network virtualization is becoming a new battleground for the market’s biggest virtualization vendors.
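
Conceptually, this style of network virtualization works by pairing each tenant’s internal (“customer”) addresses with a virtual network identifier, so identical IP addresses in different tenants map to different physical endpoints. The Python sketch below illustrates only that idea; the identifiers and addresses are made up, and it does not reflect Microsoft’s implementation or the NVGRE wire format.

```python
# Conceptual sketch of the customer-address / provider-address mapping that
# lets two tenants keep the identical internal IP without colliding. This is
# only an illustration of the idea behind NVGRE-style network virtualization,
# not Microsoft's implementation; IDs and addresses are made up.

lookup = {
    # (virtual subnet id, customer IP) -> provider (physical host) IP
    (5001, "10.0.0.5"): "192.168.10.21",   # tenant A's VM
    (6002, "10.0.0.5"): "192.168.10.34",   # tenant B's VM, same customer IP
}

def route_packet(vsid: int, customer_dst: str) -> str:
    """Return the physical address to encapsulate the packet toward."""
    try:
        return lookup[(vsid, customer_dst)]
    except KeyError:
        raise ValueError(f"no mapping for VSID {vsid} -> {customer_dst}")

print(route_packet(5001, "10.0.0.5"))   # 192.168.10.21
print(route_packet(6002, "10.0.0.5"))   # 192.168.10.34
```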


October 6, 2011  8:54 PM

Backup and storage problems plague virtual infrastructures

Alyssa Wood

New York — At his Interop keynote this week, Microsoft’s Robert Wahbe said IT shops in 2011 are running 10.7 million virtual servers, compared to 7.8 million physical servers. Sounds great for virtualization, right?

Not so fast. Wahbe also said only 20% of physical servers in data centers are virtualized. That suggests there are still major deployment roadblocks, and the two biggest are storage and backup, speakers said in an Interop session. Administrators aren’t taking advantage of the best storage options for virtualization, and they’re stuck on old practices such as thick disks and overprovisioning, panelists said.

“You’ve got to get storage right if you’re going to get virtualization right — and especially if you’re going to get cloud computing right,” said Michael Dortch, research director at analyst firm Focus.

What can we do about storage?

Getting storage right is no easy task. EMC senior vSpecialist Ed Walsh said virtualization admins should use solid state drives (SSDs) to improve storage, because they offer high I/O performance and reduced capacity requirements. As storage has evolved, admins can now use 10 GbE and auto-tiering to boost storage performance, he said.

Another common problem is head-butting between the storage and virtualization teams. In VMware shops, new vCenter Storage APIs and plug-ins can help storage admins better manage VMware storage and increase visibility, Walsh said.

Backup best practices

When it comes to backup, we’ve said it before and we’ll say it again: Virtualization admins need to get with the times. Many customers still use physical backup tools for virtual backup, but you can’t back up all your virtual machines with agents at the same time, said Doug Hazelman, Veeam vice president of product strategy. Image-level backups such as snapshots get better results, he said.

Eric Burgener, Virsto Software vice president of product management, also touted the benefits of thin provisioning over using thick disks. Some people are wary of thin-provisioned disk performance and manageability, but thin provisioning can provide flexibility and more efficient capacity utilization.
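
The capacity argument is easy to see with a bit of arithmetic. The Python sketch below uses assumed numbers (the VM count, virtual disk size and average utilization are illustrative, not figures cited by the panel) to show why thin-provisioned disks consume far less back-end capacity than thick disks.

```python
# Rough arithmetic behind the thin-versus-thick capacity argument.
# VM count, disk size and the utilization figure are assumptions
# for illustration, not numbers from the panel.

vms = 100
provisioned_gb_per_vm = 100       # virtual disk size presented to each guest
actual_utilization = 0.35         # assumed average fraction actually written

thick_gb = vms * provisioned_gb_per_vm                       # allocated up front
thin_gb = vms * provisioned_gb_per_vm * actual_utilization   # grows with real writes

print(f"thick: {thick_gb} GB, thin: {thin_gb:.0f} GB, "
      f"saved: {thick_gb - thin_gb:.0f} GB")
# thick: 10000 GB, thin: 3500 GB, saved: 6500 GB
```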

Virtualization admins should also watch out for storage overprovisioning. Burgener said lots of customers overprovision storage because vendors may offer a discount, but it’s often unnecessary.

Before you can build a private cloud, you have to master virtualization. And with the challenges that still remain, virtualization admins should consider changing some of their backup and storage practices.


October 4, 2011  7:59 PM

Red Hat’s Gluster buy has virtualization possibilities

Beth Pariseau

At first glance, Red Hat’s acquisition of clustered open-source storage player Gluster seems like a pure storage play, perhaps with a bit of open-source camaraderie mixed in. But before it was bought out, Gluster was positioning itself as a virtualization player, too.

Red Hat’s FAQ on the Gluster buy gives a nod to how the purchase fits into the company’s virtualization and cloud computing strategy, emphasizing the use of commodity hardware and scale-out capabilities.

We view Gluster to be a strong fit with Red Hat’s virtualization and cloud products and strategies by bringing to storage the capabilities that we bring to servers today. By implementing a single namespace, Gluster enables enterprises to combine large numbers of commodity storage and compute resources into a high-performance, virtualized and centrally managed pool. Both capacity and performance can scale independent of demand, from a few terabytes to multiple petabytes, using both on-premise commodity hardware and public cloud storage infrastructure. By combining commodity economics with a scale-out approach, customers can achieve better price and performance, in an easy-to-manage solution that can be configured for the most demanding workloads.

There’s also potentially an even more direct virtualization play here, based on Gluster’s history. The company came out of stealth in 2007 with GlusterFS, a scale-out file system for clustered NAS based on open-source code, and by late 2009 was offering support for running virtual machines (VMs) directly on its clustered NAS.

Customers who chose this option could enable high availability (HA) failover for VMs running on the cluster with a checkbox at installation time, relying on the cluster’s internal replication. From there, the file system handled that replication automatically, using an underlying object-based storage system.
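
As a rough illustration of what a single namespace spread across replicated commodity “bricks” looks like, here is a simplified Python sketch. The hashing scheme is deliberately naive and is not GlusterFS’s actual placement or replication logic; the brick names and replica count are arbitrary.

```python
import hashlib

# Simplified sketch of spreading one namespace across replicated commodity
# "bricks." This is NOT GlusterFS's real placement algorithm; brick names
# and the replica count are arbitrary, for illustration only.

BRICKS = ["server1:/brick", "server2:/brick", "server3:/brick", "server4:/brick"]
REPLICA_COUNT = 2

def place(path):
    """Choose REPLICA_COUNT bricks for a file based on a hash of its path."""
    h = int(hashlib.md5(path.encode()).hexdigest(), 16)
    start = h % len(BRICKS)
    return [BRICKS[(start + i) % len(BRICKS)] for i in range(REPLICA_COUNT)]

# Every client computes the same answer, so no central metadata server is
# needed, and losing one brick still leaves a replica of the file.
print(place("/vmstore/web01.img"))
print(place("/vmstore/db01.img"))
```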

As control over the data center infrastructure becomes the next frontier for virtualization players, this purchase could be seen as an answer to VMware’s forthcoming vStorage APIs, which aim to bring server and storage virtualization closer together, as well as to new storage virtualization products from vendors such as Nutanix, which have the same broad goal.


September 28, 2011  2:06 PM

Announcing our new advisory board member

Colin Steele

Please join me in welcoming Christian Mohn to the Server Virtualization Advisory Board!

Mohn, an infrastructure consultant based in Norway, is a VMware vExpert for 2011 and co-host of the vSoup podcast. He’s been working in IT since 1997, specializing in Windows, VMware, Citrix and more.

Check out his blog, vNinja.net, and make sure to follow him on Twitter @h0bbel.


September 23, 2011  8:24 PM

OVA working to boost KVM exposure

Alyssa Wood

The Open Virtualization Alliance, founded in May to promote KVM, has grown to more than 200 members since its launch, with particular interest from cloud-focused companies. But KVM isn’t exactly the first hypervisor people think of when they want to deploy a private cloud, so why the growth?

Open Virtualization Alliance (OVA) board members credit the growth to increased awareness of the Kernel-based Virtual Machine (KVM) hypervisor, freedom of choice, and KVM’s features. Founding members of the OVA include BMC Software, Eucalyptus Systems, Hewlett-Packard, IBM, Intel, Red Hat and SUSE. Now, more than 50% of OVA members focus on cloud computing.

Despite this growth, many users are still unfamiliar with KVM. To increase understanding, the OVA is developing KVM best practices documentation. This fall, the alliance also plans to create forums for users to share best practices, as well as webinars, webcasts and learning events.

KVM deployments certainly face an uphill battle against the virtualization market leaders, and Red Hat’s KVM offerings still lag VMware in features. But for now, “[KVM] certainly will become more noticeable in the landscape,” said Inna Kuznetsova, OVA board member and vice president of IBM Systems and Technology Group.

KVM keys to success: Performance, security, management

OVA board members tout KVM’s record of achieving the highest virtualization performance levels in SPECvirt benchmark tests, but it is KVM’s security features that appeal to some cloud providers. The hypervisor uses Security-Enhanced Linux, developed by the U.S. National Security Agency.

“Once you’re on a cloud, you have a multi-tenancy environment, so you want to have a high level of security,” Kuznetsova said. “And that’s what makes KVM attractive.”

Companies have eyed KVM for its price as well. With security features already built into the hypervisor, admins can spend less on virtualization security tools, Kuznetsova said.

Kuznetsova also pointed out KVM’s various virtualization management capabilities. The hypervisor relies on libvirt for basic management, and administrators can add advanced tools such as Red Hat Enterprise Virtualization or IBM Systems Director with VM Control. With these kinds of tools, you can manage multiple hypervisors, including VMware, Hyper-V and Xen.
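
As an example of the kind of basic management libvirt provides, the short Python sketch below uses the libvirt Python bindings to connect to a local KVM host and list its running guests. It assumes the bindings are installed and a libvirt daemon is running; error handling is kept minimal.

```python
import libvirt

# Connect read-only to the local QEMU/KVM hypervisor and list running guests.
# Assumes the libvirt Python bindings are installed and libvirtd is running.
conn = libvirt.openReadOnly("qemu:///system")
if conn is None:
    raise SystemExit("failed to connect to qemu:///system")

for dom_id in conn.listDomainsID():            # IDs of currently running domains
    dom = conn.lookupByID(dom_id)
    state, max_mem_kib, mem_kib, vcpus, cpu_time = dom.info()
    print(f"{dom.name()}: {vcpus} vCPUs, {mem_kib // 1024} MB in use")

conn.close()
```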

The speed of innovation in open source development has also contributed to increased awareness of KVM. Because so many developers work on open source offerings, the technologies can advance very quickly, Kuznetsova said.

“The world of open source changes so fast, you always need to go back and see what’s changed,” she said.


September 22, 2011  1:32 PM

VMware networking R&D head jumps ship

Beth Pariseau

VMware’s director of networking research and development is now with a new company, Big Switch Networks.

Howie Xu was with VMware for nine years before making the move. He will lead the R&D unit at Big Switch, which is an OpenFlow company; OpenFlow is a software-defined networking protocol that moves traffic-handling decisions out of individual switches and into a centralized controller sitting above the network.
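
As a toy illustration of that split between switches and a central controller, here is a short Python sketch. It models the concept only; it is not the OpenFlow wire protocol, and the match fields and actions are made up.

```python
# Toy sketch of the OpenFlow idea: a central controller decides how traffic
# should be handled and installs match/action rules into switch flow tables.
# This models the concept only; it is not the OpenFlow wire protocol.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = []                       # list of (match_dict, action)

    def install_flow(self, match, action):
        self.flow_table.append((match, action))

    def handle_packet(self, packet):
        for match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"                # table miss -> ask the controller

class Controller:
    def decide(self, switch, packet):
        # Centralized policy: e.g. forward web traffic out port 2, drop the rest.
        action = "output:2" if packet.get("tcp_dst") == 80 else "drop"
        switch.install_flow({"tcp_dst": packet.get("tcp_dst")}, action)
        return action

sw, ctrl = Switch("edge1"), Controller()
pkt = {"src": "10.0.0.1", "dst": "10.0.0.9", "tcp_dst": 80}
if sw.handle_packet(pkt) == "send_to_controller":
    print(ctrl.decide(sw, pkt))                    # output:2
print(sw.handle_packet(pkt))                       # output:2 (rule now installed)
```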

Xu was last publicly visible at VMworld 2010, discussing a network virtualization platform dubbed vChassis, which would have used plug-ins to a new software layer to manage networking elements of the infrastructure.


