Site Recovery Manager, vCloud Director and vCenter Operations are seeing strong sales, VMware executives said on the company’s earnings call last night.
VMware reported total revenue for the third quarter of $942 million, an increase of 32% from a year ago. CFO Mark Peek and CEO Paul Maritz said management tools were a strong part of those sales, although they didn’t attach any specific numbers to those tools.
“Much of the increased interest for our management tools is being driven by the build-out of private clouds within our customers’ data centers,” said Peek, according to a transcript of the call.
As the virtualization market matures, functionality that used to live in the underlying infrastructure is steadily being absorbed into virtual machines.
One of the hotter areas for emerging companies of late is software that allows agents inside guest VMs to automate the use of host-based Flash storage as cache in order to boost application performance.
Last week, FlashSoft, a Flash caching company that came out of stealth at VMworld 2011, released a new version of its beta product. Fusion-io has been around longer than most with its solid-state drives, but it, too, is moving into the automated caching space following its acquisition of IO Turbine earlier this year.
This week, another company, Nevex, came out of stealth, also claiming to have built a better Flash caching mousetrap. Like FlashSoft, Nevex's CacheWorks product uses agents inside guest VMs to automate the use of Flash as cache, and like FlashSoft, Nevex plans to make its software work at the hypervisor level to eliminate the need for guest agents.
Where Nevex says it differs from other offerings is that its software caches at the file level rather than the block level. This means users can select which specific applications to accelerate with Flash, instead of letting an algorithm move hot blocks into the cache. Nevex also integrates with Windows to control which data is promoted to DRAM for multi-level caching.
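To make the file-level distinction concrete, here is a minimal sketch of the policy Nevex describes: the administrator pins specific applications' files to the Flash cache, rather than letting a hot-block algorithm decide. The class and names are illustrative, not Nevex's actual API.

```python
# Hypothetical sketch of file-level (admin-selected) caching. Only files
# under the configured application paths are promoted to Flash; everything
# else bypasses the cache entirely.

class FileLevelCache:
    def __init__(self, accelerated_paths):
        # e.g. {"C:/sqldata"} -- only these trees are cached on Flash
        self.accelerated_paths = set(accelerated_paths)
        self.flash = {}          # path -> cached file contents

    def read(self, path, backing_store):
        if not any(path.startswith(p) for p in self.accelerated_paths):
            return backing_store[path]      # not selected: bypass the cache
        if path not in self.flash:          # cache miss: promote the file
            self.flash[path] = backing_store[path]
        return self.flash[path]

store = {"C:/sqldata/db.mdf": b"rows", "C:/logs/app.log": b"text"}
cache = FileLevelCache({"C:/sqldata"})
cache.read("C:/sqldata/db.mdf", store)   # promoted to Flash
cache.read("C:/logs/app.log", store)     # served from backing store only
```

A block-level cache, by contrast, would track access counts per block and promote whichever blocks happen to be hot, regardless of which application owns them.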
When SSDs first came on the enterprise IT scene, they sat behind a storage controller as part of the storage area network (SAN). With these newer offerings, the vision is to use Flash as cache on the host rather than as persistent storage on a back-end array.
In the meantime, with the advent of new technologies, it’s becoming easier to picture the entire enterprise data center infrastructure, from the virtualized network to this type of virtualized storage, running as software inside x86 hosts. I’m reminded of the old Sun tagline, “the network is the computer.” Sun has since been gobbled up by Oracle, of course, but it feels like we’re finally seeing that concept come to fruition.
Microsoft let drop a new tidbit of information about Hyper-V 3.0 in a blog post this week.
Hyper-V Network Virtualization:
Allows you to keep your own internal IP addresses when moving to the cloud while providing isolation from other organizations’ VMs – even if those VMs use the same exact IP addresses
The post is otherwise a rehash of Hyper-V 3.0 features already previewed at the Microsoft Build conference last month, including support for a new virtual switch and scalability improvements.
It’s unclear at this point whether this technology is related to the new NVGRE standard proposed by a Microsoft-led consortium within IETF. But what is clear is that network virtualization is becoming a new battleground for the market’s biggest virtualization vendors.
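The isolation property Microsoft describes, where two tenants can use the same exact IP addresses, comes from keying forwarding on a per-tenant identifier as well as the address; the NVGRE draft carries a 24-bit virtual subnet ID for this purpose. A minimal, illustrative sketch of that mapping-table idea (not Microsoft's implementation):

```python
# Two tenants register the same customer IP (10.0.0.5). Because the
# virtual subnet ID (VSID) is part of the lookup key, lookups never
# cross tenant boundaries.

class OverlayFabric:
    def __init__(self):
        self.locations = {}   # (tenant_vsid, customer_ip) -> physical host

    def register(self, vsid, customer_ip, physical_host):
        assert 0 <= vsid < 2**24    # NVGRE's subnet ID is 24 bits
        self.locations[(vsid, customer_ip)] = physical_host

    def deliver(self, vsid, customer_ip):
        # A missing entry means the fabric has no such VM for that tenant.
        return self.locations.get((vsid, customer_ip))

fabric = OverlayFabric()
fabric.register(0x1001, "10.0.0.5", "hostA")   # tenant 1
fabric.register(0x2002, "10.0.0.5", "hostB")   # tenant 2, same IP
```

In the real encapsulation, the VSID travels in the outer packet header, so the physical network forwards on physical addresses while each tenant keeps its own internal IP plan.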
New York — At his Interop keynote this week, Microsoft’s Robert Wahbe said IT shops in 2011 are running 10.7 million virtual servers, compared to 7.8 million physical servers. Sounds great for virtualization, right?
Not so fast. Wahbe also said only 20% of physical servers in data centers are virtualized, which suggests major deployment roadblocks remain. The two biggest are storage and backup, panelists said in an Interop session: administrators aren't taking advantage of the best storage options for virtualization, and they're stuck on old practices such as thick disks and overprovisioning.
“You’ve got to get storage right if you’re going to get virtualization right — and especially if you’re going to get cloud computing right,” said Michael Dortch, research director at analyst firm Focus.
What can we do about storage?
Getting storage right is no easy task. EMC senior vSpecialist Ed Walsh said virtualization admins should use solid-state drives (SSDs) to improve storage, because they offer high I/O performance and reduced capacity requirements. As storage has evolved, admins can now also use 10 Gigabit Ethernet and auto-tiering to boost storage performance, he said.
Another common problem is head-butting between the storage and virtualization teams. In VMware shops, new vCenter Storage APIs and plug-ins can help storage admins better manage VMware storage and increase visibility, Walsh said.
Backup best practices
When it comes to backup, we've said it before and we'll say it again: virtualization admins need to get with the times. Many customers still use physical backup tools for virtual backup, but you can't back up all your virtual machines with in-guest agents at the same time, said Doug Hazelman, Veeam vice president of product strategy. Image-level backups, such as snapshots, get better results, he said.
Eric Burgener, Virsto Software vice president of product management, also touted the benefits of thin provisioning over using thick disks. Some people are wary of thin-provisioned disk performance and manageability, but thin provisioning can provide flexibility and more efficient capacity utilization.
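The efficiency argument for thin provisioning is easy to demonstrate with a sparse file, which works the same way a thin-provisioned virtual disk does: the guest sees the full provisioned size, but physical space is consumed only as blocks are actually written. A small sketch (assumes a filesystem that supports sparse files, as most Linux filesystems do):

```python
# "Provision" a 1 GB disk by seeking to the end and writing one byte.
# The file's apparent size is 1 GB, but only the written block consumes
# real space -- the same allocate-on-write behavior as a thin disk.
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "thin_disk_demo.img")
with open(path, "wb") as disk:
    disk.seek(1024**3 - 1)   # jump to the last byte of a 1 GB disk
    disk.write(b"\0")        # only this write allocates real blocks

st = os.stat(path)
print(st.st_size)            # apparent size: 1073741824 bytes
print(st.st_blocks * 512)    # actual allocation: typically a few KB
os.remove(path)
```

A thick (eagerly allocated) disk would consume the full gigabyte up front, which is exactly the overprovisioning habit the panelists warned against.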
Virtualization admins should also watch out for storage overprovisioning. Burgener said lots of customers overprovision storage because vendors may offer a discount, but it’s often unnecessary.
Before you can build a private cloud, you have to master virtualization. And with the challenges that still remain, virtualization admins should consider changing some of their backup and storage practices.
At first glance, Red Hat’s acquisition of clustered open-source storage player Gluster seems like a pure storage play, perhaps with a bit of open-source camaraderie mixed in. But before it was bought out, Gluster was positioning itself as a virtualization player, too.
Red Hat’s FAQ on the Gluster buy gives a nod to how the purchase fits into the company’s virtualization and cloud computing strategy, emphasizing the use of commodity hardware and scale-out capabilities.
We view Gluster to be a strong fit with Red Hat’s virtualization and cloud products and strategies by bringing to storage the capabilities that we bring to servers today. By implementing a single namespace, Gluster enables enterprises to combine large numbers of commodity storage and compute resources into a high-performance, virtualized and centrally managed pool. Both capacity and performance can scale independent of demand, from a few terabytes to multiple petabytes, using both on-premise commodity hardware and public cloud storage infrastructure. By combining commodity economics with a scale-out approach, customers can achieve better price and performance, in an easy-to-manage solution that can be configured for the most demanding workloads.
There’s also potentially an even more direct virtualization play here, based on Gluster’s history. The company came out of stealth in 2007 with GlusterFS, a scale-out file system for clustered NAS based on open-source code, and by late 2009 was offering support for running virtual machines (VMs) directly on its clustered NAS.
Customers who chose this option could enable, with a checkbox at installation time, the cluster's internal replication to provide high availability (HA) failover for VMs running on the cluster. From there, the file system handled replication automatically using an underlying object-based storage system.
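The "single namespace" idea from Red Hat's FAQ rests on computing file placement by hashing, so any client can locate a file on any brick without consulting a central metadata server. This is an illustrative sketch of that placement idea, not GlusterFS's actual elastic-hash algorithm:

```python
# Each file's brick is derived from a hash of its path, so every client
# computes the same placement independently, and adding bricks grows the
# namespace's capacity.
import hashlib

def brick_for(path, bricks):
    digest = hashlib.md5(path.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(bricks)
    return bricks[index]

bricks = ["server1:/export/brick0", "server2:/export/brick1",
          "server3:/export/brick2"]

# Deterministic: the same path always lands on the same brick.
assert brick_for("/vm-images/web01.img", bricks) == \
       brick_for("/vm-images/web01.img", bricks)

placements = {brick_for(f"/vm-images/vm{i}.img", bricks) for i in range(50)}
print(placements)   # files spread across the bricks
```

Replication for HA, as described above, then amounts to writing each file to more than one brick so a failed server's VMs can be restarted from a surviving copy.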
As control of the data center infrastructure becomes the next frontier for virtualization players, this purchase could be seen as an answer to forthcoming vStorage APIs from VMware that aim to bring server and storage virtualization closer together, as well as to new storage virtualization vendors such as Nutanix, which share the same broad goal.
Please join me in welcoming Christian Mohn to the Server Virtualization Advisory Board!
Mohn, an infrastructure consultant based in Norway, is a VMware vExpert for 2011 and co-host of the vSoup podcast. He’s been working in IT since 1997, specializing in Windows, VMware, Citrix and more.
The Open Virtualization Alliance, founded in May to promote KVM, has grown to more than 200 members since its launch, seeing specific interest from cloud-focused companies. But KVM isn’t exactly the first hypervisor people think of when they want to deploy a private cloud, so why the growth?
Open Virtualization Alliance (OVA) board members credit the growth to increased awareness of the Kernel-based Virtual Machine (KVM) hypervisor, freedom of choice, and KVM’s features. Founding members of the OVA include BMC Software, Eucalyptus Systems, Hewlett-Packard, IBM, Intel, Red Hat and SUSE. Now, more than 50% of OVA members focus on cloud computing.
Despite this growth, many users are still unfamiliar with KVM. To increase understanding, the OVA is developing KVM best practices documentation. This fall, the alliance also plans to create forums for users to share best practices, as well as webinars, webcasts and learning events.
KVM deployments certainly face an uphill battle against the virtualization market leaders, and Red Hat’s KVM offerings still lag VMware in features. But for now, “[KVM] certainly will become more noticeable in the landscape,” said Inna Kuznetsova, OVA board member and vice president of IBM Systems and Technology Group.
KVM keys to success: Performance, security, management
OVA board members tout KVM's top results in SPECvirt benchmark tests, but it is KVM's security features that appeal to some cloud providers. The hypervisor uses Security-Enhanced Linux (SELinux), developed by the U.S. National Security Agency.
“Once you’re on a cloud, you have a multi-tenancy environment, so you want to have a high level of security,” Kuznetsova said. “And that’s what makes KVM attractive.”
Companies have eyed KVM for its price as well. With security features already built into the hypervisor, admins can spend less on virtualization security tools, Kuznetsova said.
Kuznetsova also pointed out KVM’s various virtualization management capabilities. The hypervisor relies on libvirt for basic management, and administrators can add advanced tools such as Red Hat Enterprise Virtualization or IBM Systems Director with VM Control. With these kinds of tools, you can manage multiple hypervisors, including VMware, Hyper-V and Xen.
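The reason one management tool can span KVM, VMware, Hyper-V and Xen is the driver model that libraries like libvirt use: callers issue the same calls regardless of which hypervisor sits underneath. This is a toy sketch of that pattern, not libvirt's real code; the domain names and driver classes are invented for illustration.

```python
# A single entry point, open_connection(), selects a per-hypervisor driver
# from the URI scheme (modeled on libvirt URIs such as "qemu:///system"),
# and every driver answers the same list_domains() call.

class HypervisorDriver:
    def list_domains(self):
        raise NotImplementedError

class KVMDriver(HypervisorDriver):
    def list_domains(self):
        return ["web01", "db01"]        # a real driver would query KVM guests

class XenDriver(HypervisorDriver):
    def list_domains(self):
        return ["Domain-0", "guest1"]   # a real driver would query Xen

def open_connection(uri):
    drivers = {"qemu": KVMDriver, "xen": XenDriver}
    scheme = uri.split(":", 1)[0]
    return drivers[scheme]()

conn = open_connection("qemu:///system")
print(conn.list_domains())   # same call shape for any hypervisor
```

Tools such as Red Hat Enterprise Virtualization sit on top of exactly this kind of uniform interface, which is what makes mixed-hypervisor management practical.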
The speed of innovation in open source development has also contributed to increased awareness of KVM. Because so many developers work on open source offerings, the technologies can advance very quickly, Kuznetsova said.
“The world of open source changes so fast, you always need to go back and see what’s changed,” she said.
VMware's director of networking research and development is now with a new company, Big Switch.
Howie Xu was with VMware for nine years before making the move. He will lead the R&D unit at Big Switch, an OpenFlow company; OpenFlow is a software-defined networking protocol that moves forwarding decisions out of individual switches and into a centralized controller, enabling more sophisticated routing.
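OpenFlow's core abstraction is a match/action flow table that the central controller programs into each switch. A minimal, illustrative sketch of that abstraction (not a real OpenFlow implementation; field names are invented):

```python
# Rules match on header fields; the highest-priority match decides the
# action, and a table miss is punted to the controller -- the behavior
# the OpenFlow protocol standardizes.

class FlowTable:
    def __init__(self):
        self.rules = []   # (priority, match_fields, action)

    def install(self, priority, match, action):
        # In real OpenFlow the controller pushes rules via FLOW_MOD messages.
        self.rules.append((priority, match, action))
        self.rules.sort(key=lambda r: -r[0])   # highest priority first

    def forward(self, packet):
        for _, match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"            # table miss

table = FlowTable()
table.install(10, {"dst_ip": "10.0.0.5"}, "output:port2")
table.install(100, {"dst_ip": "10.0.0.5", "tcp_port": 80}, "drop")
```

Because the controller sees the whole network rather than one hop at a time, it can install policies, like the drop rule above overriding the default path, that would be awkward to express switch by switch.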
Xu was last publicly visible at VMworld 2010, discussing a network virtualization platform dubbed vChassis, which would have used plug-ins to a new software layer to manage networking elements of the infrastructure.
Cisco Systems Inc. is preparing to support Windows Server 8 Hyper-V with its Nexus 1000V virtual switch, which previously supported only VMware.
A newly extensible virtual switch was among the new Hyper-V 3.0 features turning heads at Microsoft’s Build conference this week, but until now, specific partners had not been mentioned. Cisco has been previewing this support at Build, and has recently made a public post on its website about the Nexus 1000V and Hyper-V.
In the post, Cisco said it will also support Windows Server 8 Hyper-V with its Unified Computing System Virtual Machine Fabric Extender feature, which uses single-root I/O virtualization to connect virtual machines to the network through virtualized physical network adapters.
As with the forthcoming Hyper-V itself, there’s no indication of when these features will become generally available. What’s being talked about at Build this week is a pre-beta preview meant for developers, not enterprise deployments.
In future vSphere releases, VMware will focus on optimizing for the cloud and improving how clusters work.
These changes will further blur the line between a virtual data center and a cloud infrastructure, said Bogomil Balkansky, VMware’s vice president of product marketing. During an interview at VMworld 2011, he explained VMware’s vSphere roadmap in terms of three expanding circles, starting at the host, then moving out to the cluster and ending at the data center/cloud level: