At first glance, Red Hat’s acquisition of clustered open-source storage player Gluster seems like a pure storage play, perhaps with a bit of open-source camaraderie mixed in. But before it was bought out, Gluster was positioning itself as a virtualization player, too.
Red Hat’s FAQ on the Gluster buy gives a nod to how the purchase fits into the company’s virtualization and cloud computing strategy, emphasizing the use of commodity hardware and scale-out capabilities.
We view Gluster to be a strong fit with Red Hat’s virtualization and cloud products and strategies by bringing to storage the capabilities that we bring to servers today. By implementing a single namespace, Gluster enables enterprises to combine large numbers of commodity storage and compute resources into a high-performance, virtualized and centrally managed pool. Both capacity and performance can scale independently, on demand, from a few terabytes to multiple petabytes, using both on-premise commodity hardware and public cloud storage infrastructure. By combining commodity economics with a scale-out approach, customers can achieve better price and performance, in an easy-to-manage solution that can be configured for the most demanding workloads.
There’s also potentially an even more direct virtualization play here, based on Gluster’s history. The company came out of stealth in 2007 with GlusterFS, a scale-out file system for clustered NAS based on open-source code, and by late 2009 was offering support for running virtual machines (VMs) directly on its clustered NAS.
Customers could enable this option with a checkbox at installation time, and the cluster’s internal replication would then provide high availability (HA) failover for VMs running on the cluster. From there, the file system automatically handled replication using an underlying object-based storage system.
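The replication-based failover described above can be sketched in miniature: every write lands on multiple replicas, and a read falls back to any surviving copy. This is only a conceptual illustration using local directories as stand-in "nodes"; GlusterFS's real replication is far more sophisticated.

```python
import os
import tempfile

class ReplicatedStore:
    """Toy stand-in for replica-based storage; not Gluster's actual design."""

    def __init__(self, replica_dirs):
        self.replicas = list(replica_dirs)

    def write(self, name, data):
        # Synchronous replication: the write lands on every replica.
        for d in self.replicas:
            with open(os.path.join(d, name), "w") as f:
                f.write(data)

    def read(self, name):
        # Failover: try each replica until one still holds the file.
        for d in self.replicas:
            path = os.path.join(d, name)
            if os.path.exists(path):
                with open(path) as f:
                    return f.read()
        raise FileNotFoundError(name)

# Two directories play the role of two storage nodes.
node_a = tempfile.mkdtemp()
node_b = tempfile.mkdtemp()
store = ReplicatedStore([node_a, node_b])
store.write("vm-disk.img", "vm disk contents")

# Simulate node A failing: its copy disappears, but reads still succeed.
os.remove(os.path.join(node_a, "vm-disk.img"))
print(store.read("vm-disk.img"))
```

The point of the sketch is the failover path: because the VM's disk image exists on more than one node, losing one node doesn't take the VM's storage with it.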
As control of the data center infrastructure becomes the next frontier for virtualization players, this purchase could be seen as an answer to VMware’s forthcoming vStorage APIs, which aim to bring server and storage virtualization closer together, as well as to storage virtualization newcomers such as Nutanix, which share the same broad goal.
Please join me in welcoming Christian Mohn to the Server Virtualization Advisory Board!
Mohn, an infrastructure consultant based in Norway, is a VMware vExpert for 2011 and co-host of the vSoup podcast. He’s been working in IT since 1997, specializing in Windows, VMware, Citrix and more.
The Open Virtualization Alliance, founded in May to promote KVM, has grown to more than 200 members since its launch, with particular interest coming from cloud-focused companies. But KVM isn’t exactly the first hypervisor people think of when they want to deploy a private cloud, so why the growth?
Open Virtualization Alliance (OVA) board members credit the growth to increased awareness of the Kernel-based Virtual Machine (KVM) hypervisor, freedom of choice, and KVM’s features. Founding members of the OVA include BMC Software, Eucalyptus Systems, Hewlett-Packard, IBM, Intel, Red Hat and SUSE. Now, more than 50% of OVA members focus on cloud computing.
Despite this growth, many users are still unfamiliar with KVM. To increase understanding, the OVA is developing KVM best practices documentation. This fall, the alliance also plans to create forums for users to share best practices, as well as webinars, webcasts and learning events.
KVM deployments certainly face an uphill battle against the virtualization market leaders, and Red Hat’s KVM offerings still lag VMware in features. But for now, “[KVM] certainly will become more noticeable in the landscape,” said Inna Kuznetsova, OVA board member and vice president of IBM Systems and Technology Group.
KVM keys to success: Performance, security, management
OVA board members tout KVM’s top scores in SPECvirt benchmark tests, but it is the hypervisor’s security features that appeal to some cloud providers. KVM uses Security-Enhanced Linux, developed by the U.S. National Security Agency, to isolate guests from one another.
“Once you’re on a cloud, you have a multi-tenancy environment, so you want to have a high level of security,” Kuznetsova said. “And that’s what makes KVM attractive.”
Companies have eyed KVM for its price as well. With security features already built into the hypervisor, admins can spend less on virtualization security tools, Kuznetsova said.
Kuznetsova also pointed out KVM’s various virtualization management capabilities. The hypervisor relies on libvirt for basic management, and administrators can add advanced tools such as Red Hat Enterprise Virtualization or IBM Systems Director with VM Control. With these kinds of tools, you can manage multiple hypervisors, including VMware, Hyper-V and Xen.
The speed of innovation in open source development has also contributed to increased awareness of KVM. Because so many developers work on open source offerings, the technologies can advance very quickly, Kuznetsova said.
“The world of open source changes so fast, you always need to go back and see what’s changed,” she said.
VMware’s director of networking research and development is now with a new company, Big Switch
Howie Xu was with VMware for nine years before making the move. He will lead R&D at Big Switch, an OpenFlow company. OpenFlow is a software-defined networking protocol that inserts a centralized controller above the network to make sophisticated forwarding decisions.
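The core idea OpenFlow popularized — a controller installing match-action rules into a switch’s flow table — can be shown with a toy sketch. This is purely conceptual: real OpenFlow is a binary wire protocol between controller and switch, and the rules and field names below are invented for illustration.

```python
# A flow table is an ordered list of (match, action) rules. The
# "controller" installs rules; the "switch" applies the first match.

flow_table = []  # ordered list of (match_fn, action) pairs

def install_flow(match_fn, action):
    """Controller side: push a rule down into the switch's table."""
    flow_table.append((match_fn, action))

def handle_packet(packet):
    """Switch side: apply the first matching rule, else drop."""
    for match_fn, action in flow_table:
        if match_fn(packet):
            return action
    return "drop"  # table-miss default

# Example policy: traffic from a quarantined host is dropped,
# then web traffic is forwarded out port 1.
install_flow(lambda p: p["src"] == "10.0.0.99", "drop")
install_flow(lambda p: p["dst_port"] == 80, "forward:1")

print(handle_packet({"src": "10.0.0.5", "dst_port": 80}))   # forward:1
print(handle_packet({"src": "10.0.0.99", "dst_port": 80}))  # drop
```

Because the rules live above the switches in controller software, the network’s forwarding behavior can be reprogrammed centrally — which is what makes the model interesting for virtualized data centers.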
Xu was last publicly visible at VMworld 2010, discussing a network virtualization platform dubbed vChassis, which would have used plug-ins to a new software layer to manage networking elements of the infrastructure.
Cisco Systems Inc. is preparing to support Windows Server 8 Hyper-V with its Nexus 1000V virtual switch, which previously only supported VMware.
A newly extensible virtual switch was among the new Hyper-V 3.0 features turning heads at Microsoft’s Build conference this week, but until now, specific partners had not been mentioned. Cisco has been previewing this support at Build, and has recently made a public post on its website about the Nexus 1000V and Hyper-V.
In the post, Cisco said it will also support Windows Server 8 Hyper-V with its Unified Computing System Virtual Machine Fabric Extender feature, which uses single-root I/O virtualization (SR-IOV) to connect virtual machines to the network through virtual functions on physical network adapters.
As with the forthcoming Hyper-V itself, there’s no indication of when these features will become generally available. What’s being talked about at Build this week is a pre-beta preview meant for developers, not enterprise deployments.
In future vSphere releases, VMware will focus on optimizing for the cloud and improving how clusters work.
These changes will further blur the line between a virtual data center and a cloud infrastructure, said Bogomil Balkansky, VMware’s vice president of product marketing. During an interview at VMworld 2011, he explained VMware’s vSphere roadmap in terms of three expanding circles: starting at the host, then moving out to the cluster and ending at the data center/cloud level.
VMware highlighted its Mobile Virtualization Platform at VMworld 2011, but I left the show feeling like the technology is little more than a novelty.
The concept itself is a good one, but the lack of Apple iOS support, VMware’s reliance on Google’s fragmented hardware partners and concerns about battery life will all be major obstacles to widespread Mobile Virtualization Platform (MVP) adoption.
Nowadays, more people are using their personal smartphones for work purposes — checking email, viewing documents, accessing corporate apps, etc. It’s convenient for users, but it creates security and management nightmares for IT departments.
With the Mobile Virtualization Platform (MVP), your IT department can run a virtual machine (VM) on your smartphone, complete with another operating system — in effect, giving you a personal phone and a work phone on the same device. Inside the VM, IT admins can use VMware’s new Horizon line of application-management tools to authorize specific applications and corporate email accounts.
In the future, VMware Site Recovery Manager will offer policy-based, multi-tenant disaster recovery for vCloud Director.
That’s according to VMware officials who previewed the Site Recovery Manager (SRM) roadmap during the VMworld 2011 conference last week.
SRM operates at the virtual machine (VM) level today, but the next version will allow for application-level disaster recovery (DR) protection according to policies set by either vSphere administrators or organizational managers, said Ashwin Kotian, a senior product manager for VMware.
“We want to … enable DR similar to how you enable [High Availability],” Kotian said. “Associate a service level, and based on the service level, SRM is going to make sure that [an application] gets provisioned to the right data stores, that it’s properly replicated and that it’s going to be associated with the right recovery plan.”
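Kotian’s description amounts to a policy lookup: tag an application with a service level, and placement, replication and recovery-plan assignment all follow from the policy. The sketch below is a toy illustration of that idea only; the tier names and settings are invented, not real SRM configuration.

```python
# Hypothetical service-level policies: each tier dictates where an
# application's data lands, whether it is replicated, and which
# recovery plan covers it.
POLICIES = {
    "gold":   {"datastore": "replicated-ssd",  "replicate": True,
               "recovery_plan": "tier1-failover"},
    "silver": {"datastore": "replicated-sata", "replicate": True,
               "recovery_plan": "tier2-failover"},
    "bronze": {"datastore": "local",           "replicate": False,
               "recovery_plan": None},
}

def provision(app_name, service_level):
    """Derive an app's DR configuration entirely from its service level."""
    policy = POLICIES[service_level]
    return {
        "app": app_name,
        "datastore": policy["datastore"],
        "replicated": policy["replicate"],
        "recovery_plan": policy["recovery_plan"],
    }

print(provision("trading-db", "gold"))
```

The appeal of the model is that administrators declare intent ("this app is gold tier") once, instead of hand-wiring datastores, replication and recovery plans per virtual machine.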
We have an open spot on our Server Virtualization Advisory Board. Would you like to fill the void?
Our advisory board members are our go-to experts who keep us up to date on the latest news and trends in the server virtualization market. They share their insights with our readers by answering a topical question of the month. And they even have a fancy page that shows off their pictures and bios.
If you’re a server virtualization user or consultant — no vendor employees, please — and this sounds like your cup of tea, here’s how to throw your hat in the ring: email me by Sept. 26 with your bio and a few sentences about why you want to join the Server Virtualization Advisory Board.
We’ll choose the newest board member by the end of the month. Good luck!
Red Hat revealed a future feature of KVM and Red Hat Enterprise Linux at VMworld 2011 that will allow native non-virtualized applications to run alongside virtual machines and virtual desktops on a host. Called Hybrid Mode, it will eliminate latency issues associated with running workloads inside virtual machines, while still delivering the consolidation and management benefits of virtualization, said Navin Thadani, senior director of Red Hat’s virtualization business.
Virtual machine performance has continued to improve, and more companies are virtualizing tier-one applications. Even so, workloads that require low latency, such as a bank’s financial-trading applications, still have performance issues in a virtual infrastructure.
With Hybrid Mode, you can have bare-metal performance with a native Red Hat Enterprise Linux (RHEL) application on a host that also runs virtual machines and virtual desktops – all of which can be managed through the RHEL interface. At the same time, you can still improve consolidation ratios by sticking workloads normally reserved for physical servers on virtual hosts.
That said, there are some caveats. You will need to use common sense when placing performance-intensive applications on a server with other virtual machines; if the server’s resources are taxed, the workloads will obviously suffer. And because the application runs natively on RHEL rather than inside a VM, it loses the advantages of virtualization, such as live migration.
Red Hat users shouldn’t get too excited just yet: Hybrid Mode will not ship with Red Hat Enterprise Virtualization 3.0 this year.