You don’t? OK then, I’ll let Uma Thurman tell it:
[Embedded video: http://www.youtube.com/v/5D_QKY0_Bxk]
Sometimes IT customers can feel like Baby Tomato, lagging behind all the innovation and hype that vendors are throwing out there and eventually getting squished. It’s a big issue for SAP now, as my colleague Courtney Bjorlin at SearchSAP.com just blogged about. But it’s an equally big concern in the server virtualization market — particularly for VMware, thanks to its focus on cloud computing.
Gartner recently called out SAP for its “on-premise, on-demand, on-device, in-memory” vision that is very innovative but at the same time very disruptive. As Courtney wrote, SAP “needs to be loud about what it’s doing to innovate. … But that means it’s often talking about initiatives long before they’re even close to ramp-up stage.”
VMware is in the same kind of Catch-22 situation. You know the story by now: the hypervisor became more of a commodity, VMware focused more on infrastructure, yada yada, VMworld is turning into a cloud conference.
Most VMware customers would agree that cloud computing offers significant benefits and is the future of IT, but that doesn’t mean they’re ready for it now. Many are still trying to overcome challenges like VM stall, which is a significant obstacle to private cloud adoption. And others face infrastructure roadblocks. (George Reese, CTO of cloud management vendor enStratus, told me today that older IT organizations “need some significant work” on the infrastructure side before they can build a private cloud.)
In the “Pulp Fiction” joke, Baby Tomato is stuck with his parents. But when it comes to IT, customers don’t have to blindly follow their vendors, falling behind and getting squished. In fact, if the gap gets too large, it’s Papa Tomato that eventually gets turned to ketchup — by its competitors.
Now, Oracle is banking on an appliance to do the same for its lagging virtualization market share.
Our sister site SearchITChannel.com reports that a so-called “Oracle VM machine” (perhaps developed by Oracle’s Department of Redundancy Department?) is in the works. Oracle President Charles Phillips disclosed the news during the company’s quarterly earnings call last week.
The Oracle VM machine will bundle the Oracle VM hypervisor with Oracle VM Manager software on a server with integrated network switches and storage arrays. Phillips didn’t offer many details (or a timeline), but it’s basically the same approach that Cisco has taken with UCS and rival Hewlett-Packard is now taking with its converged infrastructure push.
At best, converged infrastructure is only a good option for certain organizations — usually those that are totally new to virtualization or don’t have large virtual infrastructures already in place. In this respect, Oracle is making a smart move: In theory, it will be easier to gain virtualization market share by going after greenfield opportunities than by trying to convert VMware shops.
But a lot of potential customers are wary of these kinds of appliances because they fear vendor lock-in. Sure, Cisco relies on VMware virtualization and EMC storage for the UCS, and HP has agreements in place with both VMware and Microsoft, but you still limit your options by going this route.
And the Oracle VM machine will presumably be even worse, because Oracle doesn’t need to rely on any other vendor’s equipment. The company already has the virtualization and management software, and all the hardware is there too, thanks to the Sun acquisition.
Between Oracle’s very late push into the virtualization market and the overall lukewarm reception to these kinds of appliances, the Oracle VM machine better offer one heck of a kick-start if it’s going to change the company’s fortunes.
That’s the question our virtualization columnist Mark Vaughn asked me on Twitter yesterday as I covered the Red Hat Summit here. My 140-characters-or-less response was, “It’s definitely bold, and they clearly have a vision. The real question is, has Red Hat bet on the right horse?”
That’s a pretty good summary of Red Hat’s virtualization efforts and its shift from Xen to KVM, but let’s break it down in more detail:
It’s definitely bold: KVM is just three years old. (It debuted in Linux 2.6.20 in 2007.) Only one other vendor — Canonical, with its Ubuntu Linux distribution — supports KVM. And KVM operates a lot differently than the kind of virtualization you’re used to from VMware, Microsoft, Citrix, etc.
As far as strategies go, aligning yourself with a relatively new technology that has almost no market penetration takes some intestinal fortitude.
They clearly have a vision: That said, Red Hat didn’t choose KVM just to be different. I definitely got the sense at the Red Hat Summit that the company’s execs and engineers believe in this technology. Time and time again they stressed the benefits of KVM (mostly around the efficiency you get because KVM is part of the Linux kernel).
Navin Thadani, senior director of Red Hat’s virtualization business, even implied that the company was never crazy about Xen (which it had supported in previous Red Hat Enterprise Linux versions).
“When we did Xen in RHEL in 2007, it was the only thing out there,” he said.
Has Red Hat bet on the right horse? Here’s the million-dollar question. Being bold and having a vision will only get you so far.
Despite some of the technical benefits of KVM, it also has its drawbacks, which you can read about in our Xen vs. KVM face-off. (Little support from platform and management vendors and a lack of advanced features top the list.) But the biggest obstacle isn’t on the technology side. It’s on the customer side. VMware, Microsoft, Citrix (and even Oracle and other smaller vendors) have quite the head start, and migrating to KVM isn’t exactly a piece of cake.
Red Hat could definitely find some KVM success among its core open source base, but it’s a long way to the top if they want to rock ‘n’ roll in the virtualization market.
Red Hat Enterprise Virtualization 2.2 guest OSes will support up to 256 GB of RAM and 16 virtual CPUs. (The previous limit was 64 GB of RAM and 16 virtual CPUs.) The platform will also be able to import and export VMs in the Open Virtualization Format (OVF), which VMware and Oracle VirtualBox already support.
But perhaps the most important enhancement is a new V2V converter that will let you migrate VMs from VMware and Red Hat Enterprise Linux (RHEL) Xen to OVF, so you can run those VMs in Red Hat Enterprise Virtualization 2.2.
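For readers who haven’t worked with OVF: a package is essentially an XML descriptor, plus the disk images it references, that any compliant platform can import. The skeleton below is a hypothetical, heavily simplified illustration of that structure — not a complete or valid descriptor:

```xml
<!-- Hypothetical, heavily simplified OVF descriptor for illustration only -->
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <References>
    <!-- The disk image shipped alongside this descriptor -->
    <File ovf:href="web01-disk1.vmdk" ovf:id="file1"/>
  </References>
  <VirtualSystem ovf:id="web01">
    <Name>web01</Name>
    <!-- CPU, memory, disk and NIC hardware items would be listed here -->
  </VirtualSystem>
</Envelope>
```

Because the format is vendor-neutral, a VM exported this way from Red Hat Enterprise Virtualization should, in principle, be importable by VMware or VirtualBox, and vice versa.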
If OVF isn’t your style, an actual Xen-to-KVM converter is also in the works and may be available by the time RHEL 6, which is now in beta, becomes generally available. The alternative is to manually convert those VMs to KVM, and this morning here at the Red Hat Summit, I sat in on a hands-on lab about manually migrating from Xen to KVM. Let me tell you, it ain’t easy.
Depending on your infrastructure, you may have to change all your VMs’ MAC addresses, change the default kernel on all your guests and hosts, rename all references to your storage disks, shut down and restart a bunch of systems and run a lot of commands. And that’s just according to the Red Hat instructor! (An expert in the lab with me said the class skipped or glossed over a lot of other important steps.)
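To give a flavor of that manual work, here is a minimal sketch of just one of those steps: rewriting a guest’s libvirt domain definition so it targets KVM instead of Xen. The guest name, file contents and paths below are illustrative assumptions, not the lab’s exact procedure.

```shell
# Illustrative sketch only: the guest name, XML contents and emulator path
# are hypothetical, and a real Xen domain definition has many more elements.

# In practice you would start from the real definition:
#   virsh dumpxml web01 > web01.xml
# Here we fake a minimal one so the transformation is visible:
cat > web01.xml <<'EOF'
<domain type='xen'>
  <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
</domain>
EOF

# Switch the domain type and the emulator from Xen to KVM:
sed -i \
    -e "s/type='xen'/type='kvm'/" \
    -e 's|<emulator>.*</emulator>|<emulator>/usr/libexec/qemu-kvm</emulator>|' \
    web01.xml

cat web01.xml   # the file now declares type='kvm' and the qemu-kvm emulator

# You would then re-register the guest with the KVM-enabled libvirt:
#   virsh define web01.xml
```

Multiply that by MAC addresses, kernels, storage paths and reboots across every guest, and it’s easy to see why a dedicated converter would be welcome.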
We’ll have a lot more coverage of the Red Hat Summit throughout the week, but so far, virtualization is emerging as one of the big themes.
Basically, ISV stall is a roadblock to virtualization that occurs when software vendors won’t support their applications on virtual servers. It’s part of a larger problem that CA’s Andi Mann recently termed “VM stall” — when a virtualization roll-out hits a wall after the initial consolidation phase.
This afternoon I spoke with David Lynch, vice president of marketing for Embotics, about VM stall. He said the problem affects most of Embotics’ customers, and it’s a tough one to solve because technology alone won’t cut it.
Most organizations experience VM stall as they try virtualizing more mission-critical applications, Lynch said.
Virtualizing mission-critical apps is difficult enough from a technology standpoint, and relying on manual processes can really grind the project to a halt, Lynch said. Of course, Embotics’ products are all about automating monitoring and reporting in virtual infrastructures, so the company has a vested interest in spreading its message. But Lynch acknowledged that VM stall is not a problem that products alone can solve.
That’s because the biggest cause of VM stall has nothing to do with technology and everything to do with bureaucracy. Oracle, SAP and other financial applications, for example, have a lot more stakeholders and affect a lot more employees than the run-of-the-mill systems you probably virtualized first. Try making any changes to these mission-critical apps — let alone migrating them from physical to virtual infrastructures — and you’ll run into a whole lot of red tape.
“[Virtualization] cuts across all the silos that the traditional data center has in place,” Lynch said.
And breaking down these silo walls isn’t easy. When Embotics tries to get different departments to work together to overcome VM stall, a typical response is, “No, we don’t want to involve that group,” Lynch said. He also told the story of one organization that increased its storage budget to support virtualization, only to see the storage group go and spend that money on something else.
It’s tempting to say, “Get all your decision-makers in the same room, explain the value of virtualization to them, and come up with a plan to overcome VM stall.” But that’s easier said than done.
Some decision-makers in other departments may be anti-virtualization, just because that’s the way they’ve always done things. Others may view VM stall as “your problem” and have no inclination to help you. And all it takes is one or two of these people to stop your virtualization deployment in its tracks.
When it comes to VM stall, there are no magic jumper cables.
One of the big questions around the announcement: Why Novell? As News Director Alex Barrett wrote in her story, “Red Hat still leads Novell in terms of Linux market share by a wide margin, leading some to wonder why VMware didn’t partner with that company instead.”
VMware isn’t the only virtualization vendor to spurn Red Hat lately. In fact, this latest news makes you wonder if Red Hat’s virtualization strategy is backfiring.
VMware said it chose Novell because of SUSE’s broad ISV support, particularly with SAP. But there’s more to it than that. As Barrett pointed out, “Red Hat has been pursuing its own virtualization strategy — first with Xen, and more recently with KVM — putting it directly in VMware’s crosshairs.”
The virtualization tide first started to turn against Red Hat back in April, when Citrix went on the attack. (Citrix, with its flag firmly planted in Xen soil, is especially threatened by Red Hat’s KVM push.) Citrix CTO Simon Crosby encouraged Red Hat Enterprise Linux (RHEL) customers to move to SLES or Oracle Enterprise Linux.
I wrote at the time that Red Hat can’t afford to turn off its core customers — admins and engineers who work with RHEL — in its pursuit of the virtualization market. Well, the company is already turning off the core vendors that it may have been better off partnering with instead. Customers may not be far behind.
It’s a serious issue for the company as its annual Red Hat Summit approaches. The show begins Tuesday, and I’ll be there covering this topic in depth.
Two leading third-party management vendors — Veeam and Vizioncore — are now going at it as well. Vizioncore kicked off the fracas last week with an attack on Veeam, saying its “poorly designed architecture for data backup will undermine a virtual environment.” In a 1,900-word blog post, Vizioncore took Veeam Backup and Replication 4.1.1 to task for several of its technical features (or lack thereof). The post also included some more, um, provocative statements.
In the intro, Kelly Polanski (who works for WaveBreak Marketing and blogs for Vizioncore) downplayed Veeam as a “small company.” She pointed out that Veeam is a privately held company based in Russia — as opposed to Vizioncore, which is owned by public company Quest Software and is “obligated to provide audited reporting of company financials.” And later, Vizioncore’s Jason Mattox called out Veeam for using unpublished VMware API calls — an issue that has caused some friction between Veeam and VMware in the past.
Veeam’s Doug Hazelman fired back the following day, calling Vizioncore’s blog a “desperate” attempt to attract customers because the Vizioncore brand is “going away soon.” (Quest plans to give Vizioncore the far-less-catchy name of “Quest Software Server Virtualization Management Group” this summer.) He also noted Polanski’s affiliation with “a company that is known to be a ‘pay for play’ blogger service and we assume the only place Vizioncore could turn to for something positive about their product.”
Sensing an opportunity, another third-party management vendor, VKernel, also jumped into the fray. Chief Marketing Officer Bryan Semple reached out to potential customers on Facebook this week.
“While these two feud, we are just going to keep our head down, completely focused on solving the very complex problem of capacity management in a VMware or Hyper-V environment,” he wrote.
Debate between competing vendors is good when it focuses on technical merits, performance, cost and other issues that truly affect customers. But what about the kind of sniping that’s happening here? Vizioncore and Veeam are both well-known vendors in the virtual server infrastructure management market. They don’t need to get people talking by slamming each other over what country they’re based in or who writes their blogs.
The fact that Vizioncore and Veeam have decided to take the low road, however, shows just how heated the competition is in the third-party management market. Virtual server infrastructures are getting more complex. Admins are realizing that their hypervisor vendors can’t provide all the management tools they need. Filling that gap presents a major opportunity.
In this context, you can kind of understand why the vendors would look for any reason possible to put each other down.
But for most customers, it doesn’t matter if Veeam is based in Moscow, Russia or Moscow, Idaho. And it doesn’t matter if Kelly Polanski or Kelly Kapowski is writing blogs for Vizioncore.
For those of you who don’t know, SearchServerVirtualization.com sponsors the Best of VMworld Awards, which recognize the many outstanding products on display at the show. This year our panel of expert judges will be giving away awards in nine categories.
Seven of this year’s Best of VMworld Awards categories will be the same as last year’s: business continuity and disaster recovery, security, management, hardware, desktop virtualization, new technology and best of show. We’ve also split the cloud computing category from last year into two categories for 2010: private cloud technologies and public/hybrid cloud technologies. (As you may have heard, there’s going to be a lot of talk about cloud at VMworld this year.)
We’re now accepting nominations for the 2010 Best of VMworld Awards. Visit our nomination form to check out the full details and to submit a product. And if you have any questions, feel free to email us at email@example.com. But to answer your first question, the deadline is Aug. 6!
Users on the VMware Communities forum and other message boards said the patch prevented them from logging into the vSphere Client — both on Windows 7 and XP. (The patch addresses a vulnerability that leaves signed XML data open to tampering.)
Some users overcame this problem by uninstalling the patch (identified as KB980773, part of update KB982168), which obviously isn’t ideal. But the better solution is to install the latest version of vSphere Client 4.0 Update 1.
Did you run into this problem yesterday? Did that solution work for you? Let us know in the comments below.
For the most part, VMware and the other vendors there focused on the nuts and bolts of the cloud; Bogomil Balkansky, vice president of product marketing, admitted, “We have really outdone ourselves in terms of the hype and marketing.”
(But occasionally they did revert to the kind of sales pitches and hyperbole that have led to so much cloud skepticism in the first place; Balkansky later compared cloud computing to the Industrial Revolution.)
One session I attended, in particular, offered some great real-world advice about a serious issue: security for virtual infrastructures and private clouds.
The session was actually called “Virtualizing Business Critical Apps,” but the 45-minute “introduction” by sponsor Trend Micro turned out to be more valuable than the session presentation itself. Trend Micro’s Harish Agastya, director of data center security, gave a relatively vendor-neutral and spin-free overview of some of the major virtualization security issues you should be aware of. Here are four of particular note:
For more information on securing your virtual infrastructure, follow these best practices for server virtualization security.