It may be time to add virtualization to that list. A new virtualization survey by the German analyst firm KuppingerCole shows significantly higher adoption of Microsoft Hyper-V in Europe than we’ve seen stateside.
The survey, commissioned by CA, shows that VMware is still the dominant platform across the pond. More than 83% of the 335 organizations surveyed have deployed VMware.
But, surprisingly, more than 40% have also deployed Microsoft virtualization. That’s much higher than what we’ve heard here from IDC (23% adoption rate for Hyper-V) and Gartner (27% by 2012).
These results don’t necessarily mean that the Hyper-V vs. VMware fight is raging in Europe. In our “Virtualization Decisions 2010 Purchasing Intentions Survey,” which had about a 50/50 split between U.S. and non-U.S. respondents, only 13% of respondents identified Microsoft (Hyper-V or Virtual Server) as their primary platform, compared to 76% for VMware.
But these KuppingerCole results at least show more openness toward mixed virtual environments in Europe. And it’s not just a two-horse race: More than 51% of organizations also said they have deployed Citrix XenServer.
Of course, mixed virtual environments bring about their own management challenges. And coincidentally or not, managing heterogeneous environments is something CA likes to talk about these days.
Why do you think mixed virtual environments are more popular in Europe? Does it have to do with market dynamics, vendor reputations, the economy? Or is it just like our differences in football, fashion and food: different strokes for different folks?
I’m happy to introduce our selection, Maish Saidel-Keesing. Many of you know him as a moderator on the VMware Communities site or through his popular blog, Technodrone, which was ranked No. 32 on Eric Siebert’s latest poll of the best VMware blogs. You may also follow him on Twitter, @maishsk.
Maish is an infrastructure administrator and virtualization architect for NDS Group in Jerusalem, Israel, and he’s spent 12 years working in IT, with a focus on virtualization for the past five years. He’ll bring an international perspective to the Server Virtualization Advisory Board and fit right in with the other expert users and consultants. Welcome, Maish!
I can also agree. My own conversations since I started covering virtualization have followed a similar path, away from the hypervisor and back toward concerns about the underlying infrastructure. As Saipetch notes, this is particularly true when it comes to data security (as well as virtual backup, which is mostly a separate discussion).
Survey suggests struggles with virtualization security rise in proportion to percent virtualized
The experiences and discussions @edsai, @Knieriemen and I had are anecdotal, but a recent survey of just under 300 networking pros showed a more scientific correlation between the percentage of virtualization in an environment and the identification of security as a top problem.
“As companies virtualize more of their critical servers and resources, security becomes a greater issue,” wrote Stephen Brown, product marketing manager for Network Instruments, which conducted the survey, in an email to The Virtualization Room. “Companies ranking security as their top concern had more than half of their servers (53%) virtualized and one-third of storage (29%) virtualized. This compared to the general virtualized population where 43% had over half of their servers virtualized and 26% had over half of their storage virtualized.”
Standards offer some guidance, but is it enough?
One approach the IT industry is taking to improve virtualization security is the development of standardized, enforced guidelines for how organizations handle sensitive data. But it’s an arduous task, as demonstrated by the lengthy process that went into the latest PCI DSS 2.0 spec. The latest version of the spec finally acknowledged virtualization as acceptable, after more than two years of development and debate. The spec has yet to be supplemented with virtualization-specific guidance for IT practitioners.
Chris Richter, VP of security products for Savvis, said there are generally two “schools of thought” among security auditors when it comes to virtualization. One holds that today’s virtual security controls are adequate for use in closely regulated environments. The other maintains that today’s procedures are not yet adequate for validating virtual security controls and that there are hypervisor exploits of which we are not yet aware.
Thus, even with the general blessing of virtualization in PCI DSS 2.0, whether virtual environments pass muster is still largely left up to individual auditors, who remain divided on whether virtual environments can truly be secured at this stage of their development.
Meanwhile, Edward Haletky, CEO of the Virtualization Practice and an analyst covering virtualization security, SMBs and the cloud, also points out that other regulations, like the Health Insurance Portability and Accountability Act (HIPAA), focus on keeping sensitive data confidential (i.e., encrypted) but haven’t been fully brought to bear in the virtual world.
“Right now if you’re a virtualization administrator, you can pretty much see all the data. Cloud admins can see data. IT as a service admins control the service catalog and may be able to see data,” Haletky said.
Keeping heavily classified data confidential may require a virtual version of the trusted platform modules (TPMs) that are currently used to authenticate hardware devices by applying cryptographic hashes that ensure the software running on them has not changed.
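To make the measurement idea concrete, here’s a minimal Python sketch of how a TPM-style register works: each component in the boot chain is hashed and “extended” into a register, so any change to the software stack yields a different final value. The file names are placeholders, and a real TPM does this in hardware with platform configuration registers (PCRs); this is only an illustration of the concept.

```python
import hashlib

def measure(path: str) -> bytes:
    """Hash a software component, as a TPM-style measurement would."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()

def extend(register: bytes, measurement: bytes) -> bytes:
    """Extend the register: new = H(old || measurement). Order matters,
    so a changed or reordered component changes every value after it."""
    return hashlib.sha256(register + measurement).digest()

# Placeholder boot chain; a hypervisor host would measure its own stack.
boot_chain = ["bootloader.bin", "hypervisor.bin", "guest_kernel.bin"]

register = bytes(32)  # registers start zeroed
for component in boot_chain:
    register = extend(register, measure(component))

# Compare against a value recorded when the stack was known to be good.
print("Attested state:", register.hex())
```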
“Virtual TPMs,” as well as data encryption that can be applied more granularly, at the level of virtual disks and memory rather than whole physical disks, would go a long way toward improving enterprise virtual security overall, Haletky says. “We need data confidentiality enforced at the VM level through encryption, and we’re not there yet.”
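What per-VM encryption might look like, in a rough Python sketch using the third-party cryptography library: each virtual disk image gets its own key, which could live in a key-management service the virtualization admin can’t read. The VM name, file paths and key store here are all hypothetical.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Hypothetical key store: one key per VM, ideally held by a separate
# key-management service rather than by the virtualization admin.
vm_keys = {"vm-finance-01": Fernet.generate_key()}

def encrypt_virtual_disk(vm_id: str, disk_path: str, out_path: str) -> None:
    """Encrypt a single VM's disk image with that VM's own key."""
    cipher = Fernet(vm_keys[vm_id])
    with open(disk_path, "rb") as src, open(out_path, "wb") as dst:
        # Reads the whole image into memory; fine for a sketch,
        # not for a multi-gigabyte VMDK.
        dst.write(cipher.encrypt(src.read()))
```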
Back in July, at Microsoft’s Worldwide Partner Conference, all the talk was about Windows Azure and the company’s Platform as a Service-based approach to private cloud computing. Steve Ballmer talked about the “dramatic” difference between cloud computing and virtualization, and VP Robert Wahbe said Infrastructure as a Service was “just a feature” of a private cloud.
Those comments showed a major difference between Microsoft’s cloud strategy and VMware’s virtualization-centric, IaaS approach. And Microsoft was delivering a clear, consistent message.
That all changed last week, when we got some news about System Center Virtual Machine Manager 2012.
SCVMM 2012, as senior news writer Beth Pariseau reported, is all about the IaaS private cloud model. Some of its new features will include self-service portals, automated server provisioning and dynamic load balancing. We’re talking classic IaaS stuff right there.
So what happened in the five months since the WPC?
Wahbe acknowledged in July that IaaS is where the market is these days, but he and other Microsoft execs seemed much more focused on their long-range vision at the time. PaaS and Windows Azure still may be their private cloud endgame, with the goal of SCVMM 2012 being to shore up the IaaS “feature.” (That’s something Microsoft desperately needed to do to stay competitive with VMware, by the way.)
But if PaaS and Windows Azure are still the endgame, you’d think Microsoft would tout them more when talking about SCVMM 2012. Sure, the company spent a lot of time on Azure at the Professional Developers Conference in Dublin this month, but there’s been nary a word on how the IaaS capabilities in SCVMM 2012 will tie in.
Cloud computing is confusing enough as it is for customers, many of whom are still trying to wrap their heads around basic server virtualization. It doesn’t do anyone any favors to talk about PaaS and Azure at the WPC and PDC, then talk about IaaS at TechEd Europe without saying how they’ll all work together.
VMware was initially considered a likely candidate to buy at least some of Novell’s IP because of the way the two companies have tightened their alliance in recent years. Last June, VMware said it would standardize its virtual appliances on Novell’s SUSE Linux, which had been modified by Novell a year earlier to run faster on VMware virtual machines.
After the initial reports of talks between VMware and Novell, some users said such a deal would fit VMware’s pattern of expansion into a software stack beyond the hypervisor, which has included acquisitions of SpringSource and Zimbra as well as a partnership with Salesforce.com.
Reuters followed the WSJ story with a report Sept. 22 which said those talks had stalled, citing a “valuation gap” between Novell and its suitors when it came to products outside the company’s SUSE Linux operating system unit. Still, some industry experts expressed hope at that time that VMware would evaluate acquiring at least some of Novell’s virtualization management IP, particularly within its PlateSpin portfolio.
Meanwhile, “as many as 20 companies initially expressed interest in Novell,” according to the September 15 WSJ report. This detail takes on a new wrinkle when combined with a line in Novell’s press release about the acquisition today, which says that as part of the $2.2 billion deal, some of its IP will be sold to “CPTN Holdings LLC, a consortium of technology companies organized by Microsoft Corporation, for $450 million in cash.”
We know now that VMware did not become one of Novell’s principal buyers, but if you connect all the dots, it is possible VMware took a look at Novell two months ago and passed. Also, very little is known about CPTN Holdings at this point, including which specific patents were bought or which technology companies are involved. Thus, VMware at least theoretically could have gained access to some Novell IP today, in a roundabout way (which would also add new meaning to the term ‘coopetition’), even if it didn’t pick up the whole enchilada.
On the other hand, if VMware isn’t a part of the consortium, it will be interesting to see if Microsoft and friends are able to use these patents to disrupt the market, given VMware’s past coziness with Novell and SUSE.
The users and consultants on our advisory board are our go-to experts who help us stay on top of the latest news and trends in the server virtualization market. Their primary responsibility is to answer the question of the month, where they weigh in on hot topics in the industry (or answer seasonal questions, like in this month’s Thanksgiving-themed article). Often, they’re also the first people we call when we need perspective on a breaking news story, a podcast guest, or a technology sanity-check.
If you’re interested, email me a short bio and why you think you’d be a good fit for the board. Please send all responses by Monday.
For example, yesterday, the coalition between VMware, Cisco and EMC (VCE) launched new Vblocks targeting VDI and SAP deployments, but VCE’s senior vice president of solutions Todd Pavone declined to comment on how many customers Vblocks have garnered since they officially began shipping last November.
“There is definitely momentum for standard architectures,” said Gartner Inc. analyst Chris Wolf. Some cloud service providers have found that enterprise customers are willing to pay a premium for a trusted architecture from vendors they know, he said. Stacks are also garnering interest among financial institutions that want to deploy trusted infrastructure quickly.
Still, while the turnkey concept may fly, there’s no guarantee these particular offerings from big vendors will burn up the market. Among the potential use cases for Vblocks, for companies that can afford them, is a quick-setup test/dev environment. But there are also startups like Kubisys, launched earlier this year, looking to offer turnkey test/dev appliances for about $80,000 MSRP – far lower than the million-dollar price tags on some Vblock bundles. The same is true in the VDI space, though Vblocks may have greater allure for MySAP users struggling with that resource-intensive application.
Since the early days of the VCE alliance, however, there has also been concern among some enterprises that preconfigured bundles will lock them in and constrain their choice of technologies and vendors. And reports from the sales field are lukewarm, indicating growing but moderate interest.
“We are seeing significant interest in VCE and Vblocks, but most of the time it ends up getting broken up into individual parts (i.e., an EMC array or a Cisco UCS), unless it is a greenfield opportunity, like a new data center,” wrote one systems integrator in New England. “We do expect to sell two real Vblocks this quarter, though.”
It’s not just VMware’s stack offerings that are having mixed success in the market, either (at least, from what information we can gather about them) – many of Oracle’s users, for example, have flatly rejected the company’s attempts to get them to run Oracle apps only on Oracle VM and stacks of the company’s server and storage hardware acquired with Sun.
Yet big vendors, like HP, continue to rack up acquisitions in an effort to build these turnkey stacks, and continue to insist that customers are asking — nay, demanding — that they deliver them. If this is really true, and there’s a huge groundswell of IT managers begging for proprietary turnkey stacks, I haven’t caught sight of it myself yet — nor have I been given any specific revenue or market share numbers that reflect it.
Backup vendor Veeam surveyed 500 organizations with more than 1,000 employees about their virtualization and data protection practices. A few stats from Veeam’s “VMware Data Protection Report 2010” reveal the problems with backup and recovery in many enterprises.
Of course, it’s in Veeam’s business interest to publicize the challenges of backup and recovery — and its solutions — and that’s exactly what these survey results do. But backup is a recurring problem for enterprises, and it’s keeping them from successfully recovering VMs.
Two-thirds of organizations said they experience problems every month when attempting to recover a server. And according to the study, these failures cost the average enterprise more than $400,000 every year.
Even worse, many companies are wasting their time and energy performing full-server recoveries to recover a single file or application item. A full recovery of a backed-up VM takes nearly five hours — not much better than the six it takes to recover a physical server, Veeam said.
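As a sanity check on those figures, here’s a back-of-the-envelope sketch in Python. Only the roughly five-hour recovery time and the monthly failure cadence come from the survey; the per-hour downtime cost is an assumed number, chosen purely for illustration, and real costs vary widely by workload.

```python
# Illustrative only: the downtime cost per hour is an assumption,
# not a figure from Veeam's report.
RECOVERY_HOURS = 5              # full VM recovery time cited by Veeam
INCIDENTS_PER_YEAR = 12         # "problems every month"
COST_PER_DOWNTIME_HOUR = 7_000  # assumed; varies widely by workload

annual_cost = RECOVERY_HOURS * INCIDENTS_PER_YEAR * COST_PER_DOWNTIME_HOUR
print(f"Estimated annual recovery cost: ${annual_cost:,}")
# -> $420,000, in the neighborhood of the $400,000+ figure above
```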
A major reason that organizations still hit these bumps on the backup and recovery road: They use the same products for both physical and virtual server backup, when we all know that virtualization requires a fundamentally different approach. This refusal to invest in proper virtualization backup tools would certainly cause problems with VM recovery. No wonder people are worried about virtualizing mission-critical workloads!
Enterprises encounter a variety of issues when they use physical backup tools for VM backup. It’s expensive, increases recovery time, weakens host performance and requires more storage. Plus, you have to install an agent. You might not want to spend the money on new products for virtual backup, but using traditional backup for VMs is likely to cost you more in the long run.
Many enterprises are starting to recognize that reality and cast aside the physical-world mindset. Virtualization-specific technology improves backup and recovery, but it’s clear that not everyone is quite ready for completely virtual data protection.