SolarWinds and VMware are in a bit of a dustup, slinging words like arrows on company blogs. It started last week when Robbie Wright, director of product marketing, virtualization and storage at SolarWinds, wrote a post on the company’s blog titled “Has VMware ceded the SMB market to Microsoft Hyper-V?”
In the post, Wright argued that Hyper-V can make headway in the SMB market because some of the features VMware’s hypervisors offer are ones that admins at smaller shops won’t or don’t need. Wright also cited a Gartner prediction that 85% of companies with fewer than 1,000 employees will be Hyper-V shops.
Chanda Dani, senior product marketing manager at VMware, took issue with that and other claims. Dani said the “85%” figure is incorrect; Gartner actually predicted that, of all Hyper-V installations, 75% will be in SMBs with fewer than 1,000 employees. Dani said the “statement has been erroneously interpreted in the blog. The author should back up Gartner’s statements with citations.”
Then, earlier this week, Wright took to the SolarWinds blog again, responding to Dani’s critiques. He challenged the idea that VMware products cater to the small-to-medium business set, since VMware’s Essentials kits might not support the needs of a medium-sized company, and said it’s hard to deny the advances Microsoft has made in the SMB market with Hyper-V.
With 60% of our purchasing intentions survey respondents planning to expand server virtualization, Microsoft has a chance to cut into VMware’s substantial market share. It’s no wonder the topic is so contentious.
What do you think about VMware and SolarWinds’ slings? Let us know in the comments.
As is typical with many software updates that follow a stable product, there are still a lot of VMware customers delaying the upgrade to vSphere 5. In many cases, it’s just a matter of customers waiting to see how the new product shapes up. Better to let others run the gauntlet and then stroll in quietly after the bugs have been worked out, right?
Users may be more wary of bugs today. But, in the case of vSphere 5, the delay may have as much to do with the lack of a major feature or driving need to make the switch, said Tim Antonowicz, a senior sales engineer with Mosaic Technology, an IT infrastructure consulting company based in Salem, N.H.
“In most cases that I’ve come across, people didn’t see a compelling reason to upgrade. If they had a vSphere 4.0 or 4.1 infrastructure, they could keep it patched and updated without doing a major upgrade. In their minds, why introduce something new into what is a stable environment right now, when there’s no confirmed need?” Antonowicz said.
In fact, it wasn’t the new features included in vSphere 5 that garnered most of the attention after the July 2011 launch; it was the change in the licensing model. While there have been some reported bugs with vSphere 5, more recently Antonowicz has seen customers deciding that it is safe enough to make the move. Instead of one keystone feature that might have pushed faster adoption, a variety of smaller improvements has been driving this new wave of upgrades.
- With vSphere 5, you can have bigger file systems that let you consolidate more of your data in a consistent format. Admins can also reclaim space they no longer need from thin-provisioned storage, thanks to improved interaction between the software and the array.
- “VMware finally got around to building a totally new high-availability system from the ground up. So High Availability is much more robust and better supported in vSphere 5,” Antonowicz said.
- Storage optimization is now more efficient. The Storage Distributed Resource Scheduler helps automate storage management. Administrators can set the storage policy of their virtual machines (VMs) and automatically manage the balancing and placing of the VMs across storage resources.
“Taken individually, none of those changes are a compelling reason to upgrade,” Antonowicz said.
But take those changes together, along with the calming of fears over bugs, and we should start to see more organizations take the vSphere 5 plunge in the next few months.
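The Storage DRS feature mentioned above can be pictured, at its simplest, as a greedy placement heuristic: among the datastores in a cluster that satisfy a VM’s storage policy and have room to spare, pick the one with the most free space. The sketch below is illustrative only, with invented names and a deliberately simplified model; VMware’s actual algorithm also weighs factors such as I/O latency.

```python
from dataclasses import dataclass

@dataclass
class Datastore:
    name: str
    capacity_gb: int
    used_gb: int
    tier: str  # stand-in for a storage policy, e.g. "gold" or "silver"

    @property
    def free_gb(self) -> int:
        return self.capacity_gb - self.used_gb

def place_vm(datastores, vm_size_gb, required_tier):
    """Greedy initial placement: filter by policy and available space,
    then choose the datastore with the most free capacity."""
    candidates = [d for d in datastores
                  if d.tier == required_tier and d.free_gb >= vm_size_gb]
    if not candidates:
        raise RuntimeError("no datastore satisfies the policy")
    return max(candidates, key=lambda d: d.free_gb)

cluster = [Datastore("ds1", 1000, 800, "gold"),
           Datastore("ds2", 1000, 300, "gold"),
           Datastore("ds3", 2000, 100, "silver")]
```

With those numbers, a 100 GB “gold” VM lands on ds2, the gold datastore with the most headroom; rebalancing after placement works the same way, just applied continuously.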
This cloud computing commercial with Charles Barkley is very revealing on so many levels.
- We learn that the 1993 NBA Most Valuable Player and 1992 Olympic gold medalist doesn’t understand the value of moving mission-critical workloads to the cloud. (In his defense, many CIOs and IT departments haven’t hopped aboard the cloud computing bandwagon for a variety of reasons. So, we can give him a pass.)
- He’s a bit sensitive about his final NBA season, in which he averaged 14.5 points per game and 10.4 rebounds per game. That’s not bad, but it was a far cry from his prime, when he was a lock for 25 points and 12 rebounds a night.
- The man is not afraid to wear skinny ties.
On the bright side, The Round Mound of Rebound didn’t know much about Angola before the 1992 Olympics either, but he still powered Team USA to a 116-48 victory over the Angolans.
Red Hat hit a milestone, posting $1 billion in revenue for the fiscal year. Not bad for a company that originally made money by selling telephone support for an upstart open source operating system. Despite this success, Red Hat has floundered in the virtualization market. By most estimations, it’s battling Oracle for fourth place, behind VMware, Microsoft and Citrix Systems, respectively. In many ways, Red Hat is the T-Mobile of virtualization.
Looking beyond their positions in their respective markets, both Red Hat and T-Mobile share a number of similarities.
In a recent interview with Ars Technica, Red Hat CEO Jim Whitehurst admitted that VMware virtualization is ahead of his company’s offering – which isn’t much of a shocker. Red Hat Enterprise Virtualization 3.0 does some nifty stuff, but it still lags behind vSphere 5 in a number of areas, including storage live migration and the amount of RAM you can assign to VMs.
The same argument can be made for T-Mobile: It’s playing catch-up with the competition. For starters, T-Mobile hasn’t deployed a true 4G network. Unlike AT&T and Verizon, which are rolling out nationwide LTE networks, or Sprint, which uses WiMAX, T-Mobile uses HSPA+ to deliver faster wireless speeds, but that technology is still considered 3G. (For all the nerds out there, I understand that LTE isn’t theoretically 4G. But that’s a discussion for a different day.) Also, T-Mobile’s selection of handsets leaves much to be desired. For goodness’ sake, it’s the only U.S. carrier that doesn’t offer the iPhone!
But the Red Hat and T-Mobile comparisons extend beyond both companies’ shortcomings. They provide a cost-effective and viable alternative to their competition. For example, a RHEV subscription nets you enterprise-grade features, such as high availability and distributed resource scheduling, for a fraction of VMware’s price. For organizations that are just beginning to virtualize or looking to add a second hypervisor, it’s hard to ignore the allure of a cheap and stable virtualization platform. (Although, I wonder if those IT shops would prefer Hyper-V, which comes with Windows Server.)
T-Mobile, for its part, offers dirt-cheap voice and data plans. When I look at its Classic Unlimited Plus plan, I’m ashamed of my monthly AT&T iPhone bill. T-Mobile makes sense for people who don’t want to fork over $100 a month for a smartphone. Ultimately, it’s great for customers and the wireless market, as a whole.
The U.S. Department of Justice and the Federal Communications Commission (FCC) seem to agree. They recently opposed a $39 billion merger between AT&T and T-Mobile for antitrust reasons. The FCC even said that T-Mobile was a “disruptive force” that keeps wireless carriers from raising pricing.
I won’t go as far as calling Red Hat a disruptive force in the virtualization market. But the capitalist in me believes that competition is good for buyers and sellers. And T-Mobile and Red Hat promote greater competition and apply pressure on their respective market leaders.
Symantec Corp. is locked in a court battle with virtual backup vendor Veeam Software over patent infringement, but Veeam customers don’t expect the suit to affect them. Some prospective users also say it won’t stop their evaluations of Veeam’s product.
Symantec accuses Veeam, as well as another competitor, Acronis Inc., of doing “irreparable harm” in its lawsuits, and the security and storage software company seeks monetary damages. It is also attempting to stop Veeam and Acronis from using certain technology in their products.
Symantec claims that Veeam infringed on patents for “Disaster Recovery and Backup Using Virtual Machines,” “Computer Restoration Systems and Methods,” “Method and System of Providing Replication,” and “Selective File and Folder Snapshot Creation.”
Users considering a move to Veeam are undeterred. “Isn’t that what every backup vendor does?” said Femi Adegoke, IT Director at the West Gastroenterology Medical Group in Los Angeles, Calif., of the alleged areas of infringement. “[The lawsuit] doesn’t bother me — Symantec’s product has a lot of moving parts and legacy stuff involved. I prefer to go with some of the newer guard [in Veeam].”
Current Veeam shops were blasé about the news. “It doesn’t concern us — we don’t work for them,” said Kevin Stephens, infrastructure specialist for the Ohio Department of Developmental Disabilities (DoDD), which uses Veeam’s Backup and Replication software. “We use both Symantec and Veeam products.”
One Veeam customer that uses Veeam’s full suite of software, including Backup and Replication and Reporter, said he’s not concerned, either – “at least, not yet.” “We live in a litigious society, so you never know how it will work out,” said Barry Blakely, infrastructure architect for Mazda N.A.
Some in the market see the suit as ‘patent trolling’ on Symantec’s part, brought on by competitive products’ popularity, but it’s more likely a prelude to some form of partnership or even acquisition, said Greg Schulz, founder and analyst with the StorageIO Group.
“Maybe the outcome down the road is some cross-licensing or a partnership between Veeam and Symantec,” he said. “We’ve seen a long list of things like this eventually get settled out of court.”
Tech professionals enjoyed their largest annual salary growth since 2008, according to a report released by Dice last week, and cloud and virtualization skills were among the most fortunate.
After two straight years of wages remaining nearly flat, tech professionals on average garnered salary increases of more than two percent, boosting their average annual wage to $81,327 from $79,384 in 2010.
Virtualization as a skill also saw jumps in pay, up six percent to $86,669.
The vSphere Storage Appliance (VSA) released with vSphere 5 now supports more usable capacity per host thanks to relaxed RAID requirements, according to a VMware blog post.
RAID, or Redundant Array of Independent Disks, refers to the way data is striped and/or mirrored across disks to achieve redundancy. More redundant RAID levels afford better data protection but eat up more disk capacity. Previously, the VSA required RAID 10, which offers a high level of data protection but contributed to a 75% capacity overhead for the overall VSA.
Now, the VSA supports RAID 5 or RAID 6, which use fewer disks for data protection, resulting in more available capacity for users.
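The capacity math behind that change can be sketched with a back-of-envelope calculation. The disk count and the simple parity model below are illustrative assumptions, not VMware’s exact sizing: the key point is that the VSA mirrors each datastore to a second host (network RAID 1), so whatever fraction survives the local RAID level is halved again.

```python
def vsa_usable_fraction(local_raid: str, disks_per_host: int = 8) -> float:
    """Approximate usable fraction of raw capacity for a VSA host.

    Simplified model: local RAID overhead first, then the VSA's
    network mirroring halves whatever remains.
    """
    local = {
        "raid10": 0.5,                                   # local mirroring: half usable
        "raid5": (disks_per_host - 1) / disks_per_host,  # one disk's worth of parity
        "raid6": (disks_per_host - 2) / disks_per_host,  # two disks' worth of parity
    }[local_raid]
    return local * 0.5  # network RAID 1 mirrors every datastore to another host
```

Under this model, RAID 10 leaves 25% of raw capacity usable — the 75% overhead cited above — while RAID 5 on an eight-disk host leaves roughly 44%, and RAID 6 about 37.5%.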
VMware officials unveiled a new offering designed to help IT pros run proof-of-concepts in the cloud at yesterday’s well-attended New England VMware User Group Winter Warmer at Gillette Stadium.
The new program, called Virtual Customer Labs, will use the same underlying technologies as the Hands-on Labs at VMworld, according to a lunch session presented by Josh Liebster, a systems and sales engineer for VMware. It will allow users to test various VMware products “without having to worry about any Active Directory mishaps or storage being misconfigured.”
One example of the growing criticality of vCenter comes with vSphere 5’s Auto Deploy feature. In certain disaster scenarios, its dependency on vCenter can send users down a ‘rabbit hole’ of availability issues if the environment is not designed correctly, experts say.
Auto Deploy can be used to deliver host and VM configuration information across the network, while an Auto Deploy server manages state information for each host. It stands to be most appealing in large environments where quick deployment of new VMs is a must.
In an environment where vCenter, the vCenter database and the Auto Deploy server are all virtualized in the same data center, and all hosts and VMs rely entirely on Auto Deploy for their state information, Auto Deploy cannot set up vSphere Distributed Switches (vDS) if vCenter Server is unavailable. A host that can’t connect to vCenter Server remains in maintenance mode, and its virtual machines cannot start, potentially including the vCenter VM itself.
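The circularity of that failure mode is easier to see in a toy model. This is not VMware tooling, just a sketch of the dependency: a stateless host needs the vCenter/Auto Deploy pair to finish booting, so if the vCenter VM lives only on stateless hosts, nothing can come up after a full outage.

```python
def host_can_boot(vcenter_up: bool, stateless: bool) -> bool:
    """Toy model of the Auto Deploy dependency described above.

    A stateless host pulls its image and configuration from the
    Auto Deploy / vCenter pair at boot; with vCenter down it stays
    in maintenance mode. A host with local state boots on its own.
    """
    return vcenter_up or not stateless

# The 'rabbit hole': after a site-wide power failure, vCenter can't
# start until some host boots, and no stateless host finishes booting
# until vCenter is up.
```

This is why the experts’ advice amounts to keeping at least one statefully installed host (or a separate management cluster) to break the loop.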
I’ve been looking more deeply into the proposed IETF standard VXLAN of late, but my reading has left me with more questions than answers.
VXLAN, or Virtual eXtensible LAN, submitted to the IETF last fall and talked up at last year’s VMworld, is a protocol for tunneling Layer 2 traffic over Layer 3 networks, with the goal of either expanding the available VLAN address space or supporting inter-data center VM mobility, depending on whom you ask. It’s also important to note that VXLAN, for now, is still a proposed standard that remains largely theoretical.
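To make the encapsulation concrete, here is a sketch of the 8-byte VXLAN header the IETF draft describes: an ‘I’ flag marking a valid VNI, reserved bits, and a 24-bit VXLAN Network Identifier, prepended to the inner Ethernet frame. The outer UDP/IP wrapper is omitted, and the helper name is my own.

```python
import struct

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame.

    Layout per the draft: 8 flag bits (0x08 = VNI present), 24 reserved
    bits, a 24-bit VNI, then 8 more reserved bits. In a real deployment
    the result rides inside a UDP datagram across the Layer 3 network.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # 'I' flag: a valid VNI follows
    return struct.pack("!II", flags << 24, vni << 8) + inner_frame

# A 24-bit VNI allows roughly 16 million segments, versus 4,094 VLAN IDs,
# which is the address-space argument for the protocol.
packet = vxlan_encapsulate(b"\x00" * 14, vni=5001)  # placeholder frame
```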
Recently, a blog post by Scott Lowe caught my interest by pointing out some key differences between VXLAN and Cisco’s Overlay Transport Virtualization (OTV) protocol, which also encapsulates Layer 2 frames in Layer 3 packets.