Nobody likes gotcha journalism, but sometimes you have to call a spade a spade.
As Silver Peak Systems was gearing up to announce the release of their virtual WAN optimization appliance, I noticed that a few of their customer use cases shared an interesting quality — they illustrated unusual circumstances where virtual WAN optimization was not just a convenience or a cost-saver, but a necessity.
Having recently listened to Silver Peak’s marketing team talk up their virtual WAN op appliance, the VX Series, imagine my surprise when I stumbled across a Q&A that former site editor Tim Scannell had done in September 2009 with Silver Peak President and CEO Rick Tinsley. Scannell asked how Tinsley thought virtualization would affect WAN optimization.
You can view his full answer on the Q&A page, but since it’s long, here’s the (edited-down and emphasis-added) portion that caught my attention:
“Virtualization is interesting but has to be the most overused buzzword in the industry today. There is server virtualization, which everyone in the industry has had experience with, and the ROI is pretty straightforward. When you get into desktop virtualization, however, you’re going to find that your mileage will vary and that it will vary tremendously.
[...] In terms of virtualizing the network element, this is where the marketing people tend to get a little bit ahead of themselves. We went through this a couple of years ago when some of the vendors were talking about having server blades in their boxes. If you ask people who run networks, most do not want a Windows server on their router. When we went through our own internal server virtualization process, we found that some apps lend themselves very well to virtualization and that you can truly get better server utility and better ROI from these applications.
Virtualizing network elements – like routers and switches and WAN accelerators – is one of those things that makes for a good PowerPoint and good marketing, but I’m not sure where it’s going to go in terms of actual deployment.”
I gave Tinsley a chance to explain what drove this sudden about-face, and how he would respond to customers who might see the move as a hasty (and thus potentially sloppy) attempt to catch up to competitors who have been well-established in this market for some time.
Check out his response below the jump…
“We were concerned about how virtualization would impact our product strategy. You have to remember we’re a bit different from other [vendors] we compete with,” Tinsley told me, pointing to Silver Peak’s legacy as a high-performance data center vendor. “It’s not hard to achieve the same performance in a virtual incarnation as it is in a dedicated appliance, but on the high end, they can’t really be virtualized and this is true of any appliance.”
The move was driven by multiple customer requests and eventually led to talks with VMware, Tinsley said.
“We were surprised when we started. We knew everyone uses virtual machines in their data center, but we weren’t sure how many customers use virtual infrastructure in their branch offices,” he said, adding that he found many large technology companies were using virtualization at the branch to save on shipping costs and configuration time. “It really is a pretty compelling [case].”
He noted that Silver Peak is still hesitant to develop a virtual WAN optimization appliance rated above 50 Mbps and will encourage customers to stick with its physical appliances (which range from 4 Mbps to 1 Gbps) at headquarters and in data centers.
But what about quality? Silver Peak released its VX Series about 10 months after Tinsley made that statement to SearchEnterpriseWAN.com. I’m not a developer or engineer, but that seems like a short time to go through the whole process — from design to release. But Tinsley said that Silver Peak’s legacy of running on standard hardware meant that virtualizing the software was fairly straightforward.
“The reality is, for products like ours — that run on Linux operating systems and standard server componentry — it’s pretty simple to virtualize them,” he said. “It’s more about making sure you can deliver the same functionality and performance for a given model, and that’s why we limited the virtualization up to our 50-megabit [product], which frankly for us is on the lower end of performance…. It’s not like we had to write a new product or new code. It’s virtually the same code.”