VMware historically has had a good relationship with storage vendors, especially those focused on storing and protecting virtual machines. The annual VMworld conference draws more storage vendors than any storage-dedicated show. The VMware Partner Exchange (PEX) show has also been storage vendor-friendly, although less so this year.
As first reported by CRN, VMware asked Nutanix and Veeam Software to stay away from PEX next week. Those two vendors have had success with products that help organizations running VMware hypervisors. Nutanix sells hyper-converged systems that include storage, servers and VMware hypervisors in one box, and Veeam sells backup for virtual machines.
But VMware has also become more competitive with Nutanix and Veeam as it expanded its capabilities and products over the past couple of years. VMware’s Virtual SAN (vSAN), currently in beta, is a software-only version of what Nutanix does. Nutanix also competes with Vblocks, sold by the VCE joint venture of VMware, its parent EMC and Cisco. And VMware’s vSphere Data Protection (VDP) backup product, an OEM version of EMC’s Avamar software, directly competes with Veeam’s Backup & Replication.
Still, VMware isn’t telling all hyper-converged systems and VM backup vendors to stay away from PEX. SimpliVity, which also sells hyper-converged systems that bundle VMware hypervisors, will be at the show, and its CEO Doron Kempel will be a speaker. Unitrends, which acquired VM backup vendor PHD Virtual, will also be there.
Why Nutanix and Veeam? Maybe because of their success. They are among the fastest growing storage vendors, according to the numbers released by the private companies. Nutanix claims it has gone over $100 million in revenue in barely two years of selling products, and forecasts more than $80 million for 2014. Veeam claims its annual revenue passed $100 million in 2012, and that its software protects more than 5.5 million VMs. These vendors can also be seen as growing threats to EMC and VMware’s larger storage partners, including NetApp, Dell, Hewlett-Packard and IBM.
Although Veeam now protects Microsoft Hyper-V VMs too, much of its early success came from customer referrals from VMware. Doug Hazelman, Veeam vice president of product strategy, said he would not speculate on why Veeam is no longer welcome at PEX but said his company still considers VMware an ally.
“We still have a good relationship with VMware,” he said. “The vast majority of the more than 80,000 customers we have are running on VMware. In the software industry there is always overlap between vendors like VMware and Microsoft and their partners, and Veeam is no different.”
Hazelman wouldn’t say whether he thought being officially absent from PEX will hurt business, but he said Veeam will send a team to the show for meetings with VMware partners. “We have a strong and vocal customer base and partner base,” he said. “Just because we may not be on the show floor or at the partner exchange doesn’t mean we won’t be out there.”
Despite all the hype around Fibre Channel over Ethernet (FCoE) a few years ago, old-fashioned Fibre Channel (FC) remains the dominant SAN protocol.
A report released today by technology research firm Evaluator Group shows there is good reason for that. Evaluator Group testing found FC significantly faster than FCoE with far less CPU utilization. FC also required fewer cables and power than FCoE, according to the report.
Before we get into the numbers, I want to point out that FC-centric Brocade funded the testing. Brocade sells FCoE gear too, but has been more bullish on FC while its rival Cisco has been FCoE’s chief evangelist. That doesn’t mean the results were skewed – Evaluator Group senior partner Russ Fellows said his group conducted the tests at its labs without vendor interference – but Brocade may not have released the results if FC did not come out a clear winner.
Evaluator Group used a Hewlett-Packard BladeSystem c7000 chassis with 16 Gbps FC switching and HBAs on the FC side. For FCoE, Evaluator Group used a Cisco UCS 5108 blade chassis and 10-Gigabit Ethernet (GbE) switching. In both cases, the storage was a 16 Gbps FC solid-state array.
The difference in response times between FC and FCoE didn’t show up until workloads surpassed 70% SAN utilization. However, FC response times were two to 10 times faster than FCoE once workloads surpassed 80% SAN utilization. FC also used 20% to 30% less CPU power than FCoE, according to the report.
Speed and low latency aren’t FCoE selling points, so those results were no big surprise. Less cabling and power are supposed to be FCoE’s advantages, however, so it was a surprise that FC required 50% fewer cables for LAN and SAN connectivity. “This highlights and confirms the inaccuracy of the FCoE claims of fewer cables and connections,” the report states.
The tests also found the Cisco UCS required 50% more power and cooling than the HP blade with FC equipment.
The tests also determined that FC delivers more predictable performance than FCoE: FCoE’s standard deviation relative to its average response time was twice as great as FC’s at 50% utilization, and 10 times as great at 90% workload utilization.
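The predictability comparison in the report comes down to how large the spread of response times is relative to the average. A minimal sketch of that calculation (the latency samples below are invented for illustration; they are not figures from the Evaluator Group report):

```python
import statistics

# Invented response-time samples in milliseconds; illustrative only,
# not data from the Evaluator Group report.
fc_latency = [1.00, 1.05, 0.95, 1.10, 0.90]
fcoe_latency = [1.5, 4.0, 2.5, 9.0, 3.0]

def variability(samples):
    # Standard deviation relative to the mean: a lower ratio means
    # more predictable (less variable) performance.
    return statistics.stdev(samples) / statistics.mean(samples)

print(f"FC   variability: {variability(fc_latency):.2f}")
print(f"FCoE variability: {variability(fcoe_latency):.2f}")
```

With numbers like these, the FCoE ratio comes out many times larger than the FC ratio, which is the kind of gap the report describes at high utilization.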
“If you have a high-performing application and use solid state storage, Fibre Channel is the better way to go,” Fellows said. “There is less overhead and better performance. I was surprised that Fibre Channel looked as much better as it did. The cabling and power advantage was a bit of a surprise, too.”
Fellows added that CPU utilization was almost identical when using a hardware initiator for FCoE. The test results for the report used a software initiator because that is the standard configuration for UCS, but FCoE performed better in subsequent tests using hardware initiators.
FCoE adoption for storage has been slow, for several reasons. Fellows said that while FCoE performance is good enough for many workloads, he doesn’t expect it to supplant FC any time soon. “It will continue to roll out, but I don’t think adoption will be that strong,” he said. “I think FCoE will be similar to iSCSI – it will work, people will use it and it will expand, but iSCSI hasn’t taken over the world yet.”
Violin Memory today named Kevin DeNuccio as its CEO, and he must decide whether to try to turn around the struggling all-flash array vendor or sell it off.
DeNuccio replaces interim CEO Howard Bain, who held that position since Violin’s board fired Don Basile in December. Bain remains Violin’s chairman. Basile cut his remaining tie with Violin when he officially resigned from the board last Friday. DeNuccio, who also has the title of president, replaced Basile on the Violin board.
Basile left as CEO less than three months after Violin went public and watched its stock price plummet from its $9 initial public offering price to $2.68. It didn’t help Basile that Violin’s first earnings report as a public company was disappointing: the company lost $34.1 million and missed its revenue and guidance targets.
Clinton Group, a Violin investor, has been pushing the board to sell the company. Clinton Group president Gregory Taxin wrote a letter to the Violin board in December urging it to sell the company. Last week he told Bloomberg at the Activist Investor Conference that Violin has received informal inquiries from five suitors.
DeNuccio won’t be making any public statements for at least a few days according to a Violin spokesperson, who said the new CEO will be tied up in meetings with employees, partners, investors and customers.
Although he is a director for solid-state drive (SSD) vendor SanDisk, DeNuccio’s background goes far deeper in telecommunications than storage. He most recently managed angel investor Wild West Capital, which he founded in 2012. He also served as CEO of Metaswitch Networks from 2010 to 2012 and Redback Networks from 2001 to 2008. He took Redback through Chapter 11 bankruptcy before selling it to Ericsson. He also held executive positions with Bell Atlantic Network Integration, Cisco, Wang Laboratories and Unisys Corp.
Data protection appliance vendor Quorum enters 2014 coming off what it claims are record sales with expectations to grow more this year, a fresh $10 million funding round and a still-emerging cloud DR market in front of it. At the same time, the company is in transition as it searches for a new CEO to manage its expected expansion.
Walter Angerer, who works for one of Quorum’s venture capitalists and sits on its board, became interim CEO in November when the company announced its latest funding. He replaced Larry Lang, who served as CEO since 2010.
“As you go through growth cycles, you grow out of the startup phase and enter a growth phase,” Angerer said, explaining the reason for the CEO change. “It was appropriate for us to hand things over to new leadership to accelerate growth.”
Angerer said he hopes to find a new CEO soon, and he has reason to find somebody quickly. Besides his role as a venture partner with Quorum VC Toba Capital, Angerer is also founder and CEO of Parsec Labs, a NAS virtualization startup preparing to roll out its first products.
Quorum has already begun revamping its executive team with the addition of VP of marketing John Gallagher and VP of products Kemal Balioglu. Gallagher has storage experience with DataDirect Networks, LSI and EMC Isilon. Like Angerer, Balioglu spent time at Symantec.
Quorum bills its onQ appliances as “one-click” backup and recovery. Customers can use an appliance on-site and replicate to a second appliance at a DR site. Or, the second appliance can be hosted by a VAR or cloud provider.
Angerer said Quorum has close to 500 customers and it has doubled its revenue every year for the past three years. He predicts that Quorum will more than double revenue this year.
“There’s a lot more awareness and appreciation of the need for DR rather than just backup,” he said.
Gallagher said it will also be an active year for product releases. “Our engineering team is quite busy,” he said. “We’ll have core technology advances late in the first half of the year, and some more technologies that will expand our market in the second half.”
IBM’s sale of its x86 server business to Lenovo raises questions about IBM storage.
Lenovo did not buy any storage systems in the $2.3 billion transaction, but it will OEM and license IBM Storwize storage arrays, tape systems and General Parallel File System (GPFS) software as part of the deal.
I see much potential impact on the storage business from this deal. It is too soon to draw firm conclusions, but there are some areas worth considering:
- Storage sales are often server-led, meaning customers often buy new storage systems when they purchase servers. The x86 server line did have storage drag along with it. That’s part of the reason Lenovo will OEM IBM storage, and it may mitigate some of the disruption for customers. But will the sales drag continue for the Storwize systems? Lenovo is a low-cost provider and there will be pricing pressure. And Lenovo may want a more commodity-priced offering.
- Will the pricing pressure drive down the prices of storage overall and make the storage business less profitable? With less profit, there will be less motivation to invest in development and the competitive pressures may be too great.
- A possible upside is that Lenovo’s success as a low-cost provider will drive more volume and increase profit, allowing greater investment in the storage business by IBM.
- IBM’s service and support for enterprises has been a factor for customers buying IBM systems. Will that change after the acquisition? This could be a hidden impact that will take some time to surface.
- What about the IBM storage products not in the OEM deal? This is less of an issue because these systems are primarily sold with the server systems that IBM has held on to (Power and System z) or through independent storage sales. The Storwize systems should continue to sell through IBM and its resellers for independent storage sales and with the Power servers as well. But with Lenovo selling the Storwize and tape systems, will there be conflict with IBM? There may be engagement rules established to handle this situation, but we don’t know that yet.
- What does the big picture look like for IBM? IBM is moving away from low-margin devices and systems. The threshold for selling off a business is not clear, but the decision was probably made for good business reasons. The question comes down to whether the investment and resulting innovations in storage will produce enough added value to justify continued investment. IBM currently has an excellent portfolio of storage products and some great people in sales, support, the channel and among distributors. There is great potential to remain competitive and move forward with technology and solutions. That is what is expected of IBM.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
EMC’s earnings report Wednesday served as another reminder that storage spending is not growing at anywhere near the rate that data is. With more data spread around on mobile devices and the cloud, the storage model is changing and companies are not rushing out to buy more traditional storage arrays.
EMC’s $6.68 billion revenue last quarter was a big jump from the $5.54 billion of its disappointing third quarter. But EMC’s full-year revenue of $23.22 billion fell short of the vendor’s original target of $23.5 billion, and its 2014 forecast of $24.5 billion was below Wall Street expectations. EMC executives said industry-wide storage revenue growth continues to slow, and the company announced it will reduce its staff by about 1,000 to cut costs.
The slowdown is clearly more industry-related than specific to EMC. Worldwide storage sales have been down overall this year, and IBM said its storage revenue dropped 13% year-over-year during its earnings call earlier this month.
“We are disappointed we didn’t hit our original goal of $23.5 billion in revenue,” EMC CEO Joe Tucci said, “We are, however, proud that we did hit our $5.5 billion free cash flow goal, grew considerably faster in the markets we serve, and substantially faster than the overall IT market.”
Tucci added that the IT business is now going through “the biggest, most disruptive and yet most opportunistic transition in IT’s 60-plus year history.” However, he and EMC Information Infrastructure CEO Dave Goulden mentioned several times that CIOs are reluctant to buy in this market. “We recognize that CIOs right now are being very cautious in their spend,” Goulden said. “We are seeing a little bit of a pause in the market … We are really factoring what we are seeing in the market and the dilemma that the CIOs are facing into our thoughts about how the year might play out. So we are taking a conservative view of IT spending.”
Tucci added that EMC expects the IT market to grow only around 2% this year, and EMC’s guidance represents a 3% gain over 2013.
As for the layoff, Goulden called it a “rebalancing activity” to put EMC’s workforce more in line with the current technology and product landscape. EMC had a similar layoff last May. The company has about 60,000 employees.
“Last year when we did this, we actually wound up with about 2,000 more people at the end of the year when we started off this,” he said. “This year we expect to probably end the year flat or slightly up. Just think of it as rebalancing rather than restructuring.”
Syncsort Data Protection has an official name, three months after splitting off from the Syncsort data integration company.
The data protection vendor Monday said it is now called Catalogic Software, and adopted the slogan “Catalog, protect, manage” to describe its DPX data protection and EPX catalog management applications.
The data protection spinoff came when part of its management team and new investors acquired that business from Syncsort. Flavio Santoni, who was CEO of Syncsort, is the Catalogic CEO. The rest of the Catalogic management team consists of chief marketing officer John McArthur, CTO Walter Curti, VP of sales Mike Kuehn, senior director of customer support Ira Goodman, and senior director of business development and alliances Bob Sarubbi.
Their goal now is to keep Catalogic from becoming catatonic in a highly competitive data protection market.
The computer storage industry seems interesting to many on the outside. Fellow engineers I associate with in other disciplines often ask pointed questions when we get together. The most consistent questions are why there are so many storage startups, and why the big-name storage companies don’t innovate more so there would not be so many startups.
That is really a long discussion rather than a simple answer. The reason for startups is that they are the best vehicles for bright people with great ideas to bring their visions to reality. The fact that big vendors don’t innovate at a level that would eclipse startups is really an indictment of the organization and structure of those companies. I’ve worked at a number of these large companies, and I usually relate some examples I’ve experienced when we are in this discussion. It doesn’t take long for my friends to become somewhat disillusioned about the state of those companies.
The easiest thing to talk about is the set of characters who are impediments to bringing an innovative idea to fruition. I’ll name a few, and I’m sure anyone who has tried to achieve something inside a big company can add examples. Here are a few types:
• The Blockers. These people believe their position is to make all new ideas go through their process, and that nothing can advance unless they are satisfied that process has been met – to everyone’s satisfaction. Usually, they set up a series of gates that must be passed, which is really their way of forcing their process to be followed. Passing these gates, or even contemplating what it takes, is enough to drive anyone with a great idea out of the company.
• The Diffusers. These people typically don’t understand the idea or its potential value, and they hide that lack of knowledge by adding tangential points to a discussion. These additions dilute the good idea, misdirect the conversation, and give credence to other ideas that are not relevant. Diffusers usually know what they are doing and intentionally steer the discussion away from the knowledge they lack. Or, they are dangerously clueless.
• The Nitpickers. These near-OCD people want every detail covered, through sales and support, while the discussion is still at the concept stage. They do not understand how to bring forth a new idea with great potential value. They can cause tremendous delays and require a great deal of work that is really meaningless because it is being done before the cake is ready to be put into the oven. Nitpickers add little value and create more problems than they solve.
I also frequently raise the issue of how an established company has different requirements than a startup does for bringing a product to a customer. I only have to show a one-inch thick copy of the “Safety Guide” for installing a storage system from a large vendor to make my point.
These impediments make up what I call the “Department of Revenue Prevention,” and they drive many of the best and brightest to take the startup route with their ideas. The startup probably will not ultimately succeed, and the idea they worked so hard to bring to market may never pay off. Still, working twice as hard when there is a chance of success beats dealing with the institutionalized impediments most large companies put in place.
It is interesting that established companies did not start out that way; they built in these impediments as they grew, adding the processes and people that create the blockages. It is also unfortunate. But try to change it, and there is always someone standing in the way.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Kenneth Hui, open cloud architect at Rackspace, isn’t a big fan of the term software-defined storage — especially when discussing the cloud. He prefers “programmable storage” to describe capacity that is flexible enough to expand and contract based on traffic workloads and the resources they need.
“It’s gotten to the point where software-defined means anything to anybody,” said Hui, during an interview at the Virtualization Technology Users Group (VTUG) last week at Gillette Stadium in Foxboro, MA. “In OpenStack, we have goals to make storage programmable. It’s programmable in the way it’s consumed. I don’t want to go to the storage team to provision storage. It’s managed by a team of cloud administrators and requests are put in by the end users.”
Hui was part of a two-man speaker team (the other was Cody Bunch, a cloud/VMware expert and author who addressed the audience barefooted) that delivered a keynote at VTUG about cloud principles and bridging the gap between VMware and open-source OpenStack.
“You have to understand, OpenStack is not a virtualization tool,” Hui said. “It’s not a monolithic software project. It’s a collection of software projects. OpenStack puts things together to create a cloud platform. It’s a new management layer. It’s one orchestration tool where you spin up enterprise infrastructure from a single pane of glass.”
In the OpenStack cloud world, storage is part of the overall infrastructure, but there is no one-size-fits-all configuration for every application and traffic workload. OpenStack Swift is object storage for pure cloud applications that need to scale to petabytes of data. Wikipedia, which uses Swift, is an example of a cloud application that requires object storage.
There is also OpenStack Cinder for persistent, block-based storage for high-performance applications, while OpenStack Compute uses ephemeral storage, a data store created on the fly and deleted when it is no longer needed.
“In cloud, instances are mostly temporary,” Hui said. “You usually spin up an instance to fit a specific requirement and you adjust the resources to fit the workload. Right now, in the traditional data center the storage guy tunes the storage and then hands it to the compute guys.”
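The lifecycle distinction Hui draws can be sketched in a few lines of plain Python. This is a toy model, not OpenStack code, and all the class and field names are invented; the point is only that ephemeral storage lives and dies with its instance, while a Cinder-style persistent volume survives instance termination.

```python
# Toy model of ephemeral vs. persistent storage lifecycles.
# Not OpenStack code; names are illustrative.

class Instance:
    def __init__(self, name):
        self.name = name
        self.ephemeral = {"scratch": "tmp-data"}  # created with the instance
        self.volumes = []                         # attached persistent volumes

    def attach(self, volume):
        self.volumes.append(volume)

    def terminate(self):
        self.ephemeral.clear()          # ephemeral data disappears
        detached = list(self.volumes)   # persistent volumes are detached...
        self.volumes.clear()
        return detached                 # ...and survive for reattachment

volume = {"name": "db-vol", "size_gb": 10}
vm = Instance("web-1")
vm.attach(volume)
survivors = vm.terminate()
print(survivors)  # the persistent volume outlives the instance
```

In a real OpenStack cloud the same pattern plays out through Nova instances and Cinder volume attach/detach operations rather than Python objects.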
The bigger question is whether traditional, monolithic storage will co-exist with the cloud or become marginalized by it. Hui believes the former.
“It’s like when the open systems guys said, ‘This is the end of mainframes.’ Where are mainframes today?” Hui asked. “Mainframes generally make more revenue today than cloud. There will always be some workloads that will stay on mainframes, and it’s not trivial to move off legacy systems. It’s never going to be trivial. When I talk to storage administrators about cloud storage versus traditional storage, it’s not an either/or conversation. It’s what is the best use case.”
Brocade last week revealed it is getting out of the adapter business, and it has sold off those products to QLogic.
It’s easy to see why Brocade made this move. Despite Brocade’s position as the Fibre Channel switch market leader, its host bus adapter (HBA) and converged network adapter (CNA) products never caught on, barely making a dent in the market shares of QLogic and its main rival Emulex. Shedding that part of the business allows Brocade to focus on its main FC and Ethernet switching products.
But what’s in it for QLogic? The purchase price was low enough that the vendors did not have to disclose it, but why does QLogic need Brocade’s adapters? It already has competing products for every one of them.
There are two advantages for QLogic, according to its director of corporate marketing Tim Lustig. It will pick up about three points of HBA market share and about 12 points of CNA share by acquiring the Brocade products, plus the deal opens the way for better technical cooperation between the two vendors. This deal follows QLogic’s decision last July to stop development of its FC switching products that compete with Brocade.
“QLogic positions this as a strategic relationship,” Lustig said of the acquisition.
Lustig said QLogic will sell and support Brocade’s adapter products but will not upgrade any of those devices. QLogic will honor Brocade’s OEM deals with IBM, Hewlett-Packard and Dell, which often sell Brocade adapters as lower-cost alternatives to QLogic’s adapters.
“We’re not interested in the technology itself,” Lustig said. “We acquired only the current product lines, and we will be responsible for support of products already sold.”
QLogic will also integrate Brocade’s ClearLink diagnostics technology into its HBAs, following a similar announcement made by Brocade and Emulex last November. QLogic and Brocade have also agreed to align product plans and testing for Gen 5 (16 Gbps) and Gen 6 (32 Gbps) FC technology, and jointly market next-generation storage area networking (SAN) products.
Lustig said he expects 2014 to be the year when 16-gig FC picks up steam. He said QLogic still gets about 70 percent of its revenue from 8 Gbps FC devices and about 10 percent from 16 Gbps, with most of the rest from 4 Gbps. “The market is just starting to transfer over,” he said. “We think 2014 will be the year for 16-gig.”