Backup appliance revenue increased for the second straight quarter, with Symantec making up more than one-third of the gains, according to the latest IDC numbers.
Backup appliance revenue increased 11.2 percent year-over-year to $789.2 million in the third quarter of 2014. That follows an 8.5 percent year-over-year increase in the second quarter of 2014 and a 2.5 percent decline in the first quarter.
The market that IDC classifies as purpose-built backup appliances includes disk targets that require separate backup software and appliances with backup software integrated. EMC’s Data Domain is an example of the first category; Symantec’s NetBackup appliances are an example of the integrated systems.
NetBackup appliances made solid gains last quarter, as Symantec revenue grew 37.9 percent year-over-year. Symantec remains a distant second behind EMC, but its market share ticked up from 10.4 percent last year to 12.9 percent. EMC revenue grew 7.2 percent but its share slipped from 64.8 percent to 62.5 percent. Symantec revenue grew from $74.1 million in the third quarter of 2013 to $102.2 million in the same quarter this year. That $28.1 million gain made up a good chunk of the $79.7 million increase in the overall market.
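The share figures can be sanity-checked against the revenue numbers cited above (a back-of-the-envelope calculation using only the document's own figures):

```python
# Rough check of the IDC share figures, derived from the revenue numbers above.
# Overall market: $789.2M in 3Q14, up $79.7M year-over-year.
market_q3_2014 = 789.2
market_q3_2013 = market_q3_2014 - 79.7   # ~= $709.5M

symantec_2013 = 74.1
symantec_2014 = 102.2

share_2013 = symantec_2013 / market_q3_2013 * 100   # ~= 10.4%
share_2014 = symantec_2014 / market_q3_2014 * 100   # ~= 12.9%
gain = symantec_2014 - symantec_2013                # ~= $28.1M of the $79.7M increase

print(round(share_2013, 1), round(share_2014, 1), round(gain, 1))
```

The $28.1 million gain works out to roughly 35 percent of the market's $79.7 million increase, consistent with the "more than one-third" claim.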
EMC and Symantec were the only vendors with more than 10 percent market share. IBM remained third with $50.6 million and dropped 14.7 percent year-over-year. No. 4 Hewlett-Packard took a 17 percent hit to $26.2 million. Quantum increased 28.9 percent to $20.5 million and grew revenue share from 2.2 percent to 2.6 percent, passing Barracuda Networks to move into fifth place.
The rest of the market – “others” in IDC’s list – grew 40.6 percent to $86.4 million, good for 12.2 percent market share.
IDC said capacity shipped hit 687,474 TB, up 81.5 percent from last year. Open systems revenue increased 12.9 percent while mainframe revenue dropped 1.1 percent.
Tintri, which has earned a good reputation for providing storage for VMware virtual machines, this week made its Microsoft Hyper-V support generally available on its VMstore systems. That paves the way for multiple-hypervisor users on VMstore.
Tintri also supports Red Hat Enterprise Virtualization (RHEV) hypervisors, and Tintri customers can use VMstore with different hypervisors for production data on the same box. Multi-hypervisor production use is still unusual, however. Saradhi Sreegiriraju, Tintri VP of product management, said Tintri customers overwhelmingly use VMware in production but like to kick the tires with Hyper-V and RHEV, particularly in test/dev.
“We have customers who want to explore other hypervisors,” he said. “Some like to use Hyper-V for non-production workloads and VMware for production – that’s what they’ve been using and they don’t want to upset that applecart.”
Sreegiriraju said customers who used Hyper-V in beta had the same use cases typical to Tintri’s VMware customers – a lot of VDI, for instance.
He also said the Hyper-V beta program brought Tintri into many shops that weren’t customers. “We’ve had a lot of prospects who want to deploy Hyper-V or who had asked us to come back when we have Hyper-V support,” he said.
Hyper-V support could provide a nice benefit for Tintri after VMware makes Virtual Volumes (VVOLs) available. VVOLs will enable traditional storage systems to natively store virtual machine disks (VMDKs), something Tintri has done from the start. But if VVOLs help competitors catch up with Tintri on VMware storage, Tintri can still expand into Hyper-V’s installed base.
Sreegiriraju said VVOLs won’t change Tintri’s value proposition because many legacy vendors who adopt it still have storage systems based on LUNs and volume management. Tintri will also support VVOLs. “VVOLs are nothing more than APIs that storage systems need to implement,” he said. “It doesn’t fundamentally change anything for those systems. We believe there will be a lot of challenges adopting old storage systems that worked on LUNs and volumes to now work with VVOLs.
“And VVOLs will be limited to VMware, and we now work on Hyper-V and Red Hat too.”
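Sreegiriraju's point — that VVOLs are essentially an API contract for per-VM storage objects that LUN-based systems must retrofit — can be illustrated with a simplified sketch. The interface below is a hypothetical illustration, not VMware's actual VASA/VVOL API:

```python
# Hypothetical in-memory model of a per-VM storage interface, illustrating
# the idea that VVOLs are "just APIs" an array must implement. Systems that
# already store VM disks as first-class objects implement these calls
# directly; LUN/volume-based systems must map them onto coarser constructs.
import itertools

class PerVMStore:
    def __init__(self):
        self._volumes = {}
        self._ids = itertools.count(1)

    def create_virtual_volume(self, vm_id: str, size_gb: int) -> str:
        """Provision a storage object for a single VM disk."""
        vol_id = f"vvol-{next(self._ids)}"
        self._volumes[vol_id] = {"vm": vm_id, "size_gb": size_gb}
        return vol_id

    def snapshot(self, vol_id: str) -> str:
        """Per-VM granularity: snapshot one disk, not a whole LUN of many VMs."""
        snap_id = f"{vol_id}-snap-{next(self._ids)}"
        self._volumes[snap_id] = dict(self._volumes[vol_id])
        return snap_id

store = PerVMStore()
vol = store.create_virtual_volume("vm-42", size_gb=100)
snap = store.snapshot(vol)
print(vol, snap)
```

The design tension Sreegiriraju describes is that an array built around LUNs cannot simply expose this interface; it must translate per-VM operations onto volume-level machinery that was never designed for that granularity.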
DataGravity picked up $50 million in funding today, two months after coming out of stealth with systems built for storage admins and data scientists.
The vendor – founded by EqualLogic veterans Paula Long and John Joseph – now has amassed $92 million over three funding rounds.
CEO Long said DataGravity will use the funding to market the Discovery Series arrays it began shipping in October, as well as to build out customer support. Discovery Series arrays combine hybrid flash storage with data analytics, e-discovery and data protection features.
DataGravity has a few more than 90 employees now, and Long said the current funding should take the company to around 300 before the next funding round.
On the product development front, she said DataGravity has a “big and rich roadmap that will build on four pillars of data-aware storage – storage visualization, data governance, data privacy and data protection.”
DataGravity is still known largely as the old EqualLogic crew inside the storage world, so the next logical step would be for it to make a name for itself based on its Discovery Series products. Dell acquired iSCSI SAN pioneer EqualLogic for $1.4 billion in 2008.
“EqualLogic’s focus was on storage automation so that storage could manage itself,” Long said. “DataGravity believes that too, but also in multi-protocol [iSCSI and file] storage and that you should know what’s in your data and benefit from it.
“We joke that at EqualLogic we used to make an A-plus storage admin. At DataGravity we tell that person to make room for a data scientist and data security person too.”
Accel Partners led the funding round, with previous investors Andreessen Horowitz and General Catalyst Partners contributing. Accel’s Ping Li joins the DataGravity board. He also serves on the boards of Cloudera, Code42, Nimble Storage and Primary Data.
External storage revenue increased less than one percent year-over-year last quarter, with none of the major vendors growing more than 3.5 percent. That’s better than the two previous quarters of year-over-year declines, but hardly suggests a recovery.
The networked storage revenue of $5.8 billion ticked up from $5.754 billion in the third quarter of 2013. EMC held its lead with $1.82 billion, up 3.5 percent from $1.76 billion the previous year. Its overall market share increased from 30.6 percent to 31.4 percent. No. 2 NetApp increased revenue 0.3 percent and held 12.9 percent market share, No. 4 Hewlett-Packard (HP) went from 9.5 percent share to 9.7 percent with 2.7 percent revenue growth, and No. 6 Dell grew revenue 6.2 percent and increased share from 7.2 percent to 7.3 percent.
No. 3 IBM and No. 5 Hitachi Data Systems were the big losers. IBM’s revenue of $591 million was a 9.9 percent drop from last year, and its market share fell from 11.1 percent to 10.2 percent. HDS revenue dropped 2.7 percent to $432 million, and its market share went from 8.3 percent to 7.4 percent.
The revenue lost by IBM and HDS went mostly to smaller vendors. The “others” category grew 6.2 percent to $1.23 billion and edged up from 20.1 percent to 21.2 percent market share.
External disk storage growth lagged behind overall disk storage, which includes server-based and direct attached storage. Total disk storage grew 5.1 percent to $8.75 billion. EMC also led overall disk storage revenue, followed by HP, Dell, IBM and NetApp. The overall disk storage market bounced back from the second quarter, when it fell 1.4 percent from last year.
IDC pointed out that server-based storage and smaller external arrays outperformed the market. Server-based storage revenue increased 10 percent and sub-$100,000 external array revenue grew more than six percent. IDC this quarter began tracking storage sold by original design manufacturers (ODMs) directly to hyperscale data centers. That ODM storage revenue grew 22 percent, accounted for 43 percent of storage capacity and drove much of the growth in the overall storage market.
IBM and HDS both rely largely on storage for large enterprises, which apparently dropped off in favor of server-based, entry level and ODM storage.
All storage systems have a controller, which is a device with a processor that sends instructions to the disks. Storage controllers differ from vendor to vendor, but generally fit into three types — custom designed, purpose-built, and commodity server-based.
Each type of implementation has advantages and drawbacks. Vendors will highlight the strengths of their particular approach; it is up to customers to evaluate which system best fits their needs.
These are the key characteristics for each type of storage controller:
Custom designed storage controllers have hardware that is specific for that storage system. Custom ASICs (Application Specific Integrated Circuits) or FPGAs (Field Programmable Gate Arrays) are common in custom controllers. They might also have custom logic or use standard components such as Intel processors. Storage software exploits the custom hardware.
Performance and reliability are the two major advantages of a custom controller design. Performance acceleration comes from using custom hardware for data movement, RAID protection, compression, encryption, or other processing-intensive operations. Reliability improvements come from built-in error checking and a reduction in the number of components required.
The disadvantages are that a custom design usually costs more to implement and takes the vendor longer to develop a new or updated storage controller.
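The performance benefit of hardware offload in a custom controller can be framed with a simple Amdahl's-law estimate. The numbers below are illustrative assumptions, not vendor measurements:

```python
def offload_speedup(offloaded_fraction: float, accel_factor: float) -> float:
    """Amdahl's-law estimate: overall speedup when a fraction of controller
    work (e.g. RAID parity, compression, encryption) moves to custom
    hardware that is accel_factor times faster than the CPU path."""
    return 1.0 / ((1.0 - offloaded_fraction) + offloaded_fraction / accel_factor)

# Hypothetical example: if 60% of controller CPU time goes to parity and
# compression, and an ASIC handles that work 10x faster, overall
# throughput improves by roughly 2.2x -- not 10x, because the
# non-offloaded 40% still limits the system.
print(round(offload_speedup(0.6, 10.0), 2))  # → 2.17
```

This is why offload helps most when a large fraction of the data path is amenable to acceleration, and why custom silicon alone does not guarantee proportionally faster arrays.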
A purpose-built storage controller uses commonly available elements such as processors and adapter boards integrated into a package. The storage software has an understanding of the specific hardware in a configuration.
The advantages of purpose-built storage controllers include:
• Serviceability is improved because of the ability to non-disruptively replace components.
• Scaling can be done non-disruptively by adding adapter cards.
• Reliability is improved with testing and control for components used in the controller.
• Technology is advanced by leveraging other companies’ research and development for the components used.
The disadvantage of purpose-built controllers is that generational changes in common hardware, such as processor technology, require engineering changes and additional testing of the newly integrated controller.
Some storage controllers use standard servers with the storage software as an application that runs on the server. The server can be a brand name or a white box server.
The main advantage of using a commodity server as a storage controller is cost. The high volume for the server makes it the least expensive of the implementations, and there are many sources for the servers.
The disadvantages are that the server may be less reliable than the other implementations, it may be difficult to provide non-disruptive serviceability for component replacement and upgrades/scaling, variations in server hardware may create support problems, and storage software may require regular updating.
I went into detail on the different architectures of storage systems in an Industry Insight article I wrote, which is available here.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Overland Storage and Sphere 3D completed their merger Tuesday, and while the combined company will be called Sphere 3D, it looks a lot more like the old Overland Storage.
Officially, Overland becomes a wholly owned subsidiary of Sphere 3D after Sphere 3D paid $81 million for Overland stock. The Overland, Tandberg Data (acquired by Overland last January), Sphere 3D and V3 Systems (acquired by Sphere 3D last January) brands remain.
Overland CEO Eric Kelly will run the new company as CEO and chairman (a position he held at Sphere 3D before the merger). Former Sphere 3D CEO Peter Tassiopoulos ranks below Kelly as vice chairman and president. Overland CFO Kurt Kalbfleisch will take that same job with Sphere 3D. The new Sphere 3D board includes former Overland directors Vic Mahadevan and Dan Bordessa, and Sphere 3D holdovers Peter Ashkin, Mario Biasini, and Glenn Bowman along with Kelly and Tassiopoulos.
Most of the products and revenue will also come from Overland because Sphere 3D didn’t have much of either before the merger. Sphere 3D reported third-quarter earnings Monday, claiming $1.6 million in revenue and $3.75 million in losses.
Overland last month reported $22.9 million in revenue for its most recent quarter, and lost $7.3 million. The new company’s goal is to turn Overland’s disk and tape storage products and Sphere 3D’s Glassware virtualization products into one profitable business instead of two money-losing companies. That will be a tough trick.
Kaminario announced a $53 million funding round today at the same time it followed through on a pledge to support data-at-rest encryption in its K2 all-flash arrays by year’s end.
The cash infusion was the largest to date for Kaminario and boosted the company’s capital total to $128 million. New investors Silicon Valley Bank, Lazarus Hedge Fund and a large unnamed public company joined existing investors Sequoia Capital, Pitango, Globespan, Tenaya and Mitsui in backing the Newton, Massachusetts-based startup, which also has offices in Israel, California and New York.
Kaminario CEO Dani Golan said the financing round was significantly oversubscribed, with demand at about $100 million, reflecting the interest that investors have in the hot all-flash array market. Golan claimed his company has been winning an unusually high percentage of deals among midrange enterprise customers, which he defined as having a run rate of $100 million to $5 billion.
“With our product K2, it’s clear that it’s a highly differentiated product and its differentiation is not going to go away any time soon,” Golan said.
Always known for its high-performance flash arrays, Kaminario shipped a major new fifth-generation release in May that added enterprise capacity-saving features such as thin provisioning, inline deduplication and inline compression. Golan touted the company’s public promise to deliver an average price of $2 per usable GB, with data reduction factored in. He said sales have quadrupled since the launch of K2 v5.
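The "$2 per usable GB with data reduction factored in" claim implies a simple relationship among system price, raw capacity, and data-reduction ratio. The figures below are assumed for illustration, not Kaminario's published pricing:

```python
def price_per_usable_gb(system_price: float, raw_gb: float,
                        reduction_ratio: float) -> float:
    """Effective $/GB once inline dedupe and compression multiply the
    usable capacity beyond the raw flash installed."""
    usable_gb = raw_gb * reduction_ratio
    return system_price / usable_gb

# Hypothetical example: a $200,000 array with 20 TB of raw flash and
# 5:1 data reduction lands at $2 per usable GB.
print(price_per_usable_gb(200_000, 20_000, 5.0))  # → 2.0
```

The same arithmetic shows why such pricing is quoted as an average: the effective $/GB depends directly on how reducible each customer's data actually is.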
Two key features missing from that K2 v5 release, and pledged for 2014, were replication and encryption of data at rest. Kaminario said the always-on, data-at-rest encryption can be added non-disruptively to a K2 array without downtime or data loss. Golan said replication is coming soon.
The financial horizon looks brighter for Violin Memory than it did a year ago.
The Santa Clara, California-based all-flash array (AFA) vendor reported last week that revenue grew for the second straight quarter, up 17% to $21.7 million, and bookings ramped up at more than twice the rate of revenue on a sequential basis. CEO Kevin DeNuccio said the company hopes to break even by the end of next year.
Violin faces heated competition in the increasingly crowded AFA market, but the company appears to have finally stemmed the bleeding. At the end of last year, Violin’s board of directors terminated former CEO Donald Basile after the company lost more than $90 million through the first three quarters of 2013, including a $34.1 million loss in its first quarterly earnings report after going public in September 2013.
DeNuccio replaced interim CEO Howard Bain in February 2014. Other new additions included Eric Herzog from EMC to head marketing and business development and Tom Mitchell from Avaya to shore up global field operations.
This year, Violin’s quarterly net loss fell from $30.1 million in its first fiscal quarter to $8.4 million in the quarter ended July 31, but that dramatic drop included a $17.4 million gain on the sale of the company’s PCIe product line. With the windfall removed, the fiscal Q2 net loss would have been higher than the $23.5 million loss for the most recent quarter, which ended Oct. 31.
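The effect of the one-time gain on the quarter-over-quarter comparison works out as follows, using the figures above:

```python
# Violin's fiscal Q2 loss looked smaller only because of a one-time gain.
q2_reported_loss = 8.4   # $M, quarter ended July 31
pcie_sale_gain = 17.4    # $M, one-time gain on the PCIe product line sale
q3_loss = 23.5           # $M, quarter ended Oct. 31

# Strip out the windfall to see the underlying Q2 loss.
q2_loss_ex_windfall = q2_reported_loss + pcie_sale_gain  # 25.8

# Underlying Q2 loss ($25.8M) exceeded the Q3 loss ($23.5M), so the
# quarter-over-quarter trend was in fact an improvement.
print(q2_loss_ex_windfall > q3_loss)  # → True
```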
Jim Handy, chief analyst at Objective Analysis, said expenses are down, assets are up, and Violin’s new management team has a good shot of making the financials match the strength of the company’s technology.
“The management focus right now is a whole lot better than what they had under Basile,” said Handy. “Basile was trying to grow revenues at pretty much all cost and doing it by taking unnecessary risk. These guys are doing things that are a whole lot better thought out. They said they’re trying both to grow revenue and to increase profitability at the same time, which is a tricky mix, but that says that they’ve got their heads screwed on right.”
DeNuccio said the market for Violin’s arrays is expanding from its niche as a “performance-centric solution” to general purpose workloads supporting a mix of applications. He said the company is making “tangible progress” as large enterprises and cloud data centers transition from disk-based storage to all-flash systems. More than a third of the revenue from Violin’s top 10 customers in the last quarter resulted from disk to flash migrations for primary storage, according to DeNuccio.
“As we increase our presence in the primary storage tier, the composition of revenue should become more predictable and consistent as large customers tend to expand primary storage capacity on a regular basis given the size of their ongoing needs,” said DeNuccio.
DeNuccio said Violin’s top five customers in fiscal Q3 revenue represented 48% of total revenue, but he explained away the risk by noting that customers have varied from quarter to quarter. He said only three customers have appeared more than once among the top five quarterly transactions over the last five quarters. Violin’s customer base includes more than 400 enterprise, cloud and global Fortune 500 companies, according to DeNuccio.
In the fiscal third quarter, Violin closed two deals above $1 million each for the company’s new Concerto enterprise data services, and software revenue rose to about 13% of product revenue. CFO Cory Sindelar said the company’s $2.2 million in software sales was more than double the prior quarter’s $1 million.
Henry Baltazar, a senior analyst at Forrester Research, said Violin still has some work to do to get closer to profitability, but he expects the company to increase revenue as flash systems become more acceptable in midrange and enterprise environments. He said Violin will need to continue to expand its market through partnerships such as its work with Microsoft to produce a Windows Flash Array.
Arun Taneja, founder and consulting analyst at Taneja Group, thinks the next two quarters will be particularly telling for Violin. “Their new products are all shipping now, and their new strategy has had some time to play out. If they show good growth for the next two quarters, they would, in my opinion, have climbed out of the morass,” Taneja said.
In a report issued last week, Sterne, Agee & Leach, an investment firm based in Birmingham, Alabama, advised that Violin will need to hit a quarterly run rate of $35 million in the second half of its fiscal 2016 year for the company’s stock to be considered for a longer term investment horizon. Violin provided guidance of $23 million to $25 million for the next quarter.
Samsung’s acquisition this month of Proximal Data marked the latest in a string of deals by solid-state drive (SSD) makers to beef up their product portfolios with server-based flash caching and other software extras.
The transaction list also includes SanDisk’s purchases of Fusion-io this year and FlashSoft and Schooner in 2012; Western Digital/HGST’s acquisitions of sTec, VeloBit and Virident in 2013; and Toshiba’s addition this year of SSD maker OCZ, which also brought cache software into the picture. Seagate bought LSI’s flash business from Avago this year, and the company could be ripe for another acquisition.
“Increasingly as we look at the market, the competitive field is offering complementary software capabilities to go with their SSDs,” said Bob Brennan, senior vice president of the Memory Solutions Lab at Samsung Semiconductor Inc. “And we saw that we needed to augment our capabilities in this space.”
Brennan said the Proximal Data purchase fell in line with the company’s overall strategy of facilitating SSD-based enterprise storage adoption. He said Samsung views flash as the future, and the company wants to grow aggressively as it transitions from hard disk to flash.
Samsung Electronics Co. Ltd. sold its hard-disk drive (HDD) business to Seagate Technology plc in 2011. The Proximal deal served as a complement to Samsung’s 2012 acquisition of NVELO for client-side SSD caching software. Proximal focused on enterprise server-side caching for virtualized environments with its AutoCache technology, Brennan noted.
“It’s all been an industry trend of drive makers buying into this technology because it will help them sell their flash drives,” said Tim Stammers, a senior analyst at New York-based 451 Research. “It’s just that extra 10% help. The flash drive market is going to commoditize as the controller technology matures, and you’re going to reach out for anything you can find that will help you.”
He said acquiring server-based caching was a natural and predictable move for SSD makers. They can promise customers that the software will make their flash drives even more useful to them. But, there has been little advantage for storage array makers trying to sell server-side flash caching software, according to Stammers.
Stammers said Dell has a strong product with its Fluid Cache, but other major array vendors have been pulling back from server-side caching investments. NetApp took its Flash Accel off the market and told him there wasn’t enough demand to justify the development costs. Stammers said EMC has made no major updates in over a year to its XtremCache server-based, write-through caching software, which was formerly known as VFCache. Hewlett-Packard’s SmartCache mainly targets high-performance computing, he noted.
“We believe quite strongly that it’s only going to be a niche market,” Stammers said. “There are some complications with the way that this type of software interacts with backend arrays, and it only suits applications that are cache friendly. If the caching mechanism works well when it predicts which data is hot, you are going to get low latency. If you get a cache miss, you are not going to get low latency, and the caching software will have done nothing for you.”
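Stammers' hit-or-miss point is the standard expected-latency calculation for a cache. The latencies below are illustrative assumptions, not measured figures:

```python
def expected_latency_us(hit_rate: float, flash_us: float, disk_us: float) -> float:
    """Average I/O latency with a server-side flash cache: hits are served
    from local flash, misses fall through to the backend array."""
    return hit_rate * flash_us + (1.0 - hit_rate) * disk_us

# Cache-friendly workload: 95% hit rate, 100us flash vs. 5ms backend.
print(round(expected_latency_us(0.95, 100, 5000)))  # → 345
# Cache-unfriendly workload: 40% hit rate erases most of the benefit.
print(round(expected_latency_us(0.40, 100, 5000)))  # → 3040
```

The average latency is dominated by the miss path, which is why caching software only pays off for workloads whose hot data is predictable.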
Niche appeal or not, Samsung became the latest drive maker to join the fray. Brennan said Samsung is not sure whether it will sell Proximal’s AutoCache as standalone software or bundle it with another product. He said the company will poll key customers to get their opinions on the technology.
“Over time, not right away, we would expect to provide a complete system solution optimization such that the Proximal software works better with Samsung SSDs,” Brennan said.
Jim Handy, chief analyst at Objective Analysis, said Fusion-io started the server-side flash trend with its success in PCIe SSDs and subsequent acquisition of ioTurbine’s optimization and caching software. He said everyone else is now saying, “Oh yeah, we need that, too.”
“Samsung is very keen on being No. 1 in every market that it participates in, and it’s a very long way from there in the enterprise SSD market right now,” said Handy, noting the company’s strength in consumer SSDs. “This was like putting on the turbo jets to get an important position sooner rather than later.”
Nimble Storage continued its pattern of revenue growth that outpaces the industry by a wide margin while running up more losses last quarter.
In its third quarter as a public company, Nimble reported revenue of $59 million for 77 percent year-over-year growth. It lost $9.7 million, more than the $8.1 million it dropped a year ago but less than the $10 million-plus losses of the two most recent quarters.
Nimble exceeded its forecast for revenue and lost less than it expected last quarter. Still, CEO Suresh Vasudevan said it will take another five quarters for Nimble to turn a profit.
Nimble’s total revenue remains minute compared to the likes of EMC, NetApp, IBM, Hewlett-Packard, Dell and Hitachi Data Systems (HDS). But 77 percent revenue growth is impressive in an industry where the biggest vendors are either declining or increasing a few percentage points over last year.
All-flash array vendor Violin Memory, which went public around the same time as Nimble, this week reported revenue of $21.7 million last quarter. That was down 23 percent year-over-year, and Violin lost $17.8 million.
Vasudevan said Nimble added 568 customers in the quarter and had a double-digit increase in average selling price.
He expects to continue to grow by driving repeat business in Nimble’s traditional SMB and small enterprise customer base while moving more into the Global 1,000 thanks to 2014 product additions. This year, Nimble has added a CS7000 enterprise array, all-flash expansion shelf and Fibre Channel connectivity. Nimble had been iSCSI only until adding FC support last week.
“Our architectural approach is broader and superior to that of major incumbents as well as emerging companies in our industry,” Vasudevan said. “Complementing the strength of our technology, we have demonstrated a strong track record for execution, which continued during the third quarter.”
“The only change I would call out is that EMC and NetApp together have continued to become the more dominant part of the mix,” he said. “It used to be Dell quite some time back, but EMC and NetApp have continued to increase [in number of competitive deals] and HP is also increasing at the expense of Dell.”
He said Nimble runs into hyper-converged vendor Nutanix and occasionally VMware VSAN, especially in deals involving the SmartStack reference architecture Nimble sells with partner Cisco. Vasudevan said while hyper-convergence is valuable in certain use cases “it tends to have a penalty. You are scaling compute and networking and storage together irrespective of what problem you are solving in an application, and that causes you to over provision the amount of hardware. We find typically that we are much more cost competitive when we are competing against the likes of Nutanix.”
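The over-provisioning penalty Vasudevan describes can be made concrete with a simple sizing sketch. The node specs and workload requirements below are hypothetical, purely for illustration:

```python
import math

def nodes_needed(storage_tb_req: float, cores_req: int,
                 node_storage_tb: float, node_cores: int) -> int:
    """Hyper-converged sizing: compute and storage ship in a fixed ratio
    per node, so the node count is driven by whichever resource runs
    out first -- the other resource gets over-provisioned."""
    return max(math.ceil(storage_tb_req / node_storage_tb),
               math.ceil(cores_req / node_cores))

# Storage-heavy workload: needs 200 TB but only 64 cores.
# Hypothetical node: 10 TB of storage and 16 cores each.
n = nodes_needed(200, 64, 10, 16)
print(n, "nodes ->", n * 16, "cores provisioned for a 64-core need")
```

In this sketch the storage requirement forces 20 nodes and therefore 320 cores, five times the compute the workload actually needs, which is the hardware penalty Vasudevan is pointing at when Nimble competes on cost.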