Druva has scored $51 million in new private financing to diversify its cloud backup platform and accelerate global marketing and sales.
The vendor said part of the proceeds will be used to introduce new features in Druva software, including machine learning in 2017 for analyzing multiple data sets in the public cloud.
Prying capital from investors is challenging in the current climate, making Druva’s $51 million a considerable haul. The new money brings its total private capital raised to $118 million since Druva launched in 2008.
CEO Jaspreet Singh attributed the new investment to Druva’s continuing focus on the cloud to eliminate separate hardware and software for different use cases.
“The timing to raise money isn’t great right now, but we have a strong story to tell. We have a strong tier of public cloud behind us for collaboration, disaster recovery and business intelligence. Part of DR is backup and recovery and part of it is information management. We do both,” Singh said.
“People are looking at cloud storage as a means to retain data longer. Druva software is a born-in-the-cloud, cloud-native technology that doesn’t require you to buy any dedicated hardware or software, which is pretty attractive if you are a growing enterprise.”
Singh said machine learning will be added to Druva software in January to allow customers to extract greater value from idle cloud backups.
Druva sells two branded cloud backup products. inSync, its software for backing up enterprise endpoints, converges backup and data governance across physical and public cloud storage.
Druva Phoenix is a software agent that backs up and restores data sets in the cloud for distributed physical and virtual servers. Phoenix applies global deduplication at the source and points archived server backups at a cloud target.
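Source-side deduplication of this kind generally works by fingerprinting data chunks before transfer and shipping only previously unseen chunks to the target. The sketch below is a generic illustration of that idea, not Druva's implementation; the fixed chunk size and SHA-256 fingerprints are assumptions.

```python
import hashlib

def dedupe_chunks(data: bytes, seen: set, chunk_size: int = 4096):
    """Split data into fixed-size chunks and return only the chunks whose
    fingerprint has not been seen before (i.e., the ones that would
    actually be uploaded to the cloud target), updating the shared index."""
    new_chunks = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen:   # unseen content: must be transferred
            seen.add(digest)
            new_chunks.append(chunk)
    return new_chunks

seen = set()
first = dedupe_chunks(b"A" * 8192, seen)   # two identical chunks
second = dedupe_chunks(b"A" * 4096, seen)  # content already known
```

Because the fingerprint index is shared across calls, a second backup of unchanged data transfers nothing, which is what makes "global" dedupe attractive for repeated server backups.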
Druva in May added disaster recovery as a service (DRaaS) to Phoenix to continuously back up VMware image data to Amazon Web Services.
Druva’s software-based analytics works off a golden backup copy in the cloud. Users can search the single-instance storage and run multiple workflows off the same data.
Existing Druva investor Sequoia India headed a consortium that included new investors Singapore Economic Development Board, Blue Cloud Ventures and Hercules Capital. Other existing investors to participate included Nexus Venture Partners, NTT Finance and Tenaya Capital.
Dell EMC plans to “stay the course” with its flash storage portfolio despite overlapping products at the midrange and low end, an executive at the newly combined company confirmed.
Daniel Cobb, a fellow and vice president of media strategy at Dell EMC, said the company would continue to invest in all of its flash products. That includes support for emerging technologies such as nonvolatile memory express (NVMe), NVMe over Fabrics and 3D TLC NAND flash.
“You may not always see the newest technologies first in the lowest end platforms,” Cobb said. “That’s usually not the way it happens. But as things continue to go mainstream and suppliers get their volumes up and their costs down and under control, we’ll see the appropriate technologies end up across the whole portfolio.”
Cobb referred to Dell EMC’s DSSD rack-scale appliance as “the flagship in terms of performance and throughput” for real-time workloads. EMC’s original all-flash array platform, XtremIO, and all-flash VMAX target general-purpose enterprise workloads.
The greatest potential for all-flash overlap is in the midrange. The Dell EMC flash portfolio includes EMC’s new Unity-F and older VNX-F arrays. Dell holdovers include the SC Series, formerly known as Compellent, and PS Series, formerly EqualLogic.
“Our plans there are stay the course, keep those customers happy, keep them running on the media that they’re comfortable running with,” Cobb said. “Both platforms have already made the move to flash.”
Cobb said he expects VNX customers to “be delighted” with the new Unity product and ultimately move to it. But he said they can stay with VNX as long as they want, much as Compellent and EqualLogic customers will be able to do.
“As [former EMC CEO] Joe Tucci liked to say, ‘I’d rather have multiple products in a portfolio and risk managing the overlaps than leave some gaps.’ We’re pretty comfortable doing that now. We’ve been doing it for a while,” Cobb said.
He said EMC is able to continue to invest in so many flash product lines because it is accustomed to sharing investments such as flash management, deduplication and compression across multiple product lines.
Yet another all-flash product is on EMC’s roadmap. Project Nitro, an all-flash version of its Isilon scale-out NAS array, is due to be equipped with more cost-effective 3D TLC NAND flash to target file and object workloads. Cobb provided no updated timetable for Project Nitro.
EMC already held a commanding 40% market share for the second quarter of 2016 in the all-flash array (AFA) market, according to a report released by International Data Corp. (IDC) this month. NetApp (16%), Hewlett Packard Enterprise (13.8%), Pure Storage (11.5%) and IBM (8.7%) trailed by considerable margins.
Dell’s SC and PS Series arrays do not qualify for IDC’s AFA stats, because they’re only all-flash configurations of hybrid flash arrays, according to Eric Burgener, a storage research director at IDC. EMC products factoring into IDC’s second quarter statistics were XtremIO, VMAX All Flash, Unity-F and DSSD D5, Burgener said.
Maxta's MxSP is hyper-converged infrastructure (HCI) software designed to run on commonly used server hardware. Maxta said customers can download a free production license of the HCI software to implement a maximum three-node cluster with up to 24 TB of raw storage.
The “freemium” model gives enterprises a perpetual MxSP license that can be upgraded to a paid support contract with more storage. Customers choosing the no-cost download can get advice and self-help resources via an online community forum sponsored by Maxta.
Most hyper-converged vendors package their HCI software on branded appliances that consolidate computing resources, networking, storage and virtualization tools within a single piece of hardware. Maxta, on the other hand, licenses MxSP as a virtual storage appliance that pools storage on x86 servers. Maxta server resellers also prepackage MxSP on commodity storage servers as part of its MaxDeploy reference architecture.
MxSP is licensed on a per-server basis based on dual-socket or quad-socket servers. Maxta does not charge customers by processors, server class or by storage capacity. As is typical of hyper-converged vendors, Maxta deployments start at a minimum of three nodes. Maxta requires each server node to have at least one solid-state drive with 100 GB of available capacity, plus two 300-GB hard disk drives.
Maxta vice president of marketing Mitch Seigle said making the HCI software available as a free offering removes some barriers that discourage enterprises from hyper-converging resources.
“We are enabling them to stand up an HCI cluster using hardware they typically already have available in house, at no cost. They can evaluate and test it in their environment on their schedule, without the constraint of trial ‘time bombs’ or limited functionality,” Seigle said.
As an example, Seigle said an enterprise could stand up an MxSP cluster with the free version and subsequently take it directly into production by upgrading to a paid support contract and unlimited storage. Each node added to the existing cluster requires a paid license for MxSP HCI software, which includes 12 months of Maxta support.
Giving customers a free perpetual HCI software license is an attempt by Maxta to boost brand recognition, particularly as a way to emphasize how its HCI software differs from appliance-based products. Maxta did not disclose how many paying customers it has, but Seigle said “hundreds of users” have registered to download the free version since its launch Aug. 29.
Spectra Logic is extending its BlackPearl Deep Storage Gateway line upwards and downwards. The tape vendor has added an entry-level model and a much larger offering that more than triples the number of LTO drives the original product could manage.
Spectra BlackPearl appliances, originally introduced in 2013, offer a direct interface from any workflow into an active archive, providing access to nearline storage, tape and cloud. The new additions are the BlackPearl V Series and BlackPearl P Series.
The V Series can store 300 million objects and transfer data at up to 300 MBps sustained to disk or tape, while the P Series can store more than 1 billion objects, transfer up to 3 GBps sustained to disk or tape, and manage more than 20 LTO-7 tape drives.
Those new products join the previously available BlackPearl S Series, which can transfer data at up to 800 MBps to disk or tape.
The V Series could fit well in a small post-production house that works on a job-by-job basis, perhaps doing color correction, said Spectra CTO Matt Starr.
“They’re not retrieving hundreds of terabytes per day,” Starr said.
The P Series is for large organizations. A typical customer might have 20 or 30 Avid edit stations running every day and hit the archives more often. The P Series is up to four times faster than the original product.
The goal of Spectra BlackPearl was to open tape up to places where the medium was rarely used or usage has greatly expanded because of recent trends. Starr pointed to data from body cameras as an example – police departments may need that footage for a month if it is not required as evidence, but may need to retain it much longer if it is.
Starr doesn’t anticipate adding a fourth unit, as he said the three types of Spectra BlackPearl appliances cover the market well.
The V Series starts at $12,000, plus a yearly maintenance fee. The P Series list price is between $80,000 and $90,000, depending on the configuration. The original S Series starts at $33,000.
Research firm IDC reports factory revenues for worldwide purpose-built backup appliances (PBBA) experienced solid growth in the second quarter while enterprise storage systems revenue remained flat during that same period.
Factory revenues for worldwide purpose-built backup appliances (PBBA) grew 11.5% year over year in the second quarter to $871 million, according to IDC’s Worldwide Quarterly Purpose-Built Backup Appliance (PBBA) Tracker. Most of that growth came from open systems, which increased 12% to $788 million. The rest of the revenue came from backup systems for mainframes.
“After three consecutive quarters of year-over-year decline, mainframe systems revenue was up by 6.2 percent from a year ago,” according to the PBBA tracker report. “Total worldwide PBBA capacity shipped for Q2 2016 reached one exabyte, an increase of 35.3 percent from Q2 2015.”
EMC held on to its lead in the overall PBBA market, with $538 million and 62% revenue share. Veritas came in second at 13% with $116 million in revenue. IBM ranked third with $49 million (6%) and HPE fourth with $32 million (4%). Dell came in fifth with 3% market share and $25 million in revenue.
The overall picture was different for worldwide enterprise storage systems factory revenue, which posted zero year-over-year growth in the second quarter with about $9 billion in revenues, according to IDC’s Worldwide Quarterly Enterprise Storage Systems Tracker. External storage systems remain the largest market, but the $6 billion in sales represented flat year-over-year growth.
However, total capacity shipments were up by about 13% year over year to 35 exabytes. Sales of server-based storage also were up at 10% during the quarter and accounted for almost $2.4 billion in revenue.
EMC and HPE came in at a statistical tie for the total worldwide enterprise storage systems market, accounting for 18% and 17.6% market share, respectively. (IDC considers anything less than a percentage point apart a statistical tie.) EMC’s revenue dropped 5.5% to $1.599 billion, while HPE’s business grew almost 9% and generated $1.556 billion in revenue.
Dell came in third with 11% market share, increasing its business 14% year over year to generate about $1 billion in revenue. IBM came in fourth with 7% market share, with overall storage revenue down 16% year over year to about $600 million. NetApp came in fifth, showing a 3.2% decline in year-over-year overall storage revenue with $595 million in revenues.
“As a single group, storage system sales by original design manufacturers selling directly to hyperscale data center customers accounted for 9 percent of global spending during the quarter,” the report stated.
EMC was the largest external enterprise storage system supplier, accounting for 28% of worldwide revenues. All of EMC’s revenue came from external (networked) storage.
HPE and NetApp were in a statistical tie for second in market share. HPE had 10.6% share and $602 million in sales, while NetApp captured 10.5% of the worldwide market.
IBM’s revenue declined about 9% year over year, placing it fourth with $538 million, followed by Hitachi at $419 million and Dell at $395 million. Hitachi’s revenue grew 15% and Dell’s increased 10% year over year.
StorageCraft will use its new analytics technology to tell its customers to stop backing up certain data.
That’s right, the data protection vendor wants its customers to back up less. And StorageCraft’s acquisition of Gillware Online Backup from Gillware Data Services this week will help it do that.
StorageCraft’s flagship ShadowProtect SPX software backs up virtual and physical Microsoft Windows and Linux servers. It also sells Cloud Services replication software, GranularRecovery software for Exchange, and management and monitoring software.
The key piece of Gillware Online Backup is Backup Analyzer. The application can examine all of a customer’s files, flag those that have not been backed up and identify files that may not need to be backed up at all.
StorageCraft CEO Matt Medeiros said Backup Analyzer technology will optimize ShadowProtect backups, and he expects the Gillware development team to expand its current technology.
“Knowing what you should back up can be difficult,” Medeiros said. “Backup Analyzer helps customers determine high, medium and low priority for backups. Now we can help customers intelligently tier their data.
“The storage industry wants you to believe that all data is equal. It’s not. Some companies are finding that 50 percent of their data is not even of value to the business anymore. Yet we back it up, buy more storage for it, and pay people to manage it.”
Draper, Utah-based StorageCraft sells its software through managed service providers and VARs.
The Gillware Online Backup team consists of around 30 people, mainly engineers. The acquisition brings StorageCraft’s total headcount to a bit over 350, Medeiros said. The Gillware team will stay in Madison, Wisconsin.
Gillware Data Services already resells StorageCraft software, and that partnership will continue.
StorageCraft did not disclose the sale price, but part of the $187 million equity investment that TA Associates made in StorageCraft in January will fund the deal. Medeiros joined StorageCraft from Dell SonicWall at the time of the TA Associates funding.
Zetta unveiled the latest version of its disaster recovery as a service (DRaaS) offering, which promises data recovery in less than five minutes.
The company initially launched its DRaaS back in May 2015 to protect virtual and physical servers in the cloud. The Zetta cloud DRaaS service does not require a local appliance and allows managed service providers (MSPs) and companies to run a virtualized native network in the cloud.
Mike Grossman, Zetta’s CEO, said the latest version includes upgrades that customers have asked for since the service was initially introduced. The latest version also includes failback, automated disaster recovery testing that encompasses both backup confirmation and “bootable” servers in the cloud, and Active Directory integration.
“We really feel like we have learned what customers need,” said Grossman. “A lot of key pieces needed to be built. We’ve been focusing on all these things in the last year.”
Also, Zetta built a preconfiguration capability into its cloud storage, handling network, VPN and firewall configuration up front at the time of onboarding to the cloud. The offering is priced monthly, based on gigabytes of storage and the amount of RAM used.
“We are a cloud company and there are complications in network configurations,” said Grossman. “There are network configuration nuances that we have to deal with that application vendors do not. We found out that a lot of vendors don’t complete the set. Now, we are dealing with it upfront.”
Zetta’s cloud DRaaS, which targets MSPs and enterprise businesses, can protect at least 100 physical or virtual servers. The DR as a service offering has built-in WAN optimization that moves up to 5 TB of data in 24 hours for backup and recovery. It’s also optimized for data sets that are up to 500 TB.
Zetta cloud DRaaS supports multiple servers, applications, native networks and heterogeneous operating system platforms. It can boot physical and virtual systems in the cloud via a virtual private network or Remote Desktop Protocol connection. It can replicate native file systems and map a local drive in the cloud to recover individual files or entire server images.
Grossman said Zetta targets small-to-medium enterprises and partners, and offers a “no-data corruption” guarantee. The management portal allows for fast configuration, and a software agent provides efficient, resilient transport to and from the cloud. The service offers a standards-based, stateless architecture, along with the ability to manage multi-tenant storage.
Kaminario Clarity, the vendor's cloud-based analytics platform, will be available to any K2 customer. Kaminario is targeting the first quarter of 2017 for delivering Clarity.
Kaminario Clarity's features include quality of service (QoS) that lets customers set service levels for specific types of workloads. For instance, the K2 array could prioritize small reads and writes over large ones in a transactional database to match usage patterns.
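A size-aware prioritization policy of the kind described can be sketched with a simple priority queue. This toy model is purely illustrative, assuming only that smaller requests are served first; it is in no way Kaminario's implementation.

```python
import heapq

class SizePriorityScheduler:
    """Toy I/O scheduler: smaller requests dequeue before larger ones,
    approximating a QoS policy that favors small transactional I/O."""

    def __init__(self):
        self._queue = []
        self._counter = 0  # tie-breaker that preserves arrival order

    def submit(self, op: str, size_bytes: int):
        # Smaller sizes get smaller heap keys, so they are served first.
        heapq.heappush(self._queue, (size_bytes, self._counter, op))
        self._counter += 1

    def next_op(self):
        size, _, op = heapq.heappop(self._queue)
        return op, size

sched = SizePriorityScheduler()
sched.submit("large-read", 1_048_576)   # 1 MB sequential read
sched.submit("small-write", 4_096)      # 4 KB transactional write
sched.submit("small-read", 8_192)       # 8 KB transactional read
```

Here the two small transactional operations would be dispatched before the 1 MB read, even though the large read arrived first; a real array would combine this with deadlines so large I/O is never starved.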
Clarity will also include a new portal customers can use to see insights into K2 performance, as well as suggestions to improve performance and capacity.
Josh Epstein, Kaminario VP of global marketing, said a future step will be to automate the service levels for applications. Kaminario intends to add Clarity agents that will integrate into specific applications, such as Oracle and Microsoft SQL databases. The agents will provide more granular metrics for those applications.
“We’re gathering statistics about the K2 and the storage ecosystem – databases, servers, networking – and providing analytics, trends and insights from across our installed base,” he said. “The analytics tell customers how to configure and optimize their storage infrastructure.”
Kaminario Clarity continues the trend of vendors providing tools that collect data from storage arrays, upload it to the cloud and provide analytics reports for customers. Other cloud-based analytics offerings include Nimble Storage InfoSight, Pure Storage Pure1 Global Insight, EMC Unity CloudIQ, HPE StoreFront Remote, IBM Spectrum Control Storage Insights and Tintri Analytics. These tools are gaining popularity with newer array models, specifically those incorporating flash.
The Storage Networking Industry Association (SNIA) released Swordfish, a new specification that could ease the management of storage equipment and data services in converged, hyper-converged, hyperscale and cloud environments.
The SNIA Storage Management Initiative’s Swordfish 1.0 specification aims to simplify the provisioning, monitoring and management of block, file and object storage.
For instance, the Swordfish application programming interface (API) can associate different classes of service with storage gear of varying performance levels. An IT administrator would need only to specify the class of service to allocate storage to servers and virtual machines (VMs), rather than having to specify details on the most suitable storage array.
So far, the SNIA Swordfish specification offers extensive functionality only for block and file storage. Capabilities include the provisioning with class of service as well as replication and capacity and health metrics. Object storage support is on the Swordfish roadmap.
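To make the class-of-service idea concrete, the sketch below builds the body of a hypothetical volume-provisioning request in the general Redfish/Swordfish REST style. The URI path, property names and service-level names here are illustrative assumptions, not copied from the published Swordfish schema; the point is only that the administrator names a class of service rather than a specific array.

```python
import json

def provision_by_class(volume_name: str, capacity_bytes: int,
                       class_of_service: str) -> str:
    """Build the JSON body of a hypothetical POST asking a storage
    service for a volume that meets a named class of service."""
    body = {
        "Name": volume_name,
        "CapacityBytes": capacity_bytes,
        # The admin picks a service level ("Gold", "Silver", ...);
        # the service decides which backing storage satisfies it.
        # This path is an assumption modeled on Redfish-style URIs.
        "ClassOfService": {
            "@odata.id": "/redfish/v1/StorageServices/1/ClassesOfService/"
                         + class_of_service
        },
    }
    return json.dumps(body)

# Request a 500 GiB volume at an assumed "Gold" service level.
payload = provision_by_class("vm-datastore-01", 500 * 2**30, "Gold")
```

The design choice this models is the one Deel describes: the request captures *what* the administrator needs (capacity plus a service level), leaving the *which array* decision to the management service.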
“One of the reasons SNIA’s so interested in doing Swordfish as an extension of Redfish is that this is an industry play to wind up with a unified approach for server, storage and fabric management,” said Don Deel, chair of SNIA’s SMI governing board and senior standards technologist at NetApp.
Swordfish can work across a variety of storage network fabrics including Fibre Channel, Ethernet, SAS and PCI Express (PCIe), according to Deel.
Swordfish will eventually replace SNIA’s Storage Management Initiative Specification (SMI-S) and possibly overcome SMI-S limitations. Deel said SMI-S is an “equipment-oriented” standard that exposes what the storage gear can do. By contrast, Swordfish is a “customer-centric interface” that focuses on use cases for “what IT administrators need to do with storage in a data center on a day-to-day basis,” Deel said.
“SMI-S has a ton of functionality but it does not scale well. That is a key for plugging and playing into all of these new models,” said Richelle Ahlvers, chair of SNIA’s SSM Technical Work Group and principal storage management architect at Broadcom.
Ahlvers said the tech industry has been shifting to REST-based interfaces. SNIA partners wanted to see standards updated with a more modern interface that could play in all environments, including the emerging hyperscale and cloud scenarios. They also wanted storage management APIs that are simpler to implement and consume and accessible via a standard browser, she said.
“SMI-S and other standards, even on the server side, have been very complicated. It’s a high learning curve,” Ahlvers said.
SNIA’s Scalable Storage Management (SSM) Technical Work Group formed last October to scope out the Swordfish project and drew up a charter in December. Broadcom, Dell, EMC, Hewlett Packard Enterprise (HPE), Inova, Intel, Microsoft, NetApp, Nimble Storage, and VMware are among vendors that played key roles in developing Swordfish.
The SNIA Swordfish specification is publicly available for implementation. Ahlvers said anyone with a Redfish implementation could tack on Swordfish within a few months, but those starting from scratch would need to do more work. She expects to see products and early implementations start to show up in the middle of next year.
“The key here is really going to be the client drivers,” said Ahlvers, noting the work of Intel, Microsoft and VMware. “Between those three, that’s going to be helping to pull the vendors to add support for Swordfish.”
SNIA Swordfish team members and industry experts are presenting details on the new specification at this week’s Storage Developer Conference in Santa Clara, California.
Nutanix took the last step before completing its initial public offering today when it set the target price range for its offering.
Nutanix filed an S-1 registration form with the Securities and Exchange Commission detailing plans to sell 14 million shares of Class A stock for between $11 and $13 per share. The hyper-converged market leader seeks to raise $209 million through the IPO. A Nutanix IPO price of $13 would make the company worth $1.8 billion. That falls below its $2 billion valuation at the time of its last funding round in 2015.
Nutanix first filed to go public last December, but the Nutanix IPO was stalled by a slow IPO market. There have been only a handful of tech IPOs in 2016.
One of Nutanix’s founders and its original CTO, Mohit Aron, said Nutanix executives and its investors likely were scared off by the poor IPO market. He said the current IPO market is less forgiving of a company still losing money despite strong revenue growth. Aron holds 10.7 million shares of Nutanix common stock but no longer works for the vendor.
“Investors used to look at growth in past years,” Aron said. “This year, investor sentiment has turned and investors have started looking for profitability. Maybe Nutanix thought it would show a reduction in losses — which they’ve been showing — so investors would be more lenient towards looking at them.”
Aron said he expects Nutanix will do well in the long term. “Eventually, it’s about a technology that is ground-breaking, solves a real problem and customers are adopting it,” he said. “The technology makes sense. I see hyper-convergence getting adopted every day for primary and secondary storage. Markets go through temporary ups and downs. I think companies will do well when they have strong fundamentals.”
Aron calls Nutanix’s hyper-converged technology “my baby,” although he left in 2013 to start secondary data hyper-converged vendor Cohesity.
Nutanix investors will need patience if they want to see profit. In an SEC filing last week, Nutanix declared “we will continue to incur net losses for the foreseeable future.”
The company has lost a total of $442 million during its history, including losses of $84 million, $126 million and $169 million in the last three fiscal years. Nutanix lost $50 million last quarter after losing $49 million the previous quarter.
Those losses came despite impressive revenue growth. Its revenue increased 84% year over year to $445 million during the last fiscal year, which ended July 31. For the quarter that ended July 31, it reported $140 million in revenue — a 22% increase over the same quarter last year — and had $255 million of revenue in the first two quarters of this calendar year. Most of Nutanix’s expenses come from sales and marketing: $288 million of its $439 million in expenses last fiscal year, and $88 million of its $133 million in the last quarter.
The Nutanix IPO filing indicated no plans to decrease that spending. Nutanix claimed: “We intend to grow our base of 3,768 end-customers, which we believe represents a small portion of our potential end-customer base, by increasing our investment in sales and marketing, leveraging our network of channel partners and furthering our international expansion. One area of specific focus will be on expanding our position within the Global 2000, where we currently have approximately 310 end-customers.”
Aron said Nutanix needs to pursue a growth strategy if it is to hold off competitors such as Dell EMC, Hewlett Packard Enterprise and Cisco. That includes research and development as well as sales and marketing. Nutanix is expanding its technology to become a platform of choice for companies looking to build internal enterprise clouds.
“I think we all know no company can just rest on its laurels and milk a technology for a while,” Aron said. “Others catch up eventually. You have to keep innovating.
“I think if they want to become profitable, they can do it next year. If a company wants to, it can put a complete brake on growth, but what’s the point of profitability if you’re not growing? So there’s a healthy balance a company has to juggle.”