Avere Systems recently announced its latest hardware, the Cloud-Core NAS (C2N), a hybrid system integrated with object storage that can scale up to five petabytes.
The system comprises FXT 5000 nodes for NAS and CX200 nodes for object storage, which is based on OpenStack Swift software. A full system starts at a minimum configuration of three 1U CX200 storage nodes, for a total of 120 TB of usable capacity when using triple replication for data protection.
The other minimum configuration is six CX200 storage nodes for 480 TB of usable capacity when using erasure coding for data protection. The erasure coding offers N+4 availability, so four servers or four drives can be lost and the system will keep running. It also offers a geo-dispersal capability for disaster recovery across three sites. The CX200 nodes are loaded with 10 TB disk drives, and capacity can be expanded in 80 TB increments.
“It’s a scalable system that can go from three nodes all the way to 72 1U servers, which gets over 5 PB of capacity,” said Jeff Tabor, senior director of product management and marketing at Avere Systems. “It provides NAS simplicity but also provides the efficiency of the cloud, and it’s all integrated.
“The key part of the operating system is the data protection. One is erasure coding and the other is triple replication. Triple replication can be inefficient so the erasure coding gives both resiliency and efficiency,” Tabor said.
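Tabor's efficiency point can be checked against the capacity figures above. Assuming 12 of the 10 TB drives per CX200 node (120 TB raw per node) and a hypothetical 8-data/4-parity erasure-coding stripe (Avere specifies only N+4 availability, so the exact stripe width is our assumption), a quick sketch reproduces both minimum configurations:

```python
# Sketch: usable capacity under triple replication vs. N+4 erasure coding.
# Assumes 12 x 10 TB drives per CX200 node (120 TB raw per node) and a
# hypothetical 8+4 erasure-coding stripe; Avere states only N+4 availability.

def usable_triple_replication(nodes, raw_per_node_tb=120):
    """Three full copies of every object: 1/3 of raw capacity is usable."""
    return nodes * raw_per_node_tb / 3

def usable_erasure_coding(nodes, raw_per_node_tb=120, data=8, parity=4):
    """k data + m parity fragments: k/(k+m) of raw capacity is usable."""
    return nodes * raw_per_node_tb * data / (data + parity)

print(usable_triple_replication(3))   # 120.0 TB -- matches the 3-node minimum
print(usable_erasure_coding(6))       # 480.0 TB -- matches the 6-node minimum
```

Triple replication yields one third of raw capacity as usable, while the assumed 8+4 layout yields two thirds, which is why six erasure-coded nodes deliver four times the usable capacity of three replicated ones.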
The FXT compute performance tier for NAS, which supports NFS and SMB, is an all-flash configuration that scales to 480 TB using solid state drives. The system supports snapshots, data migration, mirroring, compression and encryption.
Tabor said Avere Systems is targeting customers who are dealing with large file data. The system integrates private and public object storage with an organization’s existing NAS infrastructure, allowing customers to create a hybrid cloud and manage an entire heterogeneous infrastructure as a single, logical pool of storage. The C2N is integrated with Avere Systems’ global namespace.
“Historically, you would store that on NAS but NAS has some challenges,” he said. “The trend is to move away from NAS and move to the cloud. But it’s difficult moving that data to the cloud. What C2N provides is a simple way to get into the cloud. This is a complete edge-to-core configuration supported by Avere. C2N has a built-in operating system, so it’s our cloud.”
Enterprise storage containers aren’t about to supplant virtual machines, but the trend line for Docker data center adoption is going up. Hurdles of persistent storage and enterprise data protection are being removed, allowing organizations to move from “monolithic applications” to containerized microservices, according to a recent industry webinar sponsored by the Storage Networking Industry Association (SNIA).
The Oct. 6 event was the first of two events planned as part of SNIA’s Cloud Storage Initiative. SNIA-CSI chairman Alex McDonald, part of NetApp’s Office of the CTO, moderated the session with panelists Keith Hudgins of Docker and Chad Thibodeau of Veritas Technologies.
Typical Docker data center use cases have mostly centered on application development and testing, but the panel said container storage is undergoing big changes.
“Micro-service architecture is designed to enable applications to be deployed extremely fast and make them much more portable to run on a variety of platforms. Containers really are optimized for speed of deployment, portability and efficiency,” Thibodeau, a principal product manager at backup vendor Veritas, told an audience of about 140 attendees.
He said companies often get started by launching containers inside virtual machines, “but ideally, containers are designed to (give you) the most advantage by running on bare metal.”
Containers are similar to virtual machines, yet also distinctly different. Whereas virtualization abstracts underlying hardware, Docker software virtualizes the operating system, eliminating the need to supply each virtual instance with a hypervisor and guest operating system. Multiple workloads share compute, operating system and storage resources, yet run in segregation on the same physical machinery.
According to Docker, data center downloads of its Linux-based software have topped five billion since its launch in 2013. It claims more than 650,000 registered users. Microsoft threw its support behind Docker containers as part of Windows Server 2016.
Sensing Docker’s growing importance, most major storage vendors now have tools to use their arrays as a persistent storage back end for Docker. Data center demand is ticking upward, albeit gradually. Financial services firms, for example, spawn persistent storage containers to authenticate end users.
Hudgins listed payroll-processing giant ADP and government IT contractor Booz Allen Hamilton among major firms using Docker in some fashion. Hudgins, the director of tech alliances at Docker, said ADP approached Docker to build nimble infrastructure for application microservices, using private and public cloud storage.
“ADP wanted a fast, easy way to change their payroll processing as needed. They deployed Docker Data Center internally to run all their data processing in a micro-services-based way… using Docker Data Center on both an internal OpenStack private cloud and public components running in Amazon for people to check their pay stubs. (ADP’s) entire system is now running on Docker Data Center,” Hudgins said.
Docker is a common service platform that Booz Allen uses to host customized applications for its government clients at the federal General Services Administration. Hudgins said Booz Allen wanted to migrate from “monolithic applications toward a smaller component-ized structure,” running a commercial version of Docker hosted in Amazon Web Services.
“They greatly reduced their time to market for (customer) applications… and also reduced the surface attack area and improved security,” Hudgins said.
SNIA said a Dec. 7 webinar will highlight best practices on Docker data management.
Druva has scored $51 million in new private financing to diversify its cloud backup platform and accelerate global marketing and sales.
The vendor said part of the proceeds will be used to introduce new features in Druva software, including machine learning in 2017 for analyzing multiple data sets in the public cloud.
Prying capital from investors is challenging in the current climate, making Druva’s $51 million a considerable haul. The new money brings its total private capital raised to $118 million since Druva launched in 2008.
CEO Jaspreet Singh attributed the new investment to Druva’s continuing focus on the cloud to eliminate separate hardware and software for different use cases.
“The timing to raise money isn’t great right now, but we have a strong story to tell. We have a strong tier of public cloud behind us for collaboration, disaster recovery and business intelligence. Part of DR is backup and recovery and part of it is information management. We do both,” Singh said.
“People are looking at cloud storage as a means to retain data longer. Druva software is a born-in-the-cloud, cloud-native technology that doesn’t require you to buy any dedicated hardware or software, which is pretty attractive if you are a growing enterprise.”
Singh said machine learning will be added to Druva software in January to allow customers to extract greater value from idle cloud backups.
Druva sells two branded cloud backup products. Druva’s software for backing up enterprise endpoints is called inSync, which converges backup and data governance across physical and public cloud storage.
Druva Phoenix is a software agent to back up and restore data sets in the cloud for distributed physical and virtual servers. Phoenix applies global deduplication at the source level and points archived server backups at a cloud target.
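Druva has not published Phoenix's internals, but source-level global deduplication generally works by fingerprinting chunks of data before transmission and sending only chunks the cloud target has never seen. A minimal sketch of the idea, with an illustrative chunk size and index structure that are not Druva's:

```python
import hashlib

# Minimal sketch of source-side deduplication: chunk the data, fingerprint
# each chunk, and transfer only fingerprints the backend has never stored.
# Chunk size and index structure are illustrative, not Druva's design.

CHUNK_SIZE = 4096
global_index = set()   # stands in for the cloud-side fingerprint index

def backup(data: bytes) -> int:
    """Back up `data`, returning the number of bytes actually transferred."""
    sent = 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in global_index:      # new chunk: ship it to the cloud
            global_index.add(fp)
            sent += len(chunk)
    return sent

first = backup(b"A" * 4096 + b"B" * 4096)   # all chunks are new
second = backup(b"A" * 4096 + b"B" * 4096)  # identical data: nothing resent
print(first, second)                        # 8192 0
```

Because identical chunks hash to the same fingerprint regardless of which server produced them, repeated backups of unchanged data transfer almost nothing over the wire.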
Druva in May added disaster recovery as a service (DRaaS) to Phoenix to continuously back up VMware image data to Amazon Web Services.
Druva’s software-based analytics works off a golden backup copy in the cloud. Users can search the single-instance storage and run multiple workflows off the same data.
Existing Druva investor Sequoia India headed a consortium that included new investors Singapore Economic Development Board, Blue Cloud Ventures and Hercules Capital. Other existing investors that participated included Nexus Venture Partners, NTT Finance and Tenaya Capital.
Dell EMC plans to “stay the course” with its flash storage portfolio despite overlapping products at the midrange and low end, an executive at the newly combined company confirmed.
Daniel Cobb, a fellow and vice president of media strategy at Dell EMC, said the company would continue to invest in all of its flash products. That includes support for emerging technologies such as nonvolatile memory express (NVMe), NVMe over Fabrics and 3D TLC NAND flash.
“You may not always see the newest technologies first in the lowest end platforms,” Cobb said. “That’s usually not the way it happens. But as things continue to go mainstream and suppliers get their volumes up and their costs down and under control, we’ll see the appropriate technologies end up across the whole portfolio.”
Cobb referred to Dell EMC’s DSSD rack-scale appliance as “the flagship in terms of performance and throughput” for real-time workloads. EMC’s original all-flash array platform, XtremIO, and all-flash VMAX target general-purpose enterprise workloads.
The greatest potential for all-flash overlap is in the midrange. The Dell EMC flash portfolio includes EMC’s new Unity-F and older VNX-F arrays. Dell holdovers include the SC Series, formerly known as Compellent, and PS Series, formerly EqualLogic.
“Our plans there are stay the course, keep those customers happy, keep them running on the media that they’re comfortable running with,” Cobb said. “Both platforms have already made the move to flash.”
Cobb said he expects VNX customers to “be delighted” with the new Unity product and ultimately move to it. But he said they can stay with VNX as long as they want, much as Compellent and EqualLogic customers will be able to do.
“As [former EMC CEO] Joe Tucci liked to say, ‘I’d rather have multiple products in a portfolio and risk managing the overlaps than leave some gaps.’ We’re pretty comfortable doing that now. We’ve been doing it for a while,” Cobb said.
He said EMC is able to continue to invest in so many flash product lines because it is accustomed to sharing investments such as flash management, deduplication and compression across multiple product lines.
Yet another all-flash product is on EMC’s roadmap. Project Nitro, an all-flash version of its Isilon scale-out NAS array, is due to be equipped with more cost-effective 3D TLC NAND flash to target file and object workloads. Cobb provided no updated timetable for Project Nitro.
EMC already held a commanding 40% market share for the second quarter of 2016 in the all-flash array (AFA) market, according to a report released by International Data Corp. (IDC) this month. NetApp (16%), Hewlett Packard Enterprise (13.8%), Pure Storage (11.5%) and IBM (8.7%) trailed by considerable margins.
Dell’s SC and PS Series arrays do not qualify for IDC’s AFA stats, because they’re only all-flash configurations of hybrid flash arrays, according to Eric Burgener, a storage research director at IDC. EMC products factoring into IDC’s second quarter statistics were XtremIO, VMAX All Flash, Unity-F and DSSD D5, Burgener said.
MxSP is hyper-converged infrastructure (HCI) software designed to run on commonly used server hardware. Maxta said customers can download a free production license of its HCI software to implement a maximum three-node cluster with up to 24 TB of raw storage.
The “freemium” model gives enterprises a perpetual MxSP license that can be upgraded to a paid support contract with more storage. Customers choosing the no-cost download can get advice and self-help resources via an online community forum sponsored by Maxta.
Most hyper-converged vendors package their HCI software on branded appliances that consolidate computing resources, networking, storage and virtualization tools within a single piece of hardware. Maxta, on the other hand, licenses MxSP as a virtual storage appliance that pools storage on x86 servers. Maxta server resellers also prepackage MxSP on commodity storage servers as part of its MaxDeploy reference architecture.
MxSP is licensed on a per-server basis based on dual-socket or quad-socket servers. Maxta does not charge customers by processors, server class or by storage capacity. As is typical of hyper-converged vendors, Maxta deployments start at a minimum of three nodes. Maxta requires each server node to have at least one solid-state drive with 100 GB of available capacity, plus two 300-GB hard disk drives.
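A deployment-readiness check against those stated minimums could be sketched as follows (the data layout and field names are illustrative, not Maxta's API):

```python
# Sketch: validate a proposed MxSP cluster against the minimums Maxta states:
# at least three nodes, each with one SSD of 100 GB or more of available
# capacity plus two 300 GB hard drives. Field names are illustrative.

MIN_NODES, MIN_SSD_GB, MIN_HDDS, MIN_HDD_GB = 3, 100, 2, 300

def node_ok(node):
    ssds = [d for d in node["drives"] if d["type"] == "ssd"]
    hdds = [d for d in node["drives"] if d["type"] == "hdd"]
    return (any(d["gb"] >= MIN_SSD_GB for d in ssds)
            and len([d for d in hdds if d["gb"] >= MIN_HDD_GB]) >= MIN_HDDS)

def cluster_ok(nodes):
    return len(nodes) >= MIN_NODES and all(node_ok(n) for n in nodes)

node = {"drives": [{"type": "ssd", "gb": 200},
                   {"type": "hdd", "gb": 300},
                   {"type": "hdd", "gb": 300}]}
print(cluster_ok([node] * 3))   # True
print(cluster_ok([node] * 2))   # False -- below the three-node minimum
```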
Maxta vice president of marketing Mitch Seigle said making the HCI software available as a free offering removes some barriers that discourage enterprises from hyper-converging resources.
“We are enabling them to stand up an HCI cluster using hardware they typically already have available in house, at no cost. They can evaluate and test it in their environment on their schedule, without the constraint of trial ‘time bombs’ or limited functionality,” Seigle said.
As an example, Seigle said an enterprise could test an MxSP test cluster with the free version, and subsequently take it directly into production by upgrading to a paid support contract and unlimited storage. Each node added to the existing cluster requires a paid license for MxSP HCI software, which includes 12 months of Maxta support.
Giving customers a free perpetual HCI software license is an attempt by Maxta to boost brand recognition, particularly as a way to emphasize how its HCI software differs from appliance-based products. Maxta did not disclose how many paying customers it has, but Seigle said “hundreds of users” have registered to download the free version since its launch Aug. 29.
Spectra Logic is extending its BlackPearl Deep Storage Gateway line upwards and downwards. The tape vendor has added an entry-level model and a much larger offering that more than triples the number of LTO drives the original product could manage.
Spectra BlackPearl appliances, originally introduced in 2013, offer a direct interface from any workflow into an active archive, providing access to nearline storage, tape and cloud. The new additions are the BlackPearl V Series and BlackPearl P Series.
The V Series can store 300 million objects and transfer data at up to 300 MBps sustained to disk or tape, while the P Series can store more than 1 billion objects, transfer up to 3 GBps sustained to disk or tape, and manage more than 20 LTO-7 tape drives.
Those new products join the previously available BlackPearl S Series, which can transfer data at up to 800 MBps to disk or tape.
The V Series could fit well in a small post-production house that works on a job-by-job basis, perhaps doing color correction, said Spectra CTO Matt Starr.
“They’re not retrieving hundreds of terabytes per day,” Starr said.
The P Series is for large organizations. A typical customer might have 20 or 30 Avid edit stations running every day and hit the archives more often. The P Series is up to four times faster than the original product.
The goal of Spectra BlackPearl was to open tape up to places where the medium was rarely used or usage has greatly expanded because of recent trends. Starr pointed to data from body cameras as an example – police departments may need that footage for a month if it is not required as evidence, but may need to retain it much longer if it is.
Starr doesn’t anticipate adding a fourth unit, as he said the three types of Spectra BlackPearl appliances cover the market well.
The V Series starts at $12,000, plus a yearly maintenance fee. The P Series list price is between $80,000 and $90,000, depending on the configuration. The original S Series starts at $33,000.
Research firm IDC reports factory revenues for worldwide purpose-built backup appliances (PBBA) experienced solid growth in the second quarter while enterprise storage systems revenue remained flat during that same period.
Worldwide PBBA factory revenue grew 11.5% year over year in the second quarter to $871 million, according to IDC’s Worldwide Quarterly Purpose-Built Backup Appliance (PBBA) Tracker. Most of that growth came from open systems, which increased 12% to $788 million. The rest of the revenue came from backup systems for mainframes.
“After three consecutive quarters of year-over-year decline, mainframe systems revenue was up by 6.2 percent from a year ago,” according to the PBBA tracker report. “Total worldwide PBBA capacity shipped for Q2 2016 reached one exabyte, an increase of 35.3 percent from Q2 2015.”
EMC held on to its lead in the overall PBBA market, with $538 million and 62% revenue share. Veritas came in second at 13% with $116 million in revenue. IBM ranked third with $49 million (6%) and HPE fourth with $32 million (4%). Dell came in fifth with 3% market share and $25 million in revenue.
The overall picture was different for worldwide enterprise storage systems factory revenue, which posted zero year-over-year growth in the second quarter with about $9 billion in revenue, according to IDC’s Worldwide Quarterly Enterprise Storage Systems Tracker. External storage systems is still the largest market, but the $6 billion in sales represented flat year-over-year growth.
However, total capacity shipments were up about 13% year over year to 35 exabytes. Sales of server-based storage were also up 10% during the quarter and accounted for almost $2.4 billion in revenue.
EMC and HPE came in at a statistical tie for the total worldwide enterprise storage systems market, accounting for 18% and 17.6% market share, respectively. (IDC considers anything less than a percentage point apart a statistical tie.) EMC’s revenue dropped 5.5 percent to $1.599 billion, while HPE’s business grew almost 9 percent and generated $1.556 billion in revenue.
Dell came in third with 11% market share and increased its business by 14% year over year, generating about $1 billion in revenue, while IBM had 7 percent market share and lost 16 percent in year-over-year revenue, generating $600 million in overall storage revenue. NetApp followed, showing a 3.2 percent year-over-year decline in overall storage revenue with $595 million in revenue.
“As a single group, storage system sales by original design manufacturers selling directly to hyperscale data center customers accounted for 9 percent of global spending during the quarter,” the report stated.
EMC was the largest external enterprise storage system supplier, accounting for 28% of worldwide revenues. All of EMC’s revenue came from external (networked) storage.
HPE and NetApp were statistically tied for second in market share. HPE had 10.6% share and $602 million in sales, while NetApp captured 10.5% of the worldwide market with just under $600 million in sales.
IBM’s revenue declined about nine percent year over year, and it stood fourth with $538 million, followed by Hitachi at $419 million and Dell at $395 million. Hitachi’s revenue grew 15% and Dell’s increased 10% year over year.
StorageCraft will use its new analytics technology to tell its customers to stop backing up certain data.
That’s right, the data protection vendor wants its customers to back up less. And StorageCraft’s acquisition of Gillware Online Backup from Gillware Data Services this week will help it do that.
StorageCraft’s flagship ShadowProtect SPX software backs up virtual and physical Microsoft Windows and Linux servers. It also sells Cloud Services replication software, GranularRecovery software for Exchange, and management and monitoring software.
The key piece of Gillware Online Backup is Backup Analyzer. The application can examine all of a customer’s files, flag those that have not been backed up and identify files that may not need to be backed up.
StorageCraft CEO Matt Medeiros said Backup Analyzer technology will optimize ShadowProtect backups, and he expects the Gillware development team to expand its current technology.
“Knowing what you should back up can be difficult,” Medeiros said. “Backup Analyzer helps customers determine high, medium and low priority for backups. Now we can help customers intelligently tier their data.
“The storage industry wants you to believe that all data is equal. It’s not. Some companies are finding that 50 percent of their data is not even of value to the business anymore. Yet we back it up, buy more storage for it, and pay people to manage it.”
Draper, Utah-based StorageCraft sells its software through managed service providers and VARs.
The Gillware Online Backup team consists of around 30 people, mainly engineers. The acquisition brings StorageCraft’s total headcount to a bit over 350, Medeiros said. The Gillware team will stay in Madison, Wisconsin.
Gillware Data Services already resells StorageCraft software, and that partnership will continue.
StorageCraft did not disclose the sale price, but part of the $187 million equity investment that TA Associates made in StorageCraft in January will fund the deal. Medeiros joined StorageCraft from Dell SonicWall at the time of the TA Associates funding.
Zetta unveiled the latest version of its disaster recovery as a service (DRaaS), which promises data recovery in less than five minutes.
The company initially launched its DRaaS back in May 2015 to protect virtual and physical servers in the cloud. The Zetta cloud DRaaS service does not require a local appliance and allows managed service providers (MSPs) and companies to run a virtualized native network in the cloud.
Mike Grossman, Zetta’s CEO, said the latest version includes upgrades that customers have asked for since the service was initially introduced. The latest version also includes failback, automated disaster recovery testing that encompasses both backup confirmation and “bootable” servers in the cloud, and Active Directory integration.
“We really feel like we have learned what customers need,” said Grossman. “A lot of key pieces needed to be built. We’ve been focusing on all these things in the last year.”
Zetta has also built a preconfiguration capability into its cloud storage, handling network, VPN and firewall configuration up front at the time of onboarding to the cloud. The offering is priced on a monthly, per-gigabyte storage cost plus the amount of RAM used.
“We are a cloud company and there are complications in network configurations,” said Grossman. “There are network configuration nuances that we have to deal with that application vendors do not. We found out that a lot of vendors don’t complete the set. Now, we are dealing with it upfront.”
Zetta’s cloud DRaaS, which targets MSPs and enterprise businesses, can protect at least 100 physical or virtual servers. The DR as a service offering has built-in WAN optimization that moves up to 5 TB of data in 24 hours for backup and recovery. It’s also optimized for data sets that are up to 500 TB.
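The "5 TB in 24 hours" figure implies a particular sustained transfer rate, which is easy to work out (we assume decimal units, 1 TB = 10^12 bytes):

```python
# Back-of-the-envelope check of the WAN-optimization claim: moving 5 TB of
# data in 24 hours. Decimal units (1 TB = 10**12 bytes) are assumed.

tb = 5
seconds = 24 * 3600
rate_mbytes = tb * 10**12 / seconds / 10**6   # sustained MB/s
rate_mbits = rate_mbytes * 8                  # equivalent Mbit/s

print(round(rate_mbytes, 1))   # ~57.9 MB/s sustained
print(round(rate_mbits))       # ~463 Mbit/s of usable WAN bandwidth
```

In other words, hitting the stated window requires keeping roughly half a gigabit per second of effective WAN throughput busy for the full day, which is why deduplication and WAN optimization matter at these data-set sizes.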
Zetta cloud DRaaS supports multiple servers, applications, native networks and heterogeneous operating system platforms. It can boot physical and virtual systems in the cloud via a virtual private network or Remote Desktop Protocol connection. It can replicate native file systems and map a local drive in the cloud to recover individual files or entire server images.
Grossman said Zetta targets small-to-medium enterprises and partners, and offers a “no-data corruption” guarantee. The management portal allows for fast configuration, and a software agent provides efficient and resilient transport to and from the cloud. The service offers a standards-based, stateless architecture, along with the ability to manage multi-tenant storage.
Kaminario Clarity will be available to any K2 customer. Kaminario is targeting the first quarter of 2017 for delivering Clarity.
Kaminario Clarity features include quality of service (QoS) controls that let customers set service levels for specific types of workloads. For instance, the K2 array could prioritize small reads and writes over large ones in a transactional database to match usage patterns.
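Kaminario has not published how that prioritization is implemented, but a policy that favors small transactional I/O over large sequential transfers can be sketched as a size-ordered queue (illustrative only, not Kaminario's design):

```python
import heapq

# Sketch of size-aware I/O prioritization: smaller requests are served
# first, approximating a QoS policy that favors small transactional reads
# and writes over large sequential ones. Illustrative only.

class IoScheduler:
    def __init__(self):
        self._q = []
        self._seq = 0            # tie-breaker keeps equal-size I/O in order

    def submit(self, size_bytes, op):
        heapq.heappush(self._q, (size_bytes, self._seq, op))
        self._seq += 1

    def next_io(self):
        return heapq.heappop(self._q)[2]

sched = IoScheduler()
sched.submit(1_048_576, "large-seq-read")   # 1 MB sequential read
sched.submit(4096, "oltp-write")            # 4 KB transactional write
sched.submit(8192, "oltp-read")             # 8 KB transactional read
print(sched.next_io())   # oltp-write
print(sched.next_io())   # oltp-read
```

A real array would bound the wait time of large I/O to avoid starvation; the sketch omits that for brevity.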
Clarity will also include a new portal customers can use to see insights into K2 performance, as well as suggestions to improve performance and capacity.
Josh Epstein, Kaminario VP of global marketing, said a future step will be to automate the service levels for applications. Kaminario intends to add Clarity agents that will integrate into specific applications, such as Oracle and Microsoft SQL databases. The agents will provide more granular metrics for those applications.
“We’re gathering statistics about the K2 and the storage ecosystem – databases, servers, networking – and providing analytics, trends and insights from across our installed base,” he said. “The analytics tell customers how to configure and optimize their storage infrastructure.”
Kaminario Clarity continues the trend of vendors providing tools that collect data from storage arrays, upload it to clouds and provide analytics reports for customers. Other cloud-based analytics offerings include Nimble Storage InfoSight, Pure Storage Pure1 Cloud Global Insight, EMC Unity CloudIQ, HPE StoreFront Remote, IBM Spectrum Control Storage Insights and Tintri Analytics. These tools are gaining popularity with newer array models, specifically those incorporating flash.