Storage Soup


December 27, 2016  4:46 PM

Using multiple BDR products doesn’t always translate into faster data recovery

Sonia Lelii

Companies spend a lot of money on disaster recovery solutions, but that doesn’t translate into faster data recovery, according to a survey conducted by Quorum titled “The State of Disaster Recovery.”

The report, which surveyed 250 CIOs, CTOs and IT vice presidents, found that 80% of the companies surveyed said it takes more than an hour to recover from a server failure, and 26% said data recovery takes more than two hours. Only 19% said it took less than an hour to recover. Seventy-two percent consider the speed of backup and data recovery “critical.”

“All those backup and disaster recovery (BDR) products aren’t making their recovery any faster,” the report claimed. “While speed is essential for continuity and security… a staggering 80 percent of respondents need more than an hour to recover from a server failure. And it gets worse: more than a quarter need more than two hours.”

Sixty-four percent of respondents use more than three different disaster recovery solutions, with 26% using more than five; fewer than 40% use between one and three disaster recovery products.

Moreover, a majority of the respondents said they wanted a method to simplify the management of all the BDR products they are using. Ninety percent of the respondents want to consolidate their disaster recovery solutions into one dashboard.

The report shows that the movement to the cloud has grown. Seventy-five percent of the survey respondents are using cloud-based disaster recovery solutions, while 36% use a hybrid model mixing on-premises and cloud DR. Thirty-nine percent use Disaster Recovery as a Service (DRaaS).

Eighty-nine percent have plans for more cloud-based disaster recovery solutions, while 5% said they have no further plans and 6% said they “don’t know.”

Disaster recovery products are growing in importance as concerns about security increase. Seventy-seven percent said they have used their disaster recovery solutions after a security threat occurred. Fifty-three percent of respondents are more worried about security threats than about hardware failure, backup disk corruption or a natural disaster.

“Natural disasters crashing in on a data center, an employee error or a hardware failure can all pose immense problems for an organization,” the report stated. “But a skilled and willful attack can cripple a brand for years and could cost a literal fortune. Ransomware attacks particularly depend on a team’s inability to recover quickly.”

Companies are also diligent about production-level testing with their current disaster recovery products: 88% of respondents said they can achieve production-level testing with their current DR.

December 22, 2016  10:31 AM

Asigra’s data recovery report details how little gets recovered

Paul Crocetti

How much data do you actually recover?

That’s a question that Asigra users answered in a data recovery report.

Featuring statistics gathered from nearly 1,100 organizations across eight sectors between Jan. 1, 2014, and Aug. 1, 2016, the backup and recovery software vendor’s report found that those users recover only about 5% of their data on average.

“People really don’t recover a lot of data,” said Eran Farajun, executive vice president at Asigra. “Ultimately they’re paying like they recover all their data.”

Farajun compared the situation to what many experience with cable bills – customers often pay for hundreds of stations but don’t watch all of them.

Broken up by industry, manufacturing and energy recovered the most, averaging about 6%, according to the data recovery report. Public sector and health care recovered the least, at about 2%.

Users picked file system data as the most common data type restored.

The most common reason cited for a data restoration request was to access a previous generation of data, selected by 52% of users. Ransomware was a major cause of that need, Farajun said.

The second most common reason for data recovery was user error or accidental deletion, with 13%. A lost or stolen device was third with 10%. Interestingly, disaster was only picked by 6% of respondents, according to the data recovery report.

Asigra is working on improving cybersecurity and how it can best combine with data protection, Farajun said. In the face of the growing threat of ransomware, Farajun also suggested organizations educate their employees, have strong anti-virus protection and back up their data.

The average size of a recovery across all sectors was 13 GB.

Farajun described cost as the bane of a company’s relationship with its backup vendor.

“Mostly [companies] don’t feel they can do anything about it,” Farajun said. “You can do something about it.”

In 2013, Asigra launched its Recovery License Model and now almost all of the vendor’s customers use it. Pricing is based on the percentage of data recovered over the course of a contractual term, with a ceiling of 25%.
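
For a sense of how recovery-based licensing differs from conventional capacity-based licensing, here is a minimal Python sketch. The rates and data sizes are invented placeholders rather than Asigra’s actual price list; the only figures taken from the report are the 25% billing ceiling and the roughly 5% average recovery rate.

    # Hypothetical comparison of capacity-based vs. recovery-based backup pricing.
    # All dollar rates below are illustrative placeholders, not Asigra's prices.
    protected_tb = 100        # total data under protection
    recovered_tb = 5          # data actually restored (about 5%, per the report)

    capacity_rate = 400       # $ per protected TB per year (placeholder)
    recovery_rate = 1600      # $ per recovered TB per year (placeholder)

    # Traditional model: pay on everything protected, recovered or not.
    capacity_price = protected_tb * capacity_rate

    # Recovery model: pay on the share actually recovered, capped at 25%.
    billed_share = min(recovered_tb / protected_tb, 0.25)
    recovery_price = billed_share * protected_tb * recovery_rate

    print(capacity_price)     # 40000
    print(recovery_price)     # 8000 -- the bill tracks what was recovered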

Asigra did a healthy amount of research before launching the model. It looked into other markets, such as the music and telecommunications industries, and assorted “fair-pay” cases. Music customers, for example, can now buy one-song downloads vs. an entire album that they may not listen to in its entirety.

“What happened?” Farajun said. “People bought boatloads and boatloads of songs.”

Asigra was nervous about undertaking the new model. It anticipated a three-year dip, but revenue started to climb after 12 months, Farajun said.

So why hasn’t this model caught on more in the backup market?

“There’s no incentive for software vendors to reduce their prices,” Farajun said. “We’re trying to price based on fairness.”

Farajun said the data recovery report vindicates the vendor’s underlying premise.

“People don’t recover nearly as much as they think they do and they overpay for their backup software.”


December 21, 2016  7:29 PM

Druva Cloud Platform zeros in on inactive data

Garry Kranz

Data protection provider Druva has launched platform-as-a-service capabilities to support indexed search queries of data across local and public cloud storage.

The Druva Cloud Platform is designed to help enterprises better manage and use information related to analytics, compliance, e-discovery and records retention. More than 30 RESTful APIs are included to allow third-party vendors to access data sets in Druva storage.

The API calls allow disparate information management applications to pull data directly from Druva’s inSync and Phoenix storage platforms. Druva cloud storage uses Amazon Web Services or Microsoft Azure as a target destination for inactive data that companies need to keep for legal and regulatory reasons.
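
In practice, a third-party e-discovery or analytics tool would exercise those APIs with ordinary authenticated REST calls. The sketch below shows only the general pattern; the base URL, token handling, query parameters and response fields are hypothetical stand-ins, not Druva’s documented API.

    # Hypothetical indexed-search query against a cloud data-protection REST API.
    # Endpoint, parameters and JSON fields are placeholders, not Druva's real API.
    import requests

    BASE_URL = "https://api.example-backup-cloud.com/v1"    # placeholder endpoint
    HEADERS = {"Authorization": "Bearer <access-token>"}     # placeholder token

    resp = requests.get(
        f"{BASE_URL}/search",
        headers=HEADERS,
        params={"query": "contract 2016", "source": "endpoints", "limit": 50},
        timeout=30,
    )
    resp.raise_for_status()

    for hit in resp.json().get("results", []):
        # Each hit would reference a deduplicated, point-in-time copy in AWS or Azure.
        print(hit.get("fileName"), hit.get("snapshotTime"))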

Global source-side data deduplication creates a single gold copy in the cloud. The Druva cloud technology takes point-in-time snapshots of queried data and applies advanced encryption. Changed data blocks are synchronized to deduplicated data sets in Amazon Web Services or Microsoft Azure.

The APIs allow disparate information management systems to communicate directly with Druva to improve data hygiene, said Dave Packer, a Druva vice president of product and corporate marketing.

“We designed Druva Cloud Platform so your data doesn’t have to traverse across corporate networks.  We take care behind the scenes to ensure handoffs occur accordingly, without taxing internal systems,” Packer said.

Druva’s SaaS pricing is based on deduplicated data and starts at $6 per user per month.


December 21, 2016  2:54 PM

Panzura wants to provide archiving freedom, for a price

Dave Raffo

Cloud NAS vendor Panzura is expanding into archiving.

The vendor today made available Freedom Archive software that moves infrequently accessed data to public clouds or low-cost on-premises storage.

Panzura CEO Patrick Harr describes Freedom Archive as storage for “long-term unstructured archived data that now sits on-premises on traditional NAS or tape libraries. The key thing is, it’s for active data.”

Harr said target markets include healthcare, video surveillance, gas and seismic exploration and media and entertainment.

Freedom Archive is a separate application from Panzura’s flagship Cloud NAS platform, which caches frequently used primary data on-site and moves the rest to the cloud. Freedom Archive is available on a physical appliance or as software only. It uses caching algorithms and a smart policy manager to identify cooler data and move it from on-premises storage to the cloud. Freedom Archive compresses, deduplicates and encrypts data at rest and in flight.

Freedom Archive supports the Amazon Web Services, Microsoft Azure, Google and IBM Cloud public clouds, and private object storage from IBM, Hitachi Data Systems and Dell EMC. Customers can download the software from Panzura’s website. Pricing begins at less than two cents per GB per month, which does not include public cloud subscriptions. There is a 30-day free trial period for the software. IBM Cloud is offering 10 TB of free storage for 30 days, and AWS will give 10 TB of free storage for 14 days to Freedom Archive customers.
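
As a rough illustration of the per-GB software pricing quoted above (treating two cents per GB per month as an upper bound and leaving out the separate public cloud storage bill), the monthly license cost scales linearly with archived capacity:

    # Back-of-the-envelope monthly cost at the quoted ceiling of $0.02 per GB.
    # Public cloud subscription charges are billed separately and excluded here.
    rate_per_gb_month = 0.02     # upper bound from the announcement
    archived_tb = 100            # example archive size

    monthly_cost = archived_tb * 1024 * rate_per_gb_month
    print(monthly_cost)          # 2048.0 -- roughly $2,050/month for 100 TB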

Harr said Chevron, American College of Radiology, NBC Universal, Time Warner Cable, and law enforcement agencies already use Freedom Archive. The product became generally available today.

An expansion into archiving was among the goals Harr laid out when he became Panzura CEO earlier this year.

Harr emphasized Freedom Archive is for active data rather than cold data that rarely if ever needs to be accessed. That means Panzura is not competing with public cloud services such as Amazon Glacier, Microsoft Azure Cool Blob and Google Coldline storage.

“This is complementary to what Google, Azure and AWS do, not competitive,” he said. “Glacier is not for active data, and it’s extremely expensive to pull data back from Glacier. Ours is a hybrid cloud where you still have a performant nature to your data.

“Chevron has to access data in real-time instead of waiting for a slow response that doesn’t meet the business need. In the medical space, you don’t want to have to wait when you pull back an MRI.”


December 14, 2016  12:58 PM

Violin Memory’s sad song leads to bankruptcy

Dave Raffo

Cash-bleeding all-flash array vendor Violin Memory today filed for Chapter 11 bankruptcy, and will seek to sell off its assets at an auction next month.

After years of financial problems, Violin petitioned the U.S. Bankruptcy Court for the District of Delaware for Chapter 11 relief. In a prepared statement, Violin CEO Kevin DeNuccio said the vendor will continue operations during the bankruptcy period while hoping to sell its assets.

“We are taking this action, which should conclude by the end of January 2017, to bolster Violin’s ability to serve the needs of its customers,” DeNuccio said. “Violin intends to continue to sell solutions to customers and prospects as well as service and support customers during this restructuring.”

Violin’s problem is it doesn’t have enough customers, and has been unable to come close to profitability since becoming a public company in Sept. 2013. Violin has twice conducted extensive searches for a buyer without success, and its 2016 sales slowed to a trickle.

Failing to find a buyer, Violin has cut expenses through layoffs and pay reductions. The vendor reduced headcount from 437 employees in Jan. 2014 to 82 through several staff reductions.

DeNuccio took a pay cut last week, dropping his salary from $750,000 to $150,000. The bankruptcy filing placed Violin’s average monthly payroll at approximately $758,000 with another $109,000 per month in health benefits. The vendor said it also owes approximately $244,000 in sales commissions to employees.

According to the petition, Violin will have $3.62 million in cash at the end of this week and that total will drop to around $1.6 million by Jan. 20, 2017. Most of the operational expenses over those six weeks will be payroll-related. Violin has lost $25.5 million, $22.2 million and $20.1 million over the last three quarters.

Violin’s executive team and directors have tried in vain to find a buyer since late 2015. Violin hired Jefferies Group as its financial adviser in Nov. 2015. Jefferies contacted 39 strategic buyers and eight financial sponsors, according to Violin’s court filing. Those contacts resulted in nine parties signing confidentiality agreements and 10 parties conducting management meetings. But none made offers, and the search for a buyer ended in March 2016, when Violin instead put restructuring plans in place.

Violin’s sales decreased, as it reported $10.9 million, $9.7 million and $7.5 million in revenue over the past three quarters. Those totals were especially disappointing considering the market for all-flash arrays is one of the hottest in the storage industry.

Violin’s last-ditch effort to increase sales with its launch of a new Flash Storage Platform array in September came too late to stave off bankruptcy.

Violin hired another financial adviser, Houlihan Lokey Capital, in September to seek a buyer for the company. Houlihan contacted 202 potential strategic buyers and 78 financial sponsors, according to court filings. A total of 26 parties signed confidentiality agreements and 13 conducted management interviews but none submitted letters of interest.

In recent weeks, Violin sought debtor-in-possession financing that would allow Violin to remain in control of its assets through Chapter 11. Again, it found no interest. Those failed proposals “have necessitated the filing of this case,” according to the filing.

Violin asked the court to approve a Jan. 13 bid deadline for the auction, and is looking to finalize any sale by Jan. 20.

One of the first all-flash vendors on the market, Violin claims 58 U.S. patents and 64 foreign patents. It has another 22 U.S. and 38 foreign patents pending.


December 13, 2016  11:48 AM

Nasuni files away $25 million in funding

Dave Raffo

Cloud file storage vendor Nasuni picked up $25 million in funding today, bringing its total to $80.5 million. The vendor’s executives expect the latest funding to bring it to cash-flow positive status in 2018.

Nasuni was one of the original cloud gateway startups, launching its Nasuni Filer in 2010. Nasuni software caches active data on-premises and moves other files off to public clouds, mainly Microsoft Azure, Amazon Web Services and IBM Cloud Services.

Nasuni’s software uses the UniFS cloud-native file system. It ships on Dell servers or runs as a virtual appliance to provide an edge connector. Customers can then expand capacity without adding hardware by sending data to a public cloud.

“We solve a storage problem, although we don’t actually store a gigabyte of data,” said Scott Dussault, Nasuni COO and CFO. “Our software enables customers to run a file system so they can have unstructured data management in the cloud.”

Dussault said the vendor will expand its sales footprint in North America and Europe to go with its 2016 push into the U.K. He said the expansion will help Nasuni attract larger customers.

“We started out in the SMB space, moved to the mid-market in 2014 and now we’re also selling to the uber-enterprise,” he said. “Companies are creating a strategy around the cloud, using Nasuni as their file system and mostly private cloud vendors for object storage.”

Other cloud gateway vendors include Panzura and Ctera. Microsoft acquired one-time Nasuni rival StorSimple and EMC bought TwinStrata. Dussault said Nasuni still competes mainly with traditional NAS products from NetApp and Dell EMC.

Dussault said Nasuni grew more than 75% in bookings and revenue in 2016. He said the company has 115 employees and he expects the funding to fuel greater than 25% headcount expansion in 2017.

“This round keeps us on the path of cash flow/break even in 2018,” he said.

The Series E funding round included $17.5 million in equity funding, led by new investor Sigma Prime Ventures, plus $7.5 million in venture debt financing from Eastward Capital.


December 8, 2016  5:42 PM

Primary Data contributes to NFS 4.2

Dave Raffo

Software-defined storage vendor Primary Data saw its open-standards parallel NFS (pNFS) contributions make it into the NFS 4.2 standard, which could help the startup make inroads with scale-out storage customers.

Primary Data’s contributions to NFS 4.2 include enhancements to the pNFS Flex File layout that allow clients to provide statistics on how data is used and on the performance of the underlying storage.

NFS 4.2 enables clients and application servers to natively support data virtualization and mobility features. That plays well with Primary Data’s DataSphere software, which virtualizes different types of storage into tiers using a single global data space.
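
Conceptually, the client-side statistics the Flex File layout exposes are what let a data management layer decide where a file should live. The toy Python sketch below illustrates that kind of statistics-driven tiering decision; the thresholds and tier names are invented for illustration and are not DataSphere’s actual policy engine.

    # Toy tiering decision driven by client-reported access statistics.
    # Thresholds and tier names are illustrative, not Primary Data's real policies.
    TIERS = ["flash", "disk", "cloud-object"]

    def pick_tier(iops_last_day, days_since_access):
        if iops_last_day > 1000 and days_since_access == 0:
            return TIERS[0]      # hot data stays on the fastest tier
        if days_since_access < 30:
            return TIERS[1]      # warm data sits on cheaper disk
        return TIERS[2]          # cold data migrates to cloud or object storage

    print(pick_tier(5000, 0))    # flash
    print(pick_tier(2, 90))      # cloud-object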

“Data virtualization sits between hardware and storage arrays below us and the virtual machine space above us, virtualizing compute resources,” Primary Data CEO Lance Smith said. “We separate the logical view of data from where it’s physically stored. Now we can put data on the right type of storage without bothering or interrupting the application. To do that we need a touch point on the client, and that’s what this is about.  When you put our metadata software into the infrastructure, that’s where virtualization comes alive.”

DataSphere supports SAN and DAS as well as NAS, but the integration of Primary Data technology into NFS 4.2 fits with scale-out NAS customers. The NFS 4.2 spec was completed in November.

“We are heavily engaged with media and entertainment companies,” Smith said. “They can now do clustering of their storage, even if it’s from different vendors. Oil and gas is right behind, looking for performance and scale-out. Financial service firms have about 20 percent of their data that’s super hot and needs to be on the highest performance tier, they want stuff migrated to a cheaper tier and use cloud and object storage. But that migration has to be seamless and not disrupt the application.”


December 8, 2016  11:32 AM

Western Digital shows gains from acquisitions

Dave Raffo

Western Digital told financial analysts its business will be better than expected this quarter, and added solid-state drives (SSDs), hard-disk drives (HDDs) and an all-flash array platform to its portfolio.

WD updated its quarterly forecast to $4.75 billion from $4.7 billion at its analyst day this week. It also gave details on NVMe products from its SanDisk and HGST acquisitions and expanded its helium HDD line.

New NVMe all-flash array coming

WD previewed a 2U all-flash platform that the vendor claims will deliver 18 million IOPS using NVMe over PCIe fabric. The first system is due to ship in the first half of 2017. WD pledged to contribute software supporting the platform to the open source community.

WD will position the NVMe array for real-time and streaming analytics applications such as credit card fraud detection, video stream analysis, location-based services, advertising servers, automated systems, and solutions built on artificial intelligence or machine learning (ML).

The new system is part of the InfiniFlash brand WD gained through its SanDisk acquisition. Dave Tang, GM of WD’s data center systems, said WD will eventually expand the platform to include NVMe over Ethernet, which supports longer distances than PCIe but has more latency. Scalability of NVMe over PCIe is limited to either a single rack or an adjacent rack.

“We think customers interested in ultimate performance will go to NVMe over PCIe, and those looking for scalability may opt for NVMe over Ethernet,” Tang said. “They will co-exist and serve different purposes in the data center.”

Tang said he suspects NVMe over Ethernet support remains a few years away. Widespread adoption will require more expensive 100-Gigabit Ethernet. Also, NVMe over Ethernet standards are still evolving.

SSDs, HDDs expand capacities

WD added two SSDs, the Ultrastar SN200 and Ultrastar SS200 series. The SN200 is an NVMe PCIe SSD and the SS200 is a SAS SSD. Both are available in 2.5-inch and half-height, half-length form factors and in capacities up to 7.68 TB.

The NVMe SSDs are built for cloud and hyperscale storage and big data analytics. The SAS SSDs, which use a dual-port design and SAS interface, are aimed at hyper-converged and other server-based storage. The Ultrastar line comes from WD’s 2012 HGST acquisition.

WD also said it would ship new higher-capacity helium and Shingled Magnetic Recording (SMR) HDDs. The Ultrastar He12 is a 12 TB, 3.5-inch SAS/SATA drive. It will be the highest-capacity helium drive on the market; WD and rival Seagate currently ship 10 TB helium drives. WD will also add a 14 TB SMR He12 drive, which surpasses its current 10 TB SMR capacity drives.

The Ultrastar SN200 and SS200 SSDs are expected to be generally available in the first quarter of 2017, with the He12 helium drive expected in the first half of 2017 and the 14 TB SMR drive around the middle of next year.


December 7, 2016  4:57 PM

SwiftStack public Cloud Sync starts with Amazon, Google

Garry Kranz

Object storage provider SwiftStack Inc. has added hybrid cloud synchronization to its OpenStack-based software controller.

SwiftStack Cloud Sync allows data or subsets of data to exist simultaneously behind a firewall and in multiple public clouds. Customers with private SwiftStack cloud storage can create data copies in Amazon Simple Storage Service (S3) and Google Cloud Platform and replicate between the two.

Cloud Sync replicates native objects between physical nodes and the public cloud. SwiftStack does not charge extra to support multiple copies of objects in different locations. The hybrid cloud topology places data in a single namespace.

Mario Blandini, SwiftStack’s vice president of marketing, said Cloud Sync policy management extends the storage of a physical enterprise data center.

“Cloud Sync moves your data close to where your application is running. We synchronize your data to the public cloud.  It’s not a one-time copy; it’s a continuous sync based on policies that are applied to S3 or Google (storage),” Blandini said.

Distributed users can collaborate securely by accessing the same data bucket via Cloud Sync. Other use cases include active archiving, cloud bursting, offsite disaster recovery and cold archiving with Amazon Glacier and Google Cloud Storage Nearline.

Blandini said SwiftStack customers requested additional cloud support beyond Amazon Web Services.

“Customers buy servers from multiple vendors,” he said. “They want the same consumption experience with the public cloud. They want to have data in Amazon and Google and be able to balance between them.”

Existing SwiftStack customers can obtain Cloud Sync at no charge. SwiftStack’s capacity-based licensing starts at $375 per terabyte annually for a minimum of 50 TB.
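
In concrete terms, that published rate and minimum work out as follows; this is a sketch of list pricing only, and negotiated terms may differ.

    # SwiftStack capacity licensing: $375 per TB per year, 50 TB minimum.
    RATE_PER_TB_YEAR = 375
    MINIMUM_TB = 50

    def annual_license(licensed_tb):
        # Billing never drops below the 50 TB floor.
        return max(licensed_tb, MINIMUM_TB) * RATE_PER_TB_YEAR

    print(annual_license(30))    # 18750 -- small deployments still pay the minimum
    print(annual_license(200))   # 75000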


December 6, 2016  3:33 PM

Datrium snares $55 million, nearly year after DVX release

Carol Sliwa

Datrium secured $55 million in Series C financing as it approaches the one-year anniversary of the general release of its flagship DVX storage system for VMware virtual machines.

The new financing – led by New Enterprise Associates – raises the funding total to more than $110 million since the Sunnyvale, California-based startup launched in late 2012. Datrium’s early backers included founders and executives from VMware and Data Domain, which are now both part of the Dell-EMC empire.

Datrium CEO and founder Brian Biles said the company plans to use the additional funding to expand into Europe, from its initial sales base in the U.S. and Japan. He said Datrium will also add product features, and grow the support, engineering, sales and marketing teams. Datrium currently employs about 140, Biles said.

“I model on the early history of Data Domain, and we’re beating that regularly in units and revenue. So I’m feeling pretty good about that,” Biles said, commenting on another startup he co-founded.

Datrium claims more than 50 customers since the DVX product became generally available in January.  That includes users in banking, cloud hosting, health care, manufacturing, media and entertainment, technology and public sectors.

The Datrium storage system consists of software that runs on customer-supplied servers, and NetShelf appliances equipped with 7,200 rpm SAS hard disk drives (HDDs) for persistent storage. The NetShelf appliance is currently disk-only, but Biles predicted that Datrium would offer a flash system within three years, once the price of flash further plummets.

The DVX software orchestrates and manages data placement between the NetShelf appliance and host server, and uses customer-supplied, server-based flash cache to accelerate reads. The software also provides storage functionality such as inline deduplication and compression, clones, and RAID for the persistent backend storage.

The local host server can use up to 16 TB of flash for caching, so after deduplication and compression, the effective local flash cache capacity can reach 32 TB to 100 TB, Datrium has said.
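
That range simply reflects the data reduction ratio assumed after deduplication and compression; a quick sketch of the arithmetic:

    # Effective cache = raw flash capacity x data reduction ratio (dedupe + compression).
    raw_cache_tb = 16                      # maximum server-side flash per host
    for reduction_ratio in (2.0, 6.25):    # range implied by the 32 TB to 100 TB figure
        print(reduction_ratio, raw_cache_tb * reduction_ratio)   # 32.0 and 100.0 TB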

So far, the main use case for Datrium storage has been database workloads. Biles said 63% of customers listed databases (primarily Microsoft SQL Server) as the dominant use case for the DVX system. He said the product has also seen strong uptake in mixed-use virtual machine (VM) and VDI environments and attracted a number of customers in data warehousing scenarios.

“They’ve told us that not only is it fast, but as they virtualize their warehouses, they’ve found that performance on Datrium is faster than the warehouse on physical servers,” Biles said.

Another emerging trend Datrium noted is the use of NVMe-based PCI Express solid-state drives (SSDs) for server-based flash cache. Datrium vice president of marketing Craig Nunes estimated that NVMe adoption is approaching 10%.

One customer, Northrim Bank in Alaska, uses 2 TB NVMe-based SSDs in each of the 16 VMware servers at its primary data center in Anchorage and its eight VMware servers at its secondary data center in Fairbanks.

Benjamin Craig, Northrim’s executive vice president and chief information officer, said the company’s Iometer testing showed a near doubling of IOPS and throughput with the 2 TB NVMe SSDs over enterprise-class 1 TB SATA SSDs.

Craig said Northrim was able to procure the Intel NVMe SSDs from its server vendor, Dell, at about $1,000 per TB, within 10% of the price per TB of write-intensive, enterprise-class SATA SSDs.

