Storage Soup


February 6, 2017  10:17 AM

‘Alexa, provision my Tintri storage’

Carol Sliwa

Want to manage your Tintri storage the same way you turn on lights, set an alarm, or choose music with an Amazon Echo or Dot device?

Tintri Inc. launched a proof of concept that lets customers ask Amazon’s Alexa voice service to initiate tasks such as provisioning virtual machines (VMs), taking snapshots and applying quality of service.

Tintri storage engineers used Amazon’s software development kit to map the company’s application programming interfaces (APIs) to the Alexa service, enabling Echo and Dot devices to recognize and execute storage commands.

Chuck Dubuque, vice president of product marketing at Tintri, said Tintri will use feedback on the proof of concept to gauge the potential to turn the “cool demo” into a product.

A video demonstration shows a Tintri employee instructing Amazon Alexa to ask the system to provision a VM. Alexa prompts the user with questions such as “What type of VM would you like to create?” and “How many VMs would you like to create?”
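The plumbing behind such a demo is straightforward to sketch. An Alexa custom skill hands each spoken intent to a backend function, typically AWS Lambda, which can then call the array’s REST API. The handler below is a minimal, hypothetical Python sketch; the intent name, slot names and Tintri endpoint are illustrative, since Tintri has not published the code behind its proof of concept.

```python
# Minimal sketch of an Alexa custom-skill handler running in AWS Lambda.
# The intent and slot names, and the Tintri REST endpoint, are hypothetical;
# Tintri has not published the APIs behind its proof of concept.
import json
import urllib.request

TINTRI_API = "https://vmstore.example.com/api/vm"  # hypothetical endpoint

def lambda_handler(event, context):
    request = event["request"]

    if request["type"] == "IntentRequest" and \
            request["intent"]["name"] == "ProvisionVmIntent":
        slots = request["intent"]["slots"]
        count = int(slots["Count"]["value"])   # "How many VMs would you like to create?"
        vm_type = slots["VmType"]["value"]     # "What type of VM would you like to create?"

        # Forward the spoken request to the storage array's REST API.
        payload = json.dumps({"template": vm_type, "count": count}).encode()
        req = urllib.request.Request(TINTRI_API, data=payload,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

        speech = f"Provisioning {count} {vm_type} virtual machines."
    else:
        speech = "You can ask me to provision, snapshot or set QoS on a VM."

    # Standard Alexa Skills Kit JSON response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```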

Dubuque admitted that using Amazon Echo beyond home use cases might be “a little further out” in the future. But the proof of concept gives Tintri experience using Amazon’s voice recognition and natural language capabilities and making its self-service APIs more responsive to human commands, he said.

“It’s relatively easy to write an admin interface for the storage administrator or the VM administrator who already thinks about things at the low level around VMs and vdisks and other things,” Dubuque said. “But for people who aren’t experts on the infrastructure and just want to say, ‘Hey Alexa, create a test environment,’ what does that mean? Underlying all of the assumptions, a test environment means this set of 100 virtual machines is created from this template, put into this network with these characteristics. That’s more complicated.”

Chat option lets developers manage Tintri storage

At VMworld last August, Tintri demonstrated a text-based chat option to enable developers to collaborate with each other and manage Tintri storage. Dubuque said a customer in Japan used Tintri’s REST APIs to put together a simple robot to respond to system commands from within the Slack chat environment.

Developers in the virtual chat room could call out to a Tintribot — which appears as another “person” in the chat window — to tell the system to execute a command, such as firing up VMs to test new software.

“The Tintribot will acknowledge the command, maybe ask a few questions, and then once all of the VMs are up and running, reply back into the same chat window: ‘Hey, the 100 VMs are now ready. You can run your test,'” Dubuque said.
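A bot along those lines needs little more than Slack’s Web API and the storage array’s REST API. The Python sketch below is hypothetical: it uses Slack’s real chat.postMessage method, but the Tintri endpoint and the status check are invented stand-ins, since the Japanese customer’s actual robot code is not public.

```python
# Rough sketch of a "Tintribot"-style responder. Uses Slack's documented
# chat.postMessage Web API method; the Tintri endpoint and the readiness
# check are hypothetical placeholders.
import json
import time
import urllib.parse
import urllib.request

SLACK_TOKEN = "xoxb-your-bot-token"                  # assumed bot token
TINTRI_API = "https://vmstore.example.com/api"       # hypothetical endpoint

def slack_post(channel, text):
    """Post a message into the chat window via Slack's Web API."""
    data = urllib.parse.urlencode({
        "token": SLACK_TOKEN, "channel": channel, "text": text,
    }).encode()
    urllib.request.urlopen("https://slack.com/api/chat.postMessage", data=data)

def all_vms_ready():
    # Placeholder: a real bot would query the array's status API here.
    return True

def handle_command(channel, count, template):
    slack_post(channel, f"Creating {count} VMs from template '{template}'...")

    # Kick off provisioning through the storage array's REST API.
    payload = json.dumps({"template": template, "count": count}).encode()
    req = urllib.request.Request(f"{TINTRI_API}/vm", data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

    # Poll until the VMs report ready, then reply in the same channel --
    # that reply doubles as a time-stamped audit trail.
    while not all_vms_ready():
        time.sleep(30)
    slack_post(channel, f"Hey, the {count} VMs are now ready. You can run your test.")
```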

“It’s a way to enable self-service. In this case, it’s aligned to the developers who don’t really care about the details. They want to be able to do things on their own when they need to without having to hand it off to a third party” to launch VMs, Dubuque said.

Because the Slack-based ChatOps interface requires a username and password to log in, the system can control what any given user is permitted to view, and it creates a time-stamped chat audit trail for troubleshooting problems later.

“You get to see all the humans who were involved in the decision, as well as what the environment was telling you – what’s successful and what wasn’t,” Dubuque said.

Tintri is still gathering customer feedback and has not determined a general availability date for the Slack-based ChatOps that performs operations from within a chat.

“It’s definitely something that has sparked a lot of interest,” Dubuque said.

Dubuque said the Tintri storage architecture is conducive to plug-in integration with systems such as Slack and Amazon Alexa. He said the company’s key differentiator is a web services model “where the fundamental unit that we manage is around the virtualized or containerized application.

“Our file system, our I/O scheduler, all of our storage operations are at that same level that virtualization and cloud management systems use to control compute and networking,” Dubuque said. “You can think of us as finishing the trinity of network, compute and storage being all aligned to the same abstraction level, which is a virtual machine, or a container, not around physical constructs.”

Dubuque said Tintri exposes REST APIs and interfaces with PowerShell and Python through a software development kit. He said other storage vendors use REST APIs that focus on storage constructs such as LUNs and volumes and don’t directly map to an individual application. That causes complexity when trying to automate the storage component of an application.
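The practical difference is easiest to see in code. The sketch below, written with Python’s requests library, addresses a snapshot and a QoS policy to a VM by name, with no LUN or volume arithmetic; the URL paths and field names are illustrative assumptions, not Tintri’s documented API.

```python
# What a VM-granular REST API buys you, sketched with the 'requests' library.
# URL paths and field names are illustrative, not Tintri's documented API.
import requests

BASE = "https://vmstore.example.com/api"
session = requests.Session()
session.verify = False  # lab sketch only; verify certificates in production

# Look up the VM by name -- the unit of management is the VM itself,
# not the LUN or volume that happens to hold its vdisks.
vm = session.get(f"{BASE}/vm", params={"name": "test-env-042"}).json()[0]

# Snapshot and QoS are then one call each against that VM's UUID.
session.post(f"{BASE}/vm/{vm['uuid']}/snapshot",
             json={"name": "pre-upgrade"})
session.put(f"{BASE}/vm/{vm['uuid']}/qos",
            json={"min_iops": 1000, "max_iops": 5000})
```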

January 31, 2017  3:56 PM

Report: Better data theft protection needed for employee exits

Paul Crocetti

The processes for keeping data safe when employees leave a company are fundamental data protection best practices: backup, archive and encryption. Yet barely half of the organizations that took part in a recent survey have a plan that ensures data can be recovered if an employee changes or deletes it on the way out the door.

Osterman Research conducted a survey of 187 IT and human resources professionals in October 2016 and released the findings this month. The results show organizations are generally not prepared for data theft protection issues with departing employees, said Osterman Research president Michael Osterman. The report found that fewer than three in five organizations have a backup and recovery platform that ensures data can be recovered if an employee maliciously changes or deletes data before giving notice to leave.

“They know what to do, they’re just not doing it very much,” Osterman said.

Osterman suggested organizations should develop a plan for this issue and nail down who’s in charge of ensuring sensitive data is protected.

The report found that 69% of the business organizations surveyed had suffered significant data or knowledge loss from employees who had left.

Those employees may not have taken data maliciously. According to the report, there are three reasons employees leave with corporate data: They do it inadvertently; they don’t feel that it’s wrong; or they do it with malicious intent.

Mobilizing mobile protection

The BYOD movement has complicated matters. For example, an employee can create content on a personal mobile device and store it in a personal Dropbox account or another cloud-based system. That content never hits the corporate server.

“Get control over that kind of content,” Osterman said. One way to do that is to replace personal devices with ones managed by IT.

Virtual desktops can also help with data theft protection. Because they store no data locally, virtual desktops make it more difficult for employees to misappropriate data, the report said.

The report stressed it is important that “every mobile device can be remotely wiped” so former employees don’t have access to the content.

“Enterprise-approved apps and any associated offline content can be remotely wiped, even if the device is personally owned,” the report said.

Backup, archive, encrypt

A proliferation of cloud applications also makes it harder to recover employee data.

“While IT has the ability to properly back up all of the systems to which it has access, a significant proportion of corporate content, when stored in personally managed repositories, is not under IT’s control,” the report said. “Office 365, as well as most cloud application providers, do not provide backup and recovery services in a holistic manner, and so organizations can have a false sense [of] security about the data that is managed by their end users.”

To maintain complete visibility of sensitive corporate data across all endpoints, cloud applications and other storage repositories, the report suggests deploying a content archiving system.

“Email archiving is the logical and best first place to start the process of content archiving, but other data types — such as files, social media content, text messages, web pages and other content — should also be considered for archiving as well,” the report said.

The data theft protection report advocates encrypting data in transit, at rest and in use, regardless of its location. In addition to manual encryption, Osterman Research recommends encryption that automatically scans content based on policy and then encrypts it appropriately.

“Encryption alone can prevent much of the data loss that occurs when employees leave a company,” the report said.
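As a rough illustration of what policy-based encryption involves, the Python sketch below scans files against example policy rules and encrypts whatever matches, using the open source cryptography package. The rules, paths and key handling are all simplified assumptions, not any vendor’s product.

```python
# Illustrative sketch of policy-driven encryption of the kind the report
# recommends: scan content for sensitive patterns and encrypt what matches.
# Policy regexes and paths are examples; uses the 'cryptography' package.
import re
from pathlib import Path
from cryptography.fernet import Fernet

POLICY = [re.compile(p) for p in (
    r"\b\d{3}-\d{2}-\d{4}\b",        # U.S. Social Security numbers
    r"(?i)confidential",             # documents marked confidential
)]

key = Fernet.generate_key()          # in practice, from a managed key store
fernet = Fernet(key)

def scan_and_encrypt(root):
    """Encrypt in place any file whose content matches a policy rule."""
    for path in Path(root).rglob("*.txt"):
        data = path.read_bytes()
        if any(rule.search(data.decode(errors="ignore")) for rule in POLICY):
            path.write_bytes(fernet.encrypt(data))

scan_and_encrypt("/shares/departing-employee")
```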

Report ‘hit a nerve’

In a fairly decent economy, approximately one in four employees will leave a company in a year, Osterman said.

An Osterman Research client originally suggested the organization undertake the data theft protection report.

“I think it hit a nerve with a lot of companies,” Osterman said.

The sponsors of the report were Archive360, Druva, Intralinks, OpenText, Sonian, Spanning, SyncHR and VMware.

The fundamental goals of the report were to make people more aware of the issue and what can happen if they are not careful with data, and to raise awareness about backing up data and archiving, Osterman said.


January 27, 2017  10:05 AM

Quantum video storage customers range from police to pot growers

Dave Raffo

Quantum’s scale-out storage business is growing like a weed, with the help of a large weed grower.

While Quantum’s DXi disk backup revenue increased the most of all its product lines last quarter, the StorNext scale-out storage business excites CEO Jon Gacek the most.

You have to love a market where deals include tape plus disk, and range from law enforcement to legal marijuana merchants. The Quantum video surveillance storage business last quarter included all of that.

Gacek said Quantum closed the most video surveillance deals ever last quarter. Running through a list of large wins, he included police departments in Canada and India, as well as smaller law enforcement agencies and “a company focused on [the] emerging cannabis growth market, where surveillance of the facility is critical.”

Each large Quantum video surveillance deal included StorNext software, disk plus tape, “reinforcing the power of our tiered storage value and expertise,” Gacek said on Quantum’s earnings call Wednesday.

Flash, dense drives push disk backup deals

Quantum’s disk-based backup revenue grew 17% year over year to $22.9 million. That success came after the release of the enterprise DXi6900-S deduplication library, which uses flash to speed up data ingest. The 6900-S also includes Seagate 8 TB self-encrypting hard disk drives. Gacek said DXi libraries won seven-figure deals at an Asian taxation department and a European insurance company, plus other large deals at a U.S. telecom and a European supermarket chain.

“It’s a combination of flash that handles metadata and 8 terabyte drives that give it density. Nothing else looks like it,” Gacek said of the DXi6900-S.

Scale-out (StorNext) revenue increased 12% to $39.8 million, including Quantum video surveillance deals. Scale-out storage also includes media and entertainment, and technical workflows such as unstructured data archiving. Quantum claimed more than 100 new scale-out customers and a near-70% win-rate in the quarter in scale-out tiered storage.

Total data protection revenue, including tape, increased 3% to $83.1 million despite a small drop in tape automation.

Overall, Quantum’s revenue of $133.4 million for the quarter increased $5.4 million over last year, and its $5 million profit follows a slight loss a year ago.

Gacek forecasted revenue of $120 million to $125 million this quarter, which is Quantum’s fiscal fourth quarter. “We are teed up for a good one next quarter, but I am not using superlatives like great and fantastic yet, which I think we have potential for,” he said.

Quantum video surveillance, archiving deals include tape

Part of Gacek’s reason for optimism is new uses for tape in cloud archiving.

“We believe there is a shift in tape usage to the archive scale-out, cloud-like architectures,” Gacek said. “And I think you are going to see tape media usage go up quite dramatically as an archive use case.”

More legalized marijuana might help as well.


January 26, 2017  12:44 PM

Commvault products rollout promised throughout 2017

Sonia Lelii

Following a quarter of solid revenue growth to end 2016, Commvault Systems Inc. plans a string of product enhancements throughout 2017. The additions are designed to improve Commvault’s performance in the cloud, and with software-defined storage and business analytics.

Commvault Wednesday reported $167.8 million in revenue last quarter, a 7% increase from last year. Software revenue of $77.3 million increased 8% year over year, while service revenues of $88.5 million increased 5%. Commvault broke even for the quarter following two straight quarters of losses.

During the earnings call Wednesday, CEO Bob Hammer laid out plans for a Commvault products rollout that will culminate in the Commvault GO 2017 user conference in November.

Hammer said the company plans to add capabilities for business analytics, search and business process automation as part of its strategy to become a full-scale data management player for on-premises and cloud deployments.

“Next month, we will further enhance our offerings with new solutions with industry-leading Web-based UIs and enhanced automation to make it easy for customers to extend data services across the enterprise Commvault solutions,” Hammer said of the Commvault products roadmap. “[We will deliver] some of the key enhancements tied to the journey to the cloud and converged data management.”

The enhancements include new data and application migration capabilities for Oracle applications and the Oracle cloud, big data, fast data and SAP HANA. Commvault already supports Hadoop, Greenplum and IBM’s General Parallel File System.

Products for the AWS cloud

Commvault will also add tools for migrating and cloning data resources to the cloud. These include automated orchestration of compute and storage services for disaster recovery, quality assurance, and development and testing, along with optimized cloud protection and recovery offerings inside and across clouds to secure data against ransomware risks.

Earlier this week, Commvault added optimized cloud reference architectures for Amazon Web Services (AWS) that will make it easier for customers to implement comprehensive data protection and management in the AWS cloud.

Commvault customers will have the ability to direct data storage to specific AWS services — such as Amazon Simple Storage Service (Amazon S3), Amazon S3 Standard-Infrequent Access and Amazon Glacier for cold storage.
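What that tiering looks like at the API level can be sketched with boto3, AWS’s Python SDK. The example below is not Commvault’s implementation; the bucket name, keys and 30-day threshold are illustrative. It writes a backup object straight to S3 Standard-Infrequent Access and lets a lifecycle rule age it into Glacier.

```python
# A minimal boto3 sketch of tiering backup data across AWS storage classes:
# land objects in S3 Standard-IA, then age them into Glacier via a lifecycle
# rule. Bucket and key names are examples.
import boto3

s3 = boto3.client("s3")

# Land a backup object directly in S3 Standard-Infrequent Access.
s3.put_object(
    Bucket="example-backup-bucket",
    Key="backups/2017-01/fileserver.bak",
    Body=open("fileserver.bak", "rb"),
    StorageClass="STANDARD_IA",
)

# Lifecycle rule: transition objects under backups/ to Glacier after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "age-to-glacier",
            "Filter": {"Prefix": "backups/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)
```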

Hammer said the amount of data stored using the Commvault software within public environments increased by 250% during 2016.

“When you look at our internal numbers, in both cases, we’ve had strong pull from both AWS and Microsoft Azure,” Hammer said. “The pull from AWS has been stronger, so there’s a higher percentage of customers’ data in AWS, but I will also say that we are gaining a lot of momentum and traction with Microsoft and Azure.”

Hammer said Commvault continues to make progress on its software-defined data service offerings that are in early release.

“More and more of our customers are replacing or planning to replace their current IT infrastructure, with low-cost, flexible, scalable infrastructures, similar to those found in the public cloud,” he said.

“Our teams have been hard at work to embed those cloud-like capabilities directly into the Commvault data platform, so we can ensure the delivery of a new class of active, copy management and direct data usage services across an infrastructure built with low-cost, scale-out hardware,” Hammer said.

Other upgrades to Commvault products include new and enhanced enterprise search, file sync-and-share collaboration, cloud-based email and endpoint protection during the middle of 2017.

Growth dependent on new products

Commvault has been working to dig itself out of a sales slump that began in 2014. Hammer said the company still faces some critical challenges, and continued growth depends on its ability to win more large deals. A lot of its success will turn on releases of new Commvault products.

“Our ability to achieve our growth objectives is dependent on a steady flow of $500,000 and $1-million-plus deals,” he said. “These deals have quarterly revenue and earnings risk due to their complexity and timing. Even with large funnels, large deal closure rates may remain lumpy. In order to achieve our earnings objectives, we need to prudently control expenses in the near-term without jeopardizing our ability to achieve our software growth objectives for our critical technology innovation objectives.”

Commvault added 600 new customers during the quarter, bringing its total customer base to 240,000. Revenue from enterprise deals, defined as sales of more than $100,000 in software, represented 57% of total software revenue, and the number of enterprise deals increased 22% year over year.


January 20, 2017  8:41 AM

Veeam Software cracked $600 million revenue mark in 2016

Sonia Lelii

While much of the storage market is stagnant or down, data protection vendor Veeam Software said it grew revenue 28% in 2016 by expanding its business into enterprises and the cloud.

Veeam, a privately held company, this week reported its financial results for 2016. It claimed $607.4 million in bookings in 2016, which included new license sales and maintenance revenue, compared to $474 million in 2015.

Doug Hazelman, Veeam Software’s vice president for product strategy and chief evangelist, said the bulk of the growth came from its flagship Veeam Availability Suite. The suite handles backup, restore and replication through Veeam Backup and Replication, along with monitoring, reporting and capacity planning through Veeam ONE, for VMware vSphere and Microsoft Hyper-V deployments.

But the Veeam Cloud and Service Provider (VCSP) program, which offers Disaster Recovery as a Service (DRaaS) and Backup as a Service (BaaS), also contributed to the revenue growth, Hazelman said.

VCSP revenue grew 79% year over year in 2016 as Veeam pushed to move upstream into the enterprise. License bookings from enterprise-level customers grew 57% annually.

Veeam reported the VCSP program expanded to more than 14,300 service and cloud providers. The vendor claims 230,000 customers worldwide, and its Veeam Availability Suite protects up to 13.3 million virtual machines, with 1 million virtual machines using the VCSP management product. The company added 50,000 paying customers last year.

“They are not all enterprise customers,” Hazelman said. “It’s (still) a lot of SMB commercial accounts (but) we added 761 enterprise customers in 2016.”

Hazelman said the cloud portion of Veeam’s business helped close many deals. Veeam has four business segments — SMB, commercial accounts, enterprise-level accounts and the cloud.

“The VCSP product is the fastest growing,” Hazelman said. “It’s one of the fastest growing segments. It’s not the biggest in revenue but it’s the fastest growing.”

Last year, Veeam added a fully functional physical server backup product. Veeam started as a virtual machine backup specialist but moved into physical backup at customers’ request as it expanded into enterprise accounts.

“The physical server did help a lot on closing deals but it didn’t add a lot to the total year number,” he said.



January 16, 2017  6:04 PM

Alluxio in-memory storage software gains EMC mindshare

Garry Kranz

In-memory storage startup Alluxio has struck a partnership with Dell EMC.

The news marks Alluxio’s first formal alliance with a North American storage vendor. Alluxio in September integrated its software on Chinese vendor Huawei Technologies’ FusionStore elastic block storage.

The San Mateo-based vendor’s Alluxio Enterprise Edition software will be available on Dell EMC Elastic Cloud Storage (ECS) appliances. ECS is the successor to EMC Atmos object storage.

The exabyte-scale ECS arrays are built using commodity servers and EMC ViPR virtualized storage. Since acquiring EMC, Dell has gradually started moving EMC software-defined storage products to its PowerEdge line.

The ECS private cloud uses active-active architecture to support Hadoop Distributed File System data analytics. Dell EMC sells ECS on turnkey appliances and also as a managed service.

Alluxio Enterprise Edition is a commercial version of the startup’s open source-based virtual distributed in-memory storage software. It allows applications to share data at memory speed across disparate storage systems.

The chief attributes are high availability and high performance. Alluxio allows data to persist in host memory to speed real-time data analytics.

Alluxio software is designed to accelerate Apache Spark and other workloads that process data in memory. Storage from disparate data stores is presented in a global namespace.
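To picture how that global namespace looks to an application, consider the short PySpark sketch below. It assumes a stock Alluxio deployment with the Alluxio client jar on Spark’s classpath; the hostname, port and paths are examples, not details of the Dell EMC integration.

```python
# How an Alluxio deployment typically appears to Spark: files from different
# under-stores are read through one alluxio:// namespace. Hostname and paths
# are examples; requires the Alluxio client jar on Spark's classpath.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("alluxio-demo").getOrCreate()

# Both datasets resolve through Alluxio's global namespace, even though one
# may be mounted from ECS object storage and the other from HDFS.
events = spark.read.json("alluxio://alluxio-master:19998/ecs/events/")
users = spark.read.parquet("alluxio://alluxio-master:19998/hdfs/users/")

# Hot data served from Alluxio memory keeps the join at memory speed.
events.join(users, "user_id").groupBy("region").count().show()
```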

EMC is not expected to bundle Alluxio in its software stack. Alluxio CEO Haoyuan Li said the partnership allows EMC to better recommend the in-memory storage to ECS customers that need high performance.

“The benefit we bring is allowing EMC ECS customers to pull data from other storage systems. Previously, you had to move the data manually into the new compute cluster. We automate data movement,” Li said. “We also accelerate end-to-end performance of your applications with our memory-centric architecture, which manages the compute-side storage.”

Partnering with EMC is a feather in the startup’s cap, although perhaps not as noteworthy as if Alluxio were qualified to run on Dell EMC Isilon scale-out NAS. Big data jobs tend to use HDFS as the underlying substrate, not EMC ECS.

“ECS as a storage platform does not have a huge share of the market at this point, so this partnership won’t have a material impact on Alluxio’s top line. But it’s an interesting partnership and definitely a win for them that could lead to other partnerships with Dell EMC,” said Arun Chandrasekaran, a research vice president at Gartner Inc.


January 11, 2017  4:58 PM

Panzura raises $32 million in funding

Sonia Lelii

Cloud NAS vendor Panzura raised $32 million in Series E equity funding this week to expand its product and give organizations an alternative to what CEO Patrick Harr calls a “dying on-premises model.”

Harr said Panzura will add support for block, Internet of Things and Hadoop interfaces for data analytics to go with its original NAS protocols. He also plans expansion outside the U.S., into the U.K. and Europe.

“We are focusing on scaling our business in two areas,” Harr said. “One is on the channel side and second is the continued expansion of our product portfolio. We are adding additional protocols to consolidate what we view as the dying on-premises model.”

Harr said the on-premises-only storage model will collapse, and he offers Panzura as a cloud-first model for building a hybrid cloud. Since he became Panzura’s CEO in May 2016, the vendor has expanded its hybrid cloud storage controllers and added archiving capabilities. He said he has also hired 18 engineers during his eight months at Panzura.

“We are very much in growth mode,” he said.

Harr said in 2016 Panzura added 100 new enterprise-level customers and expanded its partnerships to include Amazon Web Services, Google, IBM and Microsoft Azure. It also added 26 petabytes of customer enterprise storage.

Panzura’s new Freedom Archive software, which moves infrequently accessed data to public clouds or low-cost, on-premises storage, could bring the vendor into new markets. Target archive markets include healthcare, video surveillance, gas and seismic exploration, and media and entertainment. Freedom Archive is a separate application from Panzura’s flagship Cloud NAS platform, which caches frequently used primary data on-site and moves the rest to the cloud.

Last summer, Panzura launched a new series of cloud storage controllers with more capacity and the ability to expand to handle multiple workloads. The new 5000 hybrid cloud storage controllers replace Panzura’s previous 4000 series.

The Series E round brings Panzura’s total funding to around $90 million. The investment was led by Matrix Partners and joined by Meritech Capital Partners, Opus Capital, Chevron, Western Digital and an undisclosed strategic investor.


January 11, 2017  7:28 AM

Kaminario secures $75 million funding round

Carol Sliwa

All-flash array pioneer Kaminario kicked off 2017 with a cash infusion of $75 million to accelerate global expansion and fuel product support for non-volatile memory express (NVMe) technologies.

Kaminario’s fifth funding round increased its overall total to $218 million since the company launched in 2008. Waterwood Group, a private equity firm based in Hong Kong, led the latest financing effort. Additional investors included Sequoia, Pitango, Lazarus, Silicon Valley Bank and Globespan Capital Partners. Kaminario’s most recent previous funding came in January 2015.

Founder and CEO Dani Golan said Kaminario has more than doubled revenue in each of the last four years. Global expansion and go-to-market efforts will focus on eastern and western Europe, Asia Pacific and the Middle East.

Kaminario will concentrate on incorporating NVMe technologies into the company’s K2 all-flash array. Kaminario currently uses SATA-based 3D TLC NAND flash drives. NVMe-based PCI Express (PCIe) solid-state drives (SSDs) can lower latency and boost performance, and NVMe over Fabrics (NVMe-oF) can extend the benefits across a network.

Golan said Kaminario does not support NVMe SSDs yet because “the price is too high.” He added that the NVMe-oF technology “is not mature enough to run in mission-critical and business-critical environments.”

A handful of new companies are starting to ship products with NVMe drives, but Golan said Kaminario’s NVMe support will probably wait until 2018.

“The ecosystem is not there yet,” he said.

Golan said startups that currently support NVMe use drives directly attached to servers. But, with a mature array platform on the market, Kaminario needs “to drive a full storage software stack over NVMe Fabrics,” he said.

“The big gain is [going to be with] NVMe over Fabrics, because NVMe drives are just media. That’s not interesting. The interesting part is NVMe over Fabrics and NVMe shelves,” Golan said.

Kaminario’s architecture allows customers to add controllers or shelves in any combination, scaling compute separately from storage, Golan said.


December 28, 2016  8:28 AM

Survey says: Business disaster recovery plans need work

Paul Crocetti

While companies may be confident in their disaster recovery strategy, DR planning and testing still have a ways to go, according to results of a survey by data protection vendor Zetta.

Among 403 IT professionals, 88% said they were somewhat or very confident in their disaster recovery. But while 96% said they have some type of DR, 40% responded that their organization lacks a formally documented DR plan. For those with a plan, 40% said they test it once a year, while 28% rarely or never test it.

Similar to Zetta’s findings, a recent TechTarget survey found that companies are generally confident in their business disaster recovery plans. Also similarly, 65% test their DR plan just once a year or less, according to the TechTarget survey.

“Companies need to be more rigorous around how they develop their DR plans,” Zetta CEO Mike Grossman said. That’s especially important given that more than half of the companies in the survey experienced a downtime event in the last five years.

It’s not enough for business disaster recovery plans to ask, “What happens if a hurricane hits?” According to the survey, the most common type of downtime event for an organization in the last five years was a power outage. For those that had a downtime event in that time period, nearly 75% said their organization suffered a power outage. Only 20% experienced a natural disaster in the last five years. A hardware error was the second most common response, with 53%. Both a human error and a virus/malware attack registered at close to 35%.

When people think about disaster recovery, they often think of catastrophic events. “In reality, that’s not what causes the biggest impacts day to day,” Grossman said.

Several outages made news in 2016, including incidents at Delta Air Lines and Southwest. With proper DR testing, organizations can limit the impact of major problems like those outages.

Grossman recommends testing business disaster recovery plans a minimum of once per quarter and preferably once per month.

“Unless you have a continuous process, including testing, you’re not really protected,” Grossman said.

But “not all testing is created equal,” he warned. The more rigorous and real-life, the better.

A lot of companies like to think they’re protected but they’re not, Grossman said.

According to the “State of Disaster Recovery” survey, 55% changed their DR strategy after a downtime event. It’s a positive that companies are paying more attention to the issue but a negative that they underestimate the risk, Grossman said.

For those that made changes following their last downtime event, 55% of organizations changed their DR approach, 55% added DR technology and 39% increased their DR investment. Almost one in four respondents said they increased DR testing.

As IT gets more complex, companies like Zetta need to make DR easier to manage, Grossman said. But how do you take the complexity out of something that’s complicated?

The cloud helps. Zetta is “cloud-first,” Grossman said, with backup in the cloud and failover to the cloud. And security, which has traditionally been a challenge in the cloud, is getting better.

Ninety percent of IT professionals who are using the cloud in their disaster recovery strategy said they are confident in their DR, according to the survey. Seventy-four percent of organizations using only on-premises DR said they are confident in their plans.

“It’s simpler,” Grossman said of the cloud. “It provides better protection.”


December 27, 2016  4:46 PM

Using multiple BDR products doesn’t always translate into faster data recovery

Sonia Lelii

Companies spend a lot of money on disaster recovery solutions, but that doesn’t translate into faster data recovery, according to a survey conducted by Quorum titled the “State of Disaster Recovery.”

The report, which surveyed 250 CIOs, CTOs and IT vice presidents, found that 80% of the companies surveyed claimed it takes more than an hour to recover from a server failure, and 26% said it takes more than two hours for data recovery. Only 19% said it took less than an hour to recover. Seventy-two percent consider the speed of backup and data recovery “critical.”

“All those backup and disaster recovery (BDR) products aren’t making their recovery any faster,” the report claimed. “While speed is essential for continuity and security… a staggering 80 percent of respondents need more than an hour to recover from a server failure. And it gets worse: more than a quarter need more than two hours.”

Sixty-four percent use more than three different disaster recovery products, with 26% using more than five and fewer than 40% using between one and three.

Moreover, a majority of the respondents said they wanted a method to simplify the management of all the BDR products they are using. Ninety percent of the respondents want to consolidate their disaster recovery solutions into one dashboard.

The report shows that the movement to the cloud has grown. Seventy-five percent of the survey respondents are using cloud-based disaster recovery solutions, while 36% use a hybrid model mixing on-premises and cloud DR. Thirty-nine percent use Disaster Recovery as a Service (DRaaS).

Eighty-nine percent have plans for more cloud-based disaster recovery solutions, with 5% stating they have no further plans and 6% saying they “don’t know.”

Disaster recovery products are growing in importance as concerns about security increase. Seventy-seven percent said they have used their disaster recovery solutions after a security threat event occurred. Fifty-three percent of respondents are more worried about security threats than about hardware failure, backup disk corruption or a natural disaster.

“Natural disasters crashing in on a data center, an employee error or a hardware failure can all pose immense problems for an organization,” the report stated. “But a skilled and willful attack can cripple a brand for years and could cost a literal fortune. Ransomware attacks particularly depend on a team’s inability to recover quickly.”

Companies are diligent about testing at production level with their current disaster recovery products. Eighty-eight percent of the respondents said they can achieve production-level testing with their current DR.

