HyTrust has revived DataGravity’s data-aware storage technology, six months after scooping up the startup’s assets.
Cloud infrastructure specialist HyTrust today launched the CloudAdvisor automated framework to detect, classify and protect compliance-sensitive data in multiple clouds and software-defined data centers. The CloudAdvisor virtual appliance integrates DataGravity’s data analytics and data tagging.
HyTrust CloudAdvisor continuously monitors content and notifies users of suspicious activities. It scans and classifies data stored in multiple clouds and physical data centers based on the data’s value. Automated policy enforcement provides data protection.
Target customers include data centers with a high volume of unstructured data storage and firms in regulated industries.
HyTrust CEO Eric Chiu said DataGravity’s automated data classification and data discovery provide key ingredients in CloudAdvisor.
“We took the DataGravity technology and repackaged it as a software appliance, starting with things like virtual machines and backup copies. Our goal is to define, detect and defend unstructured data, which is proliferating to the point that companies really don’t know where it exists,” Chiu said.
Chiu said one HyTrust customer lacked specific insight into the type of data in its files.
“We did a scan of their VMs and found about 10,000 credit-card numbers and Social Security numbers in a public share,” he said. “The customer had no idea they were just sitting there. I think that’s going to be par for the course” as companies store increasingly vast data sets.
Using the cloud as a storage tier took some getting used to, but companies have decided to use multiple hybrid clouds to reduce storage costs and boost disaster recovery. The multi-cloud approach is expected to gain further acceptance in response to the General Data Protection Regulation (GDPR), which takes effect in European Union countries in May 2018.
Ex-EqualLogic executives Paula Long and John Joseph launched DataGravity in 2014. The company emerged from stealth with data-aware Discovery Series hybrid arrays that combined metadata analytics with advanced data discovery, indexing and governance.
Long and Joseph dropped the hardware arrays in favor of a software-defined storage business model in 2015, but the move came too late. By that time, artificial intelligence, cognitive computing and machine learning were already taking hold. HyTrust swooped in to pick up DataGravity technology in an asset sale in June.
Flash storage vendor Tintri has had an inauspicious start to life as a public company. On Wednesday, during only its second earnings call, there were hints that Tintri may already be on life support.
Tintri added 80 new customer logos last quarter, but that won’t allay fears that the newly minted public company will struggle to outrun mounting losses. Despite recent restructuring, including cutting about 80 jobs last quarter, Tintri is rapidly burning through cash and will need supplemental capital to stay afloat.
CEO Ken Klein said all options are on the table, including entertaining potential buyers. “We are exploring strategic options available to the company to deliver value to shareholders, including the sale of the company” and further optimizations to hasten positive cash flow, Klein said.
Investors predictably pulled back on the news, with Tintri shares tumbling 13% at Thursday’s open to $4.50.
Tintri’s adjusted loss per share of 79 cents was within its guidance, but quarter-over-quarter revenue fell 6% to $31.77 million. Tintri had forecast $36.5 million in revenue for the quarter. Tintri’s year-to-date revenue of $97 million is up 15% year over year.
Tintri carries an accumulated deficit of $439.2 million, up $101 million from last quarter. The revenue miss was blamed on “delayed and reduced purchases” of Tintri cloud storage gear. Tintri booked, but did not close, a number of deals ranging from $400,000 to $1 million.
“Our third quarter revenue was impacted by continued headwinds we have faced since our IPO in June,” Klein said.
Those deals remain in the pipeline, with “cautious” revenue guidance next quarter in the range of $25 million to $27 million, Tintri CFO Ian Halifax said.
Tintri cloud arrays package flash storage and web-scale software to design private cloud infrastructure that mimics the performance of the public cloud. Tintri Enterprise Cloud Series 6000 arrays, rolled out in September, are gaining traction with users, Klein said. The EC 6000 accounted for half of the vendor’s $22.8 million in product revenue last quarter. The vendor also rolled out the T1000 platform for remote branch office and departmental deployments.
Software sales accounted for 16% of total product revenue ($3.64 million), up 2%. The increase was fueled in part by the launch of Tintri Cloud Connector, an S3-compatible integration that allows customers to tie local Tintri storage to Amazon Web Services and IBM Cloud Object Storage. Tintri also includes predictive analytics for sizing compute and storage.
What will Tintri do next?
At the time of its initial public offering in June, Tintri claimed its revenue soared 150% between 2015 and 2017, although debt and expenses kept the balance sheet in the red. Tintri’s IPO also came at a time when investors started to sour on storage infrastructure equities.
Lukewarm interest forced Tintri to revise its share price from $11 to $7 per share. It wound up netting proceeds of about $60 million, slightly less than half its initial $109 million target.
The Tintri cloud hardware model faces a changing competitive landscape. More and more hardware-centric vendors are shifting to software-defined storage services. Klein said Tintri plans to stay the course for now.
“The feedback from our customers, particularly the use cases that we are addressing, is that we have the right model,” Klein said.
Toshiba plans to deliver storage node software designed to extend the high performance and low latency benefits of NVMe-based solid-state drives over a network fabric.
Toshiba Memory America’s non-volatile memory express over fabrics (NVMe-oF) target software is due in the first quarter of 2018. The University of New Hampshire Interoperability Lab recently certified the unnamed Toshiba storage software with RDMA over Converged Ethernet (RoCE) network interface cards (NICs) in the storage node.
The Toshiba software runs in a target storage server and virtualizes the NVMe-based PCI Express (PCIe) solid-state drives (SSDs) from the box into a single pool, according to Joel Dedrick, a system architect for NVMe-oF at Toshiba. He said the Toshiba storage node software would enable popular datacenter orchestration systems to provision the NVMe SSDs and manage the drives, their wear and various other functions.
“Our goal here is to make the world a better place for NVMe,” he said.
Vendors such as E8 Storage, Mangstor and Vexata bundle software on NVMe hardware, but few standalone software applications for NVMe exist. Toshiba’s software competition will include startup Excelero, whose NVMesh product also virtualizes and pools NVMe-based SSDs and aims to enable applications to access them at high speed and low latency over a network fabric. Excelero cites its patented Remote Direct Drive Access (RDDA) technology as a performance differentiator. Dedrick said Toshiba’s storage node software takes a different architectural approach, and the company’s expertise in managing physical flash will also set its product apart.
Impetus for Toshiba storage software
Dedrick said Toshiba decided to get into the storage software business because it views NVMe over Fabrics as “an enormously important development” that will spur data centers to convert from traditional SCSI to latency-lowering NVMe to transfer data between clients and storage devices.
SCSI was designed with hard disk drives (HDDs) in mind, and newer NVMe targets faster solid-state storage, using a more streamlined command set to process I/O requests. NVMe requires less than half the number of CPU instructions that SCSI does with SAS drives. NVMe also supports 64,000 commands in a single message queue and as many as 65,535 I/O queues, whereas a SAS device typically supports a maximum of 256 commands in one queue.
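To make the queuing gap concrete, the figures quoted above can be turned into a back-of-envelope calculation (the numbers come straight from the text; actual limits vary by device and spec revision):

```python
# Back-of-envelope comparison of the queuing limits cited above.
SAS_QUEUES, SAS_CMDS_PER_QUEUE = 1, 256            # one queue, 256 commands
NVME_QUEUES, NVME_CMDS_PER_QUEUE = 65_535, 64_000  # figures from the text

sas_total = SAS_QUEUES * SAS_CMDS_PER_QUEUE
nvme_total = NVME_QUEUES * NVME_CMDS_PER_QUEUE

print(f"SAS device:  {sas_total:,} outstanding commands")
print(f"NVMe device: {nvme_total:,} outstanding commands")
print(f"NVMe supports roughly {nvme_total // sas_total:,}x more queued I/O")
```

The orders-of-magnitude difference in queued I/O, combined with the lighter per-command CPU cost, is why NVMe shines on highly parallel flash media where SCSI was tuned for a single spinning-disk actuator.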
NVMe over Fabrics is designed to extend the latency-lowering, performance-enhancing advantages of NVMe over a network fabric. Toshiba recommends 100 Gigabit Ethernet for deployments with multiple NVMe-based SSDs in the storage node, although no minimum speed is required.
“The larger the number of drives that you aggregate in a single place, the bigger the network pipe is going to want to be going in and out of there,” Dedrick said.
The new Toshiba storage node software will target enterprises with high-performance databases, according to Dedrick.
Dedrick said Toshiba plans to test and certify servers that run its storage node software, and potential enterprise customers could request it through their OEMs. He said Toshiba also plans to license the software to ODM/OEM partners, which could sell it as a value-added option for their standard offerings.
Toshiba did not disclose pricing for the storage node software.
Western Digital and Toshiba settled their NAND dispute with an out-of-court settlement that allows Toshiba’s sale of its NAND chip business while WD keeps its stake in the companies’ joint venture.
Western Digital on Tuesday agreed to drop pending arbitration and litigation, and to allow Toshiba’s proposed $17.7 billion sale of its chip unit to a consortium led by Bain Capital. Western Digital had moved to legally block that deal, saying it violated terms of the joint venture agreement. WD came into the joint venture through its acquisition of SanDisk, Toshiba’s original partner in the 17-year joint venture.
The Western Digital-Toshiba dispute began early this year when Japan-based Toshiba said it would sell its memory business to stave off bankruptcy. Western Digital claimed any sale of the chip business required its consent, but Toshiba rebuffed WD’s attempt to buy the business. Instead, it reached an agreement with the Bain group that excluded Western Digital.
The Western Digital-Toshiba settlement extends the companies’ NAND joint venture investments to Dec. 31, 2027 and beyond. Western Digital can participate in all future JV investments, including a new wafer fab facility in Japan. The joint agreements had been scheduled to begin expiring in 2021.
The deal allows Toshiba to sell Toshiba Memory Corp. (TMC) to the Bain group, which includes Western Digital competitors Seagate, Kingston Technology and SK Hynix, along with TMC customers Dell and Apple.
The Western Digital-Toshiba agreement also allows the consortium to eventually take TMC into the public markets.
Western Digital CEO Steve Milligan said his company never wanted to go through the courts to settle the dispute.
“As the process moved down the litigation path, that was not our preferred path,” Milligan said on a Tuesday night conference call. “We’re very pleased to be able to resolve this and put it behind us.”
Milligan called the agreement “a win-win for all parties” because it allows the TMC deal to go through while guaranteeing Western Digital access to NAND supply for at least a decade.
Milligan said the Western Digital-Toshiba joint venture is necessary for his company to manufacture NAND flash, a key asset as flash becomes a large part of enterprise and consumer storage. “We have no plans to manufacture NAND flash, nor do we have the ability to manufacture outside of the joint agreement,” he said.
Western Digital executives said WD will have access to intellectual property from the JV, but members of the Bain consortium will not share that IP.
Neither Toshiba nor Bain took part in the conference call with Western Digital executives, but both released statements confirming the agreement. Yasuo Naruke, senior executive vice president of Toshiba Corp. and CEO of TMC, said he expects the Bain deal to close by the end of March 2018.
“With the concerns about litigation and arbitration removed, we look forward to renewing our collaboration with Western Digital, and accelerating TMC’s growth to meet growing global demand for flash memory,” Naruke said.
Growth in Dell EMC all-flash storage is one of the bright spots in what remains a tough slog for legacy array vendors.
Dell Technologies on Thursday reported consolidated revenue of $19.6 billion for the last quarter. That’s up 2% on a quarterly basis and 21% year over year. Gross margin was $6.4 billion, or 32.2% of revenue. Operating losses widened to $530 million, largely a result of debt related to the Dell-EMC acquisition in September 2016.
Dell EMC storage is part of the Dell Infrastructure Solutions Group (ISG), which also encompasses servers and networking. ISG generated $7.5 billion in revenue last quarter. Servers and networking sales jumped 32% year over year to $3.9 billion.
Storage was a different story. In a continuing industry trend, Dell acknowledged that demand for traditional networked storage continues to drop. Storage revenue of $3.7 billion remained flat. Increased demand for Dell EMC all-flash storage and hyper-converged infrastructure was offset by a softening market for legacy systems, Dell Technologies CFO Tom Sweet said.
Sweet said Dell EMC all-flash and Isilon scale-out NAS sales increased by double digits. HCI saw triple-digit growth, spearheaded by VxRail adoption. He declined to provide specific revenue breakdowns for those product categories.
Dell EMC achieved “better pricing and better mix in storage, even (though) volume wasn’t quite where we wanted it,” Sweet said.
This was the first Dell EMC earnings call to include a full quarter of results for EMC and VMware products. In February, VMware moved to Dell Technologies’ fiscal calendar after previously reporting results on a calendar-quarter basis. VMware contributed $1.9 billion in revenue and $638 million in operating income.
Dell closed the quarter with $18 billion of cash and equivalents on the books, including the proceeds of VMware’s recent debt issuance. Dell debt maturities of about $3 billion start becoming due in April.
Dell has paid down $9.7 billion of the gross debt it used to acquire EMC. That includes $1.7 billion in debt satisfaction during the third quarter.
Sweet said flexible consumption models are expected to account for an increasing percentage of Dell revenue. Consumption-based services realize recurring revenue incrementally across the length of a multiyear customer contract.
“These tend to have better profitability, but it does change the timing and pattern of when (revenues) are recognized,” Sweet said.
Jeff Clarke, a Dell vice chairman of products and operations, said Dell EMC midrange storage is receiving increased attention as a way to shore up sagging storage growth. The focus involves reshaping sales incentives and expanding product features of Dell EMC all-flash and hybrid Unity, SC Series and PS Series arrays.
“We increased our go-to-market capacity by adding storage specialists and are ensuring our sales compensation plan spurs the appropriate behavior to drive long-term strength in our results,” Clarke said.
Dell EMC all-flash SC Series array models launched in November. Due out soon are software enhancements for midrange Dell EMC Unity arrays, including the addition of inline data deduplication, synchronous file replication and in-place storage controller upgrades.
Dell also has launched an Internet of Things division to coordinate development of products and services across its business lines.
Santa Clara, Calif. — By now, most people realize this is the age of convergence in IT – especially as it applies to storage. We have converged infrastructure mixing storage, compute and networking; hyper-converged infrastructure integrating compute, storage and virtualization in one box; and converged secondary storage putting backup, DR, archiving, test/dev, copy and cloud data on one platform.
Now startup Hedvig is pushing a new kind of convergence – primary and secondary data together in one distributed platform.
Hedvig designed its software-defined storage as scale-out, multi-cloud primary storage. But the startup finds early customers sometimes use it as a backup data deduplication target running on x86 servers. Hedvig CEO Avinash Lakshman said Hedvig software can drive primary storage that requires no separate backup.
“One capability we can bring to the table naturally is, if Hedvig is chosen as a primary storage platform, then you don’t need to take backups at all,” Lakshman said during a press briefing at Hedvig’s headquarters this week. “You can take scheduled snapshots in your primary environment, and go back to any snapshot from your primary environment. Think of it as converged where you have primary and secondary storage built in. We also provide the capability of moving snapshots to the public cloud as they age.”
Old-school backup admins will tell you this violates a cardinal rule of data protection. “It used to be, ‘Thou shalt not put backup data on the same box as primary,’” said Eric Carter, Hedvig senior director of marketing. “But distributed systems are no longer the same box.”
Hedvig also positions itself as a good fit for dev/ops because it includes self-service APIs to program and integrate applications.
Hedvig claims its software can run any workload on any infrastructure and over any cloud.
“We have been multi-cloud even before that term was coined,” Lakshman said.
Hedvig software forms a universal data plane supporting block, file and object storage. It installs on x86 nodes and cloud instances and forms a scale-out storage cluster over multiple sites and private and public clouds. Its storage proxy presents virtual disks at the application layer, routes I/O to the storage cluster, enables local flash-optimized services, and includes APIs for plug-ins and direct application integration.
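The proxy-to-cluster routing described above can be illustrated with a toy sketch. This is a generic hash-based placement scheme under assumed node names and replica counts, not Hedvig’s actual algorithm:

```python
import hashlib

# Toy sketch of how a storage proxy might route virtual-disk I/O to
# cluster nodes by hashing block addresses. This is a generic
# illustration, not Hedvig's actual placement algorithm; node names
# and replica count are hypothetical.
NODES = ["node-a", "node-b", "node-c"]  # hypothetical cluster members
REPLICAS = 2                            # copies kept per block

def route(vdisk: str, block: int, nodes=NODES, replicas=REPLICAS):
    """Pick `replicas` distinct nodes to hold one block of a virtual disk."""
    key = f"{vdisk}:{block}".encode()
    start = int.from_bytes(hashlib.sha256(key).digest()[:8], "big") % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

# The same block always routes to the same nodes, so reads can find the data.
assert route("vm01-disk0", 42) == route("vm01-disk0", 42)
print(route("vm01-disk0", 42))
```

The key property a real proxy needs is the one the assertion checks: routing must be deterministic, so any proxy instance at any site computes the same placement for the same block.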
Lakshman, who created Cassandra and helped develop Amazon DynamoDB, founded Hedvig in 2012. Hedvig 1.0 software started shipping in 2015, and Lakshman said the company still has fewer than 50 customers. However, it has added a few large customers since joining the Hewlett Packard Enterprise Complete program last June, a few months after HPE participated in a $21.5 million funding round.
Lakshman said the HPE reseller deal “has been a shot in the arm for us. They walk us to the table for deals we never could be part of, with Fortune 100 companies. We have at least half a dozen of those customers now. All those companies are pivoting toward hybrid and multi-cloud.”
Nutanix is hard-selling the value of its software.
While the hyper-converged vendor stopped short of re-naming itself Nutanix Software, CEO Dheeraj Pandey used its earnings call last week to emphasize that Nutanix software drives its products. And it’s not just what the software does for customers; Pandey focused on how Nutanix is building its accounting and sales practices around being a software company.
Pandey went back to Nutanix’s roots, explaining why it started selling its software on integrated appliances and how it has slowly moved off that stance.
Nutanix will still sell its appliances, but will recognize revenue only from software and continue its push to sell that software on any x86 vendor’s hardware. That model is working, judging from last quarter’s results. Nutanix revenue of $276 million last quarter increased 46% over last year and beat expectations. The vendor also cut its losses to $65.1 million from $140 million a year ago.
But the Nutanix software transformation dominated the discussion from Pandey and CFO Duston Williams. While it is mostly an accounting move designed to make Nutanix look more attractive to investors, it also accelerates the company’s recent strategy of partnering closely with all major x86 server vendors.
Pandey said when Nutanix came to market in late 2011, the IT world was not ready for a software-only delivery model. That meant Nutanix software needed to ship on a pre-built appliance. It chose Supermicro as its hardware partner.
“Software-defined anything was too abstract for our customers to put their arms around,” Pandey said. “Our only route to market was to take full control of our own destiny. The Nutanix appliance was born.”
Nutanix eventually found OEM partners, beginning with Dell in 2014 and extending to Lenovo and IBM. It also forged partnerships with resellers to install Nutanix software on servers from Cisco and Hewlett Packard Enterprise so customers can run Nutanix software on any major x86 platform.
“We now have a meaningful competitive advantage in being the most portable operating system built for the enterprise cloud,” Pandey said.
Nutanix will change the way it recognizes revenue, emphasizing software licenses instead of the hardware to raise margins that investors watch closely.
Pandey said 10% of its revenue last quarter came through OEM deals, and 30% of its HCI nodes run on OEM hardware.
CFO Williams added: “Today, we are a software company, more specifically an enterprise cloud operating systems company that up until now has delivered a majority of its software via its own branded appliance and recognize the associated hardware revenue.”
Williams said Nutanix is in a years-long transition, and “will emerge as exactly what it is, an enterprise cloud operating systems company.”
The goal is to do that in a way that there will be “absolutely zero change from what the customer sees,” Williams said. “So that process from a customer standpoint is left intact and exactly the same as it has been in the past.”
The Nutanix software-centric approach resembles the VMware business model. VMware vSAN is Nutanix’s primary hyper-converged software competition, as well as a frequent partner. Nutanix also sells an AHV hypervisor that competes with VMware’s flagship ESXi hypervisor. VMware’s success has always depended on its relationship with all major server hardware vendors. That is still the case, even now that it is owned by one of those server vendors, Dell.
On VMware’s earnings call Thursday, VMware CEO Pat Gelsinger reported vSAN license bookings grew over 150% year-over-year last quarter.
VMware’s parent is also one of Nutanix’s biggest hardware partners. Dell EMC sells its XC hyper-converged appliance, based on PowerEdge servers, through an OEM deal that predates the Dell-EMC merger. Dell EMC also sells VxRail HCI appliances running vSAN on PowerEdge servers.
Bolstered by lead investor SoftBank Corp., Nexenta secured $20 million in financing this week to fund its new cloud portfolio.
The Santa Clara, California-based storage software vendor next month plans to unveil its NexentaCloud for Amazon Web Services (AWS), according to CEO Tarkan Maner. NexentaStor CloudNAS and NexentaFusion CloudManagement will be the first two Nexenta options available through the AWS Marketplace.
Customers using NexentaStor on premises would have the option to back up data to a NexentaStor instance running on Amazon’s Elastic Compute Cloud (EC2), with connections to Amazon’s Simple Storage Service (S3), Maner said. They would be able to use NexentaFusion for management and analytics across both environments, he noted.
“This is more like a DR and backup service for Nexenta customers to move their backups to a cloud environment for certain data types,” Maner said. “This is not necessarily a full-blown cloud product running by itself on the cloud. It’s a hybrid cloud technology.”
The company also plans a NexentaCloud option through AWS Marketplace that will not require an on-premises Nexenta deployment.
Nexenta, SoftBank reach strategic agreement
Maner said SoftBank Cloud is due to roll out NexentaCloud for Japanese customers in 2018, and support for additional clouds will follow.
Tokyo-based SoftBank also struck a strategic distribution and go-to-market agreement with Nexenta. SoftBank plans to distribute Nexenta software in Japan, and its affiliated companies gain preferential purchasing rights to Nexenta software running on hardware from Dell, Lenovo, Supermicro and other vendors. SoftBank Cloud also plans to use Nexenta software in partnership with new OEMs such as Huawei and Foxconn, according to Nexenta.
“That’s a big story for us because Japan is the second-largest storage market,” Maner said. “SoftBank has a very big customer base as a telco and mobile service provider, and they have 1,300 affiliated companies from Sprint to Yahoo Japan to Arm Holdings.”
SoftBank will have access to Nexenta’s full product portfolio, which includes NexentaStor scale-up block and file storage, NexentaEdge scale-out object storage, and NexentaFusion reporting, monitoring, storage analytics and orchestration through a single pane of glass.
Nexenta has raised about $100 million since 2005. Other investors in the new round included Javelin Venture Partners, SV Booth Investments, SAB Capital, Lake Trail Capital, TRB Equity, and Nexenta CEO Maner.
Maner said he expects Nexenta to reach profitability next year. He said 2017 was a good year for the company, with 80% year-over-year growth in revenue. He said the revenue is split roughly 50-50 between new customers and renewals from older customers.
Nexenta has about 3,000 customers with a collective storage capacity of about two exabytes in production, and this year went over $100 million in cumulative bookings since its 2005 founding, Maner said.
“We believe hardware-centric storage companies are going to struggle because everything’s going server-based and software-only,” he said. “We see a huge opportunity in that space, and this investment round obviously will help us achieve that goal.”
Barracuda Networks became the latest public technology vendor to go private when equity firm Thoma Bravo agreed to pay $1.6 billion this week to acquire the security and data protection vendor.
The Barracuda Networks acquisition is expected to close in February, before the end of Barracuda Networks’ fiscal year. The security and data protection company said it does not expect any executive changes, and all current Barracuda employees will remain as part of the private company.
Barracuda Networks offers backup and email security for midmarket companies. It offers Barracuda Essentials for cloud-based security, archiving and backup for Microsoft Office 365 and Exchange. It also provides the Barracuda Message Archiver for compliance and e-discovery, along with a Cloud Archiving service.
The Barracuda backup and recovery appliances handle physical-to-physical systems, physical-to-virtual systems and virtual-to-virtual configurations. The cloud-to-cloud backup service protects Microsoft Office 365, SharePoint Online and OneDrive. Data is deduplicated and compressed to reduce backup windows before it is stored in the Barracuda Cloud.
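The dedupe-then-compress pipeline described above is a standard backup technique. A minimal stdlib sketch of the general idea (not Barracuda’s implementation) looks like this:

```python
import hashlib
import zlib

def dedupe_and_compress(blocks, store=None):
    """Toy backup pipeline: drop duplicate blocks by content hash, then
    zlib-compress the unique ones before they would be shipped off-site.
    A sketch of the general technique, not any vendor's implementation."""
    store = {} if store is None else store   # content hash -> compressed bytes
    manifest = []                            # ordered hashes to rebuild the stream
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:              # only new content is stored/sent
            store[digest] = zlib.compress(block)
        manifest.append(digest)
    return manifest, store

def restore(manifest, store):
    """Reassemble the original stream from the manifest and block store."""
    return b"".join(zlib.decompress(store[d]) for d in manifest)

blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096]   # one duplicate block
manifest, store = dedupe_and_compress(blocks)
assert restore(manifest, store) == b"".join(blocks)
print(f"{len(blocks)} blocks in, {len(store)} unique blocks stored")
```

Because only unique, compressed blocks travel over the wire, the backup window shrinks with the redundancy of the data, which is why the technique pays off most on repetitive datasets like VM images and mail stores.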
Barracuda Networks declined a request to comment on the acquisition.
Dave Russell, vice president and distinguished analyst at Gartner, said he is skeptical of private equity buyouts, but Thoma Bravo has a good track record with its technology acquisitions.
“My angle on this is whenever a private equity firm (buys a vendor), is it really good news?” Russell said. “That is not usually what happens. They tend to milk (the company). If I look for a silver lining in this, it’s if Barracuda can get out of the distraction of quarterly reports, they can replicate what SonicWall and Blue Coat got from Thoma Bravo, which is investment.”
“Thoma Bravo has a history of investing and increasing R&D over time.”
This past summer, Barracuda Networks took its first step in delivering disaster recovery in the public cloud by allowing customers to replicate to an Amazon Simple Storage Service (S3) cloud. Customers can replicate data from an on-premises physical or virtual appliance to an Amazon S3 bucket. Barracuda previously offered appliances with built-in software replication to Barracuda Cloud Storage, to another Barracuda Backup appliance in an off-site location, or to external disk or tape.
The support of the Amazon S3 cloud gives customers the choice of storing data off-site in the proprietary Barracuda cloud or using the Amazon Web Services (AWS) public cloud as a data protection target.
Barracuda Networks’ Essentials for Email Security, which has backup and email archiving embedded in it, has shown promising growth in the past few quarters. Rod Mathews, senior vice president and general manager of the data protection business at Barracuda, called the S3 support Barracuda’s first step with a major public cloud offering. The AWS offering began in North America, and the company planned to expand to Europe later this year.
“Down the road we will be able to do multi-cloud for Microsoft Azure,” he said in August.
In October, Barracuda claimed to support more than 85 PB of storage in its cloud, and said it helps customers with approximately 3 million backup jobs and more than 12,000 recoveries per month.
Barracuda acquired managed service provider Intronis for $65 million in October 2015.
Barracuda last month reported revenue of $94.3 million for its last full quarter, up from $87.9 million in the same quarter a year ago. Most of that revenue, $76 million, came from subscriptions. Appliance revenue slipped from $21 million a year ago to $18.3 million, and its profit of $1.6 million dropped from $2.4 million the previous year.
Russell said Barracuda Networks’ previous success was attributed to being early to market with appliances, owning its supply chain and “almost concierge-like service.”
“Three or four years ago, that was unique,” he said. “Now you have many kinds of appliances and vendors. The market has gotten more crowded. Barracuda has been having on and off financial challenges for a year now. They needed a shot in the arm.”
Too much data is weighing heavily on data recovery plans, according to a recent survey.
The problem has scaled beyond what many organizations can handle, said Douglas Brockett, president of backup and recovery software vendor StorageCraft, which commissioned the survey.
“People are choking on the volume of data” that’s expected to be backed up, Brockett said. “I think we’re seeing a breaking point.”
The survey of more than 500 IT decision-makers, defined by StorageCraft as management-level employees and above, found that 43% are struggling with data growth and believe it is going to get worse. Fifty-one percent are not confident that their organizations can perform instant data recovery after a failure.
Difficulties with data recovery plans hit organizations of all sizes, from small businesses to large enterprises, Brockett said.
- 58% of companies with revenues under $1 million are not confident of instant recovery
- 50% of companies with revenues between $1 million and $500 million are not confident of instant recovery
- 51% of companies with revenues of more than $500 million are not confident of instant recovery
In addition, 51% of larger organizations said they would benefit from more frequent data backup but their infrastructure doesn’t allow it, according to the survey. And among businesses with less than 500 employees, 65% are not confident they can get their systems back up in minutes.
Certain business areas feel the data recovery burn more than others. Top examples include healthcare, with its data retention and mission-critical data needs, and financial services, with its time-sensitive data, Brockett said.
Specifically, 56% of healthcare and 54% of finance IT decision makers in the survey said they would benefit from more frequent backups, but the scale of data growth and backup technology infrastructure doesn’t allow it.
“You see a lot of anxiety in these types of industries,” Brockett said.
While the cloud is a popular place for off-site copies, data recovery plans suffer because the process to retrieve data from a provider like Amazon Web Services can be expensive and slow, Brockett said.
Organizations should be asking themselves, “Can I spin up a virtual machine in the cloud?” In other words, if you have cloud backup, you should have cloud recovery, in the form of disaster recovery as a service.
“Make sure your off-site backup is bootable,” Brockett said.
To improve data recovery plans in the face of exponential data growth, Brockett suggests organizations use integrated, scalable data management and protection platforms. And organizations should be smart with data, using tiered storage and data reduction techniques.
“Data analysis is a critical part of getting a strong data protection infrastructure,” Brockett said.