Ctera Networks CEO Liran Eshel said his cloud file system company became cash flow positive this year, but it grabbed $30 million in new funding to grow as part of a booming market.
Ctera Networks raised $30 million in Series D growth equity funding to expand its global sales and delivery organization, particularly in Singapore and the rest of Southeast Asia, and to continue development of its enterprise file services technology. The latest financing round boosted the startup’s overall total to $100 million since 2008.
Ctera sells enterprise file software designed to cache active data on premises and shift colder data, in compressed and encrypted form, to object storage located in private and public clouds. In addition to translating data from file-to-object format, the software offers additional capabilities such as authentication, orchestration, synchronization and sharing.
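The tiering flow described above can be sketched roughly. This is an illustrative toy only, not Ctera's implementation: a real gateway would use real encryption (such as AES-GCM) and a real object-store API, both of which are stubbed out here.

```python
import hashlib
import zlib

# Illustrative toy only -- not Ctera's implementation. The XOR "cipher" and
# the in-memory dict stand in for real encryption and a real object store.

object_store = {}  # stand-in for a private or public cloud object store


def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Placeholder 'encryption' (XOR); involutive, so it also decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


def tier_to_object_storage(file_bytes: bytes, key: bytes) -> str:
    """Compress and encrypt cold file data, store it under an object key."""
    blob = xor_cipher(zlib.compress(file_bytes), key)
    object_key = hashlib.sha256(file_bytes).hexdigest()  # file-to-object mapping
    object_store[object_key] = blob
    return object_key


def recall_from_object_storage(object_key: str, key: bytes) -> bytes:
    """Reverse the tiering: fetch, decrypt, decompress."""
    return zlib.decompress(xor_cipher(object_store[object_key], key))


cold_file = b"rarely accessed project data " * 200
obj_key = tier_to_object_storage(cold_file, key=b"secret")
assert recall_from_object_storage(obj_key, key=b"secret") == cold_file
```

The gateway's cache of active data would sit in front of this path; only data that has gone cold is pushed to the object tier.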
Eshel said profitability is not Ctera’s top priority now. Neither is an IPO, although Eshel said “it’s definitely something we’re looking at.”
“We are investing significantly and will continue to invest in order to get more high growth and reach more customers,” Eshel said. “We could have just remained cash flow positive and be happy with where we are. But we think there’s much more in this market, and there’s much more land grabbing to be done. That’s why we will need to invest.”
Ctera customers have the option to use their own hardware or buy cloud gateway appliances that package the software. Ctera Networks introduced more powerful new HC Series Edge Filers on Dell and Hewlett Packard Enterprise (HPE) servers last summer.
“We are able to cover additional use cases and workloads that were traditionally solved by NAS systems. Now you could replace them with a more powerful cloud gateway,” Eshel said, claiming the new HC Series Edge Filers are doing well.
Eshel said Ctera Networks generally sells its software or gateways as part of deals with other infrastructure providers. He said the company often works with vendors such as Cisco Systems, Dell EMC, HPE and IBM.
“The bigger part of our business today comes from these infrastructure providers while we go to the market with complete solutions,” Eshel said. Ctera also has strategic reselling agreements with HPE and IBM.
Ctera Networks claims to have more than doubled its enterprise software subscription revenue during the last year. The company sells to cloud providers and enterprises, and its software is currently deployed in more than 200 private clouds, according to Eshel. Some of Ctera’s largest customers include McDonald’s, WPP, and the U.S. Department of Defense.
Eshel said the new funding would finance Ctera’s ongoing work to connect hyper-converged systems to a cloud file system. Ctera’s research and development arm is based in Israel, and the company’s sales headquarters is in New York.
Ctera’s competition in the cloud gateway space includes Nasuni and Panzura, but all three vendors have expanded their product lines with capabilities beyond mere file-to-object protocol translation.
Israel-based Red Dot Capital Partners led Ctera’s Series D funding round. Red Dot receives its funding from Temasek Holdings, an investment company owned by the Singapore government. Additional investors included Singtel Innov8, the VC arm of the Singapore-based Singtel Group telecommunications company. Also participating in Ctera’s Series D round were previous investors Benchmark Capital, Bessemer Venture Partners, Cisco, Venrock, Vintage Investment Partners and Viola Group.
Other recent funding rounds in the cloud market include $94 million for file and object storage vendor Cloudian, $75 million for cloud file sharing and content collaboration specialist Egnyte, $68 million for public cloud storage provider Wasabi Technologies, and $60 million for hybrid cloud computing and data management startup Datrium.
Quest backup has vaulted into the Office 365 workspace.
NetVault Backup 12.1 includes a plug-in that enables full and incremental backup and recovery of Office 365 Exchange Online mailboxes. Customers can back up to the cloud and on premises. They can restore individual, shared and resource mailboxes. The Office 365 plug-in provides flexible restore options and customers can restore only the data they need.
Quest built the plug-in with the Microsoft Graph API. While other vendors may be using old scripting, Quest is using new technology pushed by Microsoft, said Adrian Moir, senior consultant of product management.
“It allows us to grow across the Microsoft platform a lot faster,” Moir said.
Quest backup customers can restore emails, attachments, contacts and calendars.
Good timing for Office 365 backup
Don McNaughton, vice president of sales for Quest reseller HorizonTek, said many customers are using Office 365 and need backup for the SaaS app. Adding the backup support enables NetVault to remain a single data protection offering for those customers on Office 365. Standout features include the full or incremental backup options, full mailbox recovery and granular recovery, he said.
“So the timing was good,” McNaughton said.
Customers “want everything done in one place,” Moir said. That protection includes cloud and on-premises workloads, as well as hybrid approaches.
The Quest backup update builds on what the vendor launched with its NetVault 12.0 release, which aimed for more enterprise adoption. Moir said he expects Quest to add more technology focused on Office 365.
Competition includes some vendors purely focused on SaaS backup and others that incorporate it as part of an overall data protection platform.
“It’s a crowded market. Trying to differentiate is never easy,” Moir said, adding that he feels the Quest backup product’s flexibility, API incorporation, scalability and ease of use are standouts.
Beyond backup for Office 365
The NetVault Backup update also provides a multi-tenant architecture for managed service providers. In addition, an update to its VMware plug-in features vSphere 6.7 support.
McNaughton said HorizonTek is still analyzing the potential benefits of the other updates to 12.1 beyond the Office 365 backup.
McNaughton’s company has been a Quest partner since 2010. HorizonTek has been selling NetVault for about 20 years, predating the platform’s move to Quest. Quest acquired NetVault from BakBone in 2010.
“After all this time, I’m still very happy introducing it to my customers,” McNaughton said. “NetVault has done a great job keeping up as technologies come out.”
Quest backup is on top of major trends in the industry, he said, including cloud integration and keeping everything under a single pane of glass.
McNaughton said he also likes how well NetVault integrates with Quest’s new QoreStor software-defined product as well as other secondary storage platforms.
Quest claims thousands of NetVault customers.
What are high availability applications if they’re not highly available?
According to a report released this month by SIOS, in partnership with ActualTech Media, one-quarter of respondents say their high availability applications fail every month. Only 5% said they never suffer an availability failure.
“An organization’s highly available applications are generally the ones that ensure that a business remains in operation. Such systems can range from order-taking systems to CRM databases to anything that keeps employees, customers and partners working with you,” the report said. “… The news is mixed when it comes to how well HA applications are supported.”
The report, “The State of Application High Availability,” gathered responses from 390 IT professionals in the United States and focused on tier-1 mission-critical applications, including Oracle, Microsoft SQL Server and SAP HANA.
Twenty-six percent said their availability service fails at least once a month.
“This is a difficult statistic to grasp, as it would seem that there’s a fundamental flaw somewhere that needs to be corrected,” the report said. “Fortunately, not everyone is faring this badly.”
Among the remaining respondents in the 95% who suffer failures in high availability applications, 28% said it happens every three to six months, 16% said every six to 12 months and 25% said once per year or less.
High availability requires expertise, said Jerry Melnick, president and CEO of SIOS, a software company that manages and protects business-critical applications. That includes getting the right software to match requirements, getting the system configured correctly, plus discipline and management in how organizations approach the cloud, he said.
Is high availability up in the cloud?
As with many other workloads, organizations are exploring the use of the cloud for high availability applications.
“Modern organizations are embracing the hybrid cloud and making strategic decisions around where to operate critical workloads,” the report said. “But not everyone is keen on moving applications into an off-premises environment.”
Twelve percent of respondents have not moved any high availability applications to the cloud. Twenty-four percent are running more than half of those applications in the cloud.
“Putting all those pieces together … requires a higher set of IT skills,” Melnick said.
Once an organization gets there, though, the cloud can help streamline high availability operations.
“The cloud offers a unique opportunity to cost effectively get to disaster recovery and handle disaster recovery scenarios,” Melnick said.
Sixty percent of organizations that haven’t made the full move to the cloud said they prefer to keep high availability applications on premises where they have more control over the infrastructure.
Melnick said he thinks some of those respondents will eventually move to the cloud.
Datrium’s latest $60 million funding will fuel its hybrid cloud computing and data management product line and business expansion into Europe.
The Series D funding round boosted the Sunnyvale, California-based startup’s overall total to $170 million since 2012. New CEO Tim Page closed the round as he tries to pivot the company from its SMB and midmarket roots to enterprise sales of Datrium DVX.
Former CEO Brian Biles, a Datrium founder who is now chief product officer, said the startup is having a great quarter, and Page has “re-energized a lot of our focus on go-to-market.” Page’s experience includes building out an enterprise sales organization while COO at VCE, the VMware-Cisco-EMC joint venture that produced Vblock converged infrastructure systems.
Datrium DVX first hit the market in early 2016 with server-based flash cache to accelerate data reads and separate data nodes for back end storage. DVX software orchestrates and manages data placement between the Datrium Compute Nodes and Data Nodes and provides storage features such as inline deduplication, compression, snapshots, replication, and encryption.
Separate Compute and Data Nodes
Datrium now pitches its on-premises DVX as converging “tier 1 hyper-converged infrastructure (HCI) with scale-out backup and cloud disaster recovery (DR).” But Datrium DVX is not HCI in the classic sense with virtualization, compute, and storage in the same box. The Datrium DVX system’s Compute Nodes cache active data on the front end, and separate Data Nodes store information on the back end, enabling customers to scale performance and capacity independently.
Customers have the option to buy Datrium Compute Nodes, supply their own servers, or use a combination of the two, so long as they’re equipped with solid-state drives (SSDs) to cache data. The compute nodes support VMware, Red Hat and CentOS virtual machines. Disk- or flash-based Datrium Data Node appliances handle the backend storage.
This year, Datrium added a software-as-a-service Cloud DVX option to back up data in Amazon Web Services (AWS) and CloudShift software for disaster recovery orchestration. The company claimed that more than 30% of its new customers adopted Cloud DVX within the first three months of its availability. Biles said Cloud DVX could lower backup costs in AWS because Datrium globally deduplicates data.
Biles characterized Datrium’s Series D funding as a “standard round” that will help to grow all parts of the company. He said Datrium currently operates in the United States and, to a lesser degree, in Canada and Japan, and the company plans to expand to Europe next year. Datrium has more than 150 employees and more than 200 customers, according to company sources.
“We have good momentum now, but we want to keep feeding that,” Biles said. He offered no estimate on when the company might become cash-flow positive. “A lot depends on the next couple of years of sales acceleration.”
Samsung Catalyst Fund led the latest funding round, with additional backing from Icon Ventures and prior investors NEA and Lightspeed Venture Partners. Icon’s Michael Mullany, a former VP of marketing and products at VMware, joined Datrium’s board of directors.
Dell EMC extended its lead over Nutanix in hyper-converged systems sales in the second quarter, although Nutanix crept ahead of Dell-owned VMware into first when the market is measured by HCI software.
That was the verdict from IDC in its worldwide converged systems tracker report released last night.
IDC measures the hyper-converged infrastructure (HCI) market two ways: by the brand of the systems and by the vendor whose software provides the core hyper-converged capabilities. Dell-owned technologies led both HCI market categories in the first quarter with Nutanix second in both. Nutanix, which moved to a software-centric reporting model earlier this year and is getting out of the hardware business, jumped up in software revenue but lost ground to Dell EMC in systems.
Overall, IDC said the HCI market grew 78% year-over-year to $1.5 billion in the second quarter. Dell EMC’s $419 million in revenue gave it 28.8% share. That represented 95.2% year-over-year growth, outgrowing the market. Nutanix placed second in branded revenue with $275.3 million, up 48.5% year-over-year and basically flat from its first-quarter branded revenue of $273 million. Nutanix had 18.9% of the branded revenue, down from 22.7% a year ago and 22.2% in the first quarter of 2018.
On the software side, Nutanix revenue grew 88.9% year-over-year to $498 million for 34.2% of the HCI market. It slipped past VMware, which grew 97% year-over-year to $496 million and 34.1% share. IDC considers Nutanix and VMware in a statistical tie because they are within one percentage point of each other. VMware’s share jumped from 30.9% in the second quarter of 2017 to 34.1% a year later, but it dropped from 37.2% in the first quarter, while Nutanix’s share rose quarter over quarter to 34.2% to catch VMware. Dell also received part of Nutanix’s revenue gains because the Dell EMC XC platform uses Nutanix software through an OEM deal.
Dell had $79 million in HCI software, putting it in a statistical tie with Cisco ($77 million) and Hewlett Packard Enterprise ($72 million). Dell had 5.4% share, Cisco 5.3% and HPE 4.9%, all within one percentage point of one another. Because Cisco and HPE sell their software on their own servers, they had the same revenue and share in systems as in HCI software. HPE had the largest year-over-year growth of any systems vendor, increasing 119.4%. However, Cisco grew more since the first quarter, jumping from $60 million to $77 million and increasing share from 4.9% to 5.3%. HPE’s share slipped quarter over quarter from 5% to 4.9%, even as its revenue went from $61 million to $72 million.
Hyper-convergence was the only one of the three converged markets that increased year-over-year. The certified reference systems/integrated infrastructure market declined 13.9% year-over-year to $1.3 billion in revenue. Integrated platform sales slipped to $729 million for a 12.5% decline. Dell led the certified reference systems market with $640 million, with No. 2 Cisco/NetApp at $481 million. Oracle led in integrated platforms with $441 million and 60.4% share. The HCI market is also now the largest of the three converged markets for the first time.
NetApp launched its Data Fabric architecture to adapt its storage to manage applications built for the cloud. Container orchestration had largely been a missing piece of Data Fabric, but the vendor has taken a step to try to plug the gap.
NetApp has acquired Seattle-based StackPointCloud for an undisclosed sum. StackPointCloud has developed a Kubernetes-based control plane, Stackpoint.io, to federate trusted clusters and sync persistent storage containers among public cloud providers.
The first fruit of the merger is the NetApp Kubernetes Service, which the vendor claims will allow customers to launch a Kubernetes cluster in three clicks and scale it to hundreds of users. NetApp said it will levy a surcharge of 20% of the overall compute cost for the cluster to cover deployment, maintenance and upgrades. That equates to about $200 on $1,000 of overall compute.
The NetApp Kubernetes Service engine will allow customers to deploy containers at scale from a single user interface with underlying NetApp storage, said Anthony Lye, a NetApp senior vice president of cloud data services.
The Cloud Native Computing Foundation took over management of Kubernetes development earlier this year from Google. Docker Inc. popularized container deployments with its Docker Swarm orchestration management. Other open-source container tools include Apache Mesos and Red Hat OpenShift.
NetApp customers will still be able to use their preferred deployment framework, but Lye said Kubernetes is “the clear winner” among container operating systems.
He said Stackpoint completes the work NetApp started with its open source dynamic container-provisioning project, codenamed Trident. NetApp Kubernetes Service is available immediately.
Lye said his internal development teams were using the Stackpoint engine to deploy NetApp storage infrastructure at global cloud data centers run by Amazon Web Services (AWS), Google Cloud Platform (GCP) and Microsoft Azure. In addition to the big three, StackPointCloud supports Digital Ocean and Packet clouds.
“My engineers were telling me this was the best thing they’d ever seen, plus the market was telling us that storage and containers need to go together and (enterprises) are using multiple clouds. Those three reasons led us to make the acquisition,” Lye said.
The DevOps trend has been fueled by container virtualization for writing cloud-native applications with specialized microservices. Linux-based containers also are gaining attention for the ability to “lift and shift” traditional legacy applications to hybrid cloud environments. Unlike a virtual machine, a container does not require a hosted copy of a full operating system.
Designed on Kubernetes Storage Classes, NetApp Trident was developed to simplify persistent-volume provisioning for ONTAP-based storage, SolidFire and E-Series arrays. Lye said the NetApp Kubernetes Service allows developers to run canary environments to test new applications with mixed nodes of graphics processing units and regular CPUs.
StackPointCloud launched in 2014 with bootstrapped funding. The transaction brings CEO Matt Baldwin to NetApp, along with an undisclosed number of StackPointCloud employees.
Stackpoint integration will start with NetApp HCI hyper-converged infrastructure and FlexPod converged systems. The plan is to extend NetApp Kubernetes Service across all of NetApp’s storage, Lye said. “Our strategy is to continue to build tighter connections between our cloud protocols and containers and extend the control plane from the public clouds down to support NetApp HCI or NetApp’s private clouds.”
Newcomer Wasabi Technologies will try to build up its brand recognition and take on the Big Three public cloud providers after raising $68 million in Series B funding.
“You don’t go up against Microsoft, Google and Amazon with pocket change,” said David Friend, Wasabi CEO and founder.
The Boston-based cloud storage provider launched in 2017 with about $8.5 million in its coffers from a 2016 Series A funding round, when the company was known as BlueArchive. Wasabi added another $10.8 million a few months later through a convertible note that folded into the recently closed $68 million Series B round.
Friend said Wasabi Technologies needs senior sales staff with expertise in vertical markets such as genomics, media and entertainment, and surveillance. The startup also plans to add in-house sales representatives to handle the growing volume of calls. Friend said Wasabi currently employs about 45 and could have 60 to 65 staffers by year’s end.
Friend wants to expand quickly to make it hard for newer competitors to get into the market. He said he took the same approach while CEO at Carbonite, one of the early cloud storage success stories in the consumer and SMB space.
“We’ve got about 3,500 paying customers, so it’s time to really turn up the heat and start building a sales force and the Wasabi brand and all that sort of stuff,” Friend said. “I wanted to start ramping up now that I feel comfortable that the technology is really solid and I could spend a buck on marketing and get four or five bucks back in terms of customer value.”
Wasabi Technologies claims business is growing at a rate of 5% to 10% per week. The startup stores data in its own equipment at colocation facilities in Ashburn, Virginia, and Hillsboro, Oregon. Friend said a new European data center will operate the same way when it opens in the fourth quarter. He declined to disclose the location of the new data center.
Friend said he knows how only about 100 to 200 of Wasabi’s 3,500 customers are using the cloud storage. He said the smallest customer might store a few TB, and the largest has 5 PB to 10 PB, with plans to expand to 50 PB to 100 PB.
Wasabi Technologies claims its cloud storage is cheaper and faster than Amazon’s Simple Storage Service, with no egress charges to extract data. Friend said most large customers with significant IT budgets shift data from huge tape libraries or on-premises storage reaching end of life.
Wasabi’s biggest customers are Hollywood movie studios and other media and entertainment companies, Friend said. With those customers in mind, the startup introduced a Direct Connect option in Hollywood and San Jose, California, to transfer data via a high-speed, dedicated pipe. Wasabi Technologies also offers Ball Transfer Appliances that large customers can fill up and ship back, similar to Amazon’s Snowball option.
Who ponied up for Wasabi Technologies?
Instead of typical venture capital financing, Wasabi’s new funding comes from individual investors and family-run firms such as Forestay Capital, the technology fund of Swiss entrepreneur Ernesto Bertarelli. Friend said 117 investors contributed to Wasabi’s Series B round, including many repeat backers from the Series A funding.
“I started out expecting to raise more like $40 million, but even at $68 [million], I had to turn a whole bunch of people away. People were just flocking in. It was unbelievable,” he said.
New Wasabi investor Bertarelli is worth $8.7 billion, according to Forbes. His family sold biotech firm Serono to Merck for more than $13 billion in 2007, and launched the Waypoint Group five years later. Forestay Capital is Waypoint’s tech fund.
Friend said Wasabi’s investors are engaged and helpful. “Most of them are self-made people who have built businesses of their own, and they’re excited about being part of the company,” he said. “They are opening doors for us at customer sites. In a couple of cases, they’ve helped us recruit some senior people.”
I can’t tell you exactly how many storage products have launched in the past year, but I know it was in the hundreds. I can tell you it was more than I can count. That’s because hardly a day goes by when I don’t receive a briefing, press release, or pitch for a briefing from a storage vendor. And the rest of the TechTarget storage editorial staff can tell you the same.
I do know how many storage products won Storage Magazine/SearchStorage Storage Products of the Year awards last year: 14. That’s not many considering all the new products that ship in a year.
Now it’s time to start judging the hundreds of products that came out in 2018, and pick the 14 or so that deserve the honor this year. If one of those products is yours, you can enter it for consideration by our judging panel, made up of our editors, independent storage analysts and end users. You can find the entry form here. The deadline has been extended to Friday, Sept. 28, 2018, at 5:00 p.m. The form also includes the judging criteria and tips for completing the form if you’re new to this. This is the 17th year we’ve been giving these awards, so many of you have been through this before and know how prestigious they are.
Check out this year’s categories: Storage Arrays, Software-Defined/Cloud Storage, Storage Management Tools, Backup and Disaster Recovery Software/Services and Backup and Disaster Recovery Hardware. That pretty much covers the gamut of storage products, so you should be able to find your category. But you have to be in it to win it, so make sure you don’t miss the deadline and get shut out.
Besides proving there is plenty of growth still going on in the hyper-converged infrastructure market, Nutanix’s last earnings report showed HCI has moved well beyond VDI and niche applications.
Nutanix closed a $20 million deal last quarter, its largest ever. The unnamed Department of Defense agency uses the Nutanix platform “to power combat edge clouds around the world,” according to Nutanix CEO Dheeraj Pandey. He said the customer will use Nutanix in 15 remote sites.
Then there was a financial services firm that spent $5 million on a Nutanix deal last quarter. The HCI pioneer claimed 23 deals worth more than $1 million in the quarter.
It’s not only Nutanix that’s seeing larger HCI deals. VMware also closed its largest deal for its vSAN HCI platform last quarter – a server refresh across 1,200 retail stores involving Dell EMC VxRail appliances.
“It’s part of the journey of maturation of hyper-convergence,” Pandey said in an interview after the earnings call. “We’re solving big problems for the enterprise. And there are different approaches. The approach we take is, we don’t bundle anything, we don’t have the luxury to bundle things with larger deals that are beyond just hyper-converged. For us, it’s all product and quality and customer service, and how we handle the highest-end workload.”
Nutanix is now nine months into its shift to a software-centric business model. It still sells branded appliances but has flexible licensing options. Customers can choose to spread their software licenses across Nutanix appliances or servers purchased through OEM deals with Dell EMC, Lenovo or other hardware partners.
The change to a software model hasn’t changed Nutanix’s pattern of growing revenues while losing money. Nutanix reported $304 million in revenue last quarter, up 20% year-over-year. It lost $87.4 million compared to $66.1 million in the same quarter a year ago. For the full year, Nutanix generated $1.16 billion in revenue compared to $845.9 million in the previous year. Its full year loss of $297.2 million was less than the $379 million loss the year before.
Nutanix customers now number more than 10,000, including 1,000 added last quarter.
“The biggest change is around consumption,” Pandey said of the move to a software model. “When people have portability, they can take these software licenses and run them on different hardware platforms. Many companies don’t want to buy hardware up front because of Moore’s Law and the commoditization of hardware. They want to buy more software and less hardware up front.”
Nutanix’s consumption model will likely change again over the next year with more cloud-based subscriptions coming. Its Xi Leap – a cloud-based disaster recovery service with one-click failover and failback – is due to be generally available at the end of the year.
Beam, Nutanix’s multi-cloud management dashboard, is available now. And Nutanix in August spent $165 million to acquire Frame, a startup developing desktop-as-a-service that Nutanix intends to make available as part of Xi. Pandey said the goal for making Frame available is early 2019.
So now that Nutanix has $1 billion in annual revenue and projects to hit $3 billion in three years, what’s its timeframe for profitability? Pandey refers to the Nutanix strategy as “measured growth,” funding its spending through free cash flow. Nutanix had $22.7 million in operating cash flow last quarter.
“Right now we believe Nutanix customers are willing to pay us more, and it’s important that we create the foundation of a customer base that continues to buy from us,” he said. “We’re going to spend a lot of money this year, but we don’t want to touch the bank. How do we do this at cash-flow break-even rather than touching the bank? That’s one guardrail that helps balance the two paradoxes.”
Quad-level cell (QLC) flash technology is leading to high-capacity flash devices with applicability in archival storage or content repositories that are primarily read access. Currently at 96 layers, QLC devices are different from the flash devices used in primary storage systems. The opportunity for this technology in archival storage and content repositories has led to the term “archival flash.”
Toshiba, Samsung and Intel will most likely be the first vendors delivering archival flash devices, and other flash device vendors may follow with their own. Archival flash devices are projected to have capacities in the 100 TB range – far higher than almost any SSDs on the market now. While Nimbus Data already ships a 100 TB SSD, it does not use QLC and may not achieve the much lower cost of the newer technology.
The large capacity is warranted given the primary use of storing archival data. Some in the industry remember the issues in primary storage when disks increased in capacity, raising concerns about rebuild times. Those familiar with the protection against device failures used in object storage systems — the most likely systems to use archival flash — will understand how the circumstances are different. Object storage systems protect against device failures through information dispersal algorithms and erasure codes within a node. Node failures are protected with N+1 node protection, with data distributed across those nodes. A site is protected with either replication or geographic dispersion, adding another level of protection to the immutable data stored in object storage systems.
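The raw-capacity overhead implied by these layered protection schemes can be estimated with simple arithmetic. The device and node counts below are illustrative assumptions, not figures from any specific product:

```python
# Back-of-envelope overhead for layered object-storage protection.
# Device and node counts are illustrative, not from any specific product.

def erasure_overhead(total_devices: int, tolerated_failures: int) -> float:
    """Raw-to-usable capacity ratio when data plus parity segments are
    dispersed across total_devices, surviving tolerated_failures losses."""
    data_devices = total_devices - tolerated_failures
    return total_devices / data_devices

device_level = erasure_overhead(12, 3)   # e.g. 12 devices tolerating 3 failures
node_level = erasure_overhead(6, 1)      # e.g. N+1 protection across 6 nodes
site_level = 2.0                         # e.g. 2-way replication between sites

total_overhead = device_level * node_level * site_level
print(f"raw TB needed per usable TB: {total_overhead:.2f}")  # 3.2x here
```

Because a failed archival flash device is just one of many dispersal segments, a single device failure triggers a partial rebuild of erasure-coded segments rather than the whole-device rebuild that worried administrators of large RAID disks.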
Archival flash economics
Unfortunately, some still view the economics of storing data one-dimensionally, focusing on data-at-rest acquisition costs. Data-at-rest economics effectively says that all data is equal and that low cost is the only value a user gets from acquiring a system, ignoring the many different attributes of different device types. The value current SSDs (not using QLC technology) have already demonstrated through performance should have discredited that view. For archival flash devices, the main value for customers will be the longevity of the QLC devices: with an expected lifespan of 12 to 15 years, longevity changes the value calculation.
Data has gravity, meaning that data stored for a period of time tends to persist and incurs costs to move to new systems. Archival flash longevity will invalidate one-dimensional acquisition-cost economics and require evaluation of TCO that includes technology lifespan.
Longevity requires the storage system to have the capability to disaggregate the storage devices in an enclosure independent of the controller function. Disaggregation allows the data to remain in place on the longer lifespan devices while the controller is updated based on the technology change rate for processor, adapters, etc. in the controller. Many vendors have already accomplished this and feature the capability with “Evergreen” programs. This allows the economics of the different technology change rates to be optimized.
To show the effect of archival flash longevity in an archival storage system, let's look at an economic model of the total cost of ownership over a 15-year span. The model includes a wide variety of parameters and what-if scenarios:
- Initial capacity
- Annual capacity growth rate
- Lifespan of archival flash device and lifespan of standard storage device
- Capacity of archival flash device and capacity of standard storage device
- Costs and requirements used in TCO calculations for administration, deployment, space, power, racks and cables
- Average yearly price declines for archival flash device and standard storage device
- Average yearly price decline for controller/node
- Number of devices attached to controller/node
- Hardware and software average discounts
- Controller/node costs
- Software costs – capex and software capacity-based license charges
- Initial per GB price of archival flash and standard storage devices
- Device-level protection: the number of devices data is distributed across and the number of protection segments (for example, an archival flash default of 12 devices tolerating three device failures, in addition to the protection of data distributed across N nodes; for standard devices, 8 devices tolerating two failures)
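The parameters above feed a TCO calculation. The following is a minimal, hypothetical sketch of that kind of model, limited to device and controller/node purchases; the prices, lifespans and device counts are assumed values for illustration, not the article's actual model inputs.

```python
# Minimal, hypothetical TCO sketch: repurchase devices each time their
# lifespan expires over the horizon, then add controller/node costs.
# Prices, lifespans and counts below are illustrative assumptions only.

def device_tco(horizon_yrs: int, device_life_yrs: int, devices_per_node: int,
               device_price: float, node_price: float, device_count: int) -> float:
    # Ceiling division: initial purchase plus replacements over the horizon
    purchase_cycles = -(-horizon_yrs // device_life_yrs)
    device_cost = device_count * device_price * purchase_cycles
    nodes = -(-device_count // devices_per_node)
    node_cost = nodes * node_price   # controller refresh cycles omitted
    return device_cost + node_cost

# Standard devices: 5-year life. Archival flash: 15-year life, 50% premium.
standard = device_tco(15, 5, 60, 1000, 20000, 600)
archival = device_tco(15, 15, 60, 1500, 20000, 600)
print(standard, archival)
```

Even with a 50% acquisition premium, the longer lifespan eliminates two full replacement purchases over the 15-year horizon, which is the dominant effect the article's model captures.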
The first diagram shows the comparison with the capacity and longevity values, which are shown on the right side. In this diagram, the acquisition cost of archival flash is 50% higher than that of the standard devices, which in this case are large-capacity disks. For simplicity, price declines are set to zero in this example. An object storage system capable of retiring a node and automatically redistributing data is assumed, which avoids the migration costs that would arise in other storage systems. Data reduction was enabled for both types of devices with equivalent effectiveness. Discounts were set at 30% of list price. Premium support services were not added to the costs. Technology transitions occur in the final year of the lifespan, accounting for overlap between the technologies.
It should be clear that the longevity of archival flash has a dramatic effect compared with the shorter lifespan of standard devices. This also illustrates the inadequacy of a one-dimensional, data-at-rest cost-of-storage measure. Savings come from purchasing fewer replacement devices over the 15-year span, thanks to archival flash's longer lifespan, and from its higher capacity requiring fewer nodes as the system scales. Increases in device capacity over time were not included in the projection because of the limited history with archival flash devices.
The following diagram shows where the economic savings are realized:
The chart shows significant cost differences, with the largest contribution coming from the avoidance of acquiring replacement devices.
Looking at the “what-ifs,” the following chart includes changes in some parameters. Specifically, the device prices were set to be equal and the lifespan of archival flash was set to 15 years.
Another "what-if" adds the price decline experienced over the last five years. Those results show another 8% cost improvement for archival flash over standard devices.
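The compounding effect of a price decline on replacement purchases can be sketched as follows. The 10% annual decline rate and the prices are assumed values for illustration, and this isolates only the device-repurchase term; the article's full model also applies declines to controller/node costs, so its net result differs from this simplified view.

```python
# Hypothetical sketch of the price-decline "what-if": replacement devices
# bought in later years cost less under a compound annual price decline.
# The decline rate and prices are illustrative assumptions.

def replacement_cost(unit_price: float, units: int, life_yrs: int,
                     horizon_yrs: int, annual_decline: float) -> float:
    total = 0.0
    year = 0
    while year < horizon_yrs:
        # Each purchase cycle buys at that year's declined price
        total += units * unit_price * (1 - annual_decline) ** year
        year += life_yrs
    return total

# Standard devices repurchased every 5 years vs. a single archival flash buy
std = replacement_cost(1000, 600, 5, 15, 0.10)
arc = replacement_cost(1500, 600, 15, 15, 0.10)
print(round(std), round(arc))
```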
Given that a large amount of data is retained "forever" in archive and content repository systems, the economics of those large capacity requirements will work in archival flash's favor.
It will be difficult to overcome initial objections, such as one-dimensional data-at-rest economics and a limited understanding of data protection for large-capacity devices in object storage systems. But those objections will only delay the rise of archival flash for storing and managing data.