Storage Soup

June 10, 2016  4:15 PM

HDS storage ‘freezing’ really a focus on future

Rodney Brown
Hitachi Data Systems

There has been some noise in the storage space lately, based on the English translation of a Japanese IT website that said Hitachi Data Systems was “freezing” investment in its high-end storage products. Lost in this declaration is that HDS gets it that arrays alone aren’t the future of enterprise storage.

On June 1, the site IT Pro Nikkei published a report based on an HDS briefing that said the vendor would be “freezing the investment in the high-end model of the storage business.” That led media sites and competitors to speculate that perhaps HDS was going to let its high-end storage business languish and die.

HDS has been quick to deny it will exit the high-end storage market. But the strategy outlined in the IT Pro Nikkei story is not new or surprising. High-end enterprise disk arrays are far from a growth market in this age of flash, cloud, hyper-convergence and software-defined storage.

HDS CTO Hu Yoshida revealed the vendor’s new strategy in a January interview on our SearchStorage site with senior writer Carol Sliwa.

In that interview, Yoshida said, right out of the gate, “The infrastructure market is not going to be a growth market. Instead of trying to compete on infrastructure, we’re going to have to compete on application enablement.” Translation: HDS’ new Lumada line for handling data from IoT sources, based on the Pentaho IoT data analytics technology HDS acquired in 2015, will play a major role in the company’s future.

Yoshida specifically calls out IoT as a vital component of the future of HDS. “We have an overall corporate strategy with Hitachi, called Social Innovation, where we are moving toward the Internet of Things (IoT), trying to build smart cities and provide more insights into data centers, telco [and] automotive,” he said.

IoT was also a big topic at the HDS Connect partner conference a year ago.

On his own blog, Hu’s Place, Yoshida this week clarified what “freezing” investment in high-end storage means for HDS. He wrote that hardware investments in the flagship high-end HDS hard disk drive arrays are no longer necessary due to the ability to use standard Intel processors and flash for performance, with storage features running in software. HDS will shift research and development from its Virtual Storage Platform (VSP) hardware to Storage Virtualization Operating System (SVOS), flash and storage automation.

“There is no need to build separate hardware for midrange and enterprise customers,” Yoshida wrote. “They all have access to enterprise functionality and services like virtualization of external storage systems for consolidation and migration, high availability with Global-Active Device, and geo-replication with Universal Replicator.”

In April HDS added to SVOS what it apparently considers to be the last hardware puzzle piece, building NAS functionality (or, as Yoshida described it, “embedded file support in our block storage”) into its G series of hybrid flash arrays.

So the upshot is, HDS will invest in making SVOS and VSP perform better for specific applications, like IoT data storage and analytics. That sounds much less like the sky is falling on HDS, and a lot more like a strategic investment in a future in which application integration, rather than storage features, becomes the major differentiator between storage vendors.

June 9, 2016  9:36 AM

Iguazio promises AWS-like storage in the data center

Dave Raffo

Newcomer Iguazio is the latest software startup that will try to deliver the Holy Grail of storage — the ability to provision and manage on-premises capacity the same way as in Amazon Web Services (AWS). Iguazio calls its software a virtualized data services architecture, similar language to what copy data management vendor Actifio used when it launched in 2010 and others have adopted. There does seem to be some copy data management in Iguazio’s software, along with features that help developers and application owners provision and manage storage. Iguazio gave a peek under the hood this week but is still months away from a shipping product.

“When people go to Amazon, they don’t know anything about the infrastructure,” founder and CTO Yaron Haviv said. “You go through APIs and define policies. Enterprise storage today is legacy storage – you go to IT, say ‘Provision this stuff for me, this is the performance I need, go run backups against my data’ and so on. We said, let’s take Amazon features and extend it to enterprise storage. It’s all self-service. Most of the work is for the application users and developers. They create policies and provisions, just like they’re using Amazon.”
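The “policies, not tickets” model Haviv describes can be sketched in a few lines: the application owner declares what the workload needs, and the platform translates that into a provisioning request. Everything below — names, fields, the tiering rule — is hypothetical illustration, not Iguazio’s actual API.

```python
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    """A declarative policy an application owner might submit (illustrative)."""
    app: str
    capacity_gb: int
    iops: int
    backup: bool

def provision(policy: StoragePolicy) -> dict:
    """Translate a declarative policy into a provisioning request,
    the way a self-service platform would -- no IT ticket involved."""
    # Hypothetical rule: high-IOPS workloads land on flash.
    tier = "flash" if policy.iops > 10_000 else "hybrid"
    return {
        "app": policy.app,
        "tier": tier,
        "capacity_gb": policy.capacity_gb,
        "backup_schedule": "daily" if policy.backup else None,
    }

req = provision(StoragePolicy(app="orders-db", capacity_gb=500,
                              iops=50_000, backup=True))
print(req["tier"])  # flash
```

The point of the sketch is the inversion of responsibility: the developer states intent, and placement and data protection follow from policy.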

Haviv said the software will be sold either as software-only or on an appliance, and he expects cloud providers to be a target customer, as well as enterprises looking to build private clouds. He gave no target ship date but said Iguazio plans to launch by the end of 2016.

Here is what Iguazio promises its software will do:

  • consolidate data into a high-volume, high-velocity, real-time data repository that virtualizes and transforms data on the fly, exposes it as streams, messages, files, objects or data records consistently, and stores it on different memory or storage tiers;
  • seamlessly accelerate popular application frameworks including Spark, Hadoop, ELK, or Docker containers;
  • offer enterprises a 10x-to-100x improvement in time-to-insights at lower costs;
  • leverage deep data insights to provide best-in-class data security, a critical need for data sharing among users and business units.

Its goals include the ability to enable stateless application containers in a cloud-type approach, provide access to data from multiple applications and users, and to simplify deployment and management. Haviv said it will run on flash, NVM, in the cloud, and on block and file storage.

If you’re wondering, the vendor’s name comes from the cascading Iguazu Falls on the border of Argentina and Brazil – signifying data cascading into a single stream. The Israel-based startup was founded in 2014 and has $15 million in funding. Its other founders include CEO Asaf Somekh, formerly of Mellanox and Voltaire, and COO Yaron Segev, who founded all-flash array pioneer XtremIO and sold it to EMC.

June 3, 2016  12:40 PM

IDC: Q1 HPE storage sales soared, rivals tanked

Dave Raffo

Hewlett Packard Enterprise (HPE) bucked the trend of storage revenue declines in the first quarter of 2016, according to IDC’s quarterly enterprise storage systems tracker. HPE was the only member of the top six vendors to increase its storage revenue over the first quarter of the previous year.

Industry-wide, total external storage (SAN and NAS) revenue declined 3.7 percent to $5.4 billion for the quarter. Overall storage revenue, including servers and server-based storage, declined seven percent to $8.2 billion.

HPE’s external storage revenue increased 4.6 percent to $535.7 million, ranking third behind EMC and NetApp. HPE increased market share from 9.1 percent in the first quarter of 2015 to 9.9 percent in the first quarter of 2016.

EMC revenue declined 11.8% to $1.349 billion as it awaits its $67 billion acquisition by Dell. EMC’s market share fell from 27.2% to 24.9% over the year.

NetApp had a bigger fall, dropping 15.6% to $645.5 million and a share decline from 13.6% to 11.9%.

HPE was followed by Hitachi in fourth, IBM in fifth and Dell in sixth. All three declined in revenue year-over-year, although Hitachi gained share from 9.0 percent to 9.2 percent. All other vendors combined for $1.59 billion, a 7.8% increase from the previous year. “Others” market share grew from 26.2% to 29.3%.

In the total storage market, HPE grew 11% and jumped ahead of EMC into first with $1.42 billion. EMC, which generates all its revenue from external storage, fell 11.8% in the overall market to a 16.4% market share compared to HPE’s 17.3% share. No. 3 Dell, No. 4 NetApp, No. 5 Hitachi and No. 6 IBM all declined in overall storage revenue. Other vendors increased 5.3% but storage systems sales by original design manufacturers (ODMs) selling directly to hyperscale data center customers slipped 39.9%.

IDC put all-flash arrays at $794.8 million in the quarter, up 87.4% for the year. Hybrid flash arrays accounted for $2.2 billion and 26.5% of the overall storage market share.

HPE continued its momentum into the second quarter, according to recent earnings reports from the major vendors. HPE reported two percent year-over-year growth while EMC, NetApp and IBM all said their storage revenue declined. HPE CEO Meg Whitman said 3PAR all-flash revenue nearly doubled from last year.

“We estimate that we gained market share in the external disk for the tenth consecutive quarter and expect storage to gain shares throughout the remainder of the year on the strength of the 3PAR portfolio and new logo wins as we take advantage of the uncertainties surrounding the Dell-EMC merger,” Whitman said on HPE’s May 24 earnings call.

Top 5 Vendors, Worldwide External Enterprise Storage Systems Market, First Quarter of 2016 (Revenues are in Millions)
Vendor 1Q16 Revenue 1Q16 Market Share 1Q15 Revenue 1Q15 Market Share 1Q16/1Q15 Revenue Growth
1. EMC $1,349.4 24.9% $1,530.4 27.2% -11.8%
2. NetApp $645.5 11.9% $764.9 13.6% -15.6%
T3. HPE* $535.7 9.9% $512.1 9.1% 4.6%
T3. Hitachi* $497.1 9.2% $506.9 9.0% -2.0%
T5. IBM* $429.0 7.9% $446.2 7.9% -3.8%
T5. Dell* $376.2 6.9% $395.1 7.0% -4.8%
Others $1,590.6 29.3% $1,475.8 26.2% 7.8%
All Vendors $5,423.6 100.0% $5,631.4 100.0% -3.7%
Source: IDC Worldwide Quarterly Enterprise Storage Systems Tracker, June 3, 2016


June 3, 2016  7:34 AM

Qumulo pockets $32.5M to fund data-aware storage

Dave Raffo

Qumulo, the scale-out data-aware NAS startup founded by Isilon veterans, today added $32.5 million in funding to expand its sales operation. Qumulo’s C funding round brings its total funding to $100 million.

The vendor launched its Qumulo Core storage platform in March 2015. It added a major upgrade last April with support for 10 TB drives, erasure coding and advanced performance analytics.

“We’ve had a great year,” Qumulo CEO Peter Godman said. “We’ve been launched for about a year and we have more than 60 customers who continue to deploy bigger and bigger systems.”

Godman said Qumulo’s goal is to generate three times as much revenue over the next year, so a significant amount of the funding will go towards sales and marketing. He said field sales operations will nearly double over the next year. The start-up has around 135 employees.

Godman said about half of Qumulo’s sales come from customers adding systems to their original purchases. A lot of repeat buys come from media and entertainment customers, whose capacity needs expand rapidly due to new higher-definition video formats.

On the product development front, he said Qumulo will continue to shoot out upgrades to its Core software every two weeks.

The daunting part for Qumulo is that its competition comes from two of the largest storage vendors, EMC and NetApp. In EMC’s case, Qumulo usually goes head-to-head with the Isilon platform that Qumulo’s founders helped develop. Godman said Qumulo competes frequently with Isilon in use cases such as animated movies where high performance is required. He said Quantum StorNext is another competitor in media but “90-plus percent of the time, we compete with NetApp and EMC.”

Allen &amp; Company, Top Tier Capital Partners, and Tyche Partners invested in the C round, joining previous investors Kleiner Perkins Caufield &amp; Byers (KPCB), Madrona Venture Group, Highland Capital Partners, and Valhalla Partners.

June 2, 2016  11:43 AM

IT resilience vs. disaster recovery: What’s better for your business?

Paul Crocetti

DR may be dying. The term DR, that is, not the actual process of disaster recovery. There is a move in the industry to replace the phrase with “IT resilience.”

At last week’s ZertoCON business continuity conference, analysts from Gartner and Forrester both threw their support behind using the term resilience over disaster recovery.

Stephanie Balaouras, vice president and research director at Forrester, said she dislikes the term “disaster recovery” because it tends to focus on catastrophic events, which can cause management to think it’s too expensive and rare.

Organizations need to move beyond disaster recovery and embrace resiliency, which is more concentrated on continuous availability and continuous improvement, Balaouras said. Customers don’t care what happened to cause an outage, they just want “always-on.”

Balaouras outlined three actions to improve IT resilience.

  • Calculate the cost of downtime. Fifty-seven percent of companies have told Forrester that they haven’t calculated that expense. And downtime is more than lost revenue — it’s loss of employee productivity and morale, as well as lost business opportunities. Organizations should calculate revenue and productivity losses plus customer impact, and present several loss scenarios.
  • Measure availability end-to-end. Availability is not about individual components, it’s the whole service, Balaouras said. When making your business case, take everything into account. As an example, Balaouras noted that the recent New York Stock Exchange outage was human error.
  • Match business objectives to the right mix of technologies. Balaouras suggests planning an evolution to active-active sites, which takes some time. Businesses should maximize virtualization investments for resiliency. And rethink failover and replication options, as the technologies are not “one size fits all.”
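Balaouras’s first action item — calculating downtime cost — is simple arithmetic once the inputs are gathered. A minimal sketch, with every dollar figure below purely illustrative:

```python
def downtime_cost(hours, revenue_per_hour, employees,
                  loaded_hourly_rate, productivity_loss,
                  customer_impact=0.0):
    """Estimate the cost of an outage per Forrester's framing:
    lost revenue, plus lost employee productivity, plus direct
    customer impact (SLA credits, churn). All inputs illustrative."""
    revenue_loss = hours * revenue_per_hour
    productivity = hours * employees * loaded_hourly_rate * productivity_loss
    return revenue_loss + productivity + customer_impact

# One example loss scenario: a 4-hour outage
cost = downtime_cost(hours=4, revenue_per_hour=50_000, employees=200,
                     loaded_hourly_rate=60, productivity_loss=0.5,
                     customer_impact=25_000)
print(cost)  # 249000.0
```

Presenting several such scenarios (short blip, half-day outage, full-day outage) is exactly the “several loss scenarios” step Balaouras recommends.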

In his keynote address, John Morency, a research vice president at Gartner, said that IT resilience is becoming the new disaster recovery.

Most Gartner clients don’t use the term “disaster recovery” anymore — they want to focus more on IT resiliency, Morency said.

Newer technologies, such as replication, continuous data protection and snapshotting, are helping organizations enhance resiliency and proactively avoid recovery situations. While recovery time objectives used to be six to 18 hours for many, they’ve dropped to four hours or below, Morency said.

In her presentation, Balaouras also stressed the importance of time. With disaster recovery, downtime is measured in hours to days, while with IT resiliency, downtime is measured in minutes to hours.

Investments in disaster recovery are seen as expensive insurance policies and there isn’t enough emphasis in DR on the everyday events that cause the majority of business disruptions, Balaouras said. IT resilience investments, on the other hand, are driven by the need to serve customers and stay competitive, and resiliency is focused on all likely business disruptions.

Which term do you prefer?

May 27, 2016  10:02 AM

Pure Storage’s impressive sales growth diluted by losses

Dave Raffo
Pure Storage

Pure Storage is finding that it’s expensive to grow sales in the all-flash storage market these days.

While Pure increased revenue last quarter by 89% year-over-year, it was unable to reduce its losses. Pure reported $139.9 million in sales, which was better than analysts expected and impressive growth in the current storage market. Still, the all-flash vendor lost $40.8 million, the second largest quarterly loss in its history.

For now, Pure is focusing on expanding sales and the launch of its second all-flash platform, the FlashBlade for unstructured data. The spike in sales from Pure’s FlashArray SAN system last quarter prompted Pure CEO Scott Dietzen to declare victory despite the income drop.

“A $600 million revenue run rate, and being able to grow the business at 89%, I would say is unprecedented in storage industry history,” Dietzen said Wednesday on Pure’s earnings call. “We’re playing now as one of the top 10 storage providers in the world. And we are profoundly differentiated and distancing ourselves from the rest of the pack in terms of the market we are serving and growth.”

“We have aspirations to be number one in a $24 billion [all-flash] market. But we are supremely happy with the quarter we just turned in and we are on track to hit all of our targets.”

Although overall storage sales are declining throughout the industry, all-flash systems are picking up. Pure faces tough competition now from larger and smaller vendors.

EMC claims its XtremIO all-flash platform alone generated more than $1 billion in revenue last year and expects to approach $2 billion this year. Hewlett Packard Enterprise this week reported its 3PAR StorServ all-flash sales last quarter nearly doubled from last year, and HPE CEO Meg Whitman claimed all-flash revenue was higher than Pure’s. NetApp CEO George Kurian said his company is on pace for about $700 million in all-flash revenue over the next year and Nimble Storage said it added 55 all-flash customers in the first six weeks of selling its new platform.

The competition is prompting Pure to accelerate spending. Its sales and marketing expenses were $75.6 million last quarter, compared to $44.9 million a year ago. That led to a loss of $40.8 million last quarter, slightly more than the $40.2 million loss a year ago and significantly more than the $22.3 million loss in the fourth quarter of 2016, when sales and marketing spending was $62.5 million.

Dietzen said Pure added nearly 300 customers in the quarter, bringing its total to more than 1,950. New customers included the World Bank, Bank of New York Mellon, Softbank and the University of Melbourne. He said cloud companies make up about one-quarter of Pure’s business, mainly software-as-a-service providers using FlashArray as their storage platform.

Pure has $607 million in cash to cover losses for a while but the vendor will eventually need to make money. CFO Tim Riitters said this year will mark “the turning point in terms of absolute losses.” By that, he doesn’t mean the losses will stop, but they will not accelerate over last year. That would require Pure to keep losses below $143.6 million for the full year, which is hardly approaching profitability.

FlashBlade may help. Dietzen said FlashBlade has dozens of beta customers, with direct availability to selected customers planned for the third quarter this year. The goal is to make FlashBlade generally available by the end of the year.

“We think the product can grow even faster than FlashArray did,” Dietzen said.

May 26, 2016  9:44 AM

NetApp launches Ontap 9 for flash, cloud

Dave Raffo
Cloud storage

With the worst part of its Clustered Data Ontap (CDOT) challenges behind it, NetApp is bringing out the next version of its Ontap storage operating system. Ontap 9 will focus on flash, coming as NetApp supports 15 TB solid-state drives and promises at least 4:1 data reduction through deduplication.

NetApp CEO George Kurian disclosed the upgrade Wednesday during the company’s earnings call. NetApp planned an official Ontap 9 launch next week but released details following Kurian’s comments.

Ontap 9 – “data” has been dropped from the product name – will be generally available June 15. It is the successor to CDOT, with upgrades available to CDOT and 7-Mode customers.

NetApp VP of product marketing Lee Caswell said Ontap 9 will have three types of data reduction. Along with inline compression and deduplication, a new data compaction feature will use additional processing power to write data more efficiently to SSDs. The 15 TB SSDs will first be available on NetApp’s All-Flash FAS arrays.

Caswell said NetApp will give more storage to customers who fail to get 4:1 data reduction.
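A rough sketch of what a 4:1 reduction guarantee means in raw-versus-effective capacity terms. The shelf configuration below is hypothetical, chosen only to illustrate the arithmetic, not a NetApp spec:

```python
def effective_capacity(raw_tb: float, reduction_ratio: float) -> float:
    """Effective (logical) capacity delivered from raw flash at a
    given data-reduction ratio, e.g. 4.0 for a 4:1 promise."""
    return raw_tb * reduction_ratio

# Hypothetical example: a shelf of 24 x 15 TB SSDs at 4:1 reduction
raw = 24 * 15                              # 360 TB raw
print(effective_capacity(raw, 4.0))        # 1440.0 TB effective
```

The guarantee runs the other way, too: if dedupe, compression and compaction deliver less than 4:1, the vendor makes up the raw-capacity shortfall.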

NetApp will also follow in the footsteps of all-flash vendors such as Pure Storage and give free controller upgrades after three years to customers under Premium Maintenance. It will also extend its flash warranty to six years.

The underlying Ontap 9 technology will also be available in Ontap Cloud (formerly Cloud Ontap) and a software-only version called Ontap Select. NetApp claims Ontap 9 can be deployed in 10 minutes. NetApp also promises greater security through triple-parity RAID and improved encryption.

“The next generation of Ontap will simplify customers’ IT transformations to modern data centers and hybrid cloud environments,” Kurian said. “Customers can choose the architecture of their choice – engineered systems, software-defined storage or cloud.”

Ontap Cloud will run in Amazon Web Services and the new version can run as a high availability cluster. Caswell said support will follow for other public clouds. Ontap Select replaces Ontap Edge, with greater hardware support and reduced pricing.

“Ontap was already friendly to flash and we invested early to make sure it can run in the cloud,” Caswell said. “With Ontap 9 you can consolidate platforms. The idea is you can have it all integrated into one system from flash to disk to cloud, and blocks, objects and files all together now.”

NetApp has had problems getting customers to migrate from its 7-Mode Ontap to CDOT. That, along with its lateness to all-flash storage, negatively impacted its sales in recent years. But Kurian said clustered node shipments grew 80% year-over-year last quarter, making up 85% of new sales. However, CDOT still accounts for only 26% of NetApp’s customer base and the upgrades have been largely driven by discounts and promotions.

Those discounts have taken a toll on NetApp’s bottom line. The company lost $8 million in the quarter, as revenue of $1.38 billion for the quarter declined 10.4% and came in below expectations. Its $5.5 billion in revenue for last year dropped nine percent from the previous year.

Kurian said it will be “a year of transition” as NetApp tries to rally around what it calls “strategic solutions” – CDOT, all-flash arrays including SolidFire, E-Series storage, hybrid cloud products and OnCommand Insight management software.

These products made up 53% of NetApp’s revenue over the past year and 61% last quarter. The strategy is similar to that of NetApp’s main rival EMC and other large storage vendors who also push new technologies such as all-flash arrays, cloud and hyper-converged while their older technology declines in deployments.

Kurian did not break out revenue from the all-flash SolidFire platform it acquired in January. But he left no doubt about its strategic importance, especially because SolidFire’s flash arrays are sold mainly to cloud providers.

“All-flash systems are the new SAN configurations,” he said.

NetApp also sells all-flash versions of its E-Series high performance systems and All-flash FAS mainstream storage arrays. NetApp executives said the vendor is on pace for around $700 million in all-flash revenue over the next year, although it is also discounting those products.

NetApp has come a long way in flash since Kurian replaced Tom Georgens as CEO a year ago.

“When I took over as CEO, NetApp was dealing with several internal challenges,” Kurian said Wednesday. “We were late to the all-flash array market, we were not prepared to assist our installed base of customers in migrating to clustered Ontap and we had limited traction in the hybrid cloud. Over the course of the year, we’ve made substantial progress. We have moved into a leadership position in the flash market with a broad portfolio that addresses multiple workload requirements and deployment styles. We regained ground with our channel partners by successfully enabling them to migrate the install base to clustered Ontap. Our data fabric strategy has proven effective in positioning us to win leading edge cloud deployments.”

Considering it lost $8 million and its revenue declined last quarter, NetApp still has a lot of work to do in its transition. It has yet to satisfactorily address hyper-convergence. Kurian said SolidFire with OpenStack can bring the benefits of hyper-converged technology. That would require bringing out a new version of SolidFire – perhaps packaging its Element OS on servers with hypervisors.

May 25, 2016  3:08 PM

Flash accelerates Nimble sales, not profits

Dave Raffo
Nimble Storage

Nimble Storage received a quick jolt from its All Flash arrays last quarter, as the platform drove larger deals and helped bring in bigger customers. The vendor continued to lose money, however, despite a 21% increase in revenue over last year.

Nimble’s revenue of $86.4 million exceeded its previous forecast for the quarter. The vendor still lost $20 million in the quarter, and CEO Suresh Vasudevan would not give an estimate of when Nimble might approach profitability. Nimble ended the quarter with $203 million in cash. Nimble forecasted revenue in the range of $93 million to $96 million this quarter.

Nimble claims it added 55 all-flash array customers in the product’s first six weeks on the market. Vasudevan said all-flash sales made up 12% of Nimble’s array bookings in the quarter. He said 25 all-flash customers were new to Nimble. Overall, Nimble added 580 customers in the quarter.

Vasudevan said the deal size for all-flash arrays was around twice that of Nimble’s hybrid systems with flash and hard disk drives. He said the typical workloads for all-flash systems were virtual desktop storage, databases and other performance-intensive applications.

Vasudevan credited all-flash and Fibre Channel support introduced in 2014 for making Nimble more of an enterprise play. He said Nimble had a record quarter for deals over $250,000, with those deals making up 20% of its total revenue.

“What is interesting for us is that the number of larger enterprise opportunities we’re competing in is substantially higher than in the past,” he said. “The all-flash array has really moved the needle for us and it was a record level of contribution from the enterprise segment for us.”

May 25, 2016  2:13 PM

Violin Memory’s struggles continue

Dave Raffo
flash storage, Violin Memory

Violin Memory had little to show for its turnaround efforts last quarter. The all-flash vendor’s revenue declined and it continued to lose money, prompting the CEO to tell investors he feels their pain.

Violin Tuesday reported $9.7 million in revenue for last quarter, down from $12.1 million the previous year and $10.9 million the previous quarter. Its product revenue was only $4.2 million – a paltry sum considering all-flash arrays have a higher average selling price than hybrids and are in demand now.

Violin lost $22.2 million in the quarter, and is down to $49 million in cash. CEO Kevin DeNuccio admitted the company may have to seek additional equity funding to keep it going long enough to complete its turnaround plan. The revenue forecast for this quarter is in the range of $11 million to $13 million, with losses expected to approach $20 million.

DeNuccio ended his opening remarks on the earnings call by telling investors “me, the management team and the board fully understand the pain investors are feeling with the stock price decline and [the] way we sit today. We are all significant investors personally and continue to believe in the company strategy and equity story going forward. … And we strongly believe that our adjustments and strategy are going to produce a successful outcome [for] our employees, customers and investors.”

Without much customer quantity, DeNuccio tried to stress the quality of Violin’s customers. He claimed five Fortune 100 enterprises and more than a dozen Global 2000 companies are using Violin Flash Storage Platform (FSP) arrays. He said Violin has gained three Fortune 100 customers this year, calling it “the turning point for the company.”

“Our customers, some of the biggest in the world, believe in our technology and our FSP value proposition,” he added. “I know it’s been tough on our shareholders and investors, but we are energized by the opportunity ahead and committed to the turnaround in our direction.”

Investors were not impressed. The stock price opened at 32 cents per share today, down from 37 cents at Tuesday’s close and from $3.30 a year ago.

DeNuccio’s other reasons for optimism include a new version of Violin’s Concerto operating system due this year with cloud integration capabilities, a new OEM deal that will combine Violin arrays with software for virtual desktops, and another OEM deal that has been in the works for months. However, DeNuccio said a research and development relationship he talked about last quarter is on hold.

“The Violin business is stabilizing on a number of fronts,” DeNuccio said. “An important barometer regarding the company’s progress is the number of wins since the launch of Violin’s Flash Storage Platform which incorporates a new operating system on next generation hardware. This product line is showing strong traction despite prevailing headwinds, as evidenced by 60 wins since its launch resulting in an average of one win per week.”

But is that a high number of sales? Nimble Storage Tuesday said it has won 55 customers with its Predictive All Flash Arrays in less than one quarter of shipping the system. Nimble sells smaller systems to smaller companies than Violin, but its sales rate is probably more indicative of the all-flash market than Violin’s.

May 24, 2016  10:45 AM

Datrium says new twist makes DVX run crazy fast

Dave Raffo
flash storage

Datrium has added an “Insane Mode” feature that the startup claims can double host storage performance on the fly.

Datrium took the Insane Mode tag from Tesla’s rapid acceleration technique for its Model S cars. The concept is similar – the goal of Datrium’s Insane Mode is to let customers instantly speed the storage system’s performance by increasing available CPU resources.

Here’s how Datrium DVX works:

The system has two parts. The Diesl (Distributed Execution Shared Logs) file system runs on the host and works with the NetShelf storage array. Diesl allows customers to manage storage like virtual machines through VMware vSphere, and handles features such as compression, deduplication and instant zero-copy clones. It uses up to 8 TB of NVMe flash in the server to boost performance. Customers can use any commercial flash in the host.

NetShelf is the second part, and serves as durable storage. NetShelf has dual controllers – each with NVRAM – for active/passive failover and hard disk drives for capacity. It exports no blocks or files and has no LUNs or volumes to manage. Diesl provides host-based data services. Writes are mirrored before control is returned to a virtual machine. A failed host does not impact fault tolerance or cause data loss. A NetShelf provides 29 TB of usable capacity after RAID 6 overhead.
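The 29 TB usable figure follows from standard RAID 6 arithmetic, which dedicates two drives’ worth of capacity to parity. The drive count and size below are hypothetical, chosen only to show the formula, not Datrium’s actual shelf configuration:

```python
def raid6_usable_tb(drives: int, drive_tb: float) -> float:
    """Usable capacity of a RAID 6 group: two drives' worth of
    space is consumed by the dual parity, leaving (n - 2) for data."""
    assert drives >= 4, "RAID 6 needs at least 4 drives"
    return (drives - 2) * drive_tb

# Hypothetical example: 12 drives of 2.9 TB each
print(raid6_usable_tb(12, 2.9))  # 29.0
```

The same formula explains why RAID 6 overhead shrinks as a percentage when drive counts grow: two parity drives out of 12 is ~17%, out of 24 only ~8%.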

Insane Mode an alternative to QoS

Insane Mode is the third method for improving DVX performance. The others are to add flash to the host or to vMotion a workload to another host with more available flash.

Insane Mode, now part of Diesl Hyperdriver Software, reserves 40% of host cores for DVX instead of the default 20%. Datrium recommends using Insane Mode only when a host averages less than 40% CPU utilization for VMs, because ESX utilization should be kept below 80%. Customers can stay in Insane Mode indefinitely or use it for specific tasks like running batches before returning to 20% utilization. Customers can go from Fast Mode (the default) to Insane Mode with one mouse click from the DVX dashboard.
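The guardrail Datrium describes reduces to a one-line check: the VM load plus the 40% DVX reservation must stay under the 80% ESX ceiling. The thresholds come from the article; the function name and structure are illustrative only:

```python
# Thresholds as described in the article
FAST_MODE_RESERVE = 0.20    # default DVX host-core reservation
INSANE_MODE_RESERVE = 0.40  # doubled reservation in Insane Mode
ESX_CEILING = 0.80          # recommended max total ESX utilization

def can_enable_insane_mode(vm_cpu_utilization: float) -> bool:
    """Insane Mode is advisable only if average VM CPU load plus
    the 40% DVX reservation fits under the 80% ESX ceiling --
    i.e., VM utilization averages below 40%."""
    return vm_cpu_utilization + INSANE_MODE_RESERVE <= ESX_CEILING

print(can_enable_insane_mode(0.35))  # True:  35% + 40% = 75%, under 80%
print(can_enable_insane_mode(0.45))  # False: 45% + 40% = 85%, over 80%
```

This also makes clear why Datrium frames Insane Mode as borrowing idle cycles rather than adding capacity: the headroom has to exist on the host already.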

Datrium marketing VP Craig Nunes said doubling the CPU resources can improve workload performance from 1.5x to 3x “literally instantly.”

He said the feature is an alternative to quality of service on traditional arrays. “A storage array has a set amount of performance in its controller. When you hit the boundaries, it’s upgrade time,” he said. “Quality of service on the array is about how you provision precious resources in a storage array. The approach we take is, service levels are more about giving more resources for performance instead of limiting precious resources.”

Datrium CTO and founder Hugo Patterson said the extra CPU is available because customers rarely go above 50% utilization. “If you’re not doing I/O, then CPUs are available for other things,” he said. “This basically reserves enough so when you need to use it, it’s there.”

Patterson said he expects Insane Mode to be used mainly for test/dev, virtual desktop infrastructure and server consolidation.

Datrium bucks all-flash, hyper-converged

Patterson sees Datrium as superior to two hot storage trends – all-flash arrays and hyper-converged systems.

“Flash belongs on the host, and flash has been on the host for a decade,” Patterson said. “But when it is on the host, it’s just a storage device. It’s an island of flash, not an enterprise class storage system. One SSD is not an enterprise class storage system. So how do you get flash on the host and yet make it an integral part of an enterprise storage system? That’s what the DVX is really all about. We’ve moved a lot of the storage compute – just about all of it – into the host. These days it’s a bigger deal to do that with the storage compute getting bigger and bigger.”

Patterson also argues that a DVX costs less and uses both the server and storage better than a typical hyper-converged system.

“Each server is independent [with DVX] and gives an easier way to manage storage performance,” he said. “Communication is from servers to NetShelf and servers don’t talk among themselves, except in the case of vMotion. We don’t have one host accessing data that’s on another server. When you have a cluster of [hyper-converged] servers, they’re all co-dependent and they can heavily influence the performance of each other. When performance isn’t what you hoped for, it’s hard to do anything about it. With DVX, each server has its own local cache and uses its own CPU.”
