Storage Soup


June 28, 2016  9:52 AM

How to measure flash storage’s true value

Randy Kerns
flash storage

Flash storage, or using the broader term, solid-state storage, suffers from an inadequate measure of value. Flash storage provides a step-function improvement in the ability to store and retrieve information. The value of processing with flash storage compared to access from electro-mechanical devices is not easy to express.

Many in the industry still use a “data at rest” measure, which is the cost of storing data. That fails to represent more valuable characteristics such as access time and longevity. The data at rest measure, given as dollars per GB, can be misleading and does not convey real economic value. If that is the only measure to use for information storage, then you should use magnetic tape for all operations because it is the least expensive media.

Some vendors also use a dollars-per-IOPS measure for all-flash storage systems. This measure does not represent the value of what flash can accomplish because it is an aggregate number: it represents the total number of I/Os a system can do, which could just as well come from thousands of short-stroked disk drives. It does not directly reflect the improvement in response time, which is the most meaningful measure in accelerating applications and getting more work done.
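To make that point concrete, here is a small sketch with made-up numbers (none of them from this post): a farm of short-stroked disk drives can roughly match an all-flash array on aggregate IOPS, and even on dollars per IOPS, while taking about ten times longer to complete each individual I/O.

```python
# A sketch, with hypothetical numbers, of how aggregate $/IOPS hides latency.

def dollars_per_iops(system_cost, total_iops):
    """Aggregate cost per I/O-per-second for a whole system."""
    return system_cost / total_iops

# Hypothetical disk farm: 1,000 short-stroked drives at 200 IOPS apiece.
disk_farm   = {"cost": 500_000, "iops": 1_000 * 200, "latency_ms": 5.0}
# Hypothetical all-flash array with the same aggregate IOPS.
flash_array = {"cost": 400_000, "iops": 200_000,     "latency_ms": 0.5}

for name, s in (("disk farm", disk_farm), ("all-flash", flash_array)):
    print(f"{name}: ${dollars_per_iops(s['cost'], s['iops']):.2f}/IOPS, "
          f"{s['latency_ms']} ms per I/O")

# Similar $/IOPS, but the flash array finishes each I/O about 10x sooner,
# which is the application-acceleration value the aggregate number misses.
```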

So if these measures are inadequate, what is the best way to gauge the value of flash storage? It actually varies depending on the case. Flash can provide key improvements, including consolidation, acceleration, reduction in physical space/power/cooling, longevity, and reduced tuning. Let’s look at these:

  • Consolidation – The greater performance levels of flash storage allow for the deployment of more diverse workloads on a single system. With larger capacity flash storage systems, workloads running on multiple spinning disk systems can be consolidated to a single flash storage system. The value of consolidation includes a reduction of the number of systems to manage and the physical space required.
  • Acceleration – The first deployments of flash systems focused on accelerating applications (mostly databases), and virtual machine or desktop environments. Acceleration enabled more transactions and improvements in the number of VMs and desktops supported. The successes here drove the shift to more widespread use of solid-state storage technology.
  • Physical space – Flash technology increases the capacity per chip, which reduces the physical space required. Flash packaged in solid-state drives has even eclipsed the capacity points of hard disk drives. With flash storage, more information can be contained in a given physical space than was previously possible, and technology gains continue in this area. This is important for most organizations, where information storage represents a large physical presence.
  • Power and cooling – Storage devices using flash technology consume less power and generate less heat (requiring less cooling) than devices with motors and actuators. There is an obvious reduction in cost from this improvement. But this becomes more important when physical plant limitations prevent bringing in more power and cooling to the data center.
  • Longevity – Probably the least understood added value from flash storage is the greater longevity of the flash devices in use and the economic impact that brings. The reliability and wear characteristics are different from those of electro-mechanical devices, and have reached a point where vendors are giving seven- and 10-year guarantees and even some lifetime warranties with ongoing support contracts. This dramatically changes the economics from the standpoint of total cost of ownership over the long lifespan. The key driver is the disaggregation of the storage controller or server from the flash storage enclosures, which allows controllers to be updated independently. This has led to some “evergreen” offerings from vendors, which realize the economic value in this area.
  • Reduction in tuning – One of the most often reported benefits (which can be translated to economic value) from deploying flash storage is the reduction in performance tuning required. There is no longer a need to chase performance problems and move data to balance workloads across actuator arms.

It is clear that a data at rest measure is inadequate. Nevertheless, price is always an issue, and the cost of flash storage continues to decline at a steep rate because of the investment in technology. Data reduction in the form of compression and deduplication is also a given for the most part in flash storage, multiplying the capacity stored per unit by 4:1 or 5:1 in most cases. Continued technology advances will improve costs even more.
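The data reduction arithmetic is simple: a 4:1 or 5:1 reduction ratio divides the raw dollars per GB. The raw price in this quick sketch is an assumption for illustration, not a quoted figure.

```python
# Effective $/GB after data reduction; the raw price is hypothetical.

def effective_cost_per_gb(raw_cost_per_gb, reduction_ratio):
    """Cost per logical GB after compression and deduplication."""
    return raw_cost_per_gb / reduction_ratio

raw_flash = 1.50  # hypothetical raw flash price, $/GB
for ratio in (4.0, 5.0):
    print(f"{ratio:.0f}:1 reduction -> "
          f"${effective_cost_per_gb(raw_flash, ratio):.2f}/effective GB")
```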

The $/GB data at rest measure is difficult for many to stop using, even though it misrepresents true value. People do it because it is easy and because it is a habit after years of measuring value that way. However, it is wrong. There needs to be another relatively simple measure that encompasses all the values noted earlier. It may take a while for that to come about. In the meantime, we will continue to look at economic value, build TCO economic models, and explain real value as part of evaluating solutions for handling information.
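One way to picture the TCO models mentioned above is a back-of-the-envelope calculation that folds several of the value factors (power, space, longevity) into one number. Every figure below is invented for the sketch; a real TCO model would be far more detailed.

```python
# A toy TCO comparison; all inputs are illustrative assumptions.

def simple_tco(purchase, annual_power_cooling, annual_admin, rack_units,
               cost_per_ru_year, lifespan_years):
    """Total cost of ownership over the system's useful life."""
    yearly = annual_power_cooling + annual_admin + rack_units * cost_per_ru_year
    return purchase + yearly * lifespan_years

disk = simple_tco(purchase=300_000, annual_power_cooling=25_000,
                  annual_admin=40_000, rack_units=40,
                  cost_per_ru_year=500, lifespan_years=4)
flash = simple_tco(purchase=400_000, annual_power_cooling=8_000,
                   annual_admin=15_000, rack_units=6,
                   cost_per_ru_year=500, lifespan_years=8)

for name, tco, years in (("disk", disk, 4), ("flash", flash, 8)):
    print(f"{name}: ${tco:,} over {years} years -> ${tco/years:,.0f}/year")

# A higher purchase price can still yield a lower cost per year once
# longevity, power/cooling and reduced tuning are counted.
```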

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm.)

June 27, 2016  10:36 AM

Nasuni customers hit with cloud outage

Sonia Lelii

Cloud provider Nasuni Corp. experienced two cloud disruptions recently that affected about 20 percent of its customers and about 10 percent of workloads, according to company spokesperson Fred Pinkett.

Pinkett, senior director of product marketing and management at Nasuni, said the first cloud outage, a four-hour problem on June 15, was a performance degradation in the company’s API servers. A second disruption, tied to the global file-locking system (GFS), followed on June 16 and lasted about one hour and 45 minutes.

“The GFS uses some servers for the API and does the health check and it caused (a problem with) the GFS,” said Pinkett.

The second disruption was shorter than the first. Amazon Web Services (AWS) happened to be going through a performance degradation while Nasuni was doing a health-check rebuild process.

“Customers had intermittent access during that time to those storage volumes that use global file locking,” Nasuni said in a prepared statement. “No data was lost or corrupted. In addition, Nasuni is taking measures to prevent a similar feature disruption in the future by rolling out cross-regional locking and by helping customers configure their filers so that, in the case of a locking disruption, they will be able to read and write to all data in their local cache.”

Nasuni executives said the company has served billions of global locks, and they believe cloud-based global file locking is the best architecture for locking files across many locations sharing many terabytes of data.

Systems that rely on devices to act as a lock server are extremely difficult to scale globally and are vulnerable to disruption if the device fails, in which case the responsibility for fixing an outage lies with the customer’s IT organization.

“Nasuni, on the other hand, proactively monitors its service and takes full responsibility for fixing issues as they arise,” according to the company statement.

Nasuni uses a hybrid approach to the cloud. Its Nasuni Filers at customer locations hold frequently accessed data in cache while sending older data to the Microsoft Azure and AWS public clouds.

The global file-locking capability maintains data integrity when the controller is used in multiple locations within a company. It prevents users from opening a file that is already open elsewhere and creating multiple, conflicting file versions.
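Nasuni has not published the internals of its locking protocol, so the following is only a minimal sketch of the general idea behind a cloud-hosted global lock service: each office must win the lock from one authority before writing, which is what prevents conflicting versions.

```python
# Toy single-region global lock table; a real service would be replicated
# (Nasuni mentions rolling out cross-regional locking above).

class GlobalLockService:
    def __init__(self):
        self.locks = {}   # path -> site currently holding the lock

    def acquire(self, path, site):
        holder = self.locks.get(path)
        if holder is None or holder == site:
            self.locks[path] = site
            return True
        return False      # someone else has the file open

    def release(self, path, site):
        if self.locks.get(path) == site:
            del self.locks[path]

svc = GlobalLockService()
assert svc.acquire("/projects/plan.docx", "boston")       # Boston edits
assert not svc.acquire("/projects/plan.docx", "london")   # London must wait
svc.release("/projects/plan.docx", "boston")
assert svc.acquire("/projects/plan.docx", "london")       # now London can edit
```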

Nasuni isn’t the first cloud gateway vendor to add file locking, but the capability is considered a key step toward becoming a primary storage system.


June 23, 2016  6:50 AM

Dell extends Nutanix hyper-converged OEM deal

Dave Raffo
Dell, EMC, Hyper-convergence, Nutanix, VMware

LAS VEGAS — Dell answered one question about its post-EMC merger product lineup Wednesday at the Nutanix .NEXT conference.

Alan Atkinson, Dell vice president of storage, appeared on stage alongside Nutanix CEO Dheeraj Pandey to reveal Dell will extend its OEM deal to sell Nutanix hyper-converged systems. That revelation comes with Dell poised to pick up a bunch of new hyper-converged products from EMC and VMware.

“Customers love the XC Series (using Nutanix software),” Atkinson said. “We started with one product and now we have seven products. I’m thrilled to say we’ve reached an agreement to extend our OEM relationship. We’ll keep rolling it forward.”

Atkinson left the stage without saying how long the OEM extension will run, but a Nutanix spokesperson said the extension is for “multiple years.”

Dell and Nutanix originally signed a three-year OEM deal in 2014. Nutanix has since added a similar deal with Lenovo.

The Dell-Nutanix relationship was threatened by Dell’s pending $67 billion acquisition of EMC. Both EMC and its majority-owned VMware have hyper-converged products that compete with Nutanix. EMC sells VxRail and VxRack systems built on VMware’s VSAN and EMC ScaleIO software. Dell already re-brands VSAN Ready Node systems and resells EMC’s VSAN and ScaleIO boxes.

Atkinson said he has been asked about the future of the Dell-Nutanix deal often since Dell disclosed plans to acquire EMC last October.

“That’s a question I get once an hour,” he said at .NEXT.

Even Pandey has been asking. “We’ve been talking to Michael (Dell) on an everyday basis,” he said.

The Dell-EMC deal is expected to close in August.

In its Securities and Exchange Commission filing detailing its plans to become a public company, Nutanix listed the Dell-EMC deal as a risk to its business.

“Dell is not just a competitor but also is an OEM partner of ours and the combined company may be more likely to promote and sell its own solutions over our products or cease selling or promoting our products entirely,” the filing reads. “If EMC and Dell decide to sell their own solutions over our products, that could adversely impact our OEM sales and harm our business, operating results and prospects, and our stock price could decline.”


June 20, 2016  4:26 PM

EMC, Veritas lead in worldwide PBBA revenues

Sonia Lelii
Storage

Unlike overall disk storage, purpose-built backup appliances’ total worldwide factory revenue increased in the first quarter of 2016, according to International Data Corporation’s (IDC) Worldwide Quarterly Tracker.

IDC shows the backup appliance market growing 6.2 percent year-over-year to $762.2 million in the first quarter. That’s in contrast to overall disk storage, which declined seven percent to $8.2 billion in the quarter, according to IDC. External storage, which includes SAN and NAS, declined 3.7 percent to $5.4 billion for the quarter.

EMC led the pack in overall growth for backup appliances. Total worldwide PBBA capacity shipped during this period was 886 PB, an increase of 40% compared to last year.

EMC generated $427 million in revenue and held 56% of the market share for PBBAs, compared to $377 million in revenue and 52.5% market share in the first quarter of 2015. That was a 13.2 percent growth rate.

No. 2 Veritas generated $139 million in revenue and held 18.2% market share in the first quarter of this year, compared to $133 million in revenue and 18.5% market share in the first quarter of 2015, when it was part of Symantec. The company grew 4.4 percent year-over-year in the PBBA market.

Hewlett Packard Enterprise came in third with $35 million in revenue or 4.5 percent market share, compared to $32.5 million in revenue and 4.5 percent market share in the first quarter of 2015. HPE experienced 6.3 percent growth between the first quarter this year and the first quarter in 2015.

IBM backup appliance revenue fell 26% to $30 million, or 3.9 percent market share, compared to $41 million last year. Dell generated $25 million, or 3.2 percent market share, compared to $18.3 million in revenue in the same period last year.

Dell is in the process of completing a $67 billion acquisition of EMC. The combined companies held 59.2% of the backup appliance market.

Total PBBA open systems factory revenue increased 8.3 percent year-over-year in the first quarter, with revenue of $703.2 million. Mainframe revenue declined 14%. The PBBA market experienced a downturn in early 2015 before sales picked up later in the year.

IDC defines a PBBA as a standalone disk-based solution that utilizes software, disk arrays, server engines or nodes as a target for backup data, specifically data coming from a backup application, or that can be tightly integrated with the backup software to catalog, index, schedule and perform data movement.

The PBBA products are deployed in standalone configurations or as gateways.


June 16, 2016  9:35 AM

Cavium pays $1 billion for QLogic, storage

Dave Raffo
Cavium, Qlogic, Storage Networking

After months of shopping itself, storage networking vendor QLogic has a buyer.

Networking semiconductor vendor Cavium said Wednesday it will pay $1 billion for the Fibre Channel (FC) and Ethernet adapter company in a deal expected to close in the third quarter of 2016. The deal adds to the evidence that storage networking fits best as part of a larger Ethernet-dominated networking company.

Cisco kicked off the trend of combining networking and storage when the Ethernet giant started selling Fibre Channel SAN switches in 2003. The FC connectivity market has seen a great deal of consolidation since then, with FC vendors either going away or joining forces with Ethernet companies.

Avago Technologies acquired QLogic rival Emulex for $606 million in 2015. That deal was sandwiched between Avago’s purchases of storage and networking chip firm LSI and networking connectivity vendor Broadcom.

Brocade, which began as a Fibre Channel switch company, moved into Ethernet network switching with a $2.6 billion acquisition of Foundry Networks in 2008. FC remains the bulk of Brocade’s business, but the vendor also added wireless networking vendor Ruckus for $1.2 billion in April to become a broader enterprise play.

Now we have the two FC switch vendors, Brocade and Cisco, and the two main FC adapter providers, QLogic and Emulex, all selling storage as part of larger networking companies.

Cavium is a player in the networking, communications and cloud markets but will use QLogic to fill one gap in its product line.

“Storage has been one of the areas that has been an aspirational market for us,” Cavium CEO Syed Ali said Wednesday evening on a webcast to discuss the QLogic deal. “We never had penetration into the mainstream storage market.”

Ali said owning QLogic can help Cavium push its own products, such as its ThunderX data center and cloud processor, into deals with storage vendors such as EMC and NetApp. It also gives Cavium an instant storage software stack.

“QLogic has an extensive software stack that we don’t have for the mainstream market,” he said. “That software stack takes years to build out.”

Ali said Cavium will kill legacy QLogic products such as its FC switches because “it’s not a great idea for a silicon company to also be a switch company,” but he said he is bullish on QLogic’s FC adapters. He pointed to the ongoing transition from 8 Gbps FC to 16 Gbps, the move to 32 Gbps expected in 2017-18, and the rise of all-flash arrays as drivers of the FC business. He said all-flash arrays have a 70% to 80% FC attach rate.


June 10, 2016  5:34 PM

IDC: NetApp jumps to No. 2 in Q1 all-flash

Carol Sliwa
Storage

EMC remained No. 1 and NetApp jumped to No. 2 in all-flash array (AFA) revenue for the first quarter of 2016, according to IDC’s new market share statistics.

EMC had $245.6 million in revenue and 30.9% AFA market share, while NetApp took in $181.1 million and accounted for 22.8%. Rounding out the top five were Pure Storage at 15.0%, Hewlett Packard Enterprise (HPE) at 12.4% and IBM at 8.5%.

NetApp benefited from a change to IDC’s AFA taxonomy with the release of the June 2016 Tracker. Previously, IDC only recognized NetApp’s EF-Series as an AFA. Under the new taxonomy, NetApp’s All Flash FAS also qualifies, under a new “Type 3” category.

IDC defines an AFA as “any network-based storage array that supports only all-flash media as persistent storage and is available as a unique SKU.” With the new IDC AFA taxonomy, IDC for the first time identified three types of AFAs. Type 1 and Type 2 would have fit the old taxonomy, but Type 3 is new, according to Eric Burgener, an IDC storage research director.

Type 1: Arrays that were originally “born” as AFAs. Examples include EMC XtremIO, IBM FlashSystem, Kaminario K2, Pure Storage FlashArray//m, NetApp SolidFire FS Series, Tegile IntelliFlash HD and Violin Memory Flash Storage Platform. Although NetApp acquired SolidFire in December, IDC won’t begin crediting NetApp with SolidFire revenue until the second quarter, according to Burgener.

Type 2: Arrays that originated as hybrid designs but have undergone significant flash optimization, do not support HDDs, and include some high-performance hardware unique to the all-flash configuration, such as controllers that are faster than those included in the vendor’s hybrid flash arrays (HFAs). Examples include: Hitachi Data Systems (HDS) VSP F Series, Hewlett Packard Enterprise (HPE) 3PAR StoreServ 8450, and Tegile IntelliFlash T3800.

Type 3: Arrays that originated as hybrid designs but have undergone significant flash optimization. Type 3 arrays do not support HDDs and do not include hardware, other than flash media, that differs from the hardware the vendor ships in its HFAs. Examples include: EMC VMAX All Flash, Fujitsu DX200F, NEC M710F and NetApp All Flash FAS.

Burgener said IDC changed its AFA taxonomy for several reasons.

“No. 1 is the level of flash optimization that we were starting to see from the systems that began life years ago as hybrids were producing performance that was not really very distinguishable from purpose-built AFAs,” he said. “Two years ago, there was a big difference in terms of the latencies and the ability to sustain consistent performance. But now, there’s not that big a difference.”

Burgener said storage vendors were selling their all-flash configurations, whether hybrid or not, in a “directly competitive manner to purpose-built AFAs.” Because the products target the same customers in essentially the same market, it made sense to include them, he said.

But, if a storage array ships with only flash drives and can support HDDs, IDC considers it to be a hybrid flash array.

“That gets rid of this whole question: ‘Well, what if I put a disk in later?’ ” Burgener said. “We’ve clarified that now. We just said, ‘Look, let’s simplify it.’ ”
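The taxonomy rules Burgener describes can be restated compactly. This sketch paraphrases the article’s rules; the function and flag names are ours, not IDC’s.

```python
# IDC AFA taxonomy, as described in this post, expressed as a classifier.

def classify_array(supports_hdd, born_all_flash, unique_af_hardware):
    """Return IDC's category for a network-based storage array."""
    if supports_hdd:
        # Ships all-flash but *can* take disk -> hybrid flash array (HFA).
        return "HFA"
    if born_all_flash:
        return "AFA Type 1"   # e.g. XtremIO, Pure FlashArray//m
    if unique_af_hardware:
        return "AFA Type 2"   # hybrid-derived, with faster AF-only hardware
    return "AFA Type 3"       # hybrid-derived, same hardware as the HFA line

assert classify_array(False, True, False) == "AFA Type 1"
assert classify_array(False, False, True) == "AFA Type 2"
assert classify_array(False, False, False) == "AFA Type 3"   # e.g. AFF
assert classify_array(True, False, False) == "HFA"
```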

Most of NetApp’s new AFA revenue was due to its All Flash FAS (AFF), according to Burgener.

Burgener said there might have been pent-up demand for a more flash-optimized All Flash FAS with NetApp’s ONTAP 8.2 release. But he expects that pent-up demand to wane and NetApp’s growth rate to drop by the fourth quarter.

IDC went back and adjusted its historical numbers and forecasts to reflect the new AFA taxonomy. Under the old taxonomy, 2015 revenue for the total AFA market was $2.53 billion. With the new AFA taxonomy, the 2015 revenue figure for the entire market is $2.73 billion, according to Burgener.

NetApp placed fifth in 2015 AFA revenue under the new IDC taxonomy, but moved ahead of HPE into fourth place in the fourth quarter of 2015, Burgener said.

[Table: Q1 2016 IDC AFA market share]

[Table: 2015 IDC AFA market share, updated]


June 10, 2016  4:15 PM

HDS storage ‘freezing’ really a focus on future

Rodney Brown
Hitachi Data Systems

There has been some noise in the storage space lately, based on the English translation of a Japanese IT website report that said Hitachi Data Systems was “freezing” investment in its high-end storage products. Lost in this declaration is that HDS understands arrays alone aren’t the future of enterprise storage.

On June 1, the site IT Pro Nikkei published a report based on an HDS briefing that said the vendor would be “freezing the investment in the high-end model of the storage business.” That led media sites and competitors to speculate that perhaps HDS was going to let its high-end storage business languish and die.

HDS has been quick to deny it will exit the high-end storage market. But the strategy outlined in the IT Pro Nikkei story is not new or surprising. High-end enterprise disk arrays are far from a growth market in this age of flash, cloud, hyper-convergence and software-defined storage.

HDS CTO Hu Yoshida revealed the vendor’s new strategy in a January interview on our SearchStorage site with senior writer Carol Sliwa.

In that interview, Yoshida said, right out of the gate: “The infrastructure market is not going to be a growth market. Instead of trying to compete on infrastructure, we’re going to have to compete on application enablement.” Translation: HDS’ new Lumada line for handling data from IoT sources, based on the Pentaho IoT data analytics technology HDS acquired in 2015, will play a major role in the company’s future.

Yoshida specifically called out IoT as a vital component of HDS’ future. “We have an overall corporate strategy with Hitachi, called Social Innovation, where we are moving toward the Internet of Things (IoT), trying to build smart cities and provide more insights into data centers, telco [and] automotive,” he said.

IoT was also a big topic at the HDS Connect partner conference a year ago.

On his own blog, Hu’s Place, Yoshida this week clarified what “freezing” investment in high-end storage means for HDS. He wrote that hardware investments in the flagship high-end HDS hard disk drive arrays are no longer necessary, thanks to the ability to use standard Intel processors and flash for performance, with storage features running in software. HDS will shift research and development from its Virtual Storage Platform (VSP) hardware to its Storage Virtualization Operating System (SVOS), flash and storage automation.

“There is no need to build separate hardware for midrange and enterprise customers,” Yoshida wrote. “They all have access to enterprise functionality and services like virtualization of external storage systems for consolidation and migration, high availability with Global-Active Device, and geo-replication with Universal Replicator.”

In April, HDS added to SVOS what it apparently considers the last hardware puzzle piece, building NAS functionality (or, as Yoshida described it, “embedded file support in our block storage”) into its G series of hybrid flash arrays.

So the upshot is that HDS will invest in making SVOS and VSP perform better for specific applications, like IoT data storage and analytics. That sounds much less like the sky falling on HDS and a lot more like a strategic investment in a future where application integration, rather than storage features, becomes the major differentiator among storage vendors.


June 9, 2016  9:36 AM

Iguaz.io promises AWS-like storage in the data center

Dave Raffo
Storage

Newcomer iguaz.io is the latest software startup trying to deliver the Holy Grail of storage: the ability to provision and manage on-premises capacity the same way as in Amazon Web Services (AWS).

Iguaz.io calls its software a virtualized data services architecture, language similar to what copy data management vendor Actifio used when it launched in 2010 and others have adopted since. There does seem to be some copy data management in iguaz.io’s software, along with features that help developers and application owners provision and manage storage. Iguaz.io gave a peek under the hood this week but is still months away from a shipping product.

“When people go to Amazon, they don’t know anything about the infrastructure,” iguaz.io founder and CTO Yaron Haviv said. “You go through APIs and define policies. Enterprise storage today is legacy storage – you go to IT, say ‘Provision this stuff for me, this is the performance I need, go run backups against my data’ and so on. We said, let’s take Amazon features and extend it to enterprise storage. It’s all self-service. Most of the work is for the application users and developers. They create policies and provisions, just like they’re using Amazon.”

Haviv said iguaz.io software will be sold either as software-only or on an appliance, and he expects cloud providers to be target customers, as well as enterprises looking to build private clouds. He gave no target ship date but said iguaz.io plans to launch by the end of 2016.

Here is what iguaz.io promises its software will do (a hypothetical sketch of the unified-access idea follows the list):

  • consolidate data into a high-volume, high-velocity, real-time data repository that virtualizes and transforms data on the fly, exposes it as streams, messages, files, objects or data records consistently, and stores it on different memory or storage tiers;
  • seamlessly accelerate popular application frameworks including Spark, Hadoop, ELK, or Docker containers;
  • offer enterprises a 10x-to-100x improvement in time-to-insights at lower costs;
  • leverage deep data insights to provide best-in-class data security, a critical need for data sharing among users and business units.
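Since iguaz.io is still months from shipping, any interface here is guesswork. The sketch below only illustrates the “one repository, many views” idea from the first bullet; every class and method name is invented for illustration.

```python
# A purely hypothetical take on a unified repository: one stored record
# exposed as an object, a file-like body, or a stream of events.

import io
import json

class UnifiedRepository:
    def __init__(self):
        self.records = {}                 # key -> dict payload

    def put(self, key, record):
        self.records[key] = record

    def as_object(self, key):             # object/KV view
        return self.records[key]

    def as_file(self, key):               # file view: readable byte stream
        return io.BytesIO(json.dumps(self.records[key]).encode())

    def as_stream(self, prefix):          # stream view: ordered events
        for key in sorted(self.records):
            if key.startswith(prefix):
                yield {"key": key, "data": self.records[key]}

repo = UnifiedRepository()
repo.put("sensors/1", {"temp_c": 21.4})
print(repo.as_object("sensors/1"))        # record view
print(repo.as_file("sensors/1").read())   # file view
print(list(repo.as_stream("sensors/")))   # stream view
```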

Its goals include the ability to enable stateless application containers in a cloud-type approach, provide access to data from multiple applications and users, and simplify deployment and management. Haviv said it will run on flash, NVM, in the cloud, and on block and file storage.

If you’re wondering, the vendor’s name comes from the cascading Iguazu Falls on the border of Argentina and Brazil, signifying data cascading into a single stream. The Israel-based startup was founded in 2014 and has $15 million in funding. Its other founders include CEO Asaf Somekh, formerly of Mellanox and Voltaire, and COO Yaron Segev, who founded all-flash array pioneer XtremIO and sold it to EMC.


June 3, 2016  12:40 PM

IDC: Q1 HPE storage sales soared, rivals tanked

Dave Raffo
HPE

Hewlett Packard Enterprise (HPE) bucked the trend of storage revenue declines in the first quarter of 2016, according to IDC’s quarterly enterprise storage systems tracker. HPE was the only member of the top six vendors to increase its storage revenue over the first quarter of the previous year.

Industry-wide total external storage (SAN and NAS) declined 3.7 percent to $5.4 billion for the quarter. Overall storage, including servers and server-based storage, declined seven percent to $8.2 billion.

HPE’s external storage revenue increased 4.6 percent to $535.7 million, ranking third behind EMC and NetApp. HPE increased market share from 9.1 percent in the first quarter of 2015 to 9.9 percent in the first quarter of 2016.

EMC revenue declined 11.8% to $1.349 billion as it awaits its $67 billion acquisition by Dell. EMC’s market share fell from 27.2% to 24.9% over the year.

NetApp had a bigger fall, dropping 15.6% to $645.5 million and a share decline from 13.6% to 11.9%.

HPE was followed by Hitachi in fourth, IBM in fifth and Dell in sixth. All three declined in revenue year-over-year, although Hitachi gained share from nine percent to 9.2 percent. All other vendors combined for $1.59 billion, a 7.8% increase from the previous year. “Others” market share grew from 26.2% to 29.3%.

In the total storage market, HPE grew 11% and jumped ahead of EMC into first with $1.42 billion. EMC, which generates all its revenue from external storage, fell 11.8% in the overall market to a 16.4% market share compared to HPE’s 17.3% share. No. 3 Dell, No. 4 NetApp, No. 5 Hitachi and No. 6 IBM all declined in overall storage revenue. Other vendors increased 5.3% but storage systems sales by original design manufacturers (ODMs) selling directly to hyperscale data center customers slipped 39.9%.

IDC put all-flash arrays at $794.8 million in the quarter, up 87.4% for the year. Hybrid flash arrays accounted for $2.2 billion and 26.5% of the overall storage market share.

HPE continued its momentum into the second quarter, according to recent earnings reports from the major vendors. HPE reported two percent year-over-year growth while EMC, NetApp and IBM all said their storage revenue declined. HPE CEO Meg Whitman said 3PAR all-flash revenue nearly doubled from last year.

“We estimate that we gained market share in the external disk for the tenth consecutive quarter and expect storage to gain shares throughout the remainder of the year on the strength of the 3PAR portfolio and new logo wins as we take advantage of the uncertainties surrounding the Dell-EMC merger,” Whitman said on HPE’s May 24 earnings call.

Top 5 Vendors, Worldwide External Enterprise Storage Systems Market, First Quarter of 2016 (Revenues are in Millions)
Vendor | 1Q16 Revenue | 1Q16 Market Share | 1Q15 Revenue | 1Q15 Market Share | 1Q16/1Q15 Revenue Growth
1. EMC $1,349.4 24.9% $1,530.4 27.2% -11.8%
2. NetApp $645.5 11.9% $764.9 13.6% -15.6%
T3. HPE* $535.7 9.9% $512.1 9.1% 4.6%
T3. Hitachi* $497.1 9.2% $506.9 9.0% -2.0%
T5. IBM* $429.0 7.9% $446.2 7.9% -3.8%
T5. Dell* $376.2 6.9% $395.1 7.0% -4.8%
Others $1,590.6 29.3% $1,475.8 26.2% 7.8%
All Vendors $5,423.6 100.0% $5,631.4 100.0% -3.7%
Source: IDC Worldwide Quarterly Enterprise Storage Systems Tracker, June 3, 2016



June 3, 2016  7:34 AM

Qumulo pockets $32.5M to fund data-aware storage

Dave Raffo
Qumulo

Qumulo, the scale-out data-aware NAS startup founded by Isilon veterans, today added $32.5 million in funding to expand its sales operation. The Series C round brings Qumulo’s total funding to $100 million.

The vendor launched its Qumulo Core storage platform in March, 2015. It added a major upgrade last April with support for 10 TB drives, erasure coding and advanced performance analytics.

“We’ve had a great year,” Qumulo CEO Peter Godman said. “We’ve been launched for about a year and we have more than 60 customers who continue to deploy bigger and bigger systems.”

Godman said Qumulo’s goal is to generate three times as much revenue over the next year, so a significant amount of the funding will go toward sales and marketing. He said field sales operations will nearly double over the next year. The startup has around 135 employees.

Godman said about half of Qumulo’s sales come from customers adding systems to their original purchases. A lot of repeat buys come from media and entertainment customers, whose capacity needs expand rapidly due to new higher-definition video formats.

On the product development front, he said Qumulo will continue to shoot out upgrades to its Core software every two weeks.

The daunting part for Qumulo is that its competition comes from two of the largest storage vendors, EMC and NetApp. In EMC’s case, Qumulo usually goes head-to-head with the Isilon platform that Qumulo’s founders helped develop. Godman said Qumulo competes frequently with Isilon in use cases such as animated movies, where high performance is required. He said Quantum StorNext is another competitor in media, but “90-plus percent of the time, we compete with NetApp and EMC.”

Allen & Company, Top Tier Capital Partners and Tyche Partners invested in the C round, joining previous investors Kleiner Perkins Caufield & Byers (KPCB), Madrona Venture Group, Highland Capital Partners and Valhalla Partners.

