Storage Soup

March 15, 2016  12:46 PM

Hybrid cloud still an unfulfilled goal for most

Dave Raffo

Storage vendors often talk about this being a transformation period in storage. EMC, whose executives use that term as much as anybody, conducted an analysis of its customers to see just where they stand in the transformation.

EMC and its VMware subsidiary conducted IT transformation workshops across 18 industries to gauge their customers’ progress. The workshop focused on helping organizations identify gaps in their IT transformation, determine goals and prioritize their next steps.

While organizations generally were far along in virtualization, they have a long way to go in streamlining infrastructure and moving to hybrid cloud architectures:

  • More than 90% are in the evaluation or proof of concept stage for hybrid clouds, and 91% have no organized, consistent way of evaluating workloads for the hybrid cloud.
  • 95% say it is critical to have a services-oriented IT organization without silos, but less than 4% operate that way.
  • 76% of organizations have no developed self-service portals or service catalog – crucial pieces to building a private cloud.
  • 77% want to provision infrastructure resources in less than a day, but most say it takes between a week and a month to do so.
  • Only the top 20th percentile can do showback and chargeback to bill the business for services consumed, and 70% say they don’t know what resources each business unit is consuming.

So if so many organizations want to streamline their IT and build hybrid clouds, why have so few done so? If you guessed cost, you’re probably on the right track.

“There are usually two limiting factors,” said Barbara Robidoux, VP of marketing for EMC global services. “They’re all being told they have to hold costs down, especially on the legacy side. If they’re going to go forward and modernize any aspect, that costs money, yet they’re being told to hold costs down. So to some degree, you’re stuck. We’re hearing ‘We need help with ROI analysis to see how we can save money on infrastructure.’ The other thing is a lack of skills and know-how. That’s pretty disruptive.”

March 15, 2016  9:17 AM

HPE turns 3PAR array from all-flash to hybrid

Sonia Lelii
Flash Array, HPE

For years we’ve seen storage vendors take systems designed for hard disk drives (HDDs) and add solid-state drives (SSDs) to them. Now Hewlett Packard Enterprise is taking hardware that ran all SSDs and allowing customers to use HDDs inside of it.

HPE’s new 3PAR StoreServ 20840 array uses the same hardware as the all-flash StoreServ 20850 launched in 2015. But while the 20850 only supports flash, the 20840 holds up to 1,920 HDDs. Both systems can use from six to 1,024 SSDs.

The 3PAR StoreServ 20840 scales up to eight controller nodes and includes 52 TB of cache, 48 TB of optional flash cache and 6,000 TB of raw capacity. The system supports any combination of SAS and nearline SAS disk drives and SSDs.

“The system is the exact same hardware as the all-flash 20850 and we will keep that system around for customers who want to go entirely with flash. This 20840 is for our customers that happen to support spinning disk on the back end,” said Brad Parks, director of strategy for HPE storage. “The 20840 is flash optimized but not all-flash limited.”

The new 3PAR storage model is part of the HPE 3PAR StoreServ 20000 enterprise SAN family that the company launched in June 2015.  In August 2015, HPE announced the 20450 all-flash system and the 20800 all-flash starter kit.

HPE last week also launched the enterprise-level StoreOnce 6600 and midrange StoreOnce 5500 data deduplication disk backup appliances. Both are based on HPE’s latest ProLiant servers. The 6600 scales from 72 TB to 1,728 TB of usable capacity, while the midrange 5500 scales from 36 TB to 864 TB of usable capacity in a dense footprint designed for data deduplication in large and midrange data centers and regional offices.

HPE StoreOnce supports HPE Data Protector, Veritas NetBackup, Backup Exec via OST, Veeam and BridgeHead software. The systems also work with StoreOnce Recovery Manager Central, which takes application-consistent snapshots on the HPE 3PAR StoreServ array and copies the changed blocks directly to any HPE StoreOnce appliance. The process is known as flat backup.

“Snapshot data moves directly over the network to StoreOnce without engaging the third-party software,” Parks said.
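The changed-block copy Parks describes can be sketched in a few lines of Python. This is a conceptual illustration only: the function names and the block-map representation are assumptions for the sketch, not part of any HPE or Recovery Manager Central API.

```python
# Conceptual sketch of a "flat backup": only blocks that changed since the
# last snapshot are copied to the backup target, so no backup server has to
# read and rewrite the full volume. Snapshots are modeled as {block_id: data}
# dicts; a real array tracks changed blocks in metadata instead of diffing.

def changed_blocks(prev_snapshot, curr_snapshot):
    """Return {block_id: data} for blocks that differ between snapshots."""
    return {
        block_id: data
        for block_id, data in curr_snapshot.items()
        if prev_snapshot.get(block_id) != data
    }

def flat_backup(prev_snapshot, curr_snapshot, backup_target):
    """Copy only the changed blocks onto the backup target's block map."""
    delta = changed_blocks(prev_snapshot, curr_snapshot)
    backup_target.update(delta)   # the appliance stores (and dedupes) the delta
    return len(delta)             # number of blocks actually transferred

# Example: one block changed and one was added since the last snapshot.
prev = {0: b"aaaa", 1: b"bbbb", 2: b"cccc"}
curr = {0: b"aaaa", 1: b"BBBB", 2: b"cccc", 3: b"dddd"}
target = dict(prev)               # target already holds the previous backup
sent = flat_backup(prev, curr, target)
print(sent)                       # 2 blocks copied instead of 4
```

The point of the technique is the last line: the transfer cost scales with the change rate, not the volume size.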

Jason Buffington, senior analyst at Enterprise Strategy Group, said these latest models demonstrate that HPE is offering a consolidated approach for data protection in both data centers and remote sites.

“What we are seeing is organizations have multiple workloads with specific data protection solutions,” he said. “That is the macro trend that HPE is trying to address. StoreOnce is trying to address a way to centralize storage data protection.”

March 14, 2016  7:35 AM

IBM ships new Storwize V5000, rolls out software package for Apache Spark

Carol Sliwa

IBM recently rolled out a pair of new storage offerings: the “Gen 2” Storwize V5000 hybrid system, with upgraded Spectrum Virtualize software, and a “Platform Conductor” for Apache Spark data analytics.

The Gen 2 edition of IBM’s Storwize V5000 for the first time gives customers a choice of three models: the V5010, V5020 and V5030. The company sold only one V5000 model in the past, although the Storwize product line also includes a higher-end V7000 and lower-end V3700.

Eric Herzog, vice president of product marketing and management for IBM’s storage and software-defined infrastructure, said the V5000 targets the “lower end of the mid-tier market.” That includes small and medium businesses and remote offices and departments of large enterprises. Typical workloads include databases and virtual server workloads.

The Storwize V5010 and V5020 models each hold up to 264 drives, and the larger V5030 has a 504-drive limit, or 1,008 drives per clustered system. Herzog said customers can technically use all hard disk drives (HDDs) or all solid-state drives (SSDs), but most use the V5000 in hybrid mode with a combination of the two.

The flash percentage is typically 5% to 10% in a Storwize V5000 hybrid array, and the bulk of the HDDs are 7,200 RPM, according to Herzog. The new V5000 offers per-system cache options of 16 GB, 32 GB and 64 GB.

The Storwize V5000 bundles the latest version of IBM’s Spectrum Virtualize software, which was formerly known as SAN Volume Controller (SVC). The main new feature in last fall’s Spectrum Virtualize 7.6 release was data-at-rest encryption. The Spectrum Virtualize software in the V5000 can manage up to 32 PB, according to an IBM spokesperson.

IBM’s Spectrum Virtualize software supports more than 300 arrays from a wide range of vendors, according to Herzog. Customers with maintenance contracts can update to the new Spectrum Virtualize at no charge.

All models of the new Storwize V5000 product line include features such as internal virtualization, thin provisioning, data migration, multi-host support, snapshots, automated tiering and remote mirroring. The V5020 adds encryption and boosts performance, and the V5030 also tacks on clustering, compression, external virtualization and HyperSwap high-availability technology.

The Gen 2 Storwize V5000 supports 16 Gbps Fibre Channel (FC) and 12 Gbps SAS connectivity, unlike the prior version’s 8 Gbps FC and 6 Gbps SAS. The Gen 1 and Gen 2 products both support 10 Gbps iSCSI/Fibre Channel over Ethernet (FCoE).

List price for the V5010 is $9,250 including hardware, software and a one-year warranty, according to IBM.

IBM Platform Conductor for Spark
IBM also recently announced its Platform Conductor to help users deploy the open source Apache Spark engine for large-scale data processing. Apache Spark can run programs up to 100 times faster than Hadoop MapReduce in memory, or 10 times faster on disk, according to the open source project’s website.
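As a rough illustration of the transformation-chaining style Spark parallelizes across a cluster, here is a word count written in plain Python so it runs without a Spark installation; the PySpark pipeline in the comment uses real Spark API names but is only a sketch, not code from IBM's bundle.

```python
# Spark expresses analytics as chained transformations over a distributed
# dataset. With PySpark, a word count would look roughly like:
#   sc.parallelize(lines).flatMap(str.split) \
#     .map(lambda w: (w, 1)).reduceByKey(operator.add).collect()
# Below, the same flatMap/reduceByKey shape with only the standard library.
from collections import Counter
from itertools import chain

lines = [
    "spark runs in memory",
    "hadoop spills to disk",
    "spark caches data",
]

# flatMap: split every line into a flat stream of words
words = chain.from_iterable(line.split() for line in lines)

# map + reduceByKey: count occurrences of each word
counts = Counter(words)

print(counts["spark"])  # 2
```

The speed claims in the text come from Spark keeping these intermediate datasets in memory between stages, where MapReduce writes them to disk.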

Spark is a real-time analytics engine, and such engines require “parallel processing on the storage side” for performance, said Herzog. “Spectrum Scale provides the parallel file system and scales up to exabytes of capacity. We don’t care what storage hardware they use.”

The IBM Platform Conductor for Spark includes the Apache Spark distribution, workload/resource management software and IBM’s Spectrum Scale File Placement Optimizer.

“People were getting Spark, the platform computing and Spectrum Scale. They were putting it all together, or we were putting it together, or our business partners were putting it together,” said Herzog. “We saw this trend happening already, so it made sense just to bundle them up and make it more convenient.”

Herzog said Apache Spark originally gained a foothold in academic communities, but Spark usage has been spreading to more traditional enterprise customers. He said IBM is considering more pre-packaged bundles similar to the Spark offering.

The list price for IBM Platform Conductor for Spark is $6,250 per managed server, whether physical or virtual, inclusive of licensing and one year of support.

March 11, 2016  7:51 AM

Violin Memory receives no offers, makes few sales in Q4

Dave Raffo
flash storage, Violin Memory

Violin Memory finished its hunt for a buyer without a deal. It did land a few partnerships, while shedding a quarter of its workforce and closing few sales of its all-flash storage arrays.

Violin Thursday night reported revenue of $10.9 million, down from $12.5 million the previous quarter and $20.5 million a year ago. Only $4.3 million came from product sales, despite playing in a rapidly emerging flash storage array market. Violin lost $25.6 million for the quarter compared to a loss of $46.8 million the previous year. For the full year of 2015, Violin’s revenue of $50.9 million declined from $79 million in 2014. Its 2015 loss of $99 million was actually less than 2014 when it lost $108.9 million.

Violin CEO Kevin DeNuccio blamed the poor quarterly results partially on the company’s exploration of a sale. The plan now is to cut staff – a process already underway – and push sales of its Flash Storage Platform (FSP) 7000 arrays through new partnerships, aiming for profitability in 18 to 24 months.

“The strategic evaluation process and related media coverage impacted sales as it created a wait-and-see-what-happens mindset with some customers,” DeNuccio said on the Violin Memory earnings call.

DeNuccio said Violin explored strategic relationships with at least 15 companies over the last four or five months but received no formal acquisition offers. He said Violin did sign a formal partnership agreement “with one of the largest technology bellwethers” that could lead to an OEM relationship. DeNuccio said he expects a formal announcement over the next few months. He added that there are three more “relationships in various stages of development” that might lead to reseller or OEM deals.

“We are concluding the formal review of our strategic alternative process and we will focus on these new relationships,” he said.

Violin has reduced headcount by 25% since Oct. 31, going from 349 employees to 263, with most of the reduction hitting the sales team.

DeNuccio said there is still time for Violin to turn things around because the flash storage market is still in its early days, but admitted 2015 was a rough year.

“In the technology startup world, this would have been a rocket ship takeoff,” he said. “However, for Violin, anyway you look at it, we have just completed a very challenging year. It has been a year of navigating through a completely overhauled product line transition coupled with the launch of high value producing software in an industry-leading management suite. Despite the challenges of the quarter and fiscal year, we have put in place a new base of technology and customers from which to build the Violin business.”

March 9, 2016  7:15 PM

Seagate demos fast, dense PCIe SSDs that support OCP specs, NVMe

Carol Sliwa
OCP, Seagate

Seagate is demonstrating a new PCI Express (PCIe) flash drive this week at the Open Compute Project (OCP) Summit that it claims meets OCP specifications and delivers throughput of 10 gigabytes per second.

The new full-height, half-length PCIe add-in card – which Seagate expects to ship this summer – bundles multiple gum-stick-sized, energy-efficient, consumer-grade M.2 solid-state drives (SSDs). The Seagate PCIe card accommodates 16-lane PCIe slots and supports non-volatile memory express (NVMe).

To comply with OCP specifications, the Seagate SSD had to enable capabilities such as the bifurcation of PCIe lanes at boot-up, out-of-band temperature and performance measurements, and the management of airflow and fan control in out-of-band fashion, according to Tony Afshary, director of marketing for flash products at Seagate.

The OCP is a collaborative community that focuses on redesigning hardware to more efficiently support the increasing demands on IT infrastructure, especially in large data centers with thousands of servers. Facebook has been the primary driver behind the OCP initiative. Other members include Microsoft and Google, which disclosed today that it’s joining the OCP.

“There is an enormous amount of data that is being generated specifically in the public cloud space, but even within enterprises, that requires you to be innovative and build data centers differently. That’s what OCP is all about,” Afshary said.

He said Facebook was influential in the design of Seagate’s new PCIe add-in card, both with NVMe and with the aggregation of M.2 flash cards inside an OCP server. Small-form-factor M.2 SSDs were originally designed for power-constrained devices, such as laptops and tablets, and later saw use with SATA interfaces to speed the boot process in servers.

Afshary predicted that NVMe-based M.2 SSDs will move into different tiers of storage and compute. Enterprises often use flash drives with databases for primary storage, but Afshary said innovative cloud companies are considering flash even for cold storage.

In an August 2013 presentation at the Flash Memory Summit, Jason Taylor, now the OCP’s president and chairman and Facebook’s VP of infrastructure, raised the prospect of using SSDs for cold storage. Taylor suggested that solid-state technology could provide high-density storage and longer hardware lifespan at a reasonable cost. He challenged the industry to “make the worst flash possible – just make it dense and cheap; long writes, low endurance and lower IOPS/TB are all OK.”

Greg Wong, founder and principal analyst at Forward Insights, said that, with Seagate’s new PCIe add-in cards, the M.2 SSDs enable easy upgrade for capacity purposes.

In addition to the full-height, half-length PCIe add-in card, Seagate is also finalizing a smaller half-length, half-height card that has eight-lane PCIe slots and does not use M.2 SSDs, according to Afshary. Seagate claimed that model can deliver 6.7 GB per second on reads.

Seagate’s new PCIe cards currently support two-dimensional triple-level cell (TLC) and consumer-grade multi-level cell (cMLC) NAND flash. Seagate expects to support 3D TLC and cMLC within a few months, according to Afshary. He declined to disclose pricing but said the cards will be competitive in price with any NVMe-based SSD, regardless of form factor.

The new PCIe cards from Seagate will work in OCP-compliant servers as well as standard servers, all-flash arrays and hybrid storage arrays that also have hard disk drives (HDDs), according to Afshary. Seagate has yet to announce the name or capacity options for its new PCIe add-in cards. Details will be available at the time of the official product launch.

The eight- and 16-lane PCIe add-in cards are currently in testing with multiple large customers. Seagate’s own test system included a Quanta Leopard server with two Intel Broadwell processors and 32 GB of memory, running the CentOS 7 Linux distribution. The bandwidth tests used 256K sequential reads.

Afshary said the new PCIe cards primarily target large-scale cloud providers, but he expects they will also see use with “anyone who wants to have high-density, high-performance, competitively priced SSDs.” Potential use cases include Web applications, weather modeling and statistical trend analysis among enterprises processing data for object storage or in real time, where speed matters, according to Seagate.

March 9, 2016  4:41 PM

Riverbed adds AWS and Azure support to its SteelFusion

Sonia Lelii

Riverbed Technology this week announced its hyper-converged SteelFusion solution now supports both Amazon Web Services and Microsoft Azure so customers can leverage the cloud as a secondary storage tier.

SteelFusion, a hyper-converged offering for branch and remote offices, supports Azure via Microsoft StorSimple and AWS through AWS Storage Gateway. SteelFusion is a two-part system, with virtual machines in both the data center and the branch that mirror data from the edge back to the data center. Applications run on the edge device for better local performance.

“What we are doing is adding a new leg to our solution,” said Saveen Pakala, Riverbed’s senior director of product management. “We are adding a storage tier. Customers now have the flexibility of using traditional data center storage, but they also have the option to go to the cloud.”

In April 2015, Riverbed expanded the capabilities of its SteelFusion branch office appliance, upgrading its Core and Edge models while adding FusionSync software for business continuity. SteelFusion is designed to consolidate storage, servers and backups at remote sites, removing the need for IT at those offices.

The upgrade came a year after Riverbed renamed its Granite product as SteelFusion. SteelFusion combines storage, WAN optimization and virtual machine management in a single appliance. Riverbed now labels SteelFusion “hyper-converged infrastructure for the branch office.”

The addition of FusionSync keeps multiple data centers in sync with branch office data. If there is a data center failure, the branch office can continue to function.  In November, Riverbed announced SteelFusion support for customers using VMware vSphere 6.

The upgraded SteelFusion Core and Edge models support larger branch offices and regional hubs than previous appliances. The SteelFusion Core is a single appliance that supports between 100 TB and 150 TB in branch locations. The new SteelFusion Edge system supports 256 GB of memory to handle larger workloads. Three of the Edge models support advanced tiering cache.

March 4, 2016  12:42 PM

Nimble’s all-flash investment causes pain before gain

Dave Raffo
Nimble Storage

Even as it waits for its new all-flash array (AFA) to hit the market, Nimble Storage did better than expected last quarter. However, its forecast for this quarter was below expectations, with the vendor blaming lower than expected sales and higher losses on a transition period and expenses associated with its new array.

Nimble reported $90.1 million in revenue for last quarter, 32% higher than last year and more than $1 million over the high end of its previous guidance. It lost $9.4 million, slightly up from $8.5 million last year but within the vendor’s guidance range. The forecast for this quarter is for $83 million to $86 million in revenue and a loss of between $20 million and $22 million. That revenue would be up from $71.3 million last year, but it would be by far its greatest quarterly loss since becoming a public company in 2013.

Nimble’s stock price took a big hit for the second straight quarter following earnings. It opened today at $6.63, down from $8.25 at Thursday’s close. That’s not close to the nearly 50% drop the share price took in November after earnings, but it’s still going in the wrong direction. Nimble’s stock price was $32.16 last June 15.

Nimble executives say everything will be fine once its all-flash systems get fully into the market. But the normal drop off in storage sales in the first quarter of any year will hit Nimble harder in 2016. The vendor expects customers will pause buying its hybrid arrays while they check out the all-flash systems.

“Q1 is our seasonally slowest quarter and one in which the large incumbents in our industry typically see a double-digit sequential decrease in revenue from Q4,” Nimble CFO Anup Singh said. “In addition, we have taken into account the potential impact of the AFA product introduction during the quarter will have on sales cycles.”

Singh and CEO Suresh Vasudevan said the AFA launch will cost Nimble approximately $4 million in extra expenses this quarter.

“We are making the investments that we thought were the right level of investments to make,” Vasudevan said, pointing to additional cost for demand generation and channel enablement. “We absolutely believe that it will translate into growth in the second half [of 2016].”

March 4, 2016  10:43 AM

HPE’s Whitman grows hyper for hyper-converged

Dave Raffo
HPE, Hyper-convergence

Hewlett Packard Enterprise (HPE) is planning to go after Nutanix and other hyper-converged vendors with an appliance based on its ProLiant server line.

HPE CEO Meg Whitman revealed the vendor’s hyper-converged plans during its quarterly earnings call Thursday night. It was part of her promise to deliver innovative products in storage and other areas.

She said the new hyper-converged announcement will come this month. Whitman said the system will “offer customers installation in minutes, a consumer-inspired simple mobile array user experience, and automated IT operations. All at 20 percent lower cost than Nutanix.”

Whitman predicted the new system will make HPE a major player in what she identified as a $5 billion hyper-converged market. Nutanix is considered the market leader, although VMware claims it has more customers for its Virtual SAN (VSAN) hyper-converged software.

HPE sells hyper-converged appliances based on its aging StoreVirtual technology acquired from LeftHand Systems. HP briefly signed on as a VMware EVO: Rail partner, but quickly dropped out of that VSAN OEM program.

Whitman said all the technology for the new hyper-converged system is developed by HPE.

“We’re quite excited about this,” Whitman said. “The hyper-converged market is big. It’s growing fast. It’s also getting pretty crowded. You’ve seen a lot of announcements over the last couple of months, but we very much like this product from a side-by-side comparison and features and functionality to our competitors.”

During the call, Whitman said the HPE “innovation engine is firing on all cylinders,” with products coming in many enterprise areas including all-flash storage.

The earnings report was the first for HPE since the HP split, but it told the same old story for storage. Overall storage revenue fell 3% year over year, but 3PAR array revenue grew 21%. CFO Timothy Stonesifer said revenue from 3PAR all-flash systems more than doubled from a year ago. Despite the overall revenue decline, HPE executives claim they gained overall market share.

Whitman held up 3PAR’s all-flash array as an example of the type of innovation she expects from the vendor going forward. Whitman said she favors homegrown products over acquisitions, though she cited 3PAR as an example of both “internally homegrown” technology and a good acquisition.

“The benefit of doing organic innovation is you don’t end up with a Frankenstein of architectures,” she said. “The second choice would be acquisitions that look like 3PAR, 3Com and Aruba that have been very successful acquisitions for us. They are additional complementary technology.”

March 3, 2016  6:50 PM

Brocade VP: Flash spurs coordinated Gen 6 Fibre Channel launch

Carol Sliwa

Brocade, QLogic and Broadcom coordinated this week’s shipments of the first switches and adapters to support Gen 6 Fibre Channel storage networking technology.

We caught up with Jack Rondoni, vice president of storage networking at Brocade, to talk about the latest generation of Fibre Channel (FC), which is designed to support 32 Gigabits per second (Gbps) and 128 Gbps. Rondoni also gave his take on the future of the specialized storage technology at a time when many experts predict the use of Ethernet-based networking will continue to grow for enterprise data storage. Interview excerpts follow.

I’m sure you’ve heard the death knell sounding for Fibre Channel. What might change that picture with the launch of Gen 6 Fibre Channel?

Jack Rondoni: I’ve been hearing that death knell since the year 2000 – which is kind of humorous since we’re in 2016, and the technology is still advancing to the future. The biggest difference right now is, when Gen 5 launched, Brocade was on an island. We were the only one launching that technology in the market. Our competitors were publicly saying Fibre Channel is dead, and the adapter vendors were focusing on [Fibre Channel over Ethernet] FCoE technology – and frankly, so was Brocade. We were working on FCoE technology. But, the difference is we were doing that in parallel. It was not an either/or.

With Gen 6, the vast majority of the ecosystem is already there . . . Part of that is just the failure of FCoE in the market, but I also think it’s clearly the realization that mission-critical applications still run on block storage today. They’re the applications that run companies. It’s not Facebook-class data. If you want to keep those applications running and keep your company running, Fibre Channel is the most proven, most resilient technology out there.

Why did everyone get on board for the Gen 6 launch?

Rondoni: The proliferation of flash-based storage. There’s a clear benefit that solid-state storage gets with Fibre Channel. We see higher attach rates of Fibre Channel to SSD-based storage. And SSD-based storage carries a very good value proposition to the end user community – better performance, better storage capacity utilization. That dynamic in the storage array market was not there [when 16 Gbps Gen 5 FC launched]. It was just at its early stages.

How do you see the battle shaping up between 25/50/100 Gigabit Ethernet and 32/64/128 Gbps Fibre Channel?

Rondoni: Certainly within the Fibre Channel community, there’s aggressive work being done. Obviously Gen 6 includes 32 Gigabit. It also includes 128 Gig, [and] you can take multi 32 Gigs today and trunk ’em at 64 much like most of the early 50 Gig implementations will be.

Netting it out, Fibre Channel will always be faster than Ethernet, whether that’s a comparison of 32 to 25, or 128 Gig Fibre Channel to 100 Gig Ethernet. And if you look further into the future, the standards work being done on 64 Gig Fibre Channel serial technology and 256 Gig parallel is actually ahead of where 50 Gig serial and 200 Gig Ethernet are. From the standards on the Fibre Channel side, we expect 64 or 256 to be done in 2017, while on Ethernet, the 50 and 200 Gig timeframe is going to be probably 2018.

To me, it’s really not about speed. Fibre Channel will always advance the roadmap to be a step ahead of Ethernet for those who care about it. But, the demand for resiliency, availability and deep instrumentation within the storage connectivity networks is actually going to be more important than the speeds because you’re going to have many, many workflows in these environments.

What’s in store for the short- and long-term future of Fibre Channel?

Rondoni: Fibre Channel will continue to advance to enable [enterprises] to use next-generation storage technologies such as high-performance SSDs and [non-volatile memory express] NVMe without ripping and replacing their entire Fibre Channel environment. We’re ready for the future of any kind of new storage devices being thrown at us.

The second thing is that Fibre Channel will continue to advance on its core principles of the highest levels of resiliency, availability and performance, and we’re going to keep the operational costs down as low as possible.

March 3, 2016  11:54 AM

Pure Storage accelerates flash revenue

Dave Raffo
Pure Storage

Pure Storage is looking at more competition and more opportunity than ever, and handling both well. The all-flash vendor bucked industry trends by growing revenue significantly last quarter despite new flash arrays flooding the market.

Pure Wednesday reported $150 million in revenue for last quarter, a 128% increase over last year and well above its own guidance. Its 2015 revenue of $440.3 million grew 152% over 2014. Pure claims it added more than 300 customers in the fourth quarter, bringing its total to more than 1,650.

Pure Storage is still losing money, but cut losses slightly last quarter due to the spike in sales along with a shift to commodity hardware that lowered costs for its M Series arrays.

Its loss of $44.3 million for the quarter compared to a loss of $47.6 million a year ago, although its loss for the year of $213.7 million grew from a $183.2 million loss in 2014. Pure did have free cash flow of $32 million for the quarter, compared to negative $45 million a year ago.

“We are making progress toward profitability,” Pure Storage CEO Scott Dietzen said on the earnings conference call. “We’ve previously said that we expected to reach sustained positive cash flow by 2018, and today we are pleased to pull that date forward to the second half of 2017. The business also rounded the corner on operating losses, which peaked last year, but will be flat this year and then improve going forward.”

Reaching profitability will require sustained revenue growth, too. Pure forecast revenue of between $135 million and $139 million for this quarter, compared to $74 million for the same quarter a year ago. The overall market for all-flash arrays is growing, but so is the amount of competition.

Flash is all over the storage news these days. EMC rolled out two new all-flash products this week and declared 2016 the year that all-flash arrays take over the primary storage world. SanDisk lined up partner IBM to sell its InfiniFlash system, and Tegile Systems started shipping its IntelliFlash box based on SanDisk technology. Nimble Storage launched its first all-flash array last week after years of trying to convince people that hybrid was the way to go. NetApp also closed its acquisition of all-flash startup SolidFire last week, and earlier this year Hitachi Data Systems launched a new all-flash platform.

“That feels to us like a really strong endorsement of the founding thesis of the company,” Dietzen said of all the flash activity.

Pure will also launch new flash products at its Accelerate user conference this month. Pure is well behind EMC’s market-leading XtremIO in revenue from all-flash systems, but is close to or ahead of the other large storage vendors.

Dietzen attributes Pure’s revenue growth to its ability to tap into cloud companies as well as take advantage of flash. He said Pure counts “cloud customers” such as LinkedIn, Intuit and Workday, as well as software-as-a-service and infrastructure-as-a-service providers as a rapidly growing part of its business.

He said Pure is “in a rapidly evolving market that is proving difficult for competitors ill-prepared for the all-flash and cloud disruptions … Our success is being driven by increased customer adoption of our uniquely flash and cloud-friendly storage platform.”

That sets up an interesting battle between Pure and NetApp’s SolidFire arrays that focus on cloud companies.
