All-flash pioneer Violin Memory can’t sell enough all-flash arrays to make money. So its next move could be to sell itself in its entirety.
After another disappointing quarter, Violin CEO Kevin DeNuccio said the company has hired an investment banker to explore “strategic alternatives.” That usually means the company is for sale. But are there any takers with a glut of flash arrays on the market?
“There is no set timetable for completing the process, nor are there any assurances given that the exploration of strategic alternatives will result in any transaction being consummated,” DeNuccio said on the earnings call last night.
A financial analyst on the call asked DeNuccio what type of company might be interested in Violin.
“We think this is an attractive asset, both from a partnership or acquisition potential … “ he said. “The industry is obviously restructuring pretty dramatically with several companies buying storage companies and this company buying flash fabs. So we think there is a broad range of people interested in an asset like this.”
Violin hasn’t been able to find enough people interested in buying its flash arrays. Its $12.5 million in revenue was down 18% from the previous quarter and 42% from last year, and below Violin’s forecast of $16 million to $20 million. Violin lost $22.7 million in the quarter compared to $24.4 million the previous quarter and $23.5 million in the same quarter last year.
Perhaps the most disappointing aspect of last quarter was that product revenue fell 63% to $6.3 million in the second quarter of availability for Violin’s Flash Storage Platform (FSP). Violin bet its business on its FSP, which added storage management and data protection features that were missing in its earlier performance-focused arrays. The FSP is still in early days and Violin added two new models this week, but a paltry $6.3 million in sales in a growing market doesn’t give much hope. Pure Storage, another all-flash pioneer, reported $131.4 million in total revenue and $113.6 million in product revenue in the same quarter.
DeNuccio admitted the quarter’s results were “extremely frustrating.” He said Violin failed to close one multi-million dollar deal involving a customer of its older 6000 technology that pushed back an FSP 7000 Series rollout.
“Our FSP and Concerto software, which consists of more than 50 million lines of code, has taken significant time and rigorous life testing to address the nuances of data risk and customer environments, and to get things fully stabilized and operating properly,” he said.
He said Violin did gain a repeat order for an FSP array from a large U.S. cable company and expects to add more than $1 million in revenue from that company each quarter. “This win … affirms our strategic premise,” DeNuccio said.
Now if that cable company wants to get into the flash storage business, maybe Violin can find a buyer.
3PAR is the bright spot in HP’s storage business, and CEO Meg Whitman made that clear Tuesday during HP’s last earnings call before the split. Whitman, who will lead HPE, gushed over 3PAR’s momentum ahead of the company break-up.
Whitman said 3PAR StoreServ picked up share last quarter in the overall storage and flash-only markets. She said 3PAR all-flash arrays are on pace to generate $500 million in revenue over the next year, and that revenue more than doubled last quarter. That puts 3PAR ahead of the growth of newly public all-flash specialist Pure Storage, although it is still playing catch-up to EMC’s market-leading XtremIO all-flash array. Whitman also pointed to a recently released Gartner report that ranked 3PAR first in critical capabilities for consolidation, OLTP, virtualization/VDI, analytics and cloud use cases, as well as first among high-end arrays.
Overall, HP storage revenue declined seven percent last quarter but Whitman claimed “storage significantly outpaced the market.” HP executives claim they picked up 10 percentage points of share in the all-flash market in the past year.
“Obviously the storage business is doing very well with 3PAR and all-flash,” said Whitman, who added that HPE will become more aggressive to take advantage of confusion created by the Dell-EMC merger.
If the largest storage array vendors that sell Brocade’s SAN switching are getting hit with declining revenue every quarter, that’s bad news for Brocade, right?
If you agree with that, then you’re missing the big picture, according to Brocade CEO Lloyd Carney. Carney said it doesn’t matter if people are buying their storage from big vendors or small, using flash or hard disk drives, or putting their data on premises or in the cloud. He says that if storage capacity grows, that’s good for Brocade, because Brocade is one of only two vendors that sells the switching – Fibre Channel and Ethernet – that storage systems require. And storage capacity is growing, even if people are paying less for it.
“The knee-jerk reaction that a lot of us have is, storage dollars are coming down so something must be wrong with Brocade,” Carney said on Monday’s Brocade earnings call with analysts. “Well as long as storage capacity grows, Brocade is fine.”
At another point on the call, he said: “As long as storage capacity is growing, whether it’s Fibre Channel or IP-based, we will be successful.”
Carney’s optimistic outlook was justified last quarter. Brocade’s $588.8 million in revenue was better than its previous forecast, and its $324.9 million of SAN revenue was far above expectations. Overall SAN revenue was about the same as last year while most storage vendors saw revenue declines last quarter, and Brocade’s large director switch revenue improved 14% over last year.
Carney was less optimistic for this quarter, though. Brocade’s forecast of $550 million to $570 million in revenue is below the $589 million it recorded in the same quarter last year, and it expects SAN revenue to grow no more than three percent, compared with an average gain of nine percent for this quarter in previous years. Brocade still predicts growth for the full 2016 fiscal year, however.
Carney said Brocade won’t get squeezed like the big storage vendors because it doesn’t face nearly as much competition. There are a bunch of smaller storage array vendors trying to take customers from the likes of EMC, NetApp, Hewlett Packard Enterprise, Hitachi Data Systems, IBM and Dell. On the SAN switching side, it’s Brocade and Cisco. And SAN switching is a small part of Cisco’s business.
“We are better positioned than the people who sell raw storage from a competitive standpoint,” he said.
Carney said Brocade’s plan to beat Cisco is to innovate – he pointed to non-disruptive monitoring and analytics added last quarter – and expand its partner base. Brocade recently broadened partnerships with Huawei and Lenovo with the intention of expanding its business in China.
As long as storage capacity expands, Brocade’s CEO expects to take in its fair share.
“Despite what is happening at EMC or NetApp or overall, people are buying more storage,” Carney said. “You yourself are using more storage. You’re sending pictures, you’re sending video content. So everybody is using more storage.
“Now, the price per megabyte or per terabyte of storage is coming down, because it’s such a competitive marketplace. However, the connectivity into that storage, the layer at which we play, is not as competitive a place.”
Nimble Storage’s drive for profitability took a major hit last quarter. Nimble, believed to be one of the smaller competitors eating away at large storage vendors’ sales, won’t hit break-even this quarter as it previously forecast.
Nimble’s revenue of $80.7 million was up 37% from last year, but roughly the same as the previous quarter and below the hybrid array vendor’s forecast of $86 million to $88 million. Nimble lost $11 million in the quarter – its largest quarterly loss since going public in 2013 — and increased spending this quarter will push back profitability indefinitely. Nimble forecast revenue of $87 million to $90 million and losses in the $8 million to $10 million range for this year-ending quarter, typically the best sales period for storage vendors.
Reaction from Wall Street was potentially devastating. Nimble’s share price fell from $20.39 to $10.00 overnight – a drop of nearly 50% — and at least 11 financial analyst firms downgraded the stock.
CEO Suresh Vasudevan blamed the problems on two things. He said Nimble is affected more by large vendors’ price cuts as it moves deeper into the enterprise. And it has struggled to balance growth in the enterprise and its traditional commercial markets while trying to control spending with an eye on profitability. He said Nimble will beef up its commercial sales staff and is working on an all-flash array to help enterprise sales.
“We now believe that this approach of constraining investments at the same time that we diversify our customer base may have impacted our growth,” Vasudevan said on Nimble’s earnings call Thursday night.
He repeated what he has hinted at in the past – that Nimble will add all-flash arrays. Nimble has an all-flash expansion shelf and its arrays can pin data to all-flash volumes, but it lacks an all-flash product as more enterprises are looking to go in that direction.
“We have said that we have a very concrete plan for broadening our flash platform to compete in the entire space and that is still very much on target,” Vasudevan said. “You should expect to see us participating in the entire market with both hybrid flash and all-flash.”
The price cuts from the likes of EMC, NetApp and Hewlett Packard Enterprise could be a tougher problem to solve. Vasudevan really had no specific counters when asked his plans for dealing with that.
“We believe our business foundation remains extremely strong,” he said.
That belief will be tested in coming months. Meanwhile, the drastic drop in Nimble’s stock price is likely to attract some potential suitors who want to broaden their storage portfolios in the wake of the Dell-EMC deal.
NetApp’s latest earnings report told a familiar story.
NetApp Wednesday night said its revenue last quarter – although higher than expected – continued to decline, with the vendor’s most established products taking the biggest hits. Its newer flash, cloud and software-defined storage products are on the upswing, but not nearly enough to keep overall sales up.
These same trends have been hitting NetApp and other large storage vendors for more than a year, and led to EMC’s getting swallowed whole by Dell. They contributed to NetApp switching CEOs from Tom Georgens to George Kurian in June without much change in result. And the trends will almost certainly continue for the foreseeable future.
NetApp’s revenue of $1.45 billion last quarter declined 6.3 percent from last year, and product revenue fell 12% to $819 million. For this quarter, NetApp expects revenue of between $1.4 billion and $1.5 billion, or roughly a seven percent decrease from last year.
“Parts of our business are working well, some parts need improvement and other parts we must manage through declines,” Kurian said on the earnings call. “The IT spending environment continues to be constrained and the expectation for growth for the overall storage market has decreased to low single-digits.”
The parts that are working well, he said, are scale-out, software-defined, flash, converged and hybrid cloud storage. Those areas are growing about 20% a year while the traditional standalone hybrid storage market declines approximately nine percent, he said.
For NetApp, that means its Clustered Data OnTap (CDOT) operating system, hybrid cloud software, and all-flash arrays are growing while its traditional FAS with Data OnTap 7-Mode and its OEM products decline.
Kurian said 7-Mode OnTap shipped on about 30% of NetApp FAS arrays last quarter, down from around 65% a year ago. CDOT was on nearly 70% of new FAS systems shipped in the quarter, up from 35% last year. CDOT systems grew 95% year-over-year while 7-Mode shipments declined 60%. Still, CDOT overall made up only 17% of NetApp’s revenue compared to 15% in the previous quarter. That means 7-Mode customers are still not upgrading in large numbers.
NetApp’s All-Flash FAS array unit shipments increased 445% year-over-year following a price cut, controller upgrade program and seven-year extended warranty in June.
Kurian said Dell’s planned $67 billion acquisition of EMC is “clearly an opportunity” for NetApp because it will create confusion for customers and the channel. “The Dell-EMC transaction is yesterday’s solution to tomorrow’s customer problems,” he said. “It does not fundamentally address the hybrid cloud, nor does it fundamentally address the data management opportunity that customers are forced to deal with. It is really about trying to build efficiency in an integrated hardware business rather than the software-defined data center of the future.”
NetApp has $4.8 billion in cash, but Kurian didn’t sound enthusiastic about making acquisitions to bolster its product portfolio. When asked if he would consider that, he said NetApp’s strategy focused more on growing the emerging products it already has.
Nutanix launched its free Community Edition in June, so customers could try out the software for free on their own x86 hardware instead of purchasing an appliance from Nutanix or its OEM partner Dell. Through a technology partnership with Ravello Systems, the hyper-converged pioneer is now offering the Community Edition in the public cloud for approximately $1 per hour.
Nutanix senior director of technical marketing Greg Smith said more than 10,000 organizations have registered for the Community Edition, but Nutanix also received many requests to make it available as a virtual appliance in the cloud.
“We wanted to simplify the process by which a person could deploy Nutanix in a popular public cloud,” Smith said. “Ravello allowed us to take our existing software and stand up our whole environment on Google or Amazon.”
Ravello Systems uses what it calls a cloud application hypervisor to move multiple-virtual machine applications along with storage and networking to a public cloud.
Ravello director of product marketing Shruti Bhat said a user does not need an Amazon or Google cloud account to use the Nutanix Community Edition. The cloud subscription and billing go through Ravello. A Nutanix “blueprint” is published in Ravello’s catalog, and the customer clicks a drop-down box to publish on Amazon or Google. That provides access to the latest Nutanix Acropolis hypervisor and Prism management software and user interface.
Community Edition is a full-featured version of Nutanix software, but does not include the vendor’s support. Community users cannot upgrade to a licensed version of the product, but can purchase an appliance from Nutanix or Dell.
Pure Storage became the latest flash vendor to support TLC 3D NAND drives that lower the cost of solid-state storage. The vendor also expanded its Pure1 cloud-based management, adding predictive support and capacity planning.
3D TLC NAND will be available as an expansion shelf for Pure’s FlashArray//m Series arrays in early 2016. The shelves hold 44 TB and can be purchased fully populated or with 22 TB of capacity. Pure VP of products Matt Kixmoeller said the lower-cost NAND can reduce the cost of flash to about $1.50 per GB, assuming Pure’s average deduplication ratio of 5.4 to 1. Pure’s MLC flash costs around $2/GB, he said.
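The arithmetic behind those cost claims is simple: effective price per GB is the raw flash price divided by the data reduction ratio. As a rough sketch (the raw $/GB figure below is inferred from the article’s quoted numbers, not a price Pure has published):

```python
# Back-of-the-envelope version of the flash cost math: the effective price
# per GB is the raw flash price divided by the data reduction ratio.
# The implied raw TLC price here is inferred for illustration only.

def effective_cost_per_gb(raw_cost_per_gb: float, reduction_ratio: float) -> float:
    """Cost per GB of usable capacity after data reduction."""
    return raw_cost_per_gb / reduction_ratio

# $1.50/GB effective at Pure's 5.4:1 average reduction implies a raw
# TLC cost of about 1.50 * 5.4 = $8.10 per GB.
implied_raw_tlc = 1.50 * 5.4
print(round(effective_cost_per_gb(implied_raw_tlc, 5.4), 2))  # 1.5
```

The same formula shows why the ratio matters as much as the media price: a workload that reduces poorly (say 2:1) would pay roughly $4/GB on the same hardware.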
The TLC flash modules are 2 TB. Previous expansion shelves used 1 TB modules. Pure’s arrays use all flash with no hard disk drives.
“You will see us float more TLC into the product,” Kixmoeller said. He said Pure will support TLC NAND inside its arrays and in shelves for older arrays.
“Now there are fewer workloads where you can claim you are not able to afford flash,” he added.
Dell, Hewlett Packard Enterprise, Kaminario and SolidFire also support 3D TLC NAND.
Pure is adding features to the Pure1 Global Insight cloud-based management program it introduced this year. Pure1 looks at diagnostic data across Pure arrays in the field, scans against a library of known problems and can predict possible customer trouble, Kixmoeller said.
“It’s like anti-virus software. If a problem is detected, it’s codified into a signature,” he said. “Global Insight constantly scans arrays to see if anything has changed or is wrong.”
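The model Kixmoeller describes can be sketched as a set of predicates run against each array’s phoned-home diagnostics. This is an illustration of the concept only, not Pure1’s implementation; the signature names and diagnostic fields are hypothetical:

```python
# Minimal sketch of signature-based fleet scanning: known problems are
# codified as predicates ("signatures"), and each array's diagnostic data
# is checked against all of them. Signatures and fields are hypothetical.

SIGNATURES = {
    "frequent-failover": lambda diag: diag.get("failovers_24h", 0) > 3,
    "capacity-pressure": lambda diag: diag.get("used_pct", 0) > 90,
}

def scan_fleet(arrays):
    """Return {array_name: [matched signature ids]} for arrays with findings."""
    findings = {}
    for name, diag in arrays.items():
        hits = [sig_id for sig_id, matches in SIGNATURES.items() if matches(diag)]
        if hits:
            findings[name] = hits
    return findings

fleet = {
    "array-a": {"failovers_24h": 5, "used_pct": 50},
    "array-b": {"failovers_24h": 0, "used_pct": 95},
    "array-c": {"failovers_24h": 0, "used_pct": 40},
}
print(scan_fleet(fleet))
# {'array-a': ['frequent-failover'], 'array-b': ['capacity-pressure']}
```

The anti-virus analogy holds because adding a new signature immediately covers the whole installed base: once one customer’s problem is codified, every array is re-scanned against it.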
Pure1’s Capacity Planner identifies usage trends to forecast consumption over time and estimate when a customer might need more storage. It suggests pre-emptive actions for customers.
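The core idea of such a planner can be sketched as a linear trend fit over historical usage, extrapolated to the point where the array fills up. This illustrates the concept only and is not Pure1’s actual forecasting algorithm; the sample data is invented:

```python
# Sketch of capacity forecasting: fit a least-squares linear trend to daily
# used-capacity samples and extrapolate to estimate when the array fills up.
# Illustrative only -- not Pure1's actual algorithm.

def days_until_full(daily_used_gb, capacity_gb):
    """Estimated days until the linear usage trend reaches capacity_gb,
    or None if usage is flat or shrinking."""
    n = len(daily_used_gb)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(daily_used_gb) / n
    var = sum((x - x_mean) ** 2 for x in xs)
    cov = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, daily_used_gb))
    slope = cov / var  # GB of growth per day
    if slope <= 0:
        return None  # no growth trend, so no exhaustion forecast
    return (capacity_gb - daily_used_gb[-1]) / slope

# 30 days of samples growing 10 GB/day against a 44,000 GB shelf
samples = [1000 + 10 * day for day in range(30)]
print(round(days_until_full(samples, 44000)))  # 4271
```

A production planner would also handle seasonality and sudden workload changes, which is why simple extrapolation is paired with the predictive-support scanning described above.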
Pure also added FlashStack converged infrastructure reference architectures for Oracle and SAP. FlashStack reference architectures include Cisco UCS servers and VMware virtualization software.
Datto, which sells backup and disaster recovery as a service, today closed a $75 million funding round although its founder and CEO says the vendor is already profitable.
The B round brings the startup’s total funding to around $100 million. Technology Crossover Ventures (TCV) was the sole venture capitalist involved in the round.
Datto sells its data protection products to managed service providers (MSPs).
“We don’t need this money to fund operations,” Datto CEO Austin McChord said. “We’ve been profitable since 2013. We’ll use this cash to make future investments in technologies and geographies, and we want to bring TCV into the fold.”
The Norwalk, Conn.-based company already has offices in the U.K., and this month moved into Australia and New Zealand. McChord said he intends to expand further.
Datto acquired cloud-to-cloud backup startup Backupify last December. Backupify protects data in Salesforce.com, Google Apps and Microsoft Office 365. McChord left the door open for more acquisitions but said Datto prefers to develop technologies in house. Possible new services Datto may develop include analytics and security.
“We store an enormous amount of data in our cloud, we’re up to about 160 petabytes,” McChord said. “That’s only valuable to our customers now if they have a disaster. So we’re looking at how we can bring value on an every day basis.”
McChord said TCV has a strong track record of working with maturing startups, and TCV general partner Ted Coons has a great deal of experience in the MSP market. Coons joins Datto’s board, along with Patrick Gray and Xerox CEO Ursula Burns.
Datto found itself in the news last month for its involvement with former secretary of state Hillary Clinton’s e-mail. Datto reseller Platte River Networks used a Datto server to store Clinton’s e-mails. Platte River turned over Clinton’s e-mail server to the FBI, and McChord said Datto has also cooperated with the FBI.
“We’ve done everything in accordance with the wishes of our client and the end user,” McChord said. “We’re working hard to protect them like any other customers. Both the end user and Platte River gave us permission to give data over to the FBI. We have done that, and it is no longer in our hands anymore.”
Riverbed this week upgraded its SteelFusion operating system to support VMware vSphere 6 and added the ability to perform incremental upgrades when installing new VMware hypervisor versions on its SteelFusion Edge.
SteelFusion is a combined server and WAN optimization platform for branch and remote offices. The latest release, SteelFusion 4.2, follows the April release of version 4.0, when Riverbed also changed the product’s name from Granite.
Riverbed initially positioned Granite as a storage product, but it also includes WAN optimization and virtual machine management. SteelFusion allows organizations to maintain data in SANs in the data center and push that data out to branch offices. It consists of SteelFusion Core appliances in the data center and SteelFusion Edge, which runs at branch offices and could be software on a Steelhead appliance or a standalone device.
The latest operating system, which will be generally available next week, helps tackle the challenge of having to shut down servers when a VMware refresh is needed. The new function gives users the flexibility to upgrade in increments so the systems can stay up and running.
“You can do an upgrade of one but not do the other,” said Saveen Pakala, Riverbed’s senior director of product management for SteelFusion. “You don’t have to take on a big project. You can break it down.”
The SteelFusion OS also now supports VAAI write-same for improved provisioning and cloning performance, and has faster high-availability synchronization. Pakala said users typically start with a single node and eventually add a second one, which requires a synchronization between systems.
“We have made it easier to add a second node at a later point in time,” he said. “We have made that process a lot faster.”
While it’s nothing new for information technologists to look at alternatives for their infrastructure, there seems to be more interest in that today than ever before.
There is so much interest that it becomes more important than ever to understand and effectively communicate the alternatives. Most of the interest today is around building private clouds or adding a special-purpose system for analytics. Driving the investigation is the fact that they must scale storage massively, and that can make costs soar. I have written an introductory Industry Insight report that can be accessed here.
Existing environments are unlikely to change because current business operations must continue without disruption or added risk. However, pressures from executives and peer companies place a greater focus on examining and evaluating alternatives for storage at scale for new deployments. The pressure can be so great that IT often must report progress to executives on initiatives for deploying these new environments. In response, many pilot programs have started that include evaluating new technology.
These pressures to add storage that scales without excessive cost have led many to evaluate an Open Storage Platform (OSP).
An OSP consists of hardware and the software used to create a storage system that can scale and share data for access by applications written to work in a federated environment. This is usually called a cloud.
The hardware invariably consists of Intel-based servers with attached storage. The attached storage can be internal solid-state drives or hard disk drives, but can also include direct-attached enclosures with disks or flash devices. The software provides the storage functions for accessing and managing data, including data services such as copy and replication for data protection. This is often called software-defined storage, but I prefer “software-based storage” because that term has not been overused and over-hyped.
Software-based storage products include EMC ScaleIO, NetApp Cloud Ontap, IBM Spectrum Accelerate, VMware VSAN and DataCore SANsymphony-V. OSP hardware includes Intel servers with storage acceleration features, SanDisk InfiniFlash and X-IO ISE.
For large organizations, building a storage infrastructure from OSP elements could cost substantially less than purchasing a complete storage system from established vendors. But the investment IT must make in staffing, space, and related infrastructure may go way beyond original plans. Building and integrating storage systems adds risk and requires storage engineers more than administrators. Long-term support requires retaining these people. There must be a strategy for any project to develop a storage infrastructure.
This is the reason Evaluator Group is covering OSP products and strategies. The questions coming from IT organizations have been numerous enough to show the need to carefully evaluate the options required for a successful deployment.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).