Nimble Storage’s drive for profitability took a major hit last quarter. Nimble, which was believed to be one of the smaller competitors eating away at large storage vendors’ sales, won’t hit break-even this quarter as it previously forecasted.
Nimble’s revenue of $80.7 million was up 37% from last year, but roughly flat with the previous quarter and below the hybrid array vendor’s forecast of $86 million to $88 million. Nimble lost $11 million in the quarter – its largest quarterly loss since going public in 2013 — and increased spending this quarter will push back profitability indefinitely. Nimble forecast revenue of $87 million to $90 million and losses in the $8 million to $10 million range for this year-ending quarter, typically the best sales period for storage vendors.
Reaction from Wall Street was potentially devastating. Nimble’s share price fell from $20.39 to $10.00 overnight – a drop of nearly 50% — and at least 11 financial analyst firms downgraded the stock.
CEO Suresh Vasudevan blamed the problems on two things. He said Nimble is affected more by large vendors’ price cuts as it moves deeper into the enterprise. And it has struggled to balance growth in the enterprise and its traditional commercial markets while trying to control spending with an eye on profitability. He said Nimble will beef up its commercial sales staff and is working on an all-flash array to help enterprise sales.
“We now believe that this approach of constraining investments at the same time that we diversify our customer base may have impacted our growth,” Vasudevan said on Nimble’s earnings call Thursday night.
He repeated what he has hinted at in the past – that Nimble will add all-flash arrays. Nimble has an all-flash expansion shelf and its arrays can pin data to all-flash volumes, but it lacks an all-flash product as more enterprises are looking to go in that direction.
“We have said that we have a very concrete plan for broadening our flash platform to compete in the entire space and that is still very much on target,” Vasudevan said. “You should expect to see us participating in the entire market with both hybrid flash and all-flash.”
The price cuts from the likes of EMC, NetApp and Hewlett Packard Enterprise could be a tougher problem to solve. Vasudevan offered no specific counters when asked how he plans to deal with them.
“We believe our business foundation remains extremely strong,” he said.
That belief will be tested in coming months. Meanwhile, the drastic drop in Nimble’s stock price is likely to attract some potential suitors who want to broaden their storage portfolios in the wake of the Dell-EMC deal.
NetApp’s latest earnings report told a familiar story.
NetApp Wednesday night said its revenue last quarter – although higher than expected – continued to decline, with the vendor’s most established products taking the biggest hits. Its newer flash, cloud and software-defined storage products are on the upswing, but not nearly enough to keep overall sales up.
These same trends have been hitting NetApp and other large storage vendors for more than a year, and led to EMC’s getting swallowed whole by Dell. They contributed to NetApp switching CEOs from Tom Georgens to George Kurian in June without much change in result. And the trends will almost certainly continue for the foreseeable future.
NetApp’s revenue of $1.45 billion last quarter declined 6.3% from last year, and product revenue fell 12% to $819 million. For this quarter, NetApp expects revenue of between $1.4 billion and $1.5 billion, roughly a seven percent decrease from last year.
“Parts of our business are working well, some parts need improvement and other parts we must manage through declines,” Kurian said on the earnings call. “The IT spending environment continues to be constrained and the expectation for growth for the overall storage market has decreased to low single-digits.”
The parts that are working well, he said, are scale-out, software-defined, flash, converged and hybrid cloud storage. Those areas are growing about 20% a year while the traditional standalone hybrid storage market declines approximately nine percent, he said.
For NetApp, that means its Clustered Data OnTap (CDOT) operating system, hybrid cloud software, and all-flash arrays are growing while its traditional FAS with Data OnTap 7-Mode and its OEM products decline.
Kurian said 7-Mode OnTap shipped on about 30% of NetApp FAS arrays last quarter, down from around 65% a year ago. CDOT was on nearly 70% of new FAS systems shipped in the quarter, up from 35% last year. CDOT systems grew 95% year-over-year while 7-Mode shipments declined 60%. Still, CDOT overall made up only 17% of NetApp’s revenue compared to 15% in the previous quarter. That means 7-Mode customers are still not upgrading in large numbers.
NetApp’s All-Flash FAS array unit shipments increased 445% year-over-year following a price cut, controller upgrade program and seven-year extended warranty in June.
Kurian said Dell’s planned $67 billion acquisition of EMC is “clearly an opportunity” for NetApp because it will create confusion for customers and the channel. “The Dell-EMC transaction is yesterday’s solution to tomorrow’s customer problems,” he said. “It does not fundamentally address the hybrid cloud, it does not fundamentally address the data management opportunity that customers are forced to deal with. It is really about trying to build efficiency in an integrated hardware business rather than the software-defined data center of the future.”
NetApp has $4.8 billion in cash, but Kurian didn’t sound enthusiastic about making acquisitions to bolster its product portfolio. When asked if he would consider that, he said NetApp’s strategy focused more on growing the emerging products it already has.
Nutanix launched its free Community Edition in June, so customers could try out the software for free on their own x86 hardware instead of purchasing an appliance from Nutanix or its OEM partner Dell. Through a technology partnership with Ravello Systems, the hyper-converged pioneer is now offering the Community Edition in the public cloud for approximately $1 per hour.
Nutanix senior director of technical marketing Greg Smith said more than 10,000 organizations have registered for the Community Edition, but Nutanix also received many requests to make it available as a virtual appliance in the cloud.
“We wanted to simplify the process by which a person could deploy Nutanix in a popular public cloud,” Smith said. “Ravello allowed us to take our existing software and stand up our whole environment on Google or Amazon.”
Ravello Systems uses what it calls a cloud application hypervisor to move multiple-virtual machine applications along with storage and networking to a public cloud.
Ravello director of product marketing Shruti Bhat said a user does not need an Amazon or Google cloud account to use the Nutanix Community Edition. The cloud subscription and billing go through Ravello. A Nutanix “blueprint” is published in Ravello’s catalog, and the customer clicks a drop-down box to publish it on Amazon or Google. That provides access to the latest Nutanix Acropolis hypervisor and Prism management software and user interface.
Community Edition is a full-featured version of Nutanix software, but does not include the vendor’s support. Community users cannot upgrade to a licensed version of the product, but can purchase an appliance from Nutanix or Dell.
Pure Storage became the latest flash vendor to support TLC 3D NAND drives that lower the cost of solid-state storage. The vendor also expanded its Pure1 cloud-based management, adding predictive support and capacity planning.
3D TLC NAND will be available as an expansion shelf for Pure’s FlashArray//m Series arrays in early 2016. The shelves hold 44 TB and can be purchased fully populated or with 22 TB of capacity. Pure VP of products Matt Kixmoeller said the lower-cost NAND can reduce the cost of flash to about $1.50 per GB, assuming Pure’s average deduplication ratio of 5.4 to 1. Pure’s MLC flash costs around $2/GB, he said.
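The arithmetic behind that figure is simple: effective cost per usable gigabyte is the raw media cost divided by the data reduction ratio. A minimal sketch, where the $8.10 raw figure is back-calculated from the quoted numbers for illustration and is not a price Pure disclosed:

```python
# Effective cost per usable GB: raw media cost divided by the data reduction
# ratio. Each raw GB stores reduction_ratio GB of logical data after dedup.
def effective_cost_per_gb(raw_cost_per_gb, reduction_ratio):
    return raw_cost_per_gb / reduction_ratio

# Pure's quoted 5.4:1 average reduction, with an assumed ~$8.10/GB raw cost,
# yields roughly the $1.50/GB figure cited above.
print(round(effective_cost_per_gb(8.10, 5.4), 2))  # 1.5
```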
The TLC flash modules are 2 TB. Previous expansion shelves used 1 TB modules. Pure’s arrays use all flash with no hard disk drives.
“You will see us float more TLC into the product,” Kixmoeller said. He said Pure will support TLC NAND inside its arrays and in shelves for older arrays.
“Now there are fewer workloads where you can claim you are not able to afford flash,” he added.
Dell, Hewlett Packard Enterprise, Kaminario and SolidFire also support 3D TLC NAND.
Pure is adding features to the Pure1 Global Insight cloud-based management program it launched this year. Pure1 collects diagnostic data across Pure arrays in the field, scans it against a library of known problems and can predict possible customer trouble, Kixmoeller said.
“It’s like anti-virus software. If a problem is detected, it’s codified into a signature,” he said. “Global Insight constantly scans arrays to see if anything has changed or is wrong.”
Pure1’s Capacity Planner identifies usage trends to forecast consumption over time and estimate when a customer might need more storage. It suggests pre-emptive action for customers.
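Trend-based capacity forecasting of this kind can be sketched in a few lines. The fitting method below (a least-squares line over daily used-capacity samples) is an assumption for illustration, not Pure’s actual algorithm:

```python
# Fit a straight line to daily used-GB samples and project when the array
# reaches its capacity. Returns days from the first sample, or None if usage
# is flat or shrinking (no projected fill date).
def days_until_full(samples, capacity):
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    # Least-squares slope (GB/day) and intercept (GB at day 0)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    slope /= sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None
    intercept = mean_y - slope * mean_x
    return (capacity - intercept) / slope

# Example: an array at 100 GB used, growing 10 GB/day, with 1,000 GB capacity
print(days_until_full([100, 110, 120, 130], 1000))  # 90.0
```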
Pure also added FlashStack converged infrastructure reference architectures for Oracle and SAP. FlashStack reference architectures include Cisco UCS servers and VMware virtualization software.
Datto, which sells backup and disaster recovery as a service, today closed a $75 million funding round although its founder and CEO says the vendor is already profitable.
The B round brings the startup’s total funding to around $100 million. Technology Crossover Ventures (TCV) was the sole venture capitalist involved in the round.
Datto sells its data protection products to managed service providers (MSPs).
“We don’t need this money to fund operations,” Datto CEO Austin McChord said. “We’ve been profitable since 2013. We’ll use this cash to make future investments in technologies and geographies, and we want to bring TCV into the fold.”
The Norwalk, Conn.-based company already has offices in the U.K., and this month moved into Australia and New Zealand. McChord said he intends to expand further.
Datto acquired cloud-to-cloud backup startup Backupify last December. Backupify protects data in Salesforce.com, Google Apps and Microsoft Office 365. McChord left the door open for more acquisitions but said Datto prefers to develop technologies in-house. Possible new services Datto may develop include analytics and security.
“We store an enormous amount of data in our cloud, we’re up to about 160 petabytes,” McChord said. “That’s only valuable to our customers now if they have a disaster. So we’re looking at how we can bring value on an every day basis.”
McChord said TCV has a strong track record of working with maturing startups, and TCV general partner Ted Coons has a great deal of experience in the MSP market. Coons joins Datto’s board, along with Patrick Gray and Xerox CEO Ursula Burns.
Datto found itself in the news last month for its involvement with former secretary of state Hillary Clinton’s e-mail. Datto reseller Platte River Networks used a Datto server to store Clinton e-mails. Platte River turned over Clinton’s e-mail server to the FBI, and McChord said Datto has also cooperated with the FBI.
“We’ve done everything in accordance with the wishes of our client and the end user,” McChord said. “We’re working hard to protect them like any other customers. Both the end user and Platte River gave us permission to give data over to the FBI. We have done that, and it is no longer in our hands anymore.”
Riverbed this week upgraded its SteelFusion operating system to support VMware vSphere 6, and added the ability to perform incremental upgrades when installing new VMware hypervisor versions on its SteelFusion Edge.
SteelFusion is a combined server and WAN optimization platform for branch or remote offices. The latest release, SteelFusion 4.2, follows the April release of version 4.0, when Riverbed also changed the product’s name from Granite.
Riverbed initially positioned Granite as a storage product, but it also includes WAN optimization and virtual machine management. SteelFusion allows organizations to maintain data in SANs in the data center and push that data out to branch offices. It consists of SteelFusion Core appliances in the data center and SteelFusion Edge, which runs at branch offices and could be software on a Steelhead appliance or a standalone device.
The latest operating system, which will be generally available next week, helps tackle the challenge of shutting down the servers when a VMware refresh needs to be done. The new function gives users the flexibility to do the upgrade in increments so the systems can stay up and running.
“You can do an upgrade of one but not do the other,” said Saveen Pakala, senior director of product management for SteelFusion. “You don’t have to take on a big project. You can break it down.”
The SteelFusion OS also now supports VAAI write-same for improved provisioning and cloning performance. It also now has faster high-availability synchronization. Pakala said typically users start with a single node and then eventually add a second one and that requires a synchronization between systems.
“We have made it easier to add a second node at a later point in time,” he said. “We have made that process a lot faster.”
While it’s nothing new for information technologists to look at alternatives for their infrastructure, there seems to be more interest in that today than ever before.
There is so much interest that it becomes more important than ever to understand and effectively communicate the alternatives. Most of the interest today is around building private clouds or adding a special-purpose system for analytics. Driving the investigation is the fact that they must scale storage massively, and that can make costs soar. I have written an introductory Industry Insight report that can be accessed here.
Existing environments are unlikely to change because current business operations must continue without disruption or added risk. However, pressure from executives and peer companies is placing a greater focus on examining and evaluating alternatives for storage at scale in new deployments. The pressure can be so great that IT often must report progress to executives on initiatives for deploying these new environments. In response, many pilot programs have started that include evaluating new technology.
This pressure to add storage that scales without costing too much has led many to evaluate an Open Storage Platform (OSP).
An OSP consists of hardware and the software used to create a storage system that can scale and share data for access by applications written to work in a federated environment. This is usually called a cloud.
The hardware invariably is Intel-based servers with attached storage. The attached storage can be internal solid-state drives or hard disk drives, but can also include direct-attached enclosures with disks or flash devices. The software provides the storage functions for accessing and managing data, including data services such as copy and replication for data protection. This is often called software-defined storage, but I prefer “software-based storage” because that term has not been overused and over-hyped.
Software-based storage includes EMC ScaleIO, NetApp Cloud Ontap, IBM Spectrum Accelerate, VMware VSAN and DataCore SANsymphony-V. OSP hardware includes Intel servers with storage acceleration features, SanDisk InfiniFlash and X-IO ISE.
For large organizations, building a storage infrastructure from OSP elements could cost substantially less than purchasing a complete storage system from established vendors. But the investment IT must make in staffing, space, and related infrastructure may go way beyond original plans. Building and integrating storage systems adds risk and requires storage engineers more than administrators. Long-term support requires retaining these people. There must be a strategy for any project to develop a storage infrastructure.
This is why Evaluator Group is covering OSP products and strategies. The volume of questions coming from IT has made clear the need to carefully evaluate the options required for a successful deployment.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
ClearSky Data, which in August launched its managed service for primary storage, today closed a $27 million Series B funding round to expand sales and marketing teams.
Polaris Partners led the round with content delivery network provider Akamai Technologies involved as a strategic investor. Previous investors General Catalyst and Highland Capital Partners also participated. The funding brings Boston-based ClearSky’s total to $39 million.
ClearSky’s tiered service lets companies run primary workloads on-premises and in ClearSky’s point-of-presence (POP) data centers. For data protection, it replicates data to object storage in Amazon Simple Storage Service (S3).
ClearSky founder and CEO Ellen Rubin said ClearSky will build up its 40-person team with most expansion coming in sales/marketing and operations/support. She said the investment from Akamai and the addition of Akamai Labs CTO Andy Champagne as a ClearSky board adviser should help the startup get its foot in the door with enterprise customers.
“We’ll be working with them as we go to market,” Rubin said. “We’ll engage with their sales teams and marketing teams as we go to enterprise customers. Basically, any enterprise we’re speaking with is already an Akamai customer.”
Rubin said that after two months of having its beta service in the market, ClearSky is trying to convert those beta customers to paying customers. Part of that process is learning what the beta users want to see on ClearSky’s roadmap. She said ClearSky will soon add Fibre Channel and NFS support to go with its original iSCSI storage. It is also working on adding POPs in major metro areas for early 2016. ClearSky started with POPs in Boston, Philadelphia and Las Vegas.
“We’re using the cloud the way it was meant to be used,” she said. “We give you all the benefits of cloud but we incorporate it as a set of managed services.”
As for which workloads her customers want to move to the cloud, Rubin said “most have multi-year long plans where they are going into the cloud with production workloads. Now we see mostly test/dev, web apps and DR. IT teams are trying to get their arms around ‘How do we make sure we don’t give up anything – availability, security and visibility into what’s happening?’ There’s a bunch of things they’re not interested in ever moving. There are other things they might move, but not yet. We’re living in an evolving hybrid world and I feel it will be that way for a while.”
Dell’s pending $67 billion acquisition of EMC has sparked plenty of discussion about overlapping products. Yet despite the enormity of their potential merger, the newly fused Dell-EMC may not have everything it takes to compete in a changing storage world.
“It doesn’t solve their problems of needing to look at how they’re going to react to next-generation players like Amazon. It makes them bigger, but it doesn’t give them innovation,” said Mark Lewis, CEO of Formation Data Systems, a startup specializing in data and storage virtualization technology that runs on commodity x86 server hardware.
Lewis has fielded lots of questions about Dell’s acquisition because he worked at EMC from 2002 to 2012, including a five-year stint as executive vice president and chief technology officer. Also a veteran of Compaq/HP and DEC, Lewis has witnessed his share of IT industry mergers — and he admitted he struggles to recall major tech acquisitions that have been successful. Execution will be the critical element, he said.
“It’s a really good move for Dell,” said Lewis. “If you think about Dell wanting to get into the enterprise, wanting to diversify, they had some real gaps in the enterprise. You can build it, but it’s really hard to build that go to market and all those capabilities. So, I think it’ll be really good for them structurally and economically.”
But, as CEO of a startup, Lewis also views Dell’s acquisition as “a big positive, because it really showed how the legacy market, or the existing data center market, is going to continue to consolidate because of slow growth and the need for consolidation of that platform as we move to the next-generation platform.”
Lewis sees the next-generation platform as the “software-defined cloud storage” that creates systems like Amazon Web Services, which enable users to set up storage in automated fashion. He said legacy systems target storage administrators with low-level tasks such as provisioning storage and service-level agreements. And the older vendors often try to leverage their old technology “when really what they’ve got to do is a paradigm shift,” he said.
“You don’t merge two video rental companies and get Netflix,” said Lewis, echoing sentiments he raised on his Formation-hosted blog. “You get scale. You get other advantages. You get operational efficiencies. But, you don’t get innovation, and that’s an element they’re going to have to figure out how to drive into their new business.”
One challenge confronting Dell-EMC will be the melding of “inherently different cultures” spanning Dell’s Texas-based operations and EMC’s Massachusetts headquarters and California-based VMware division, Lewis acknowledged.
“Culture’s a big part in acquisitions and integrations and mergers, and people often [discount] that impact,” he said. “That’s something they’re going to have to sort out.”
Could startups such as Formation Data Systems ultimately become acquisition targets for traditional storage vendors such as Dell-EMC? Lewis said although they will need to look at buying “third-generation data center technology” at some point, such acquisitions would present difficulties.
“Our economics are such that we believe we can lower the cost to store data by 10x,” Lewis said. “That fundamentally changes the market dynamic. I can grow to a billion-dollar company, but I might consume $10 billion in total market to do it. So it’s good for me. For the guys that make the 10 billion, it’s not so good.”
Glenn O’Donnell, a vice president at Forrester Research, thinks it’s more likely that the new Dell-EMC will build its way into the storage software space. “They’ll make some targeted acquisitions, but acquisitions are going to be difficult in the future. Now that they’re shelling out all this money for EMC, they don’t have much left in the kitty,” he said.
“EMC has a lot of software people,” O’Donnell added. “Everybody looks at both of these companies as hardware companies — which clearly they are – but nobody’s in the hardware business these days unless you do an awful lot of software.”
The big storage array hardware vendors aren’t the only ones seeing revenue decline from last year. This week, backup vendors Commvault, Quantum and FalconStor all reported losses and revenue declines for last quarter.
All three are in transition. Commvault is trying to rebound from a six-quarter slump by rebooting its product, branding and sales strategies. Quantum has had an up and down year, but is looking to become a major player in media/entertainment and video storage with its StorNext scale-out file system to go with its tape and disk backup product. FalconStor is still trying to dig out of a disastrous 2010-11 period with FreeStor, the latest version of its data protection and storage management software that it sells to OEMs, service providers and enterprises.
All three say they are close to turning the corner, but none of them did last quarter.
Commvault lost $9.2 million in the quarter. Its $141 million in revenue was a bit better than expected but down seven percent from last year. Software licensing revenue of $58 million fell 17% from last year.
Quantum’s $117 million in revenue missed its guidance. Quantum executives said there was $7.8 million in backlogged orders, but even counting those, revenue would fall below the $135.1 million of a year ago. Quantum lost $11.2 million for the quarter.
FalconStor’s $9.7 million in revenue for the quarter was down 13% from last year, and half of it came from OEM partner Hitachi Data Systems. The vendor lost $2.5 million in the quarter.
There is one backup software vendor doing fine: privately held Veeam Software. Veeam does not report detailed earnings, but said it had 17% bookings growth and 23% growth in enterprise customer orders last quarter, and added 11,300 paying customers.
Commvault is moving to what the company calls Commvault Next following a string of poor quarters in the face of increased competition. The changes include a rebranding of the company (no more capital V) and the main product (no more Simpana), new sales and marketing leadership, and a new sales model (selling point products for targeted use cases). Commvault just released the 11th generation of its software, now called the Commvault Data Platform, which will probably be the main factor in whether its turnaround succeeds or fails.
“The objective of Commvault Next is to return CommVault to solid revenue and earnings growth in an environment driven by security disruptions caused primarily by the shift to the cloud and SaaS environments,” CEO Bob Hammer said. “A lot of [storage] companies have been disrupted. I said two years ago we had to change fundamentally everything we are doing. First of all you have to be relevant and provide highly differentiated value in the cloud and for SaaS types of applications. That requires fundamental architectural changes to the platform to enable all that to happen.”
Hammer said the company is poised for “substantial growth” over the next six months.
“We have made good progress on our transformation as a result of momentum building, and we are positioning Commvault for better financial results in the [next six months]. We have built the foundation for CommVault to generate both revenue and earnings growth in FY 2016 — 2017.”
Despite Quantum’s disappointing quarter, StorNext and related scale-out storage revenue of $29.9 million increased 17% from last year and has grown 33% over the last six months. But even that is below the vendor’s goal of 50% growth in scale-out storage.
Quantum CEO Jon Gacek blamed the low revenue quarter on an unusually high number of customers placing orders on the last day of the quarter, a shortage of supply from disk partners that prevented Quantum from filling all the orders it received, fewer large deals, and larger competitors’ willingness to lower prices to win deals. With the help of the late orders, Quantum’s revenue guidance of $130 million to $140 million for this quarter was higher than expected.
“Scale-out is where the market is expanding for us,” Gacek said. “We have unique position there and that’s where we’re investing money.”
FalconStor CEO Gary Quinn said Volkswagen signed on as the first FreeStor Global 2000 enterprise customer.
“I think once we show two to three quarters sequentially performing in the upward direction and add some additional resources, we will know that the North American business is stable and improving,” Quinn said. “The software-defined storage space is a very exciting and very potentially large addressable market to grow into as the future moves forward here. And we think that we are well-positioned in the center of that marketplace as we enter 2016.”