All-flash array vendor Violin Memory recorded less revenue and a greater loss than expected last quarter. Still, its earnings report was better than the previous quarter's.
Violin’s first quarterly earnings report as a public company last November was a train wreck. Its poor revenue and guidance surprised investors and the storage industry, causing its stock price to plummet and a large investor to call for the company to put up a “for sale” sign. The board fired CEO Don Basile less than a month later and hired Kevin DeNuccio to replace him in early February.
Violin Thursday reported revenue of $28 million for the fourth quarter of 2013 and $108 million for the year. Its losses were $56 million for the quarter and $150 million for the year. Quarterly revenue was up 22 percent from the same quarter last year but down from $28.3 million in the previous quarter. For the year, revenue increased 46 percent over 2012, but the quarterly and yearly losses were greater than those of the previous quarter and year.
DeNuccio outlined his plans for a turnaround on Thursday’s earnings call. Those plans consist mainly of reducing expenses by selling off its PCIe flash business and cutting staff related to that business. DeNuccio has revamped the Violin management team. He brought in Eric Herzog from EMC to head marketing and business development and Tim Mitchell from Avaya to take over global field operations.
DeNuccio said Violin will have new flash hardware and software products in the next few months. “We expect to make one of the most significant product announcements in our history,” he said.
He defended the decision to sell the PCIe business launched a year ago by saying “It was clear that we grew too much, too fast. Now it’s a matter of how do we get the company into a size that is manageable, and how do we focus on an area that we are successful in?”
Violin was the all-flash array revenue leader in 2012 according to Gartner, but new entrants from large players such as EMC, NetApp, Hitachi Data Systems, IBM, Dell and Hewlett-Packard changed the market in 2013.
“We have formidable competitors,” DeNuccio said. “We’re at the top of the pyramid, and we compete with the big boys. But we’re confident that our technology is unique enough and we can establish ourselves running the critical applications for our customers to allow us to compete at that level.”
The latest version of Spanning Backup for Google Apps launched this week includes a status reporting feature that shows customers problems with the most recent backup. This report includes data that cannot be backed up because of limitations in the Google API that affect files such as Google Forms and scripts.
“Customers need to trust that data will be there when they restore,” said Mat Hamlin, Spanning’s director of product management. “We’re now providing granular insight into each user’s data so administrators can understand what data has been backed up and what data has not been backed up. Third-party files, Google Forms and scripts are not available for us to back up. Customers may not be aware of that. When they come to us to back up all the data, the expectation is we will back up all that data. We want them to know what we cannot back up.”
Google Apps and Salesforce.com are the chief software-as-a-service (SaaS) applications protected by cloud-to-cloud backup vendors.
Spanning’s new report also brings other problems to customers’ attention so they can take action. It flags zero byte files that could indicate corrupt files, and points out temporary problems that are likely to be resolved within two or three days.
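Conceptually, a status report like the one described boils down to scanning the most recent backup's metadata and flagging anomalies. A minimal Python sketch, with a made-up metadata format (the field names and statuses are assumptions for illustration, not Spanning's actual API):

```python
def flag_backup_issues(files):
    """Scan backup metadata and flag items needing attention.

    `files` is a list of dicts with hypothetical fields:
      name, size_bytes, status ('ok', 'unsupported', 'transient_error').
    """
    issues = {"zero_byte": [], "unsupported": [], "transient": []}
    for f in files:
        if f["status"] == "unsupported":
            # e.g. Google Forms or scripts the API exposes no export for
            issues["unsupported"].append(f["name"])
        elif f["status"] == "transient_error":
            # likely to resolve on a retry within a few days
            issues["transient"].append(f["name"])
        elif f["size_bytes"] == 0:
            # a zero-byte payload can indicate a corrupt source file
            issues["zero_byte"].append(f["name"])
    return issues
```

An administrator report would then group these buckets per user, matching the "granular insight into each user's data" Hamlin describes.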
Hamlin said the data that cannot be backed up typically makes up a small percentage of the data in Google Apps. He said Spanning is coming clean to add transparency, both for Spanning and competitors. He said Spanning often tells customers up front about the limitations, but competitors will not admit those limitations.
Ben Thomas, VP of security of Spanning’s chief competitor Backupify, said Backupify for Google Apps runs into the same problems. However, he said there are ways to minimize these limitations.
“We do have similar things we run into,” Thomas said. “Some cloud systems, whether it’s Google or Salesforce or other apps, may not have API calls available to pieces of data. Some API calls may be throttled, so only so many API calls per hour or per day can be made. We’ve been smart over the years about the way we manage throttling. For example, Google will throttle the amount of data per day per e-mail. The limit is 1.5 gigabytes a day now. If we’re continually hitting that limit, we scale ourselves back to meet that. And we do that for every API.”
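Thomas's description of adaptive throttling amounts to a per-mailbox daily byte budget: the client tracks consumption against the provider's limit and backs off before hitting it. A hedged Python illustration (the 1.5 GB figure comes from the quote above; the class and headroom factor are assumptions):

```python
DAILY_LIMIT_BYTES = int(1.5 * 1024**3)  # per-mailbox daily cap cited in the article

class DailyQuotaThrottle:
    """Track bytes fetched per mailbox today and back off near the cap."""

    def __init__(self, limit=DAILY_LIMIT_BYTES, headroom=0.9):
        self.limit = limit
        self.headroom = headroom  # stop at 90% to avoid hard throttling
        self.used = {}            # mailbox -> bytes fetched today

    def can_fetch(self, mailbox, size):
        """True if fetching `size` bytes stays under the soft limit."""
        return self.used.get(mailbox, 0) + size <= self.limit * self.headroom

    def record(self, mailbox, size):
        """Account for bytes actually fetched."""
        self.used[mailbox] = self.used.get(mailbox, 0) + size
```

A backup job would call `can_fetch` before each API request and defer the mailbox to the next day when it returns false, which is the "scale ourselves back" behavior Thomas describes.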
Storage vendors are counting more on cloud storage providers and less on enterprises for large implementations these days. Rackspace’s new SAN design is a prime example of that.
Rackspace upgraded to Brocade Gen5 (16 Gbps) Fibre Channel (FC) switching and EMC VMAX enterprise arrays to achieve greater performance and density in its data centers around the world, according to Rackspace CTO Sean Wedige. Rackspace previously used Brocade’s 8 Gbps switches, but redesigned its SAN to take advantage of the extra bandwidth and density of the new gear.
“Storage is a big driver for us,” Wedige said. “Our customers’ storage is growing exponentially. It dwarfs what we saw just two years ago. We’re looking to increase our densities and speed.”
Rackspace’s new SAN set-up links Brocade 8510 Backbone director switches with Brocade’s UltraScale Inter-Chassis Links (ICLs), and its ports are connected to EMC VMAX enterprise storage arrays and Dell servers. Using four 16 Gbps FC cables gives Rackspace 64 Gbps of connectivity between directors. UltraScale ICLs can connect 10 Brocade DCX 8510 Backbones.
Rackspace also uses EMC VNX and Isilon storage.
Wedige said Rackspace runs FC SANs in seven of its eight data centers around the world, and will eventually add FC to its Sydney, Australia data center too. He said Rackspace has multiple petabytes of storage and thousands of ports, and Brocade’s Gen5 switching enables more ports per square foot in the data center.
“The bulk of our large customers are using a SAN,” he said. “We use SANs for customers who are looking for dedicated infrastructure, high performance and fault tolerance.”
Wedige said the new design lets Rackspace connect servers to storage that is in different data centers, giving the hosting company more flexibility and better port utilization.
He said, like all of Rackspace’s storage infrastructure, the new SAN design withstood thorough testing before it was deployed.
“One of our biggest challenges is, because of our scale, we tend to break a lot of things,” he said. “Vendors appreciate that we put stuff to the test, but a lot of it may not be as suitable as we’d like for our environment.”
The term storage virtualization has been with us since 1999, and the concept continues with new product offerings that are variations of the original.
The longevity of storage virtualization in a high tech world where new ideas gain a foothold rapidly is a testament to the value that storage virtualization delivers. But there are many descriptions for storage virtualization based on the variety of products and the desire of vendor marketing to distinguish their products. A quick review of what is encompassed by the general phrase “storage virtualization” might be useful to characterize these offerings and the context in which they are typically used.
First, let’s look at the descriptions of virtualization:
- Grouping or pooling of resources for greater resource utilization.
- Abstraction to enable storage management at a higher level. This includes the promised ability to automate actions across the virtualized resources and the ability to use the same management tools across heterogeneous devices.
- Applying advanced features such as remote replication and point-in-time copies (snapshot) across the aggregated, abstracted resources without having to use multiple, device-specific capabilities.
- Distribution of data to aid in performance. This may be for parallel access or load balancing.
- Transparent migration of data between LUNs and storage systems for purposes such as asset retirement, technology upgrades, and load and capacity balancing.
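The block-virtualization case in the list above, where one LUN presented to a host is backed by extents pooled from several arrays, reduces to an address-translation table. A simplified Python sketch of that mapping (the data layout is illustrative, not any vendor's actual format):

```python
class VirtualLUN:
    """Map a virtual block address to (backend array, physical block).

    `extents` is an ordered list of (array_id, start_block, length)
    tuples; concatenated, they form the virtual LUN's address space.
    """

    def __init__(self, extents):
        self.extents = extents

    def translate(self, vblock):
        """Resolve a virtual block number to its backing location."""
        offset = 0
        for array_id, start, length in self.extents:
            if vblock < offset + length:
                return array_id, start + (vblock - offset)
            offset += length
        raise ValueError("block beyond LUN capacity")
```

Transparent migration (the last bullet) is then a matter of copying an extent to a new array and atomically updating this table, which is why hosts never notice the move.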
The different types of storage virtualization are depicted in this graphic:
There are preconceptions about the term storage virtualization that exist primarily because of the success of products and product marketing. Most understand storage virtualization to be block virtualization, where LUNs constituted from multiple storage resources, potentially from different storage systems and vendors, are presented to attached hosts. The next preconception is that the block virtualization is done in-band (in the data path), which again comes from the success or predominance of those types of solutions. In fact, there are several locations where the virtualization can occur, either in-band or out-of-band.
The attachment of external storage systems/arrays by other storage systems (storage system-based virtualization) has been commonly deployed with many vendors under different names. Using an appliance in the data path (in-band) is the most prevalent method of storage virtualization. And abstracting the access to the storage through software installed on servers is another approach that is usually out-of-band from the data access standpoint but is done through the control of where the access is targeted.
Confusion remains around storage virtualization as vendors try to highlight different characteristics their products bring to customers. Ultimately, storage virtualization is all about the value delivered. It can affect the capital expense with greater resource utilization and greater performance. It can have a measurable effect on the operational expense with management costs, licensing costs for advanced features, and the ability to transparently migrate data between systems. IT must look beyond the labels applied to the solution and focus on the value the solution brings.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Nexenta this week enhanced its open source ZFS-based NexentaStor storage software with a release that includes a code base migration to the illumos open-source operating system, support for 512 GB of memory per cache head, and a faster high-availability process.
NexentaStor is a unified storage platform that supports Fibre Channel and iSCSI for block storage and NFS, CIFS and SMB for files across active/active controllers. It delivers data services such as unlimited snapshots, clones, thin provisioning, inline deduplication, compression and replication across hard drives, all-solid state drives (SSDs) and hybrid configurations.
Thomas Cornely, Nexenta’s vice president of product marketing, said the move to illumos will give customers more flexibility and less vendor lock-in. The company’s customer base ranges from those that store 18 TB to those storing petabytes of data. Nexenta has 2,500 paying customers, 1,000 of which are hosting providers.
“Nexenta 4.0 is a good foundation to expand our target market, which is small configurations and ever-growing bigger configurations. We don’t see much in the middle,” Cornely said.
The latest version of NexentaStor speeds up failover by 50 percent because the process has been streamlined to involve fewer steps. This enhancement is particularly important for multi-petabyte configurations where hundreds of drives are involved.
“The enhancements are in multi-threading,” Cornely said.
Another uptime-boosting enhancement is with the Fault Management Architecture (FMA), which intelligently detects failing hardware to reduce application interruptions. When a drive gets slow or is not working well, the FMA capability helps take it out of the RAID storage group.
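The FMA behavior described, pulling a degraded drive out of a RAID group before it fails outright, amounts to comparing per-drive health signals against thresholds. A toy Python sketch under assumed thresholds (not Nexenta's actual logic or metrics):

```python
def drives_to_evict(stats, max_latency_ms=200.0, max_error_rate=0.01):
    """Return drives whose recent behavior suggests impending failure.

    `stats` maps a drive id to a dict with hypothetical fields
    'avg_latency_ms' and 'error_rate' gathered over a recent window.
    """
    return [
        drive for drive, s in stats.items()
        if s["avg_latency_ms"] > max_latency_ms
        or s["error_rate"] > max_error_rate
    ]
```

A flagged drive would then be removed from its storage group and rebuilt onto a spare, trading a planned rebuild for an unplanned outage.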
NexentaStor 4.0 also now supports Server Message Block (SMB) 2.1 for Microsoft Windows Server 2012 and cloud environments.
In its first quarter as a public company, Nimble Storage sidestepped the IT spending slowdown that its larger competitors say has hampered sales.
Nimble reported $41.7 million in revenue last quarter, more than double its $20.2 million in the same quarter the previous year. Nimble’s $126 million revenue last year also more than doubled from $54 million the previous year. For this quarter, Nimble expects revenue in the range of $42 million to $44 million, roughly double the $21.1 million it generated a year ago.
In comparison, EMC’s storage product revenue grew 10 percent year-over-year last quarter. NetApp’s FAS and E Series revenue declined five percent, Hewlett-Packard’s storage revenue stayed the same despite a substantial increase in its 3PAR storage and IBM’s storage revenue declined 13 percent.
Part of the reason for Nimble’s rapid growth is its revenue is still tiny compared to the giants it competes with. It also sells mostly lower-priced systems to smaller companies — many of whom are adding Nimble arrays to earlier implementations — rather than large enterprises that take longer to make purchases. Nimble’s average deal price is under $70,000, but it also reported an increase in deals of more than $100,000 last quarter.
Nimble claims it added 527 new customers last quarter, bringing its total to 2,645.
EMC and NetApp executives talked about cautious IT spending and longer evaluation cycles during their earnings calls, but Nimble VP of marketing Dan Leary said his company has not run up against those trends while selling its iSCSI hybrid flash arrays.
“We haven’t seen those headwinds in our business,” he said. “We’re winning deals because we’re delivering better performance and better capacity with about one-third to one-fifth the amount of hardware that our competitors require. The primary thing limiting our growth is the ability to hire and recruit headcount, and that’s why we’re investing heavily in the company. We’re not seeing any market limitations.”
That investment is also part of the reason Nimble is still losing money. It lost $13 million last quarter and $43 million for the year. The losses are expected to continue at least into late 2015, but CEO Suresh Vasudevan said the company will continue to invest and grow, and has around $208 million in cash.
On the product front, Nimble is investing in more enterprise features. It added capabilities to its CASL operating system last year that allow customers to set up clusters in scale-out arrays. Another item on the roadmap is Fibre Channel support. Vasudevan said on the earnings call that Fibre Channel support is planned for late this year to help win deals at large companies already invested in the protocol.
He said Nimble currently wins about 40 percent of its deals against FC SAN arrays, but “at the same time, there are several large enterprises that have already made an investment in Fibre Channel and that becomes a stumbling block for us.”
Keith Klarer, Astute founder and vice president of engineering, said ViSX can be used as an NFS target within OpenStack or as a block iSCSI device within OpenStack. The new version, ViSX OS 5, gives Astute cloud support it previously lacked.
“This is the first integration of our service into the cloud,” Klarer said. “We are starting with OpenStack because we think OpenStack has the biggest cloud opportunities. Rackspace is OpenStack-based, and so are Dell and IBM SoftLayer.”
An OpenStack Cinder block storage plug-in can be downloaded that talks directly to the ViSX platform. The Cinder features are then managed directly through the OpenStack interface. Previously, the ViSX interface had to be used.
“OpenStack has its own interface,” Klarer said. “You don’t have to manage separately from OpenStack. OpenStack can directly control our applications.”
Klarer said virtualized applications that are moved to a private or public cloud need high-performance storage. Within OpenStack, storage space can be defined and allocated to applications, and high-performance storage can be quickly provisioned in virtualized and cloud environments.
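In Cinder terms, exposing an array through a vendor plug-in typically means registering the driver as a backend in cinder.conf; volumes created against that backend then land on the array. A hedged sketch of what that registration looks like (the driver class path and backend name here are placeholders for illustration, not Astute's actual module):

```ini
# /etc/cinder/cinder.conf (illustrative fragment)
[DEFAULT]
enabled_backends = visx-1

[visx-1]
# hypothetical driver path -- consult the vendor plug-in for the real one
volume_driver = cinder.volume.drivers.astute.ViSXISCSIDriver
volume_backend_name = VISX_PERF
san_ip = 192.0.2.10
```

Once a backend like this is enabled, OpenStack's own dashboard and CLI handle volume creation and attachment, which is the "you don't have to manage separately from OpenStack" point Klarer makes.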
“OpenStack can directly control applications and anything you need to do for management,” Klarer said.
Other benefits of the OpenStack support include the ability to bypass VMware license fees when managing applications, allocating storage and migrating among servers.
“OpenStack gives you the same functionality but it provides it for free,” Klarer said. “The open virtualization management layer is free. People are seeing a value to these alternatives.”
Ashish Nadkarni, research director for storage software at IDC, said Astute faces an uphill battle even with the new capabilities.
“Their claim to fame is an ASIC. In a world of software-defined storage, where does a company with a custom ASIC go?” he asked.
QLogic’s acquisition of Ethernet controller assets from Broadcom this week won’t have much immediate effect on storage, but it could become important in a few years.
The $147 million deal will give QLogic 40-gigabit and 100-gigabit Ethernet, RDMA and virtualization technologies it currently lacks. QLogic also gets about 170 engineers in the deal, and claims the Broadcom portfolio makes it No. 2 in the Ethernet controller market behind Intel.
What makes the deal interesting is QLogic’s long-term roadmap. During a conference call to explain the deal, QLogic executives laid out plans to add services such as caching, replication, deduplication, encryption and monitoring on converged Ethernet-Fibre Channel controllers. That would turn QLogic from a network connectivity vendor to a platform vendor.
The first step is to port the Mt. Rainier caching technology used on QLogic’s FabricCache FC host bus adapters to Ethernet cards. FabricCache serves as a caching SAN adapter, and a cluster of FabricCache cards can access all the combined caches in that cluster.
The other features on QLogic’s roadmap – all important in storage and data protection – will follow. Completing the entire list will likely take at least three years, and a lot can happen in that time. One possibility is that features like replication, deduplication and encryption will already be common in storage products by then.
Vikram Karvat, QLogic VP of marketing, admitted those features are likely to be built into high-end storage arrays by then, but he said they will fit into other platforms.
“Some things that are considered high-end features are migrating into other markets, such as private or public clouds, or appliances,” he said. “This will allow non-traditional providers to add them.”
Based on a review of all-solid state storage systems, it may be time for IT professionals to re-think their strategy regarding the lifespan of storage systems.
The storage system lifespan is a critical part of a storage strategy for scheduling replacements, planning acquisition costs, and in calculating Total Cost of Ownership (TCO). The lifespan is used in the amortization schedule or depreciation for the purchased asset.
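The role lifespan plays in those calculations is easy to see in a toy model: straight-line depreciation of the purchase price, plus post-warranty maintenance, summed over the planned lifespan. A simplified Python sketch (the formulas are a common textbook treatment, and any figures used with it are illustrative):

```python
def total_cost_of_ownership(purchase_price, lifespan_years,
                            warranty_years, annual_maintenance):
    """Simple TCO: purchase price plus maintenance paid after warranty ends."""
    maintenance_years = max(0, lifespan_years - warranty_years)
    return purchase_price + maintenance_years * annual_maintenance

def annual_depreciation(purchase_price, lifespan_years):
    """Straight-line depreciation charge per year of planned lifespan."""
    return purchase_price / lifespan_years
```

Stretching the planned lifespan lowers the yearly depreciation charge but adds post-warranty maintenance years, which is exactly the trade-off the longer warranties discussed below shift in the customer's favor.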
Two major technology factors are important to consider when changing the strategy for a storage system lifespan. The first is the use of scale-out storage systems and the ability to replace individual nodes transparently for a technology upgrade. I will deal with this case in a future article. The second factor for consideration is the use of solid state storage.
The lifespan of a storage system does not mean the point at which the system is no longer usable or has passed its “use by” date. It is really about planned replacement, with several elements playing into the replacement decision:
- Wear out. The system may have characteristics such as mechanical wear that cause the failure rate to increase, which leads to more costly service and potential downtime.
- Maintenance costs. There is a warranty period for storage systems, and maintenance costs begin to increase after that period. The costs become more prohibitive due to the increased likelihood that service will be required.
- Migration of data. Many storage systems do not have transparent or seamless migration of data to a new system. Scheduled replacement does not alleviate the problem but does make it a planned activity.
Solid state storage in the form of NAND flash has different wear-out mechanisms than spinning disks. Most storage vendors have made tremendous strides in managing NAND flash wear-out, by changing how page erases are performed and by using shadow RAM to minimize the number of erases. These improvements continue to increase the lifespan for those vendors that have made the technology investment.
Vendors of solid state systems are now quoting longer lifespans for their systems. There are two notable examples:
Pure Storage has changed its upgrade and maintenance model with an approach called Pure Storage Forever Flash. The approach includes a five-year maintenance plan with a free controller upgrade every three years when renewing the maintenance. With every upgrade, the maintenance terms for the storage system can be reset to the current pricing, which is expected to be lower.
Nimbus Data offers up to a 10-year end-to-end warranty for its system.
The warranty period and the upgrade plans will become competitive issues among vendors. Customers will see the economic benefit of the better offerings in TCO calculations, and other vendors will have to react. The change can improve the long-term economics of solid state storage for customers. It is also an indicator of how NAND flash technology is evolving: flash is getting less expensive, and data reduction adds to the cost savings. It also reflects how solid state is generally purchased today, which is for a specific purpose or application workload, with the system sized to match those needs. As prices change and application capacity needs increase, another system is purchased, increasing the amount of solid state storage in use.
For vendors, the longer warranties can be costly if the systems do not live up to the vendor promises. Service cost estimates for storage systems are calculated on intrinsic failure rates, or on demonstrated failure rates where field data exists. The warranty period requires a financial reserve from the price of sales to cover the costs. Vendors do not make the decision to extend the warranty lightly, but competitive pressures do factor into it.
It should be expected that more all solid state storage system vendors will extend the warranty for the system in some manner. The bigger issue for the customer is the change in the strategy for acquiring storage — and planning for replacement.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Tintri CEO Ken Klein says the vendor will use its $75 million funding haul collected this week to grow the company substantially in hopes of going public next year.
Tintri, which sells arrays designed specifically for virtual machines, will use its latest funding to invest in “an aggressive product roadmap, double our sales force, triple our marketing spend and build out our infrastructure to be a public company,” Klein said.
“Our plan is to become the next great storage company.”
Tintri claims revenue from its VMstore storage arrays increased 115 percent last year over 2012, and its customers are running more than 100,000 virtual machines in full production.
Klein became CEO last October when founder and former CEO Kieran Harty moved into the CTO role. Klein said no change of direction was required and he was brought on to oversee the company’s growth.
One change in Tintri is it now bills itself as “application-aware” storage instead of “VM-aware storage.” Klein said that’s because Tintri will add features such as application-specific quality of service and other capabilities that focus on applications rather than VMs.
“I believe VMs are proxies for applications,” he said. “We will be adding more capabilities for applications, that’s the direction of the company. We upleveled the messaging.”
As for the aggressive product roadmap, Klein said Tintri will add more enterprise features and cloud functionality. He added, “We’re always looking at designing new platforms” to complement its VMstore arrays.
The E series funding brings Tintri’s total to $135 million, and added Insight Venture Partners to its list of investors. Previous investors Lightspeed Venture, Menlo Ventures and NEA also participated.