Symantec’s transformation toward an integrated backup appliance model accelerated last quarter, as revenue from its NetBackup appliances increased 35 percent over last year.
Symantec’s backup business results last quarter followed a familiar pattern. NetBackup sales increased, mainly on the strength of its appliances, while Backup Exec revenue dropped. The vendor did not break out its total backup revenue, although the Information Management category that backup is part of was flat from last year at $650 million.
The appliance business has grown rapidly since Symantec began selling its backup software on integrated hardware instead of relying on third-party disk targets. CFO Thomas Seifert pointed out that Symantec has gone from no share of the backup appliance market to 38 percent in a matter of years, according to IDC’s research. Symantec is second behind EMC in backup appliance revenue.
But because Symantec does not give the total revenue for NetBackup, it’s impossible to say if it is adding new customers or switching over those who were already using its enterprise backup software.
Brown said Backup Exec sales were hurt by a pause in sales by channel partners ahead of the recently released Backup Exec 2014 for SMBs. However, Backup Exec sales have been in decline since the poorly received Backup Exec 2012 came out two years ago. Symantec execs hope the new version will satisfy unhappy customers who refused to upgrade to BE 2012.
He said Symantec’s next step in backup will be towards the cloud. “We’ll be moving our products to the cloud to complement the strength we already have in our cloud-based archiving business,” Brown said.
It’s not clear if he was talking about both NetBackup and BE. Symantec discontinued its BackupExec.cloud service in January.
Brown said Symantec’s CEO search committee has narrowed its list to finalists and its goal is to reveal its choice by the end of September. He said the ideal candidate has experience in technology that is closely related to Symantec’s, has global operations background, a collaborative leadership style and has been CEO of a public company.
Brown has been interim CEO since the Symantec board fired Steve Bennett in March.
Nutanix hasn’t been sitting idle waiting for its Dell OEM deal to kick in.
The hyper-converged system vendor today said it exceeded $50 million in revenue for the quarter that ended in late July. Nutanix said it is picking up larger customers, with 29 companies buying more than $1 million of Nutanix products and services. That number has more than doubled since January, when Nutanix had 13 such million-dollar customers.
Nutanix, which raked in $101 million in funding in January, has more than 600 employees.
Nutanix SVP of product management Howard Ting said the vendor’s revenue more than tripled from the second calendar quarter of 2013 to the second quarter of this year. He attributed that mainly to increased brand recognition and the addition of new versions of its Virtual Compute Platform. Nutanix systems include storage, networking, and compute in one box. It started with one configuration, but late last year added entry level and data center models.
“Expansion of our platform really helped,” Ting said. “Three years ago when we came out, we had one product with a set amount of CPU, memory and disk. One reason we lost deals was because of product market fit – the customer’s workload wouldn’t fit on that platform. We didn’t have a storage-heavy appliance for databases or applications with large datasets like Exchange then. Now, we have a whole range of appliances, ranging from branch offices to more heavy data workloads.”
Ting expects to get another big bump from Dell, which in June entered an OEM deal with Nutanix. Ting said the vendors are on track to begin selling Dell hardware with Nutanix software beginning in October. Dell hasn’t released product specs yet, but Ting said Dell will eventually have “a full spectrum of products” incorporating Nutanix.
Ting said Nutanix is nibbling away at larger storage vendors such as EMC, NetApp, IBM and Hewlett-Packard, who have reported declining sales in recent quarters. “Large companies are starting to feel the impact,” he said. “The disruption created by young companies like Nutanix is eating into their revenue.”
Walls is responsible for setting the strategy for the company’s storage portfolio and defining the architecture for next-generation flash arrays and storage class memories. He received the highest distinction of his career this year when IBM named him a fellow, making him one of only 257 total and 87 active IBM employees (out of more than 400,000) to achieve the honor.
On the eve of next week’s Flash Memory Summit in Santa Clara, California, Walls took time out today to discuss IBM’s flash strategy with SearchStorage.com. Interview excerpts follow:
What do you see as the optimal architecture for next-generation flash?
Walls: I’ve been with the acquisition of Texas Memory Systems since its beginning, and before that, I was the chief architect and CTO for our flash strategy. Through the years, we’ve adapted to what’s been happening, and I think the tipping point is there, and we really are at a point where data reduction combined with the technology that we have today can be used to put all active data, or most active primary data, on flash.
So, I see the future being to continue that reduction in overall cost per gigabyte, based on data reduction and next-generation flash, as well as enabling things like [triple-level cell] TLC, if possible, to continue lowering the cost, but to do that perhaps as a tiering strategy and to also look at next-generation storage class memories also with tiering and decrease the latency by being able to use phase-change memory or resistive RAM or next-generation storage class memories as the tier for hot data.
I see continuing with certainly the [multi-level cell] MLC flash that is there, continuing to reduce the cost, do more features and data reduction, but then looking at other technologies to see if they also can be used in the all-flash arrays to improve the performance and further decrease the cost.
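The tiering idea Walls describes, promoting hot data to a faster storage class memory tier while colder data stays on flash, can be sketched as a toy placement policy. This is an illustrative sketch under assumed names and a made-up promotion threshold, not IBM's implementation:

```python
from collections import Counter

class TieredStore:
    """Toy two-tier placement: blocks read often enough get promoted to
    the fast tier (standing in for PCM/resistive RAM); the rest stay on
    the slow tier (standing in for MLC flash)."""
    def __init__(self, promote_threshold=3):
        self.threshold = promote_threshold
        self.reads = Counter()
        self.fast_tier = set()   # hot data: storage class memory
        self.slow_tier = set()   # cold data: flash

    def write(self, block):
        # New data lands on the capacity tier.
        self.slow_tier.add(block)

    def read(self, block):
        # Count accesses; promote once a block proves it is hot.
        self.reads[block] += 1
        if self.reads[block] >= self.threshold and block not in self.fast_tier:
            self.slow_tier.discard(block)
            self.fast_tier.add(block)
        return "fast" if block in self.fast_tier else "slow"
```

Real arrays track heat at much finer granularity and demote data as well, but the promote-on-access-count pattern is the core of the strategy.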
Do you think TLC is realistic for enterprise storage? Will it be TLC or 3D NAND?
Walls: I think 3D NAND for sure is going to come in. Back in 2008, 2009, we were working closely with different companies, and we said MLC was going to come into the enterprise. There were a lot of people who said, ‘No, it’s going to be [single-level cell] SLC for a long time.’ And we were the first to really bring MLC into enterprise storage.
When I look at TLC, it’s even more of a challenge, of course. You’re talking about in some cases a few hundred cycles. But, we are looking to see if we can bring it in in innovative ways . . . You could think of maybe read-mostly applications or a tiered architecture where most of the hot accesses are serviced out of DRAM or out of MLC, and you’d have some TLC. We think that the benefits are enough that it really [merits] a serious look to see if it can also be used to further reduce the costs.
IBM’s all-flash arrays use eMLC flash in contrast to a lot of purpose-built flash arrays that use cheaper MLC drives. How important is the type of flash these days now that manufacturers have figured out ways to improve the reliability and endurance? Why is IBM still using eMLC?
Walls: It is true to a certain extent that the flash manufacturers have figured out how to improve the endurance of the devices themselves. However, as the geometries continue to shrink, the endurance that you get out of the 20-nanometer and 15- or 16-nanometer bare MLC flash is only 3,000 write/erase cycles. That’s all that the manufacturer will guarantee.
So, we believe that in this generation with the [FlashSystem] 840 that the eMLC allows us . . . to be able to get a 10x improvement in endurance without having to worry about it and pass that on to our customers. We think eMLC right now is a very valuable add, and other competitors use it as well. There are many who don’t, but I think one has to be careful to see how they make sure that they aren’t going to wear out. We believe eMLC right now is an important part of our strategy.
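The endurance figures Walls cites lend themselves to a quick back-of-the-envelope lifetime estimate. Only the cycle counts (3,000 for bare MLC, roughly 10x that for eMLC) come from the interview; the daily write rate and write amplification below are illustrative assumptions:

```python
def drive_lifetime_years(capacity_tb, endurance_cycles,
                         daily_writes_tb, write_amplification=3.0):
    """Years until the rated write/erase cycles are exhausted.
    write_amplification models the extra internal writes the flash
    controller performs per host write (assumed value)."""
    total_host_writes_tb = capacity_tb * endurance_cycles / write_amplification
    return total_host_writes_tb / (daily_writes_tb * 365)

# Hypothetical 1 TB device absorbing 10 TB of host writes per day:
mlc_years = drive_lifetime_years(1, 3_000, 10)    # bare MLC at 3,000 cycles
emlc_years = drive_lifetime_years(1, 30_000, 10)  # eMLC at ~10x endurance
```

Under these assumptions the bare MLC device wears out in well under a year, while the eMLC device lasts several years, which is the gap Walls is pointing to.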
There are many types of implementations of solid state or flash storage systems. At Evaluator Group, we regularly field questions and work on projects regarding solid state storage with our IT clients. In addition to the performance explanations and evaluation guide for solid state, we find it necessary to categorize the different implementations to aid their understanding.
The categorizations do not necessarily match what vendors would say in positioning their products. The important point is that the categorization has served us well in communicating with our IT clients. However, we understand that nothing is static in the area of storage. Like the technology, these explanations will evolve with new developments.
Here are the categories and explanations that have worked well so far:
- All-solid state (flash) storage systems – These are new system designs built for solid state from the start. These designs optimize performance for the given amount of system hardware.
- Hybrid arrays in a new design system – Hybrid arrays use both solid state (usually in the form of Flash SSDs) and hard disk drives (HDDs) with the idea that large capacity HDDs will decrease the overall price of the system. As a new design, all I/O goes through the SSDs and the HDDs serve as backing storage.
- All-solid state (flash) storage systems based on traditional storage systems with major modifications – These are traditional storage systems designed for spinning disks but modified to take advantage of solid state with the addition of embedded software. The Evaluator Group looks at the design changes made to determine their significance.
- Hybrid arrays based on traditional storage systems – This large segment includes the traditional storage systems designed for spinning disks where solid state drives (SSDs) are added for cache and/or tiered storage. In these systems, small percentages of SSDs will max out the performance of the system quickly, increasing the aggregate system performance by 2x to 4x.
As technology evolves, there will be changes to these categories. Certainly, acquisitions will occur – changing what vendors offer and product positioning. Over time, more extensive changes will be made to traditional systems that are limited by their spinning disk designs.
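The second category above, hybrid arrays in a new design where all I/O goes through the SSDs and HDDs serve as backing storage, can be modeled as a toy write path. The class and its destage policy are illustrative assumptions, not any vendor's design:

```python
class NewDesignHybrid:
    """Toy model of a 'new design' hybrid array: every write lands on
    SSD first; HDDs are backing storage populated by destaging."""
    def __init__(self):
        self.ssd = {}   # fast, small, expensive tier
        self.hdd = {}   # slow, large, cheap backing store

    def write(self, key, data):
        # All I/O goes through the SSD tier.
        self.ssd[key] = data

    def destage(self):
        # Periodically move data down to large-capacity HDDs,
        # which lowers the overall cost per gigabyte of the system.
        self.hdd.update(self.ssd)
        self.ssd.clear()

    def read(self, key):
        # Serve from SSD if present, otherwise fall back to HDD.
        return self.ssd.get(key, self.hdd.get(key))
```

Contrast this with the retrofit hybrids in the last category, where SSDs are bolted on as a cache or tier and the write path was originally built around spinning disks.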
The biggest evolution of all will be the introduction of new solid state technology. Forward-thinking system designers have anticipated this and will seamlessly (and optimally) advance to the new technology when the economics are favorable. This is one of the reasons we use the solid state storage terminology rather than referring only to the current NAND flash implementation. We will adapt the categorization used with our IT clients to fit the current implementation. Meanwhile, it is great to see the continued advances in technology and implementations for storage systems.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Overland Storage has upgraded its GuardianOS operating system, which powers its SnapServer Dx1 and Dx2 NAS, to make its devices more compatible with Windows in mixed environments. The upgrade is also designed to boost performance, and includes a new BitTorrent-powered sync-and-share feature for mobile devices.
The GuardianOS integrates replication, thin provisioning, snapshots, backup, file sharing and security for the SnapServer Dx1 and Dx2. The Dx1 is a 1U system that scales to 160 TB while the Dx2 is a 2U server that scales to 384 TB.
The software’s Windows-only Tree feature improves permission handling and authentication in mixed Windows and Mac environments. Each time a Windows or Mac user updates a file, the file is written with Windows data attributes; without the feature, a Mac system would typically switch the data attributes when a file is opened for updates.
“They will remain in Windows attributes because you want to keep a certain attribute type,” said Jeremy Zuber, Overland’s product marketing manager. “If attributes are flip-flopped, you can run into issues.”
The GuardianOS also has been enhanced with the Lightweight Directory Access Protocol (LDAP), allowing administrators to set permissions and specify access to directories through name lookup to and from a unique user identifier (UID). The software also uses Server Message Block (SMB) 2.0 for improved read and write performance for Windows clients and servers when accessing SnapServer storage.
The operating system’s snapshot capability has been upgraded for higher performance with a more efficient copy-on-write process.
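The copy-on-write technique behind that snapshot upgrade can be illustrated with a minimal sketch: a snapshot starts out sharing the live block map, and a block is copied only the first time it is overwritten. This is a generic illustration of the idea, not Overland's implementation:

```python
class CowVolume:
    """Minimal copy-on-write snapshot sketch."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)  # live block map: address -> data
        self.snapshots = []

    def snapshot(self):
        # A new snapshot is just an empty overlay; it implicitly
        # shares every live block, so taking it is nearly free.
        snap = {}
        self.snapshots.append(snap)
        return snap

    def write(self, addr, data):
        # Preserve the old contents in each snapshot on the FIRST
        # overwrite only; later writes to the same block cost nothing.
        for snap in self.snapshots:
            if addr not in snap:
                snap[addr] = self.blocks.get(addr)
        self.blocks[addr] = data

    def read_snapshot(self, snap, addr):
        # Snapshot data wins; otherwise fall through to the live volume.
        return snap.get(addr, self.blocks.get(addr))
```

The efficiency gain comes from the write path: unchanged blocks are never copied, so snapshot overhead is proportional to how much data changes, not to volume size.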
Unitrends has gone GA with its first new version of the PHD Virtual Backup application since acquiring PHD Virtual last December. Virtual Backup 8 released last week gained support for Microsoft Hyper-V, and lost the PHD brand.
The product is now called Unitrends Virtual Backup. It joins a roster of Unitrends-branded software that includes Unitrends Enterprise Backup (UEB) and Unitrends ReliableDR (previously a PHD product), and the Unitrends Certified Recover Suite that bundles the other three apps. Unitrends also sells a series of integrated appliances that run UEB software.
Unitrends claims Virtual Backup 8 has more than 140 enhancements, most of them around making the product easier to use in hopes of taking on Veeam Software for virtual backups.
“Simplicity has always been the No. 1 reason customers pick us,” said Joe Noonan, Unitrends senior product manager. “As we gain larger customers and get into deployments of 1,000 VMs or more, simplicity takes on new requirements.”
The application redesign includes the ability to back up and recover data in four clicks. It is also the first version of Virtual Backup with Microsoft Hyper-V support built in. PHD launched a separate version for Hyper-V earlier this year, but Virtual Backup now supports VMware, Citrix and Microsoft hypervisors in one application.
Virtual Backup does not yet work with Unitrends integrated backup appliances, but Noonan said that is on the roadmap. “The first step is to centrally manage everything,” he said. “Then we will integrate it all under the hood.”
With both disk and tape backup target markets stagnating, Quantum is counting on its StorNext platform for growth.
Quantum’s “scale-out storage” – StorNext file system and Lattus object storage appliances and software – increased 41 percent over last year to $18.1 million last quarter while Quantum’s total revenue of $128.1 million dropped four percent (that decline does not count a $15 million one-time royalty payment last year).
The StorNext revenue was the same as DXi disk backup appliance revenue for the quarter, and it wasn’t that long ago that DXi was considered Quantum’s best chance for rapid growth.
StorNext revenue includes part of a $3 million-plus deal with what Quantum CEO Jon Gacek called a “leading consumer electronics company” for StorNext and Lattus products for managing video workflows. Approximately half of that revenue was recognized last quarter.
Quantum added 75 new StorNext and Lattus customers. It is going after media and entertainment companies, and Gacek said it landed $200,000-plus deals with a major studio and an international broadcaster, plus a follow-up sale to a TV shopping network in the quarter. Quantum is also looking to replace aging Apple Xsan systems with its StorNext Pro high-performance systems for post-production studios and small broadcasters.
Quantum also increased the number of partners selling StorNext by 12 percent since last year.
The amount of time Gacek spent talking about StorNext during the Wednesday earnings call was disproportionate to its overall contribution to the company, he admitted in an interview afterwards. But he said StorNext is the Quantum platform clearly on the rise.
“Tape is still the biggest part of our business, so I get in trouble internally when I do that [talk up StorNext],” he said. “But in the world of software, cloud and unique value, StorNext is where we have the most differentiation and overall growth.”
Gacek said the StorNext 5 release that pumped up the software’s performance and scalability has been a big boost. Quantum is also looking to capitalize on the emergence of object storage, which it sells through an OEM partnership with Amplidata on its Lattus appliances.
Video storage is a big use case for StorNext and Lattus. Gacek said Quantum previously sold to all major sports leagues, and now is looking to expand to the individual teams as they use more video for scouting and in-game entertainment.
Quantum added around 90 new DXi customers in the quarter, and Gacek said it won close to 55 percent of its deals with the disk backup appliance. He hopes DXi will get a boost from the DXi6900 launched this week. The 6900 is built on StorNext 5 software and scales from 17 TB to 510 TB.
“We said we’re going to grow at the same rate as the market,” Gacek said of DXi. According to IDC’s latest purpose built backup appliance tracker, that market decreased by 2.5 percent in the first quarter of 2014 although the research firm predicts it will return to growth.
As for its bread and butter, Quantum’s branded tape automation revenue dropped nine percent and OEM tape automation – sold through partnerships with vendors including Dell, Hewlett-Packard and IBM – fell 24 percent from last year. Gacek said the branded business probably gained share despite the decline, but OEM revenue – a minority of Quantum’s tape business – has been a sore spot.
“The OEM business was not up to expectation,” Gacek said. “That has been our weakest part over the last couple of quarters. Branded business will be our share grabber.”
Quantum’s $128.1 million total revenue was above the midway point of its guidance. It projected revenue of $130 million to $135 million this quarter.
EMC CEO Joe Tucci today said he plans to meet with the investor group reportedly calling for the storage conglomerate to break itself up, although he maintained that his company benefits from its current structure.
Earlier this week, news reports said Elliott Management Corp. has taken a $1 billion-plus stake in EMC (around two percent of the company) and would like to see it spin off parts of the business, particularly its majority-owned VMware.
“We have not heard from Elliott Management other than a call to us saying they intend to be one of EMC’s largest investors,” Tucci said during EMC’s earnings call. “I have agreed to meet with them. We are always open and welcome a dialog with all of our shareholders, and we respectfully listen to their ideas and beliefs as we form our strategic direction. … I want to hear what their proposals are, and I’m sure they would like to hear some of our plans.”
EMC’s revenue of $5.9 billion last quarter was slightly ahead of expectations, and Tucci said during the call that customer surveys show “our strategic relevance has never been greater and is rising.” He also said EMC would accelerate its share buyback and buy $3 billion of stock in 2014 instead of the previously planned $2 billion.
The EMC Federation owns VMware, RSA Security and Pivotal, all of which play largely outside of EMC’s core storage market. The Wall Street Journal and New York Times reported this week that Elliott will push EMC to break off some of those pieces to increase overall shareholder value.
Tucci said he agreed with Elliott that EMC stock is undervalued but expressed doubt that spinning off any pieces – especially VMware – would make the EMC Federation more valuable. He pointed out large competitors such as IBM, Cisco and Oracle also have products and services spanning the IT spectrum.
“Without a doubt, we have great assets and strong strategic vision – splitting them up, spinning out one of our most strategic assets – I don’t know of another technology company that has done that and been successful,” he said.
Calls to break up EMC are not new. Tucci made a strong case for EMC’s holding on to the 80 percent of VMware that it owns in May at an investor conference, claiming the EMC companies were “better together.”
EMC’s storage revenue increased one percent over last year, with its high-end enterprise storage down 14 percent. Tucci and EMC storage CEO David Goulden said they expect the recent launch of the new VMAX platform to raise enterprise sales by the end of this year.
“The pause ahead of this product (new VMAX) was hurting us,” Tucci said.
We have been working with IT clients who deploy private clouds as part of their operations. The reasons include implementing a change in the delivery of IT services, a new infrastructure for dealing with the overwhelming influx of unstructured data, and a way to deploy and support new applications developed for mobile technology. Each of these reasons makes sense and can fit into an economic model given the projected demands.
The types of private clouds also vary. The simplest we have seen from IT clients is a private cloud that is an object storage system on premise used as a content repository. The most common types of these are:
• A large-scale repository used by new or modified applications that must deal with large amounts of unstructured data.
• A large, online storage area where data is moved off primary (or secondary) storage to less expensive storage with different data protection characteristics.
• A repository for data from external sources used for analytics. This data may not require long-term retention.
Most of these operations have built in the ability to use public cloud resources, and they use the term hybrid cloud. Public cloud use may take the form of a long-term archive of data (deep archive) where access time is much more relaxed, sharable information in the form of file sync-and-share implementations, or an elastic storage area to handle large influxes of data that may not be retained for extended periods. There is usually a mechanism, typically gateway devices or software, to handle the transfer and security of data to and from the public cloud. Storage system vendors are starting to provide built-in storage gateways to manage data movement to the cloud.
These are all justifiable reasons and usages for traditional IT operations to deploy private clouds. But what do you call IT that has changed its operations to achieve this? Continuing to use the term IT is simple, but it does not convey that IT has fundamentally transformed its operations and the value it provides. IT as a Service (ITaaS), which is the outcome goal of most transformations, delivers significant value and is different from the IT of the past. The term cloud is an ambiguous identity used in many different contexts and probably will not have staying power over time.
Historically, what is known as IT has changed names over time, representing major transitions in the industry. Many remember DP or Data Processing as the term for centralized IT of the past. Some of us even go back to an earlier point when the term was EAM, which is an artifact acronym for Electronic Adding Machine. There was also the term IS, which stood for Information Systems and later Information Services. Information Technology is the current broadly accepted definition for business, but a change is in order with the services and capability transitions underway.
What should the new identity be? Maybe there should be a naming contest. Probably the worst outcome would be for one vendor’s marketing organization to drive the name, promoting that vendor’s vision of how its products serve the requirement, with a full-court press of paid professionals pushing the identity. The new identity needs to convey the data center transformation that has occurred. Hopefully, the name will not have “cloud” in it.
The name change will occur and probably start as one thing and quickly evolve. A few years from now, it will be commonplace. The identity change is a step in the logical progression of computing services (keeping in mind the value is in deriving information from data). Terms such as client-server will become footnotes in the history of the industry. Other ideas that were detours down a rough road will seem like another learning experience everyone had to go through. This is an interesting period to see transitions occurring.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Online storage provider Box last week raised another $150 million from two investment firms, pushing the total amount of its funding to $564 million as it prepares to go public.
While it can be seen as a good sign for the vendor that it can raise that much money in one round, you also have to wonder about the long-term viability of a company that has taken on more than $500 million in funding and has yet to break even.
The Los Altos, California-based company also got a vote of support from research firm Gartner, which named Box a leader in the enterprise file synchronization and sharing market along with Citrix, EMC and Accellion. It’s definitely a milestone for a company to rise to the top of a crowded market that at one point included more than 50 startups.
Box, however, remains unprofitable with a reported $168 million loss for the past year. It has an expensive business model, with a high burn rate as it tries to convert users who are lured into the service with free storage to paying customers.
Box also has to mature beyond basic sync-and-share, which appears to be turning into a feature rather than a full-blown product as companies integrate the technology into other cloud products. Box has opportunities in integrating its technology with other enterprise applications, such as Salesforce.com.
“The basic sync-and-share is a feature,” said Terri McClure, senior analyst at Enterprise Strategy Group. “[Box’s] advanced collaboration and data management is starting to become more compelling. They have a rich API set that allows integration into enterprise applications. They certainly are one of the market leaders in terms of functionality. Dropbox has an API strategy but it’s not as far along.”
McClure said that although Box faces the tough and expensive challenge of converting free users into paying customers, the company is making strong gains in that area. Seven percent of its 25 million users are now paying customers, translating to 1.75 million users. It also has 34,000 companies paying for accounts.
“It’s likely that a good number of those 1.75 million users are corporate users,” McClure said. “When (you are) seeing losses of $168 million against revenue of $124 million, it is easy to point fingers and call it questionable. But can this model work? Yes, over time and with the right investments.”
She said Box needs to build some on-premise functionality into its product to move into the enterprise and will have to make heavy investments in security. McClure said the company needs a European data center.
“Today, Box stores all customer data and files in the United States,” McClure wrote in a research brief. “Box did not report a geographic revenue breakout in the S1 but given the geopolitical and regulatory environment, companies outside the U.S. are hesitant to begin or are prohibited from storing data in the U.S. Cisco and IBM have discussed the fact that security concerns (specific to NSA spying) are inhibiting international sales of hardware. So you can say it’s impacting cloud SaaS and storage.”
Cloud companies also are dealing with what McClure calls the “Snowden effect.” Users are concerned that if a cloud provider holds their data and the encryption keys, then the data can be turned over to the federal government if it is subpoenaed.
“Concerns would be mitigated if Box offers users a method of holding and managing their own key,” McClure said.
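The customer-held-key model McClure describes can be illustrated with a toy symmetric cipher: the provider stores only ciphertext, and because the key never leaves the customer, a subpoena to the provider yields nothing readable. The cipher below (SHA-256 in counter mode) is a deliberately simple stand-in for teaching purposes only; a real deployment would use a vetted scheme such as AES-GCM from an audited library:

```python
import hashlib
from itertools import count

def _keystream(key: bytes, n: int) -> bytes:
    """Toy keystream: SHA-256 over key + counter. Illustration only,
    NOT production cryptography."""
    out = bytearray()
    for i in count():
        if len(out) >= n:
            break
        out += hashlib.sha256(key + i.to_bytes(8, "big")).digest()
    return bytes(out[:n])

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR the plaintext with a key-derived stream.
    return bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))

# XOR stream ciphers are symmetric: decryption is the same operation.
decrypt = encrypt

# The cloud provider sees only the ciphertext; the customer keeps the key.
```

The point of the sketch is the trust boundary, not the cipher: whoever holds the key controls access, which is exactly why customers want to hold it themselves.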