Cloud-to-cloud backup has been a niche market, but that may change with EMC’s recent acquisition of Spanning Cloud Apps.
Spanning was among three small cloud start-ups EMC acquired Oct. 28, the day it also laid out its hybrid cloud strategy. Spanning backs up customer data in Salesforce and Google Apps to Amazon’s public cloud, so that data can be retrieved if it is lost or deleted. Next up on Spanning’s roadmap is backup for Microsoft Office 365. Spanning for 365 is set for beta late this year and general availability in 2015. EMC refers to data in those software as a service (SaaS) apps as data born in the cloud.
EMC sold Spanning through its EMC Select reseller program for nearly a year before the acquisition, and little will change in Spanning’s products and strategy in the near future, according to Spanning CEO Jeff Erramouspe. All 51 Spanning employees have been offered positions with EMC. Erramouspe will report to Russ Stockdale, vice president of EMC’s Core Technologies Division. The Spanning team remains in Austin, Texas.
“We will continue to do business as Spanning for the foreseeable future,” Erramouspe said. “That’s what we like about this deal. EMC lets companies they bring on continue to be themselves.”
Erramouspe said only a small percentage of Spanning customers have come through EMC Select, but EMC began generating far more sales as the acquisition drew closer.
Spanning will also be sold by EMC’s Mozy cloud backup sales team. The first place you can expect to see technical integration is with Spanning management functions becoming visible inside EMC’s Data Protection Advisor software.
Spanning claims just more than 4,000 customers worldwide, mostly SMBs. EMC will look to expand into the enterprise, especially when Spanning’s backup for Office 365 becomes available. That product will appeal to companies that use EMC backup products for Exchange and other Microsoft applications running on-premises but may eventually move those apps to the Microsoft cloud.
In an interview with SearchDataBackup.com in April, EMC backup boss Guy Churchward identified cloud-to-cloud backup as an area the vendor would move into.
“EMC realized it had a hole around the cloud in the data protection space,” Erramouspe said. “Mozy provides you with the ability to protect on-premise data by moving it into the cloud, but they didn’t have anything to address workloads moving to the clouds. If customers who protect Exchange on-premise with Avamar, NetWorker and Data Domain move those workloads to the cloud, it puts that revenue at risk. Our plan to do 365 backup plugs that hole.”
Other cloud-to-cloud backup startups include Backupify, CloudAlly and syscloud.
“EMC’s acquisition of Spanning provides validation for what we’ve known all along, that the cloud-to-cloud backup market is on fire,” Backupify CEO Rob May maintained in an emailed statement. “As companies move massive amounts of critical data to the cloud, ensuring this data is safe and secure will remain a top priority. There’s a lot more growth and innovation in the cloud-to-cloud backup market to come.”
In advance of the OpenStack Summit in Paris next week, flash array vendor Pure Storage is throwing its weight behind the open source cloud operating system.
Pure this week joined the OpenStack Foundation as a corporate sponsor and pledged to heavily participate in development of the OpenStack code base. The vendor has also made available an OpenStack Cinder driver and Python Automation Toolkit to help customers use OpenStack-based private and public clouds.
Cinder is OpenStack’s block storage service. Pure’s Cinder driver integrates with the Purity Operating Environment via a RESTful API. The driver supports the OpenStack Juno and Icehouse releases and Purity OE 2.4.3 and later. It calls Purity REST APIs for snapshot and volume services and volume migration.
The Python Toolkit lets developers build workflows for Pure FlashArrays. They can add services such as snapshot and replication scheduling, storage monitoring and reporting to Cinder. Pure customers can download the Cinder driver and Python Toolkit from the Pure website.
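As a sketch of the kind of automation such a toolkit enables, the snippet below builds the REST call a Cinder-style driver might issue to snapshot a volume. The base URL, endpoint path and field names are illustrative assumptions, not Pure’s actual API.

```python
import json

# Hypothetical array endpoint; the real API paths and fields may differ.
API_BASE = "https://flasharray.example.com/api/1.4"

def snapshot_request(volume_name, suffix):
    """Build the REST call a driver might issue to snapshot a volume."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/volume",
        "body": json.dumps({"snap": True,
                            "source": [volume_name],
                            "suffix": suffix}),
    }

req = snapshot_request("cinder-vol-001", "hourly")
```

A driver would hand a request like this to its HTTP client; the point is that snapshot scheduling, monitoring and similar services reduce to sequences of such REST calls.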
Pure chief evangelist Vaughn Stewart said most of the vendor’s investment in open source communities has gone to OpenStack. Stewart said service providers are a key market for Pure because flash arrays are used for the highest tier of service the providers offer. And those providers are increasingly adopting OpenStack.
“We look at OpenStack as critical for service provider customers and enterprise customers looking to advance their private clouds,” he said. “We believe OpenStack will be the open source winner in this space.”
Quantum turned a small profit last quarter, and its CEO said he is confident the vendor has turned the corner thanks to its StorNext technology.
Quantum’s $135.1 million revenue was at the high end of its forecast. It was up only three percent from last year, but scale-out storage (StorNext file management and Lattus object software) revenue increased 58 percent year-over-year.
Revenue from DXi backup deduplication appliances grew 11 percent from last year. DXi and StorNext revenue combined for $47 million, helping the vendor ride out declines in tape revenue to record a GAAP profit of $1.2 million. That’s compared to a loss of $7.9 million in the same quarter last year.
Scale-out storage revenue was $25.5 million with disk backup at $21.2 million.
“I don’t think this is a blip for us at all,” CEO Jon Gacek said in an interview after the earnings call. “It’s a nice start for something that will get much bigger.”
Gacek said StorNext is selling well in media and entertainment markets, and the vendor has just begun to move into other high-capacity storage markets such as video surveillance and corporate video. He said Quantum closed a $2 million deal with a sports broadcaster last quarter, and had other deals worth more than $200,000 apiece.
DXi sales also benefited from the latest version of StorNext. StorNext 5 is the underlying file system for the DXi 6900 enterprise appliances, and Gacek said that has boosted performance and allows customers to use less hardware than previous DXi versions. He said the DXi 4700 midrange appliances also spiked in revenue last quarter.
Gacek forecasted revenue of $145 million to $150 million for this quarter. He expects the new video markets, plus the emergence of 4K High Definition video and eventually 8K Ultra High Definition, to provide plenty of growth in storage sales for video.
“We have good momentum,” he said. “And I don’t think it’s an anomaly, given that a lot of these markets are really nascent. I mean, the number of 4K installations is super small relative to what it’s going to be. So we have good momentum for this quarter for sure, and I think beyond that.”
After another quarter of declining revenue, FalconStor CEO Gary Quinn says the troubled company will focus on delivering storage software for flash array vendors and cloud service providers.
During FalconStor’s earnings call Wednesday, Quinn said the vendor will introduce software called FreeStor 10 in 2015. He said FreeStor “will be focused on this new software-defined marketplace that uses an intelligent abstraction layer to be completely hardware agnostic.”
The target markets are flash array vendors without their own software stacks and service providers looking to provide storage services to customers with legacy hardware.
Quinn said FreeStor will address data migration, continuous availability, protection and recovery, and optimized data deduplication. Partners or their customers can turn on the services they require.
The software will be based on development work FalconStor has done under an OEM agreement with flash array vendor Violin Memory. FalconStor is developing software that provides the above services for Violin arrays.
Quinn said the software can be sold with a FalconStor-branded management console, a private-labeled interface or no management console at all. He added that it will work with web browsers, tablets and smartphones. He expects to announce the new products on Feb. 19, 2015 – the 15th anniversary of FalconStor’s founding.
“We’re moving in a different direction,” he said. “The focus of the company is to move into a new market that is more attractive, less confusing and allows FalconStor technology to get its fair opportunity in the marketplace. There are many, many, many people with point solutions in the business continuity and disaster recovery space, and the idea of moving more to the platform approach opens up opportunities for FalconStor to OEM its technology in the flash market.”
FalconStor will continue to sell its existing deduplication, continuous data protection and storage management software through resellers, but future product development will focus on FreeStor.
The new direction comes after what Quinn calls “a complete miss” in the Americas region last quarter. While revenue increased from the previous quarter in the rest of the world, it was down 16 percent in the Americas. Overall, FalconStor’s revenue of $11.2 million was down from $11.3 million in the previous quarter and $14.7 million a year ago. The company’s loss from operations was $2 million, actually an improvement over the $2.8 million loss in the previous quarter.
Quinn has sought ways to turn FalconStor around since he became CEO in June 2013 after his predecessor, Jim McNiel, resigned. McNiel’s predecessor, ReiJane Huai, resigned as CEO in 2010 after his role in a customer bribery scandal became known. Huai committed suicide in 2011.
Quinn considered selling FalconStor but could not find a suitable buyer, so he is changing its market focus instead.
FalconStor’s business issues may not be completely behind it. The company received a letter from the U.S. Securities and Exchange Commission (SEC) in September asking if it has done business in Cuba, Sudan or Syria – countries the United States has identified as state sponsors of terrorism – through its partner Hitachi Data Systems (HDS).
On the earnings call, Quinn said FalconStor’s agreements with HDS and all other partners include provisions requiring them to conform to U.S. laws.
Quorum is offering a one-click disaster recovery product that gives customers the ability to prioritize restores.
The company recently announced OnQ Flex, which lets customers designate which servers need quicker restores based on recovery time objectives. OnQ Flex, part of Quorum’s disaster recovery as a service (DRaaS) product, offers one-click recovery and one-click testing whether the servers are on-premises or in a colocation facility.
“Before we offered full-level protection all the time. Now we have introduced something that is more flexible based on what server needs priority,” said Kemal Balioglu, Quorum’s vice president of products.
Typical disaster recovery processes often require hours or days to get back up and running. Balioglu said OnQ Flex offers instant, one-click recovery after any storage, server or complete site failure, with the ability to restore mission-critical data instantly while less-critical data becomes available later. Virtual clones of all protected servers can run on local and remote appliances.
With OnQ Flex, the primary hardware and application can be on-premises or in a customer-operated colocation facility. On-premises configurations that encounter a disaster fail over to the cloud, while a virtual machine is spun up for a configuration already in the cloud.
“We always have a high availability node on standby,” Balioglu said, “for a single-click recovery to the cloud.”
OnQ Flex provides replication of compressed and encrypted data, integrated server monitoring, email and text alerts, and scheduled health reports.
For data protection and recovery, the solution has full system imaging, sub-file-level incremental updates, global deduplication at the source, and bare metal restores and file-level recovery for any snapshot level. The deployment includes bandwidth throttling for internal and external storage.
Customers are charged through a subscription-payment model.
CommVault CEO Bob Hammer insists his company isn’t broken, although he has a plethora of fixes lined up.
The backup software vendor Tuesday reported rocky results for last quarter (its third straight disappointing quarter) and indicated this quarter won’t be much better.
CommVault’s $151.1 million in revenue last quarter was close to $7 million below Wall Street expectations. The revenue was up seven percent year-over-year and down one percent from the previous quarter, disappointing for a company that not long ago was growing in the 20 percent range year-over-year every quarter. CommVault executives said on the company’s earnings call that they don’t expect much revenue growth this quarter.
Hammer did maintain that CommVault will bounce back to post revenue growth in the 20 percent range by the end of 2016 and hit $1 billion in annual revenue within three years. He blames the poor recent sales mostly on the way the company has packaged and priced its Simpana software. His cures include management additions (new global sales VP and chief management officer) and structure, new product bundles and pricing models, and product upgrades that include Simpana 11 and an appliance partnership with NetApp.
“We knew this quarter would be challenging … and that it would take us several more quarters to get back to sustainable consistent high-growth trajectory,” Hammer said.
Hammer said he realized more than a year ago that changes were needed and has put them in place, but not fast enough. “We possess strong underlying business fundamentals, our target markets continue to have solid growth potential and we are well-positioned to take advantage of the increasing demand by both enterprise-level and mid-sized companies,” he said.
“We see our current challenge as difficult but resolvable.”
Much of his optimism is based on Simpana 11. Hammer said Simpana 11 will be open so “anyone will be able to access and read data that we store under more sophisticated security functionality” and APIs throughout the stack. Simpana 11 will also index and transport data differently.
Hammer said CommVault is also emphasizing fast access to data and native copy capabilities that allow customers to store data in the same format the original application created it in. That allows recovery without having to use a backup copy.
Integrated backup appliances are catching on in the market, but CommVault has not followed Symantec’s lead of selling its own appliances with its software. It is adding hardware partners, however, and Hammer said NetApp will begin selling its E-Series storage appliances with Simpana software this quarter. Fujitsu is also adding a CommVault appliance in Europe this quarter.
“We didn’t want to be in the hardware business,” Hammer said. “So it took us more time to put programs together with our hardware and distribution partners. It took us a long time but now we have them.”
CommVault’s revenue from enterprise deals – defined as deals with more than $100,000 in software revenue – fell five percent from last year and 14 percent from the previous quarter. The average enterprise deal fell to $281,000 from $396,000 the prior quarter.
Still, CommVault executives said its problems are mainly in the mid-market. Because of CommVault’s pricing structure, smaller competitors targeting specific use cases have taken business away, Hammer said.
CommVault has already switched to per user or per VM licensing for new solution sets launched in August. Those bundles were for virtual machine and cloud backup, endpoint protection, email archiving, and snapshot management. It has also set up business units for data protection, cloud ops/orchestration, information compliance, mobile and vertical solutions to concentrate on those market segments. Each business unit will be responsible for technical roadmaps and executing against strategic and revenue goals.
Veeam Software is not among the vendors hurting CommVault, according to Hammer and CommVault COO Al Bunte. Veeam specializes in virtual machine backup and has rapidly grown into close to a $400 million annual revenue company. Hammer acknowledged that data protection for virtualization is the fastest growing part of the data management market, but said CommVault does well there. He said Veeam plays at the low-end of the mid-market, and he expects the new packaging and pricing to help CommVault there. Bunte added that CommVault faces more competition from its traditional larger rivals EMC, Symantec and IBM in the mid-market.
EMC, under pressure to spin off assets or merge with another large company, today spun in one of its assets – its VCE joint venture with Cisco.
EMC CEO and chairman Joe Tucci would not comment on any other possible M&A strategy during EMC’s earnings call. Hewlett-Packard executives have claimed the two companies held merger talks before HP split itself in two. Tucci said EMC’s policy is not to comment on speculation or rumors.
He did say he agreed with investor Elliott Management that EMC’s stock is undervalued. Elliott is pushing for EMC to spin off assets. Tucci called the stock performance “painful” and “baffling” and said it does not reflect EMC’s growth in recent years. When asked if EMC would give any updates on possible mergers or sales, he said, “I believe we owe investors an update. We will do that early in the new year.”
Tucci’s contract expires next February, and that has been a catalyst for much of the merger and spinoff talk. But Tucci today said he is open to staying beyond that date in his current role or as chairman only. But not for long.
“You should view February of 2015 as a guidepost, not a definitive date,” he said. “I told the board, ‘If you have a [replacement] and want to move earlier, that’s fine.’ Or if you want me to stay a little longer – I’m not talking years, but months or quarters — that’s fine. Or if you want me to stay on in a chairman role, I would contemplate that favorably.’”
Tucci certainly didn’t sound like he favored spinning out VMware or any EMC asset. He said methods of raising stockholder value through spinoffs and stock buybacks “aren’t strategies, they’re tactics. You need to build a strategy. We’ve invested in a strategy. We have some great assets, and these are going to pay off big time.”
EMC reports revenue for all of its companies, including independently run VMware, Pivotal and EMC Information Infrastructure (EMC II). EMC II is the main storage group within the EMC federation.
EMC II reported revenue of $4.5 billion, which was up six percent from last year. Tucci and EMC II CEO David Goulden said emerging technologies such as XtremIO flash arrays and ViPR and ScaleIO software-defined storage fueled much of the growth, along with midrange VNX arrays and Data Domain backup appliances.
VCE will move under the EMC II umbrella, with Cisco reducing its stake in the joint venture from 35 percent to 10 percent. The move comes after weeks of rumors that Cisco would pull out or greatly reduce its role in the money-losing company that was created in 2009.
VCE sells Vblocks, which are pre-tested bundles of EMC storage, Cisco server and networking products, and VMware software. EMC claims VCE has an annual revenue run rate of more than $2 billion, which means it sold more than $500 million worth of products last quarter. EMC also claims more than 2,000 Vblocks have been sold since VCE began.
Although an EMC blog today hailed VCE as “The most successful joint venture in IT history,” it has been a money loser for the partners.
According to a report published by financial analyst Aaron Rakers of Stifel, EMC and Cisco suffered more than $1.6 billion in combined operating losses from the joint venture through July. With a 35 percent equity stake, Cisco’s share of the losses would be $644 million. Cisco decreased its VCE investment to $10 million in the quarter that ended last April compared to $91 million the previous quarter. VCE partners have invested a combined $1.988 billion in the joint venture with more than $700 million coming from Cisco, according to Rakers.
Goulden said Vblocks will continue to exclusively use EMC, Cisco and VMware technology. VCE’s 2,000 employees will join EMC.
Hewlett-Packard isn’t tied to one object storage product, not even its own. Days after revealing a reseller deal with Scality for its Ring software, HP and Cleversafe today said HP would also resell Cleversafe’s dsNet. HP resells software from both private companies on its ProLiant servers.
HP has worked closely with both vendors, and had already set up a web page to sell their object storage software before it reached official reseller deals with either. HP also has its own object storage in its StoreAll product.
As with Scality, Cleversafe sees a deal with HP as an opportunity to expand its sales reach.
Peter Howard, Cleversafe’s vice president of channels and alliances, said the deal came about after HP and Cleversafe developed a base of common customers. Cleversafe sells its dsNet software on appliances, but optimized it to work for HP customers who wanted to buy the software separately to run on ProLiant servers.
Cleversafe continues to sell appliances, but Howard said the software holds the value.
“We’re committed to being a software company,” Howard said. “The hardware is there as a convenience for customers who want one throat to choke. All the value is in the software. HP said they wanted to be the one throat for their customers, and they wanted to sell our software.”
Howard said Cleversafe’s largest customer segment is service providers, and most of its growth is in financial services, life sciences and other verticals with a great deal of data growth. Active archive and web content are common use cases.
“We look pretty attractive after you get above a petabyte of data,” he said.
Storage capacity utilization is one area where storage vendors have made significant improvements. Advanced features such as storage pooling, thin provisioning and storage virtualization have brought greater efficiency to the use of storage capacity.
Still, trying to understand capacity utilization can be confusing. Utilization must be examined at a larger scale than a single storage system. Storage virtualization can span systems. Thin provisioning overcommits capacity across systems, which can drive up utilization rates. The larger the pool, the more flexibility a system has in allocating storage resources.
Data reduction (compression and/or deduplication) usually allows more data to be stored in a given amount of storage. Data reduction effectiveness varies based on the data type and the implementation by the vendor. Data reduction represents a potential increase in usable capacity. Guidelines or guarantees from the vendor can be used to gauge that potential, and actual measurements are usually available from the management interfaces on the storage systems when data reduction is in use.
In any discussion of storage capacity utilization, it is useful to understand the basic definitions and update them to current terminology for the technology in use. The following are some of the more basic terms and explanations.
Used capacity – the space occupied by stored data that can be accessed from hosts.
Usable capacity – storage space within a storage system or across pooled systems that can be configured as volumes (LUNs) or filesystems. This is the raw capacity minus the storage system overhead. The overhead includes data protection such as RAID devices, allocated chunks in storage pools, and segments for forward error correction using codes such as erasure codes. Filesystems also reserve space for operational processes, which is not included in the usable capacity calculation.
Allocated but unused capacity – allocated storage space in a volume or filesystem with no data stored. This space is not available for applications or file systems, although it can be used later for data.
Effective capacity – the usable capacity multiplied by the expected effectiveness of data reduction.
Raw capacity – the aggregate capacity of the storage devices (hard disk drives, solid-state devices, flash modules).
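To make the relationships between these terms concrete, here is a small worked example with illustrative numbers; the 3:1 data reduction ratio is an assumption for the sketch, not a vendor guarantee.

```python
raw_tb = 100.0        # raw capacity: sum of all device capacities
overhead_tb = 25.0    # RAID/erasure-code protection, pool metadata, reserves

usable_tb = raw_tb - overhead_tb               # usable capacity = raw - overhead

allocated_tb = 60.0   # space carved into volumes/filesystems
used_tb = 40.0        # space actually holding host-accessible data
allocated_unused_tb = allocated_tb - used_tb   # allocated but unused capacity

reduction_ratio = 3.0                          # assumed dedupe/compression effectiveness
effective_tb = usable_tb * reduction_ratio     # effective capacity

print(usable_tb, allocated_unused_tb, effective_tb)  # 75.0 20.0 225.0
```

The gap between allocated and used capacity is exactly what thin provisioning tries to reclaim, and the effective figure is only as reliable as the reduction ratio assumed for the data type in question.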
Storage system data protection also has special considerations.
Snapshots – there are two primary types of implementations: redirect-on-write and copy-on-write. Redirect-on-write is used in more recent storage pooling implementations, such as all solid-state storage systems, where available space from the storage pool is used for the changed data. With thin provisioning, the recommendation is not to exceed 90% utilization, including snapshots and used capacity. Copy-on-write implementations usually depend on pre-allocated capacity to hold a copy of the original data when a change is made. That pre-allocated space is included in the storage system overhead and reduces the usable capacity.
Replicated copies for disaster recovery / business continuance – these are volumes or filesystems, typically at remote sites, that represent a copy of the original active data. For capacity utilization calculation, the space is treated the same as any of the primary volumes – replication just means you need that much more capacity. The effect of low capacity utilization is multiplied with replication.
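The two rules of thumb above – the 90% thin-provisioning ceiling and the capacity multiplier from replication – can be sketched as simple checks; the numbers are illustrative.

```python
def within_thin_provisioning_guideline(used_tb, snapshot_tb, usable_tb,
                                       limit=0.90):
    """True if used data plus snapshot space stays under the 90% guideline."""
    return (used_tb + snapshot_tb) / usable_tb <= limit

def capacity_with_replication(primary_tb, remote_copies):
    """Each remote replica needs as much capacity as the primary data."""
    return primary_tb * (1 + remote_copies)

print(within_thin_provisioning_guideline(60.0, 10.0, 80.0))  # True (87.5%)
print(capacity_with_replication(30.0, 2))                    # 90.0
```

The second function also shows why low utilization hurts more with replication: any wasted space on the primary is duplicated at every remote site.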
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
The most prominent storage feature made available yesterday with the 10th release of OpenStack cloud software — known as Juno — gives users the ability to control how and where they want to store, replicate and access data across object storage clusters.
The new “storage policies” capability applies to the OpenStack Object Storage project, better known by its code name, Swift. The latest Swift release also includes updated support for the OpenStack Keystone identity service and data-handling improvements that lower CPU utilization, but the feature drawing the most attention is storage policies.
“They’re the biggest thing that’s happened to Swift since it was open sourced as part of OpenStack four years ago,” said John Dickinson, the project technical lead for OpenStack Swift and director of technology at SwiftStack Inc., which sells a commercially supported version of the open source Swift software.
Dickinson said that, by using storage policies, a company with Swift-based server clusters in the United States and Europe could choose to store some data in only one geographic region. Or a user with flash- and disk-based storage could set up tiers based on storage policies and offer different service-level agreements or chargeback/billing options.
Storage policies also enable users to decide the number of data replicas they want across a Swift cluster. For instance, an enterprise might choose to replicate some data only in two locations and other data across four data centers in different geographies.
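In the Swift API, a policy is bound to a container when the container is created, via the X-Storage-Policy header; every object written to that container then follows that policy. The sketch below builds such a request – the account URL and the policy name are made up for illustration, and the policy itself must already be defined by the cluster operator.

```python
def create_container_request(storage_url, container, policy):
    """Build the HTTP PUT that binds a new Swift container to a storage policy."""
    return {
        "method": "PUT",
        "url": f"{storage_url}/{container}",
        # X-Storage-Policy is the Swift header; the policy name is hypothetical.
        "headers": {"X-Storage-Policy": policy},
    }

# e.g. keep this container's objects on a two-replica flash tier (made-up name)
req = create_container_request("https://swift.example.com/v1/AUTH_acme",
                               "fast-logs", "flash-2x")
```

Once the container exists, its policy cannot be changed, which is why tiering and geographic placement decisions are made per container rather than per object.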
“You can very specifically customize your Swift cluster for your use case – which, in my opinion, is really the whole purpose of cloud,” Dickinson said.
In addition to the immediate benefits, storage policies will also pave the way for an important feature in the 11th version of OpenStack, known by its project code name, Kilo. Dickinson said storage policies are the “critical foundation” allowing the community to build erasure code support in Swift. The community hopes to finish its work on erasure codes by year’s end, and at the latest, by the time of next spring’s Kilo release, according to Dickinson.
Another key storage capability targeted for OpenStack’s Kilo release is encryption of data at rest by Swift, but Dickinson said the feature is still in the design phase at the moment.
Of course, Swift isn’t the only storage option in OpenStack. The OpenStack Block Storage project, known as Cinder, will focus on core internals in the Kilo release, according to John Griffith, the project’s technical lead and a software engineer at SolidFire Inc.
“There’s a good deal of housekeeping that needs to be done, not only general architecture and stability improvements, but also we would like to focus on things like rolling upgrades and project interactions,” Griffith said via an email.
In the meantime, this week’s OpenStack Juno release added new features such as support for volume replication, volume pools, consistency groups and snapshots of consistency groups to OpenStack Cinder block storage.
File storage remains a work in progress for the OpenStack community. The OpenStack Foundation’s press release listed the Manila shared file system among several projects in the incubation phase, “expected to land in late 2015 and beyond.”