Hitachi Data Systems (HDS) has enhanced its cloud storage platform for customers who want to use the public cloud as a storage tier.
The vendor made changes to three cloud products – the Hitachi Content Platform (HCP), HCP Anywhere and Hitachi Data Ingestor (HDI). HCP is object-based software that serves as HDS’s main cloud platform and can run on any of its hardware products. HCP Anywhere is the file sync and share software introduced a year ago that lets mobile devices access HCP. HDI is what HDS calls its “cloud on-ramp,” which lets companies connect to the cloud from remote offices. All three can run as virtual appliances.
With HCP 7, launched this week, HDS added cloud adaptive tiering, which lets customers move data on and off the Amazon, Google and Microsoft public clouds. The policy-based tiering controls what data is kept on-premises and what data goes to a public cloud. New synchronization capabilities let customers sync data across active sites. The new tiering and sync features let companies store data locally and in the cloud and access it through mobile devices.
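HDS has not published the syntax of its tiering policies; as an illustration only, a policy-based tiering decision of this general kind might look like the following sketch, where the 90-day age threshold and the tier names are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical policy: objects untouched for more than 90 days move to a
# public-cloud tier; everything else stays on the on-premises tier.
AGE_THRESHOLD = timedelta(days=90)

def choose_tier(last_accessed, now=None):
    """Return 'public-cloud' or 'on-premises' for an object,
    based on how long ago it was last accessed."""
    now = now or datetime.utcnow()
    return "public-cloud" if now - last_accessed > AGE_THRESHOLD else "on-premises"
```

In a real deployment the policy engine would also weigh factors such as object size, retention class and cost per gigabyte, but the age-based rule above captures the basic keep-local-or-tier-out decision.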
“We’ve become a cloud broker,” HDS CTO Peter Sjoberg said. “Anything brought into HCP can be sent out to any cloud. The information flows through HCP, but the metadata is retained in the data center. The information can go to your cloud of choice.”
Because HDI caches data that is used locally, there is less need for WAN optimization at remote sites. HDS also says the new features alleviate the need to do distributed backups across sites. “You don’t have to pop a tape in,” said Tanya Loughlin, director of file, object and cloud product marketing at HDS.
Making it easier for end users to share information could solve a BYOD headache caused by people using their own devices in their own way, or as Sjoberg put it, “going rogue, and going around IT.”
A storage administrator once told me that if he moved data to the cloud, he was no longer responsible for it. He made a washing-his-hands gesture during that conversation to illustrate that it was “not my problem.” That was because of guarantees offered by the cloud service provider storing the information. I did not get the details of those guarantees, but I would question how sound they were and what recourse the company would have if anything did get lost.
An event this past week illustrates the problem and should set off alarm bells for storage administrators and IT management. U.K.-based code hosting company Code Spaces lost every client’s data and ceased operation. The loss was caused by a malicious hacker who deleted the data and was so proficient that Code Spaces’ protection mechanisms could neither protect nor recover it. Russ Fellows wrote a blog with more detail on the Evaluator Group website, and more can be found at SearchAWS.com.
This was a major fail for Code Spaces and a major loss of valuable development source code that was stored there by many companies. And there were guarantees about how the data was protected. A conversation with an IT guy who decided to put data on Code Spaces would be completely different now. Would a company executive believe that the IT guy was no longer responsible when the data was moved there? There is no realistic defense for the IT guy here.
This means the responsibility for protecting information and ensuring its availability for use remains with IT, specifically the storage team, irrespective of where it is physically located. Protecting data from disaster or some type of alteration or destruction is one of the earliest and most basic jobs in IT. Making the data available to the point of business continuity is a responsibility. The consideration that IT would be absolved of those responsibilities by moving to a cloud provider is wishful thinking.
Moving the data to the cloud may have economic benefit. But it still requires the operational effort and expense of ensuring the data is protected and available. The protection and availability must be proven with periodic exercising of recovery and availability switchover. Without that, the liability is there but without evidence that responsible actions have been taken.
In advising our IT clients, this is a great example to use. From this point on, when someone says they are moving data to the cloud and no longer have responsibility, I’ll just ask if they have it in writing that they will not be held liable. We certainly have an example now.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
More than three years after coming out with its copy data storage software, Actifio is expanding the use cases for its data protection technology. Actifio launched its Actifio Sky platform in May as a remote-office companion to its Copy Data Storage (CDS) core data center product, and this week built on those platforms with a Resiliency Director service for disaster recovery.
The goal for Resiliency Director is to provide one-click failover of heavily virtualized environments, using the cloud or a co-location facility as a DR site.
Resiliency Director consists of virtual appliance data collectors at the primary site that discover VMware virtual machines and vApps, plus CDS systems at the primary and DR sites. The CDS software performs deduplication and asynchronous replication of VMs, keeping them in a ready state for DR. When there is a failure (or DR test), Resiliency Director software at the DR site orchestrates storage, compute, network and data, and enables full application recovery.
Actifio claims it can enable application recovery faster than anybody else because its granular understanding of VMs and vApps reduces the size of the data sets, and it only has to rehydrate changed deduplicated blocks during restores. A read cache in CDS also reduces latency and improves IOPS in the recovery stack.
Customers can organize VMs and vApps in application groups and prioritize the order in which they are recovered.
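Actifio has not published the mechanics of its prioritization; conceptually, ordering application groups for recovery reduces to a sort on a priority field, as in this illustrative sketch (the group names and priority values are hypothetical, not Actifio’s actual API):

```python
# Illustrative sketch of prioritized recovery ordering: each application
# group carries a priority, and recovery proceeds lowest-number-first.
def recovery_order(app_groups):
    """Return group names sorted so the highest-priority group recovers first."""
    return [g["name"] for g in sorted(app_groups, key=lambda g: g["priority"])]

groups = [
    {"name": "web-tier", "priority": 2},
    {"name": "database", "priority": 1},   # bring databases back first
    {"name": "reporting", "priority": 3},  # reporting can wait
]
```

The point of grouping is that dependent VMs (a database and the application servers that need it) come back in a working order rather than all at once.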
“Our customers said they wanted DR to operate in a much more granular VM level,” Actifio VP of product management Brian Reagan said. “And they want to orchestrate recoveries without relying on array-based replication that adds cost.”
Reagan said Actifio plans to add support for more hypervisors – Microsoft Hyper-V is likely the next – and physical machines to Resiliency Director.
Sungard Availability Services is already using Actifio Resiliency Director for its Recover2Cloud service. This expands the vendors’ partnership, which already includes Sungard’s Managed Vaulting Backup for Actifio service. Sungard Availability VP of product management Souvik Choudhury said Recover2Cloud will be available as a standalone service and integrated with other Sungard services.
Veeam Software is giving its service provider partners an integrated, secure method of moving more customers’ backups to an offsite repository.
The company recently introduced Veeam Cloud Connect as part of its Veeam Availability Suite v8. The service requires a single server and takes less than 10 minutes to implement, while providing all the infrastructure management needed for an offsite repository service.
Doug Hazelman, Veeam’s vice president of product strategy, said the service gives partners a Secure Sockets Layer (SSL) connection for secure data transfers over the Internet.
“Normally the service provider has to set up a VPN and a separate repository for customers,” Hazelman said. “(With this) they can make one repository available and create multiple tenants in that repository, which is at the cloud provider’s location.”
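Veeam has not detailed the internals of its multi-tenant repository; conceptually, a single repository serving multiple tenants might be modeled like the sketch below, where the class, paths and quota field are all hypothetical illustrations rather than Veeam’s implementation:

```python
# Conceptual sketch of a multi-tenant backup repository: one shared
# repository root, with each tenant isolated in its own subdirectory
# and limited by a capacity quota.
class Repository:
    def __init__(self, root):
        self.root = root
        self.tenants = {}

    def add_tenant(self, name, quota_gb):
        """Register a tenant with an isolated path and a storage quota."""
        self.tenants[name] = {"path": f"{self.root}/{name}", "quota_gb": quota_gb}
        return self.tenants[name]

repo = Repository("/backups")
acme = repo.add_tenant("acme", quota_gb=500)
```

The appeal for the provider is exactly what Hazelman describes: one repository to manage instead of a separate VPN and repository per customer.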
Hazelman said Veeam is not disclosing pricing, but the company’s marketing materials say pricing is tailored to service providers’ specific needs: instead of an up-front perpetual license, Veeam will offer service providers monthly per-virtual-machine licensing.
The service providers will most likely charge monthly based on capacity rather than the number of virtual machines. Veeam customers can search for the nearest service providers through an integrated Web portal in Veeam Availability Suite v8.
Carbonite, a cloud backup pioneer from the days when cloud backup was strictly a consumer play, is moving deeper into the SMB market. It is also changing its delivery method from software that customers download from its website to appliances sold by channel partners.
Carbonite launched its first appliance this week. The Carbonite Appliance HT10 includes Carbonite software and 1 TB of local storage with the ability to move 500 GB into the Amazon cloud.
Dave Maffei, Carbonite’s VP of Global Channels, said the appliance is designed for SMBs with up to 500 employees. Customers will pay a monthly fee for the appliance and Amazon capacity. He said channel partners will set the pricing, but Carbonite CEO David Friend said the appliance would cost $99 per month during the vendor’s latest earnings call in April.
Maffei said the software on the appliance is different than what Carbonite sells to consumers. The appliance software likely includes technology that Carbonite acquired when it bought SMB cloud vendor Zmanda in 2012. The HT10 connects to servers via Ethernet, takes local bare-metal images of the servers and replicates them to the cloud.
The software for businesses must be more reliable than that for consumers. “A couple of days without data is potentially the end of the road for a business,” Maffei said. “It’s different than losing pictures of the kids.”
Carbonite sells its Zmanda software as Carbonite Server for SMBs, but the majority of its revenue still comes from consumers. In the first quarter of this year, consumers accounted for $23.3 million in revenue compared to $9.2 million from SMBs. The appliance should help close that gap.
Carbonite’s appliance and new SMB focus also shows how cloud backup is gaining acceptance. Carbonite, Mozy and the other early cloud backup vendors focused on consumers at the start because businesses wouldn’t think of trusting data protection to the cloud. Now Mozy is part of EMC, and even the largest corporations are backing up to the cloud. That means you can expect larger appliances from Carbonite in the not-so-distant future.
Asigra unveiled interesting technology this week with its new software stack that can reduce costs for its cloud backup service provider partners.
Calling it software-defined data protection, Asigra is offering a free download that runs on commodity hardware and eliminates the need for expensive disk arrays as backup targets. Asigra’s open-source stack includes FreeBSD and ZFS, with ZFS serving as the mount point on the hardware. Providers still need to install Asigra Cloud Backup at customer sites.
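Asigra has not detailed the exact configuration, but as a rough sketch, standing up ZFS as a backup mount point on commodity FreeBSD hardware might look like the following, where the pool name, disk device names and dataset layout are all hypothetical:

```shell
# Create a ZFS pool from commodity disks (device names are illustrative)
zpool create backuppool raidz /dev/da1 /dev/da2 /dev/da3

# Create a dataset for backup data, enable lightweight compression,
# and mount it where the backup software expects its target directory
zfs create backuppool/backups
zfs set compression=lz4 backuppool/backups
zfs set mountpoint=/backup backuppool/backups
```

The economics come from ZFS handling redundancy (raidz), compression and checksumming in software, so the underlying disks can be inexpensive JBODs rather than a purpose-built backup array.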
The software adds value for service providers, but doesn’t change things for users. It could, though. Asigra executive vice president Eran Farajun said the company is talking to commodity hardware vendors about bundling the software on low-cost appliances.
“It’s absolutely possible,” he said. “We’re talking with a couple of OEM vendors that sell commoditized hardware who are looking for applications like this. Software sells hardware, not the other way around.”
In keeping with Asigra’s business model, these appliances would be sold to service providers. But that wouldn’t stop other backup software vendors from doing something similar with their software and bringing it to market for end users on appliances that cost far less than the specialized targets already on the market.
eFolder acquired Sweden-based Cloudfinder last week for an undisclosed price. Cloudfinder backs up data stored in Microsoft Office 365, Google Apps and Salesforce, and joins an eFolder lineup of services that includes file sync and share, DR/business continuity, and backup.
eFolder operates its cloud out of data centers in Atlanta, Salt Lake City and Kelowna, British Columbia. Until now, Cloudfinder used the Amazon cloud to store customer data. Ted Hulsy, eFolder VP of marketing, said eFolder will migrate Cloudfinder customer data from Amazon to the eFolder cloud, much as it did after acquiring sync-and-share vendor Anchor, when it moved customers’ data off the SoftLayer cloud.
Unlike Cloudfinder, eFolder does not sell directly to users, and it will turn Cloudfinder into a 100-percent channel play, Hulsy said.
Hulsy said eFolder will continue to sell its services separately, although its service provider customers will eventually be able to manage all from the eFolder partner portal.
Cloudfinder launched in early 2013 with backup for Office 365, which remains its most popular product.
“We’re all studying how the landscape is shifting to a cloud-centric future with the proliferation of mobile devices, and we see the writing on the wall that traditional offerings have to transform [to the cloud],” Hulsy said. “Cloudfinder was an ideal company to acquire because of that.”
While Cloudfinder is smaller than cloud-to-cloud backup vendors Backupify and Spanning, Hulsy said the service will benefit from eFolder’s bulk and channel. He said eFolder has more than 120 employees and 2,200 partners worldwide, and has raised $26 million in funding.
“Our product vision is to build out really robust infrastructure of cloud services,” he said. “We’re talking about hundreds of applications over time.”
Spanning this week launched Spanning Backup for Salesforce1 Mobile, allowing Salesforce admins to monitor backups and restores remotely. The Spanning application runs on the Salesforce1 app, which is available on the Apple iOS and Google Android operating systems.
Last week Spanning added the capability for Spanning Backup for Salesforce customers to restore data objects directly from the Salesforce user interface. This lets users restore their own records without IT intervention. They can go back to previous versions of Salesforce data objects such as Accounts, Opportunities and Contacts, restoring much as they do in Google Apps, with each user responsible for his or her own data.
“Companies want to put data recovery into the hands of people who own their data, and they want granular point in time recovery,” Spanning CEO Jeff Erramouspe said. “Salesforce is one big database carved up into individual pieces, where in Google Apps each user owns his data – email, Drive documents, calendar – on an individual basis.”
SanDisk expanded its enterprise flash product line today by acquiring PCIe flash leader Fusion-io for $1.1 billion. However, SanDisk provided few specifics about its plans for its newly acquired products.
SanDisk is known primarily as a PC solid-state drive (SSD) vendor, although its enterprise revenue shot up in 2013, and this is its second acquisition of a server-side enterprise flash company in 10 months.
Fusion-io was an early flash success story, turning its dominance of the PCIe market into an initial public offering in 2011, but it fell on hard times last year.
“This will position us for a leadership position in enterprise storage,” SanDisk CEO Sanjay Mehrotra said on a conference call this morning to discuss the deal.
SanDisk last August acquired SSD and flash controller company Smart Storage for around $307 million. Smart Storage sells ULLtraDIMM flash memory cards that plug into the server’s motherboard and have lower latency than PCIe cards. Mehrotra said the Fusion-io technology complements Smart Storage’s ULLtraDIMM product, which is seen as competitive with PCIe flash. He claimed there is a market for both types of products as well as SAS and SATA SSDs.
Fusion-io expanded its product line in April 2013 by acquiring storage array startup NexGen Storage for $114 million. Fusion-io sells the NexGen hybrid flash arrays under the ioControl brand. Those arrays use Fusion-io flash instead of solid state drives (SSDs).
However, Mehrotra would not commit to the ioControl product. He did not mention the arrays when discussing Fusion-io technology during the call. When asked twice if SanDisk would continue the hybrid SAN array product, Mehrotra said he would talk about it after the deal closes. He expects that to happen in September.
Mehrotra also declined to say how the deal would affect SanDisk’s current early development of PCIe flash products.
SanDisk moved into the enterprise SSD market when it acquired Pliant Technology for $327 million in 2011.
The combined revenues from SanDisk and Fusion-io would have made SanDisk the No. 2 overall SSD vendor last year behind Samsung, according to Gartner’s SSD and solid-state array market share report published last week. SanDisk and Fusion-io combined for more than $1.6 billion in revenue in 2013.
According to Gartner, SanDisk jumped from fifth in SSD shipments in 2012 to third in 2013, passing Fusion-io and Micron. SanDisk’s SSD revenue increased 263 percent over the year (compared to 53 percent growth for the entire market), from $355 million to $1.3 billion, and its market share grew from five percent to 11.7 percent. It grew more than any other SSD vendor in 2013.
While most of SanDisk’s 2013 revenue came from PC SSDs, it did jump from eighth to fourth in revenue from enterprise SSDs. SanDisk enterprise revenue increased 181 percent (compared to 47 percent for the entire market) to $375 million and its market share went from 4.4 percent in 2012 to 8.5 percent last year.
Meanwhile, Fusion-io’s revenues declined sharply in 2013 as its early large customers Facebook and Apple greatly decreased their spending, while competition grew after storage giant EMC and others entered the PCIe market. Fusion-io switched CEOs in May 2013, pushing out David Flynn and replacing him with Shane Robison.
Robison is out now, too. Mehrotra said Fusion-io president/COO Lance Smith will become senior VP and president at SanDisk, leading the Fusion-io development and marketing teams.
Fusion-io’s revenue declined 18 percent to $339 million last year, dropping it from fourth in total SSD revenue in 2012 to eighth in 2013. It stood fifth in enterprise SSD revenue, where its market share fell from 13.6 percent to 7.7 percent.
Coraid Inc. recently introduced its EtherDrive EX unified storage system, a high-density array for block and file storage, as the company continues its push into large-scale cloud deployments.
The EtherDrive EX system comes with integrated dual controllers and other redundant components within a single chassis for high availability, compared with Coraid’s block-based EtherDrive SRX and file-based EtherDrive ZX systems, which each have single controllers.
A single EX chassis holds 60 drives and can scale incrementally by adding more chassis. An EX in a 4U form factor provides up to 240 TB of raw capacity, and users can tailor its performance with a mix of SSDs, nearline SAS drives and cache drives. The system is available now at a list price starting at $700 per terabyte.
Gokul Sathiacama, Coraid’s vice president of product management, said the EX allows customers to start using Coraid systems at a small scale.
“We are trying to go after cloud service providers and enterprises that want private clouds,” said Sathiacama. “In that market, the agility of the platform is paramount. If you want to scale performance and capacity in a linear fashion, you can do it with the EX.”
Ashish Nadkarni, IDC’s research director for storage systems and software, said the company has been working on transitioning from a traditional storage player to a cloud supplier. Amazon uses technology similar to Coraid’s for its Elastic Block Store (EBS).
“They want to be the Amazon EBS equivalent,” he said. “Coraid wants to be the block cloud storage supplier, by making their technology friendly with things like OpenStack. They also have a software solution that helps with cloud management. It’s a tough road to make the transition because the story is different on the cloud side. It’s about ‘How much do I get for my dollar?’ The enterprise storage (market) is crowded and crowded to the core. Even HP and Dell are struggling.”
Fueled by triple-digit percentage growth in revenue, IBM and Pure Storage were the market leaders for solid-state arrays (SSA) in 2013, according to a report released this week by Gartner Inc.
Revenue from IBM’s FlashSystem product line increased 278% year-over-year, from $43.4 million in 2012 to $164.4 million in 2013. IBM commanded about a quarter of the all-flash array market, as its share grew from 18.4% to 24.6%. The FlashSystem platform came from IBM’s 2012 acquisition of Texas Memory Systems.
Pure Storage’s revenue spiked 642%, from $15.4 million to $114.1 million, and its market share surged from 6.5% to 17.1% in 2013.
“Pure Storage has broad applicability predicated on its data reduction abilities and fresh marketing approach that has resonated well with customers,” Joseph Unsworth, a research vice president for NAND flash and solid-state drive (SSD) technology at Gartner, wrote in an e-mail.
Unsworth wrote the “Market Share Analysis: SSDs and Solid-State Arrays, Worldwide, 2013” report, which contains the revenue figures.
Violin Memory dropped from first in 2012 to third last year. Violin’s revenue increased by 22.6%, from $72.1 million to $88.3 million, but the company’s market share fell from 30.5% to 13.2% in 2013, according to Gartner.
Unsworth noted in the report that the U.S. government shutdown and missed sales targets hurt Violin after its initial public offering in September 2013. He wrote the company has top hardware but suffered from less than optimal data management software. Violin is working on its software strategy, trimming operating costs, refocusing on core customers and geographies and returning to the channel with a fresh management team in place, Unsworth wrote in the report.
Gartner changed the way it reports revenue for solid-state arrays with the release of the 2013 market analysis. The Stamford, Connecticut-based company now includes only the revenue from SSA products with a dedicated model and name that cannot be configured with hard-disk drives. By contrast, in 2012, Gartner also included general-purpose disk storage arrays that were configured only with SSDs, such as Hitachi HUS VM, Dell Compellent, EMC VMAX, IBM DS8000, HP 7000 series, NetApp FAS and others.
Those general-purpose storage arrays configured solely with SSDs accounted for $128 million in sales in 2012 and an estimated $170 million in 2013, but Gartner has now stripped that revenue out of its solid-state array market calculations.
Under Gartner’s revised SSA market calculation, EMC is now able to count revenue from only its XtremIO and VNX-F arrays, which were released last November. Despite the short time frame, the EMC all-flash systems placed fourth for the year, with $73.9 million in revenue, and EMC held 11.1% of the market.
In fifth place, NetApp’s all-flash revenue from its EF540 array grew 126.5% to $71 million. Nimbus Data Systems also more than doubled its revenue, from $21.6 million to $43.4 million, and placed sixth for the year, according to Gartner.
Filling out the top 10 were Kaminario ($22.5 million), Cisco ($21.4 million), SolidFire ($20.4 million) and Hewlett-Packard ($8.8 million). The total market grew 182% from 2012 to 2013, from $236.5 million to $667.3 million, using Gartner’s revised SSA reporting metrics.
According to the Gartner report, end users purchased 5,281 solid-state array units in 2013 at an average selling price of $126,360, or $9.70 per GB. The most popular capacity range was 10 TB to 19.99 TB, with a total of 2,126 units shipping at an average selling price of $118,647, or $11.59 per GB.
Runners-up were solid-state arrays in the 20 TB to 49.99 TB range. A total of 1,629 units shipped at an average selling price of $180,699, or $8.82 per GB. Just 171 solid-state arrays of greater than 50 TB shipped last year, at an average selling price of $223,169, or $4.36 per GB. But that could change this year, now that most SSA vendors are making higher-capacity arrays available.
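As a sanity check on Gartner’s figures, dividing the average selling price by the price per gigabyte gives the implied average capacity per unit; the short sketch below runs that arithmetic for each range cited (using decimal units, 1 TB = 1,000 GB):

```python
# (average selling price in $, price per GB in $) for each cited range
ranges = {
    "all units":   (126360, 9.70),
    "10-19.99 TB": (118647, 11.59),
    "20-49.99 TB": (180699, 8.82),
    ">50 TB":      (223169, 4.36),
}

# Implied average capacity per unit, in TB
implied_tb = {name: round(asp / per_gb / 1000, 1)
              for name, (asp, per_gb) in ranges.items()}
```

Each implied capacity falls inside its stated bucket (about 10.2 TB, 20.5 TB and 51.2 TB respectively, with roughly 13 TB as the overall average), so the per-GB figures are internally consistent with the selling prices.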
The Gartner analysis also included SSD revenue. Samsung, the largest supplier of NAND flash chips, maintained its overall lead in SSD sales with $3.1 billion in revenue, the majority of it from the PC SSD segment. Intel ($1.4 billion) held onto the No. 2 overall spot. SanDisk ($1.3 billion) jumped from fifth place in 2012 to third place, and Micron ($0.8 billion) improved from eighth to fourth. Toshiba ($0.6 billion) fell from third to fifth.
In the enterprise segment, which combined enterprise server and enterprise storage SSDs, Intel was No. 1 with 18.5% market share. Samsung (14.6%) jumped from third to second place, Western Digital (at 10.6%, with sales mostly from high-end storage through its partnership with Intel) moved from fifth to third, SanDisk (8.5%) leapt from eighth to fourth and Fusion-io (7.7%) fell from No. 2 to No. 5.
The Gartner report showed that total sales of enterprise SSDs grew from $3 billion in 2012 to $4.4 billion in 2013. Unsworth cited the main drivers in the enterprise SSD space as hyperscale customers purchasing low-cost SATA SSDs in huge volumes and storage manufacturers buying higher-quality SAS SSDs. He said hyperscale users and server manufacturers were prominent in sales of high-performance PCIe SSDs, but lower-cost PCIe SSDs “tempered revenue.”
Breaking down enterprise SSD sales, Intel was the leading producer of SATA SSDs followed by Samsung, Smart Storage, OCZ and SanDisk in 2013. Western Digital was No. 1 in the SAS-based SSD market, followed by SanDisk, Seagate, Toshiba and Hitachi. Fusion-io continued its dominance in PCIe SSDs, with Google, NetApp, LSI and Western Digital behind. Google, NetApp and Hitachi use their SSDs only within their own data centers or products, Gartner noted.