Storage Soup


February 24, 2012  5:23 PM

Backup vendors take cloud cover

Dave Raffo

Backup vendors are paying especially close attention to the cloud these days. Asigra, Imation and CA launched backup products and services this week with various degrees of cloud connectivity.

Asigra has sold backup software to service and cloud providers for years. With its Cloud Backup 11.2 software, it is adding the NetApp-Asigra Data Protection as a Service (DPaaS) bundle for providers.

The Asigra-NetApp deal is a meet-in-the-channel relationship that Asigra director of strategic alliances Doug Ko said can help service providers and telcos get cloud backup services up and running faster. Ko said engineers from the vendors have worked together to ensure compatibility. Cloud Backup 11.2 supports deeper snapshot integration with NetApp arrays.

Asigra also added support for VMware vSphere 5.0 and Apple iOS 5 and Google Android 4.0 to beef up support for virtual machine and mobile device protection in version 11.2.

Imation launched two new models of its DataGuard SMB backup appliance that use standard hard disk drives and RDX removable hard drives. Built-in replication lets a DataGuard appliance serve as a gateway to the cloud.

Imation says the DataGuard R4 and DataGuard T5R appliances are compatible with cloud storage APIs, including those from Amazon S3, Dropbox and OpenStack-based cloud providers. Imation doesn’t have deduplication in its DataGuard software yet, so customers need to use seed units for initial backups. After that, customers can set the appliances to automatically replicate data to the cloud.
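Imation hasn't said exactly how DataGuard drives those replication jobs under the covers, but at the API level, pushing a finished backup set to an S3-compatible target is just an object upload. Here's a rough sketch of that idea in Python with boto3; the bucket, key prefix and file path are made up for illustration and are not DataGuard's actual implementation.

```python
# Hypothetical sketch: copy a finished backup image to an S3-compatible target.
# Bucket, key prefix and path are invented; this is not DataGuard's code.
import boto3

def replicate_backup_to_cloud(local_path,
                              bucket="example-backup-bucket",
                              key_prefix="dataguard/daily/"):
    s3 = boto3.client("s3")                     # also works with S3-compatible endpoints
    key = key_prefix + local_path.split("/")[-1]
    s3.upload_file(local_path, bucket, key)     # boto3 handles multipart uploads
    return key

# e.g. replicate_backup_to_cloud("/backups/2012-02-24-full.img")
```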

Bill Schilling, director of scalable storage marketing for Imation, said the early target market is Imation’s SMB tape customers. “That’s the low-hanging fruit,” he said. “And maybe SMBs that aren’t backing up or need to be backing up better.”

Pricing ranges from $2,000 to $5,000 for the new DataGuard appliances.

CA launched ARCserve D2D On Demand with a connector to Microsoft’s Windows Azure cloud service. ARCserve D2D On Demand lets SMBs or cloud service providers keep data on premises while using Azure for backup and bare metal restore. A subscription includes 25 GB of Windows Azure cloud storage per protected machine.

February 24, 2012  10:48 AM

Dell expands its backup plan with AppAssure

Dave Raffo

Dell made its fifth storage acquisition in a little over four years today when it acquired AppAssure and its backup and replication software for virtual and physical machines.

Dell did not disclose the price, which obviously wasn’t close to what the vendor paid for EqualLogic ($1.4 billion) in 2007 or Compellent ($820 million) last year. Dell also acquired the IP of scalable NAS vendor Exanet and primary data reduction software specialist Ocarina in 2010.

In a blog explaining the deal, Dell storage GM Darren Thomas wrote that Dell acquired AppAssure to “help Dell customers modernize their data protection strategies …”

AppAssure’s Backup and Replication is application-aware with continuous data protection (CDP) technology that facilitates quick recovery of a file or an application. Thomas wrote that Dell will sell AppAssure as standalone software and eventually expand its IP into Dell storage products.

The software could help link Dell’s EqualLogic and Compellent SANs. Thomas pointed out that customers can use it to back up an EqualLogic array to a Compellent system and recover it from anywhere. We expect further connection between the platforms as Dell completes its integration of Exanet’s file system, Ocarina’s deduplication and compression, and AppAssure data protection with its storage arrays.

There had been a lot of speculation that Dell’s storage acquisition strategy would lead it to buy a backup vendor. CommVault, a close Dell partner, was often mentioned as a possible target. But CommVault would cost billions of dollars. Dell is attacking backup in a different way, integrating AppAssure’s technology with its hardware and using Ocarina software in a recently launched DR4000 disk backup appliance.

We’ll be watching to see how the acquisition affects Dell’s relationships with CommVault and Symantec. Dell resells both vendors’ backup software in bundles with its hardware. As a standalone company, AppAssure wasn’t considered much of a threat to Symantec or CommVault. But that could change now that AppAssure is part of Dell. AppAssure gives Dell its own data protection IP to go with its new dedupe backup appliance. Dell ended its long-time OEM relationship with EMC so it could make more money selling its own storage arrays. It looks like Dell will move in the same direction with backup.


February 23, 2012  9:00 AM

For HP storage, there’s 3PAR and subpar

Dave Raffo

Despite good results from its 3PAR SAN arrays, Hewlett-Packard (HP) is watching its storage sales sink.

HP’s earnings report Wednesday was filled with poor results, and storage was no exception. HP said its 3PAR sales nearly doubled but the rest of the business stumbled, especially the midrange EVA platform. HP’s overall storage revenue was $955 million in the quarter, down 6% from a year ago and nearly 12% from the previous quarter.

HP acquired 3PAR for $2.35 billion in September 2010 after winning a bidding war against Dell. 3PAR is HP’s flagship storage product, but it will take more than just 3PAR to turn around HP’s storage business.

HP’s 6% drop compares to a 12% revenue increase for the overall storage industry in the fourth quarter of 2011, according to Aaron Rakers, the enterprise hardware analyst for Stifel Nicolaus Equity Research. In a research note published today, Rakers wrote that EMC’s fourth quarter results were up 10% year-over-year (5% without Isilon numbers) and 14% from the previous quarter. NetApp revenue increased 26% year-over-year (4% without Engenio) and Hitachi Data Systems revenue grew 15% year-over-year and 13% sequentially.

Dell last week reported storage revenue of $500 million. That was down 13% from last year, reflecting the loss of EMC storage that Dell used to sell through an OEM deal. Revenue from Dell-owned storage increased 33% from last year and 19% sequentially.


February 22, 2012  4:19 PM

Symantec set to hop on Hadoop, tackle ‘big data’

Dave Raffo

Symantec will jump into the “big data” market this year with an application designed to make Hadoop MapReduce more enterprise-ready.

Don Angspatt, VP of product management for Symantec’s storage and availability management group, said the vendor has a prototype of the application working and he expects the product to ship in 2012. He won’t provide many details yet, but said the concept is similar to what EMC is doing with its integration of Isilon scale-out NAS and Greenplum analytics file system. The difference is that the Symantec app will work across heterogeneous storage.

Last month, EMC gave its Isilon OneFS operating system native support for the Hadoop Distributed File System (HDFS) and released the EMC Greenplum HD on Isilon.

The idea is to remove limitations such as a single point of failure and lack of shared storage capabilities that prevent Hadoop from working well in enterprises.

Angspatt said the Hadoop product will be sold separately from Symantec’s Storage Foundation storage management suite, although it will work in a storage environment. He said the application will compete with Cloudera, which has a partnership with NetApp, and with MapR Technologies, whose software EMC uses as part of its Greenplum HD.

“We want to make sure Hadoop and MapReduce work well in an enterprise environment,” Angspatt said. “We’re not going to do business intelligence. We’ll be involved in the infrastructure behind it to make sure it’s enterprise-ready. Our application will talk directly to Hadoop, similar to the EMC Greenplum-Isilon integration. But with us you don’t get locked into a specific hardware.”


February 21, 2012  9:07 AM

Organization structures need optimization

Randy Kerns

One of the most important parts of optimizing the data center involves improving storage efficiency. And that requires more than implementing the latest technologies. While working with IT operations on strategies to increase storage efficiency, I have come to see that organizational structure must change in order to expedite data center optimization.

Like storage systems, IT organizations tend to get more complex over time. The complexity affects the decision-making process involving the storage architects/administrators and the business owners responsible for the applications and information. There may be layers of groups with varying responsibilities between the staff that needs to develop and implement the storage technologies, and those who truly understand the requirements.

Having one or two levels of filtering makes it much more difficult to understand the needs of the “customer,” in this case the business owners or their staffs. Much of the technology optimization process involves understanding what is required, which includes back-and-forth between architects and the actual customer. The lack of that interaction and basic understanding of customer needs often results in a solution that fails to address key needs or to plan for the changes that will occur.

Optimizing organizational structure is perceived to be a difficult task. The changes affect many influential people and groups. The need is obvious, but the complex organization structures that have developed over time may be deep-rooted and require a commitment and direction from the most senior levels in IT. Working towards data center optimization and improved storage efficiency is much more difficult without this commitment and direction. Often, the process brings compromises that reduce the project’s effective value.

Understanding the problems in working toward an optimized environment means understanding the organizational structure and its limitations. Any storage optimization strategy must take that structure into account as well as technology and products. This means more work, but it is the current situation in many IT operations.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


February 17, 2012  5:26 PM

HDS enhances non-disruptive data migration to VSP

Sonia Lelii

Hitachi Data Systems Corp. has added a non-disruptive data migration capability to its flagship Virtual Storage Platform (VSP) that minimizes downtime for hosts and applications when moving data  to the VSP from other storage arrays.

Hitachi Nondisruptive Migration moves data from HDS’ older Universal Storage Platform (USP) to the VSP. The target array spoofs the host operating system so it thinks it’s talking to the original system even as data is moved to the target. That allows applications to continue running during migration.

Patrick Allaire, an HDS senior product marketing manager, said this capability is based on three components:  a new persistent virtual Logical Device Identity (LDEV ID) that is used to manage multiple hosts, a logical identity takeover function, and Hitachi’s Universal Volume Manager (UVM).

The virtual LDEV ID is embedded in the SCSI layer of the Fibre Channel protocol. A virtual LDEV ID  is created and mapped from the source to the target, then the takeover function spoofs the operating system.  Finally, the Hitachi UVM copies or migrates the volume from the source to the VSP target.

Typically a physical LDEV ID is used for this process, which makes it more labor-intensive. “If you don’t have a virtual persistent ID, it forces you to do a shutdown of the application when changing the data path from the old storage system to the new one,” Allaire said.
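Hitachi hasn’t published the interfaces behind this, but the sequence Allaire describes boils down to three steps. The sketch below is purely illustrative; every object and method name is invented to show the order of operations, not actual VSP tooling.

```python
# Illustrative only: invented names, not real Hitachi APIs. Shows the order
# of operations described above for a nondisruptive migration.
def nondisruptive_migrate(source_volume, target_vsp):
    # 1. Create a persistent virtual LDEV ID and map it from source to target,
    #    so the host keeps seeing the same SCSI device identity.
    virtual_id = source_volume.create_virtual_ldev_id()
    target_vsp.map_virtual_ldev_id(virtual_id)

    # 2. Logical identity takeover: the VSP presents itself to the host OS
    #    as the original device, so applications keep running.
    target_vsp.take_over_identity(virtual_id)

    # 3. Universal Volume Manager copies the volume from the source array
    #    to the VSP behind the scenes.
    target_vsp.uvm_copy(source_volume)
```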

With the new virtual LDEV ID and takeover function, Hitachi claims to maintain the quality of service levels for applications. Hitachi now supports host clustering configurations by synchronizing persistent group reserves on source volumes and keeping the host cluster operational. It also supports parallel migration of up to eight source systems to one Hitachi VSP.


February 17, 2012  3:57 PM

Actifio banks on IBM giving it a PAS with service providers

Dave Raffo

IBM and Actifio struck up a partnership this week that the startup hopes will bring its Protection and Availability (PAS) platform to the cloud, and that IBM sees as a way to fill data protection needs for service providers.

IBM and Actifio said they will offer bundles to cloud service providers and VARs. The packages include Actifio’s PAS data protection with IBM DS3500 Express, IBM Storwize V7000, XIV Gen3 and SAN Volume Controller (SVC) systems.

IBM has its own backup, replication, disaster recovery and data management products, so it’s unclear why it needs Actifio. But Mike McClurg, IBM VP of global midmarket sales, said Actifio provides one tool to handle all those functions.

“We approach managed service providers from a business perspective,” he said. “How can a partnership with IBM grow their business? It’s challenging for managed service providers to find cost effective data solutions that requires cobbling together a lot of backup, replication, snapshot, and data management tools. Actifio is an elegant way of replacing a lot of technology and overlapping software products.”

Maybe the partnership is the beginning of a deeper relationship between the vendors. Actifio president Jim Sullivan is former VP of worldwide sales for IBM system storage. He maintains that the startup is keeping its partnership options open, but he is also counting on IBM to bring Actifio into deals the startup can’t land on its own.

“This is not an exclusive deal,” he said. “But we’re driving this with IBM. Showing up with service providers with IBM is a great opportunity for us to get reach and credibility.”


February 16, 2012  4:42 PM

HP goes all-flash with new LeftHand iSCSI system

Dave Raffo

Hewlett-Packard today quietly launched an all solid-state drive (SSD) version of its LeftHand iSCSI SAN array.

Unlike the server and services announcements HP made at its Global Partner Conference, HP made its storage news with little fanfare on a company blog.

The HP P4900 SSD Storage System has sixteen 400 GB multi-level cell (MLC) SAS SSDs – eight in each of the system’s two nodes. Each two-node system includes 6.4 TB, and customers can add 3.2 TB expansion nodes to scale to clusters of 102.4 TB. Expansion nodes increase the system’s IOPS as well as its capacity.
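Those capacity figures hang together: eight 400 GB SSDs per node is 3.2 TB, two nodes make the 6.4 TB base system, and the 102.4 TB ceiling works out to 32 nodes’ worth of flash. A quick check:

```python
# Quick sanity check of the quoted P4900 capacities (figures from the post).
drive_gb = 400
drives_per_node = 8

node_tb = drive_gb * drives_per_node / 1000.0   # 3.2 TB per node
system_tb = node_tb * 2                         # 6.4 TB per two-node base system
max_cluster_tb = 102.4

print(node_tb, system_tb)                       # 3.2 6.4
print(max_cluster_tb / node_tb)                 # 32.0 nodes at the cluster ceiling
```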

The systems use the HP SMARTSSD Wear Gauge, firmware that monitors the SSDs and sends out alerts when a drive gets close to the end of its life. The monitoring firmware is part of the P4000 Management Console.
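HP hasn’t detailed how the gauge works beyond that, but threshold-based wear alerting is simple to picture. The snippet below is a hypothetical illustration with invented names and thresholds, not HP’s firmware.

```python
# Hypothetical illustration of wear-threshold alerting; the names and the 90%
# threshold are invented, not taken from HP's SMARTSSD Wear Gauge.
WARN_AT = 0.90   # alert once 90% of rated write endurance is consumed

def check_wear(drives):
    """drives: iterable of (drive_id, wear_fraction) pairs, 0.0 to 1.0."""
    for drive_id, wear in drives:
        if wear >= WARN_AT:
            print("ALERT: SSD %s has used %d%% of its rated life"
                  % (drive_id, wear * 100))

check_wear([("bay-3", 0.42), ("bay-7", 0.93)])   # alerts only on bay-7
```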

HP claims the monitoring and scale-out architecture solve the major problems with solid-state storage arrays. “When it comes to SSDs in general, they are great for increasing IOPS and benefitting a business with lower power/cooling requirements,” P4000 product marketing manager Kate Davis wrote in the blog. “But the bad comes with unknown wear lifespan of the drive. And then it turns downright ugly when traditional dual-controller systems bottleneck the performance that was supposed to be the good part. … Other vendors must build towers of storage behind one or two controllers – LeftHand scales on and on.”

The large storage vendors offer SSDs in place of hard drives in their arrays, and there’s no reason they can’t ship a system with all flash. But the P4900 is the first dedicated all-flash system from a major vendor. Smaller vendors such as Nimbus Data, Pure Storage, SolidFire, Violin Memory, Whiptail and Texas Memory Systems have all-SSD storage systems.

A 6.4-TB P4900 costs $199,000. The expansion unit costs $105,000.


February 16, 2012  9:21 AM

NetApp developing server-side flash software

Dave Raffo

NetApp CEO Tom Georgens says he expects server-side flash to become a key part of his vendor’s flash strategy. However, NetApp will take a different approach than its rival EMC.

Asked about EMC’s VFCache product during NetApp’s earnings call Wednesday, Georgens said server-side flash is “a sure thing,” but NetApp will focus on data management software that works with PCIe cards instead of selling the cards. He doesn’t rule out selling cards either, though.

“I don’t think the opportunity is simply selling cards into the host, although we may do that,” he said. “But our real goal is we’re going to bring the data that’s stored in flash on the host into our data management methodology for backup, replication, deduplication and all of those things. It isn’t as simple as we’re going to make a PCI flash card. Our focus this year is the software component and bringing that into our broader data management capability.”

With VFCache, EMC sells PCIe cards from Micron or LSI with the storage vendor’s management software. NetApp appears intent on selling software that will work with any PCIe cards – or at least the most popular ones. The question is whether NetApp can integrate its software as tightly with many cards as it could by focusing on one or two.

Georgens said NetApp was correct all along with its contention that using flash as cache is more effective than replacing hard drives in an array with solid-state drives (SSDs). NetApp’s Flash Cache card goes into the array to accelerate performance. It is included on all FAS6000 systems and is an option on NetApp’s other FAS systems. NetApp does offer SSDs in the array, but recommends flash as cache.

“Flash is going to be pervasive,” Georgens said. “I think you’re going to see it everywhere in the infrastructure. Our position all along has been that flash as a cache is where it has the most impact. And I would say that we actually see probably more pervasive deployment of flash in our systems than anybody else in the industry.”

On the hard drive front, Georgens said the impact from shortages caused by the floods in Thailand wasn’t as bad as anticipated last quarter, although it will take another six to nine months before the “uncertainty” lifts.

“While drive vendors had little forward delivery visibility, most of the disk drives shipped in excess of initial estimates,” Georgens said. “However, not all drive types were universally available and some spot shortages impacted revenue and will likely do so in the upcoming quarter as well. … We expect the drive situation to continue to inject uncertainty into the revenue for the next nine months as availability, cost and pricing settle out in the market.”


February 15, 2012  6:16 PM

SanDisk acquires FlashSoft to accelerate flash performance

Dave Raffo

One by one, solid-state flash vendors are adding caching software to enhance their products. SanDisk picked up startup FlashSoft today in a move designed to make applications run faster with SanDisk’s and other vendors’ PCIe and solid-state drive (SSD) products.

Enterprise PCIe flash pioneer Fusion-io began the trend by acquiring IO Turbine last August, and OCZ picked up Sanrad for its PCIe caching software in January. Solid-state vendor STEC internally developed its EnhanceIO caching software, and EMC’s caching software and FAST auto-tiering technology play a big role in its VFCache server-side flash product.

The acquisition of FlashSoft leaves startups Nevex, Velobit and perhaps a few other vendors still in stealth as obvious targets for solid-state vendors. Nevex and Texas Memory Systems last week said they were jointly developing software that would speed applications running on TMS SSD storage.

FlashSoft software turns SSD and PCIe server flash into a cache for the most frequently accessed data. The company came out of stealth last June with FlashSoft SE for Windows and later added FlashSoft SE versions for Linux, VMware vSphere and Microsoft Hyper-V.
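SanDisk isn’t sharing FlashSoft’s algorithms, but the basic idea of a host-side read cache is straightforward: keep hot blocks on local flash, fall back to the backing array on a miss, and evict cold blocks as the cache fills. Below is a minimal, hypothetical sketch of that idea, not FlashSoft’s design; in a real product the hard parts are write handling and keeping metadata overhead low as the cache grows, which is exactly what Goelz gets at further down.

```python
# Minimal, hypothetical host-side read cache: hot blocks live on local flash,
# misses fall through to the backing array. Not FlashSoft's actual design.
from collections import OrderedDict

class FlashReadCache:
    def __init__(self, capacity_blocks, flash, array):
        self.capacity = capacity_blocks
        self.flash = flash          # fast local store, dict-like: lba -> data
        self.array = array          # slow backing store with read_block(lba)
        self.lru = OrderedDict()    # recency order of cached block addresses

    def read_block(self, lba):
        if lba in self.lru:                        # hit: serve from flash
            self.lru.move_to_end(lba)
            return self.flash[lba]
        data = self.array.read_block(lba)          # miss: go to the array
        self.flash[lba] = data                     # populate the cache
        self.lru[lba] = None
        if len(self.lru) > self.capacity:          # evict least recently used
            victim, _ = self.lru.popitem(last=False)
            del self.flash[victim]
        return data
```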

SanDisk said it will sell FlashSoft SE as standalone software and with its Lightning enterprise SSDs and upcoming PCIe devices built on technology it acquired by buying Pliant last May for $327 million. SanDisk’s SSDs are used by Dell EqualLogic, NetApp, Hewlett-Packard and others through OEM deals.

“We think this is the next step in our performance acceleration journey,” said Greg Goelz, VP of SanDisk’s enterprise storage solutions group.

Goelz said FlashSoft software was appealing because it works with any hardware, which fits SanDisk’s OEM model, and because it scales better than competing software. “We looked at how did they scale in capacity? If you move from 100 gigabytes of SSDs to terabytes, does the metadata scale exponentially? Is the overhead low? Does it have the best approach to support what’s out there today and to support the evolution from single server to virtualization and clusters? FlashSoft was well ahead of anybody in the market by a substantial lead.”

SanDisk did not disclose the purchase price for FlashSoft.

