Storage Soup


February 28, 2012  9:22 AM

Data protection in transition

Randy Kerns

The increase in data capacity demand makes it difficult for IT organizations to continue with existing data protection practices. Many organizations have realized their protection methods are unsustainable, mainly because of the combined impact of growing capacity demand and budget limitations.

The increase in capacity demand comes from many sources. These include business expansion, the need to retain more information for longer periods of time, data types such as rich media that are more voluminous than in the past, and an avalanche of machine-to-machine data used in big data analytics.

The data increase requires more storage systems, which are usually funded through capital expense. Often these are paid for as part of a project with one-time project funds.

The increase in data also changes the backup process. The amount of time required to protect the information may extend beyond what is practical from a business operations standpoint. The amount of data to protect may require more backup systems than can physically be accommodated.
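To see how the backup window breaks, consider a rough back-of-the-envelope calculation. The capacity and throughput figures below are illustrative assumptions, not numbers from any particular site:

# Rough backup-window estimate (illustrative assumptions only).
def full_backup_hours(data_tb, throughput_gb_per_s):
    """Hours needed to stream a full backup at a sustained throughput.

    data_tb: protected capacity in terabytes (decimal)
    throughput_gb_per_s: sustained backup throughput in gigabytes per second
    """
    data_gb = data_tb * 1000              # 1 TB = 1,000 GB
    seconds = data_gb / throughput_gb_per_s
    return seconds / 3600

# Example: 200 TB of protected data at a sustained 1 GB/s
print(round(full_backup_hours(200, 1.0), 1))  # ~55.6 hours -- well beyond a weekend window

Doubling the protected capacity doubles the window unless backup throughput scales with it, which is exactly the pressure described above.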

It is common for new projects to budget for the required capital expenses. Unfortunately, the accompanying increase in operational expenses is rarely enough to cover the data protection impact. The administration expense of the extra staff time spent handling the data can be estimated, but it is difficult to add to the project budget because it is an ongoing expense rather than a one-time expense.

Unexpected data growth can exceed capacity-based licensing thresholds and turn into an unpleasant budget-buster. Even expenses related to external resources such as disaster recovery copies of information may ratchet up past thresholds.

New approaches to data protection exist, but there is usually not enough funding available to implement them. Changing IT procedures is also difficult because of the training required and the risk that change introduces.

Vendors see the opportunities, and address them with approaches that make the most economic sense for them. The most common approach is to enhance existing products, improving their speed and effective capability. Another vendor approach is to introduce new data protection appliances combining software and hardware to simplify operations. Whether these are long-term solutions or merely incremental improvements depends on the specific environment.

Another approach evolving among vendors is to include data protection as an integral part of the storage system, adding a set of policy controls for protection and data movers for automated data protection. These come in the form of block storage systems that can selectively replicate delta changes to volumes, and network attached storage systems that can migrate or copy data to another storage system based on rules. Implementing this type of protection requires software to manage recovery and retention of the protected data.
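As a minimal sketch of what such policy controls and an automated data mover might look like, consider the following. The field names, schedule format and mover hook are hypothetical illustrations, not any vendor's actual interface:

from dataclasses import dataclass
from typing import Callable

@dataclass
class ProtectionPolicy:
    """Hypothetical protection rule attached to a volume or NAS share."""
    source: str            # volume or share to protect
    target: str            # replication or copy destination
    mode: str              # "delta-replicate" for block, "rule-copy" for NAS
    interval_minutes: int  # how often the data mover runs
    retention_days: int    # how long recovery points are kept

def run_policy(policy: ProtectionPolicy, mover: Callable[[str, str], None]) -> None:
    """Run one protection cycle by invoking a data mover.

    The mover stands in for the array's replication or copy engine;
    recovery and retention of the protected data would be tracked by
    separate management software, as noted above.
    """
    mover(policy.source, policy.target)

# Example: replicate only changed blocks of a database volume every 30 minutes.
db_policy = ProtectionPolicy("vol_db01", "dr_array/vol_db01", "delta-replicate", 30, 35)
run_policy(db_policy, lambda src, dst: print(f"moving deltas {src} -> {dst}"))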

A change must be made if IT is to continue meeting its mandate to protect information. The fundamental problem with data protection at this level of capacity demand, however, is economics. For most IT operations, the solution cannot represent a major investment, and it must be largely cost-neutral to administer. Current data protection solutions that meet those requirements are hard to find.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

February 24, 2012  5:23 PM

Backup vendors take cloud cover

Dave Raffo

Backup vendors are paying especially close attention to the cloud these days. Asigra, Imation and CA launched backup products and services this week with various degrees of cloud connectivity.

Asigra has sold backup software to service and cloud providers for years. With its Cloud Backup 11.2 software, it is adding the NetApp-Asigra Data Protection as a Service (DPaaS) bundle for providers.

The Asigra-NetApp deal is a meet-in-the-channel relationship that Asigra director of strategic alliances Doug Ko said can help service providers and telcos get cloud backup services up and running faster. Ko said engineers from the two vendors have worked together to ensure compatibility. Cloud Backup 11.2 supports deeper snapshot integration with NetApp arrays.

Asigra also added support for VMware vSphere 5.0, Apple iOS 5 and Google Android 4.0 in version 11.2 to beef up protection for virtual machines and mobile devices.

Imation launched two new models of a DataGuard SMB backup appliance that use standard hard disk drives and RDX removable hard drives. Built-in replication lets a DataGuard appliance serve as a gateway to the cloud.

Imation says the DataGuard R4 and DataGuard T5R appliances are compatible with cloud storage APIs, including those from Amazon S3, Dropbox and OpenStack-based cloud providers. Imation doesn’t have deduplication in its DataGuard software yet, so customers need to use seed units for initial backups. After that, customers can set the appliances to automatically replicate data to the cloud.
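For context on what “compatible with cloud storage APIs” means in practice, here is a minimal sketch of replicating a finished backup archive to an S3-compatible bucket. It assumes the boto3 library and a made-up bucket name; it is not Imation’s replication code:

import boto3

# Credentials come from the usual boto3 configuration (environment
# variables, config files, instance roles); bucket and key names are made up.
s3 = boto3.client("s3")

def replicate_backup(local_path, bucket, key):
    """Copy a finished backup archive to an S3-compatible bucket.

    Without deduplication the full archive is transferred, which is why
    the initial "seed" copy ships physically and only later backups are
    replicated over the wire.
    """
    s3.upload_file(local_path, bucket, key)  # boto3 handles multipart upload

replicate_backup("/backups/2012-02-24-full.img", "example-dr-bucket", "site1/2012-02-24-full.img")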

Bill Schilling, director of scalable storage marketing for Imation, said the early target market is Imation’s SMB tape customers. “That’s the low-hanging fruit,” he said. “And maybe SMBs that aren’t backing up or need to be backing up better.”

Pricing ranges from $2,000 to $5,000 for the new DataGuard appliances.

CA launched ARCserve D2D On Demand with a connector to Microsoft’s Windows Azure cloud service. ARCserve D2D On Demand lets SMBs or cloud service providers keep data on premises while using Azure for backup and bare metal restore. A subscription includes 25 GB of Windows Azure cloud storage per protected machine.


February 24, 2012  10:48 AM

Dell expands its backup plan with AppAssure

Dave Raffo

Dell made its fifth storage acquisition in a little over four years today when it acquired AppAssure and its backup and replication software for virtual and physical machines.

Dell did not disclose the price, which obviously wasn’t close to what the vendor paid for EqualLogic ($1.4 billion) in 2007 or Compellent ($820 million) last year. Dell also acquired the IP of scalable NAS vendor Exanet and primary data reduction software specialist Ocarina in 2010.

In a blog explaining the deal, Dell storage GM Darren Thomas wrote that Dell acquired AppAssure to “help Dell customers modernize their data protection strategies …”

AppAssure’s Backup and Replication is application-aware with continuous data protection (CDP) technology that facilitates quick recovery of a file or an application. Thomas wrote that Dell will sell AppAssure as standalone software and eventually expand its IP into Dell storage products.

The software could help link Dell’s EqualLogic and Compellent SANs. Thomas pointed out that customers can use it to back up an EqualLogic array to a Compellent system and recover it from anywhere. We expect further connection between the platforms as Dell completes its integration of Exanet’s file system, Ocarina’s deduplication and compression, and AppAssure data protection with its storage arrays.

There had been a lot of speculation that Dell’s storage acquisition strategy would lead it to buy a backup vendor. CommVault, a close Dell partner, was often mentioned as a possible target. But CommVault would cost billions of dollars. Dell is attacking backup in a different way, integrating AppAssure’s technology with its hardware and using Ocarina software in a recently launched DR4000 disk backup appliance.

We’ll be watching to see how the acquisition affects Dell’s relationships with CommVault and Symantec. Dell resells both vendors’ backup software in bundles with its hardware. As a standalone company, AppAssure wasn’t considered much of a threat to Symantec or CommVault. But that could change now that AppAssure is part of Dell. AppAssure gives Dell its own data protection IP to go with its new dedupe backup appliance. Dell ended its long-time OEM relationship with EMC so it could make more money selling its own storage arrays. It looks like Dell will move in the same direction with backup.


February 23, 2012  9:00 AM

For HP storage, there’s 3PAR and subpar

Dave Raffo

Despite good results with its 3PAR SAN arrays, Hewlett-Packard (HP)’s storage sales are sinking.

HP’s earnings report Wednesday was filled with poor results, and storage was no exception. HP said its 3PAR sales nearly doubled but the rest of the business stumbled, especially the midrange EVA platform. HP’s overall storage revenue was $955 million in the quarter, down 6% from a year ago and nearly 12% from the previous quarter.

HP acquired 3PAR for $2.35 billion in September 2010 after winning a bidding war against Dell. 3PAR is HP’s flagship storage product, but it will take more than just 3PAR to turn around HP’s storage business.

HP’s 6% drop compares to a 12% revenue increase for the overall storage industry in the fourth quarter of 2011, according to Aaron Rakers, the enterprise hardware analyst for Stifel Nicolaus Equity Research. In a research note published today, Rakers wrote that EMC’s fourth-quarter results were up 10% year-over-year (5% without Isilon numbers) and 14% from the previous quarter. NetApp revenue increased 26% year-over-year (4% without Engenio), and Hitachi Data Systems revenue grew 15% year-over-year and 13% sequentially.

Dell last week reported storage revenue of $500 million. That was down 13% from last year, reflecting the loss of EMC storage that Dell used to sell through an OEM deal. Revenue from Dell-owned storage increased 33% from last year and 19% sequentially.


February 22, 2012  4:19 PM

Symantec set to hop on Hadoop, tackle ‘big data’

Dave Raffo

Symantec will jump into the “big data” market this year with an application designed to make Hadoop MapReduce more enterprise-ready.

Don Angspatt, VP of product management for Symantec’s storage and availability management group, said the vendor has a working prototype of the application and expects the product to ship in 2012. He won’t provide many details yet, but said the concept is similar to what EMC is doing with its integration of Isilon scale-out NAS and Greenplum analytics software. The difference is that the Symantec app will work across heterogeneous storage.

Last month, EMC gave its Isilon OneFS operating system native support for the Hadoop Distributed File System (HDFS) and released the EMC Greenplum HD on Isilon.

The idea is to remove limitations such as Hadoop's single point of failure (the HDFS NameNode) and its lack of shared storage capabilities, which prevent it from working well in enterprises.

Angspatt said the Hadoop product will be sold separately from Symantec’s Storage Foundation storage management suite, although it will work in a storage environment. He said the application will compete with Cloudera – which has a partnership with NetApp – and MapR Technologies software that EMC uses as part of its Greenplum HD.

“We want to make sure Hadoop and MapReduce work well in an enterprise environment,” Angspatt said. “We’re not going to do business intelligence. We’ll be involved in the infrastructure behind it to make sure it’s enterprise-ready. Our application will talk directly to Hadoop, similar to the EMC Greenplum-Isilon integration. But with us you don’t get locked into a specific hardware.”


February 21, 2012  9:07 AM

Organization structures need optimization

Randy Kerns

One of the most important parts of optimizing the data center is improving storage efficiency, and that requires more than implementing the latest technologies. In working with IT operations to develop strategies for increasing storage efficiency, I have found that organizational structure must also change in order to expedite data center optimization.

Like storage systems, IT organizations tend to get more complex over time. The complexity affects the decision-making process involving the storage architects/administrators and the business owners responsible for the applications and information. There may be layers of groups with varying responsibilities between the staff that needs to develop and implement the storage technologies, and those who truly understand the requirements.

Having one or two levels of filtering makes it much more difficult to understand the needs of the “customer,” who in this case is the business owner or their staff. Much of the technology optimization process involves understanding what is required, and that takes back-and-forth between the architects and the actual customer. Without that interaction and a base understanding of customer needs, the result is often a solution that fails to address key requirements or to plan for the changes that will occur.

Optimizing organizational structure is perceived to be a difficult task. The changes affect many influential people and groups. The need is obvious, but the complex organization structures that have developed over time may be deep-rooted and require a commitment and direction from the most senior levels in IT. Working towards data center optimization and improved storage efficiency is much more difficult without this commitment and direction. Often, the process brings compromises that reduce the project’s effective value.

Understanding the problems in working toward an optimized environment means understanding the organizational structure and its limitations. Any storage optimization strategy must take the structure into account along with technology and products. That means more work, but it is the current situation in many IT operations.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


February 17, 2012  5:26 PM

HDS enhances non-disruptive data migration to VSP

Sonia Lelii

Hitachi Data Systems Corp. has added a non-disruptive data migration capability to its flagship Virtual Storage Platform (VSP) that minimizes downtime for hosts and applications when moving data  to the VSP from other storage arrays.

Hitachi Nondisruptive Migration moves data from HDS’ older Universal Storage Platform (USP) to the VSP. The target array spoofs the host operating system so it thinks it’s talking to the original system even as data is moved to the target. That allows applications to continue running during migration.

Patrick Allaire, an HDS senior product marketing manager, said this capability is based on three components:  a new persistent virtual Logical Device Identity (LDEV ID) that is used to manage multiple hosts, a logical identity takeover function, and Hitachi’s Universal Volume Manager (UVM).

The virtual LDEV ID is embedded in the SCSI layer of the Fibre Channel protocol. A virtual LDEV ID  is created and mapped from the source to the target, then the takeover function spoofs the operating system.  Finally, the Hitachi UVM copies or migrates the volume from the source to the VSP target.

Typically, a physical LDEV ID is used for this process, but that makes it more manually intensive. “If you don’t have a virtual persistent ID, it forces you to do a shutdown of the application when changing the data path from the old storage system to the new one,” Allaire said.
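Conceptually, the sequence looks something like the following sketch. The object and method names are illustrative pseudocode, not Hitachi's actual management interface:

# Conceptual outline of the non-disruptive migration flow described above.
# All names are illustrative; they do not correspond to a real HDS API.

def nondisruptive_migrate(source_volume, target_array):
    # 1. Create a virtual LDEV ID on the target that mirrors the source
    #    volume's identity at the SCSI layer of the Fibre Channel protocol.
    virtual_id = target_array.create_virtual_ldev_id(source_volume.ldev_id)

    # 2. Logical identity takeover: the target presents the virtual ID, so the
    #    host operating system believes it is still talking to the original
    #    device and the data path can move without an application shutdown.
    target_array.takeover_identity(virtual_id)

    # 3. Universal Volume Manager copies or migrates the volume contents from
    #    the source array to the VSP target while applications keep running.
    target_array.uvm_migrate(source=source_volume, dest_ldev=virtual_id)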

With the new virtual LDEV ID and takeover function, Hitachi claims it can maintain quality-of-service levels for applications during migration. Hitachi now supports host clustering configurations by synchronizing persistent group reservations on source volumes and keeping the host cluster operational. It also supports parallel migration of up to eight source systems to one Hitachi VSP.


February 17, 2012  3:57 PM

Actifio banks on IBM giving it a PAS with service providers

Dave Raffo

IBM and Actifio struck up a partnership this week that the startup hopes will bring its Protection and Availability Storage (PAS) platform to the cloud, and that IBM sees as a way to fill data protection needs for service providers.

IBM and Actifio said they will offer bundles to cloud service providers and VARs. The packages combine Actifio’s PAS data protection with IBM DS3500 Express, Storwize V7000, XIV Gen3 and SAN Volume Controller (SVC) systems.

IBM has its own backup, replication, disaster recovery and data management products, so it’s unclear why it needs Actifio. But Mike McClurg, IBM VP of global midmarket sales, said Actifio provides one tool to handle all of those functions.

“We approach managed service providers from a business perspective,” he said. “How can a partnership with IBM grow their business? It’s challenging for managed service providers to find cost-effective data solutions; it requires cobbling together a lot of backup, replication, snapshot, and data management tools. Actifio is an elegant way of replacing a lot of technology and overlapping software products.”

Maybe the partnership is the beginning of a deeper relationship between the vendors. Actifio president Jim Sullivan is a former VP of worldwide sales for IBM System Storage. He maintains that the startup is keeping its partnership options open, but he is also counting on IBM to bring Actifio into deals the startup can’t land on its own.

“This is not an exclusive deal,” he said. “But we’re driving this with IBM. Showing up with service providers with IBM is a great opportunity for us to get reach and credibility.”


February 16, 2012  4:42 PM

HP goes all-flash with new LeftHand iSCSI system

Dave Raffo

Hewlett-Packard today quietly launched an all solid-state drive (SSD) version of its LeftHand iSCSI SAN array.

Unlike the server and services announcements HP made at its Global Partner Conference, HP made its storage news with little fanfare on a company blog.

The HP P4900 SSD Storage System has sixteen 400 GB multi-level cell (MLC) SAS SSDs – eight in each of the system’s two nodes. Each two-node system holds 6.4 TB, and customers can add 3.2 TB expansion nodes to scale to clusters of 102.4 TB. Expansion nodes increase the system’s IOPS as well as its capacity.
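The capacity figures check out with simple arithmetic (decimal terabytes, per HP’s numbers):

# P4900 capacity arithmetic using the published figures (decimal TB).
ssd_gb = 400
ssds_per_system = 16                  # eight per node, two nodes
base_tb = ssd_gb * ssds_per_system / 1000
print(base_tb)                        # 6.4 TB per two-node system

expansion_tb = 3.2                    # capacity added per expansion node
max_cluster_tb = 102.4
extra_nodes = (max_cluster_tb - base_tb) / expansion_tb
print(round(extra_nodes))             # 30 expansion nodes, i.e. 32 nodes in all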

The systems use the HP SMARTSSD Wear Gauge, firmware that monitors the SSDs and sends out alerts when a drive gets close to the end of its life. The monitoring firmware is part of the P4000 Management Console.
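The alerting concept itself is simple. Here is a hedged sketch of the idea; the thresholds and field names are assumptions for illustration, not HP's firmware logic:

from typing import Optional

WARN_AT = 0.80    # warn when 80% of rated write endurance is consumed (assumed threshold)
FAIL_AT = 0.95    # urge replacement at 95% (assumed threshold)

def check_wear(drive_id: str, wear_consumed: float) -> Optional[str]:
    """Return an alert message if a drive is approaching end of life.

    wear_consumed: fraction of the drive's rated write endurance already
    used, as reported by the drive's wear-level indicator.
    """
    if wear_consumed >= FAIL_AT:
        return f"{drive_id}: replace now ({wear_consumed:.0%} of endurance used)"
    if wear_consumed >= WARN_AT:
        return f"{drive_id}: nearing end of life ({wear_consumed:.0%} used)"
    return None

print(check_wear("node1-bay3", 0.87))  # node1-bay3: nearing end of life (87% used)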

HP claims the monitoring and scale-out architecture solve the major problems with solid-state storage arrays. “When it comes to SSDs in general, they are great for increasing IOPS and benefitting a business with lower power/cooling requirements,” P4000 product marketing manager Kate Davis wrote in the blog. “But the bad comes with unknown wear lifespan of the drive. And then it turns downright ugly when traditional dual-controller systems bottleneck the performance that was supposed to be the good part. … Other vendors must build towers of storage behind one or two controllers – LeftHand scales on and on.”

The large storage vendors offer SSDs in place of hard drives in their arrays, and there’s no reason they can’t ship a system with all flash. But the P4900 is the first dedicated all-flash system from a major vendor. Smaller vendors such as Nimbus Data, Pure Storage, SolidFire, Violin Memory, Whiptail and Texas Memory Systems have all-SSD storage systems.

A 6.4-TB P4900 costs $199,000. The expansion unit costs $105,000.


February 16, 2012  9:21 AM

NetApp developing server-side flash software

Dave Raffo

NetApp CEO Tom Georgens says he expects server-side flash to become a key part of the company’s flash strategy. However, NetApp will take a different approach than its rival EMC.

Asked about EMC’s VFCache product during NetApp’s earnings call Wednesday, Georgens said server-side flash is “a sure thing,” but NetApp will focus on data management software that works with PCIe cards instead of selling the cards. He doesn’t rule out selling cards either, though.

“I don’t think the opportunity is simply selling cards into the host, although we may do that,” he said. “But our real goal is we’re going to bring the data that’s stored in flash on the host into our data management methodology for backup, replication, deduplication and all of those things. It isn’t as simple as we’re going to make a PCI flash card. Our focus this year is the software component and bringing that into our broader data management capability.”

With VFCache, EMC sells PCIe cards from Micron or LSI along with the storage vendor’s management software. NetApp appears intent on selling software that will work with any PCIe cards – or at least the most popular ones. The question is whether software that supports many cards can be integrated as tightly as software focused on one or two.

Georgens said NetApp was correct all along in its contention that using flash as cache is more effective than replacing an array's hard drives with solid-state drives (SSDs). NetApp’s Flash Cache card goes into the array to accelerate performance. It is included on all FAS6000 systems and is an option on NetApp’s other FAS systems. NetApp does offer SSDs in the array, but recommends flash as cache.

“Flash is going to be pervasive,” Georgens said. “I think you’re going to see it everywhere in the infrastructure. Our position all along has been that flash as a cache is where it has the most impact. And I would say that we actually see probably more pervasive deployment of flash in our systems than anybody else in the industry.”

On the hard drive front, Georgens said the impact of shortages caused by floods in Thailand wasn’t as bad as anticipated last quarter, although it will take another six to nine months before the “uncertainty” lifts.

“While drive vendors had little forward delivery visibility, most of the disk drives shipped in excess of initial estimates,” Georgens said. “However, not all drive types were universally available and some spot shortages impacted revenue and will likely do so in the upcoming quarter as well. … We expect the drive situation to continue to inject uncertainty into the revenue for the next nine months as availability, cost and pricing settle out in the market.”

