Storage Soup


March 5, 2012  4:34 PM

Storage sales growth slows, perhaps because of cloud, dedupe

Dave Raffo

Although storage systems revenue grew during the fourth quarter of 2011 and for the entire year, that growth slowed compared to previous periods.

According to IDC’s worldwide quarterly disk storage systems tracker, external disk systems (networked storage) increased 7.7% year-over-year to $6.6 billion in the last quarter of 2011. That compares to 16.2% year-over-year growth in the fourth quarter of 2010, and 10.8% growth in the third quarter of 2011.

For the full year, external disk storage revenue increased 10.6% in 2011 compared to 18.3% growth in 2010.

The rate of growth slowed across categories that IDC tracks – open SAN, NAS and iSCSI. SAN storage revenue grew 14.1% and iSCSI storage revenue increased 16.6% year-over-year for the fourth quarter of 2011, while NAS disk storage revenue actually declined by 1.2%. In the fourth quarter of 2010, SAN revenue grew 15.1%, iSCSI increased 42.1% and NAS grew 21.7%.

IDC senior research analyst Amita Potnis said growth slowed after picking up sharply in 2010 as the industry moved out of the recession. That significant increase in late 2010 made for difficult comparisons in 2011. Also, the cloud and storage efficiency technologies probably tempered sales. She said the hard drive shortage caused by the Thailand floods had little impact during the end of 2011, but is hurting sales this year.

“Two technology trends have had a big impact on the market,” she said. “If a storage system is sold to cloud service providers, we count it. But a significant amount of cloud storage capacity does not come from external storage system purchases, so that has an impact. Also, storage efficiency technologies such as deduplication, compression, virtualization and thin provisioning have a significant impact on the market. End users can adjust their buying strategies and use what they have more efficiently.”

Potnis said solid-state drives (SSDs) remain below 10% of external storage system revenue.

NAS revenue declined despite research showing file storage is outpacing block storage growth. Potnis said NAS grew more than 40% in every quarter of 2010 and that growth rate was difficult to sustain in 2011.

“Also, the types of data on NAS devices – backup and archive – are main candidates for data deduplication and transfer to the cloud,” she said. “So the impact from cloud and storage efficiency is greater on NAS than block data. But we expect file data will continue to grow faster than application or block storage.”

IDC also reported the high-end segment – systems selling for $250,000 and up – had the highest growth rate of all price segments. High-end storage system revenue grew 30% in the fourth quarter of 2011, compared to 28.2% growth in the fourth quarter of 2010.

EMC extended its lead in external storage market share during the fourth quarter of 2011, growing 22.4% — nearly triple the overall growth rate. EMC’s revenue share was 29.4%, compared to 25.9% in the fourth quarter of 2010. IBM retained second place with flat revenue year-over-year, but its share dropped from 16.4% a year earlier to 15.2%. NetApp edged slightly ahead of Hewlett-Packard into third place, although IDC lists them as tied because they are less than one percentage point apart. NetApp grew 16.6% in the quarter for an 11.2% market share. HP revenue fell 3.8% in the quarter for 10.3% share. In the fourth quarter of 2010, HP had 11.6% share with NetApp at 10.3%. Hitachi Data Systems (HDS) was fifth last quarter, growing 11.6% for 9.2% share.

For the entire year, EMC increased external disk systems revenue 23.6% and grew its market share three percentage points in 2011 to 28.5%. IBM grew 8.9% over the year and stands second with 13.5%. NetApp grew the most for 2011 following its acquisition of LSI’s Engenio storage business, increasing revenue 23.7% to take 12.4% market share. HP grew 7.7% and slipped from a statistical tie with NetApp in 2010 to fourth with 10.7% in 2011. HDS grew 18.8% for the year and held 8.8% of the market. HDS overtook Dell for fifth for the year, as Dell’s revenue tumbled following the end of its OEM partnership with EMC.

March 1, 2012  5:03 PM

Storage users spared downtime from Microsoft Azure crash

Dave Raffo

The good news for Microsoft Windows Azure cloud storage customers was found in the last sentence of the third paragraph of the blog update about its “Leap Year outage” Wednesday:

“Windows Azure Storage was not impacted by this issue.”

That doesn’t mean cloud storage won’t be impacted in the future, though. A high-profile cloud outage will have people thinking twice about moving important data to the cloud.

“Every time one of these things happens, the umbrella of the cloud gets tarnished,” said Andres Rodriguez, CEO of cloud NAS vendor Nasuni. “It hurts. Our customers know what they have, it’s the prospects that I’m worried about. Our sales guys get many more questions in the field because of it.”

Nasuni stores its customers’ data on Azure and Amazon S3 clouds. Amazon’s compute cloud, you may remember, had two outages last year. Cloud outages are one reason Nasuni bills its hardware and software NAS appliances as storage service systems, not cloud devices. Rodriguez said Nasuni treats the cloud as a hard drive, but uses the same architecture as mainstream storage vendors. And he wishes cloud providers would treat storage and compute as separate entities, just as data centers do.

“This would not happen if people separated compute and services in the cloud,” he said. “Compute and storage are totally different things in the data center, and people somehow bundle them in the cloud. They’re not bundleable. They’re two different systems with different characteristics. Azure did not have any issues in its storage layer. The storage piece of Azure has been highly available for the last 48 hours.”

Microsoft said the Azure issue was resolved a little after 1 PM ET today.


February 29, 2012  5:55 PM

Amplidata gets funding from Intel, others for object storage

Dave Raffo

Object-storage startup Amplidata picked up $8 million in funding and a new strategic investor today.

New investor Intel Capital joins previous Amplidata investors Swisscom Ventures, Big Bang Ventures and Endeavour Vision to bring the vendor’s total funding to $14 million. Amplidata CEO Wim De Wispelaere said the funding will be used to beef up sales and marketing for the AmpliStor Optimized Object Storage system that has been making its way into cloud and “big data” implementations.

The vendor is headquartered in Belgium and also has an office in Redwood City, Calif. Most of its early customers are in Europe, so you can expect to see a big marketing push in the U.S. now.

Object storage is considered one of the hottest emerging technologies and is used for dealing with large data stores. AmpliStor features an erasure coding technology called BitSpread that stores data redundantly across a large number of disks, and its BitDynamics technology handles data integrity verification, self-monitoring and automatic data healing.
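
Amplidata hasn't published BitSpread's internals here, but the general idea behind erasure coding can be shown with a toy example. The sketch below uses simple XOR parity, so the loss of any single fragment can be repaired; production systems such as AmpliStor use stronger codes that survive multiple simultaneous disk or node failures, and everything below is illustrative rather than Amplidata's implementation.

```python
# Toy sketch of erasure coding (not Amplidata's BitSpread): split an object
# into k fragments plus one XOR parity fragment, spread them across disks,
# and rebuild any single lost fragment from the survivors.

def encode(data: bytes, k: int = 4) -> list:
    """Return k data fragments plus one parity fragment."""
    data += bytes((-len(data)) % k)          # pad so the object splits evenly
    size = len(data) // k
    fragments = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = bytearray(size)
    for frag in fragments:
        for i, byte in enumerate(frag):
            parity[i] ^= byte
    return fragments + [bytes(parity)]

def rebuild(fragments: list) -> list:
    """Recover the single missing fragment (marked None) by XOR-ing the rest."""
    missing = fragments.index(None)
    size = len(next(f for f in fragments if f is not None))
    recovered = bytearray(size)
    for frag in fragments:
        if frag is not None:
            for i, byte in enumerate(frag):
                recovered[i] ^= byte
    fragments[missing] = bytes(recovered)
    return fragments

shards = encode(b"object data spread across many disks")
shards[2] = None                              # simulate one failed disk
shards = rebuild(shards)                      # the object is intact again
```

The toy only shows why losing a fragment does not lose the object; spreading those fragments across many drives and nodes, and verifying and healing them continuously, is where products like BitSpread and BitDynamics do the real work.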

De Wispelaere said Amplidata’s customers generally fall into two use case categories that require scalable storage. “The first use case is what we call online applications,” he said. “Customers have written their own application and need to scale out their storage to store photos, videos or files. Another big market is media and entertainment. We’re used as a nearline archive for a postproduction system, so data is readily available whenever it’s needed.”

Amplidata faces stiff competition in both areas. For the cloud, it’s going against startups such as Scality, Cleversafe and Mezeo as well as established players Hitachi Data Systems HCAP, EMC Atmos and Dell DX, and API-based service providers such as RackSpace OpenStack and Amazon S3. In scale-out storage, its competition includes EMC Isilon, HDS BlueArc, and NetApp.

Amplidata is part of Intel’s Cloud Builders alliance, and last fall demonstrated its system at the Intel Developer Forum. That relationship – and Intel’s investment – should ensure that Amplidata will be kept current on the Intel roadmap.

It’s possible that Amplidata is benefitting from its relationship with Swisscom as well. Swisscom offers cloud services, but De Wispelaere could not say if it uses Amplidata storage. “I have a strict NDA with Swisscom,” he said.


February 28, 2012  9:22 AM

Data protection in transition

Randy Kerns

The increase in data capacity demand makes it difficult for IT organizations to continue with existing data protection practices. Many organizations have realized their protection methods are unsustainable, mainly because of increased capacity demand and budget limitations.

The increase in capacity demand comes from many sources. These include business expansion, the need to retain more information for longer periods of time, data types such as rich media that are more voluminous than in the past, and an avalanche of machine-to-machine data used in big data analytics.

The data increase requires more storage systems, which are usually funded through capital expense. Often these are paid for as part of a project with one-time project funds.

The increase in data also changes the backup process. The amount of time required to protect the information may extend beyond what is practical from a business operations standpoint. The amount of data to protect may require more backup systems than can physically be accommodated.

It is common for new projects to budget for the capital expenses required. Unfortunately, the budgeted increase in operational expenses is rarely enough to cover the data protection impact. The administration expenses from increased time spent by staff handling the data can be estimated, but they are difficult to add to the project budget because they are ongoing rather than one-time expenses.

Unexpected data growth can exceed capacity-based licensing thresholds and turn into an unpleasant budget-buster. Even expenses related to external resources such as disaster recovery copies of information may ratchet up past thresholds.

There are new approaches to data protection, but there is usually not enough funding available to implement them. Changing procedures in IT is also difficult because of the training required and the amount of risk that is introduced.

Vendors see the opportunities, and address them with approaches that make the most economic sense for them. The most common approach is to enhance existing products, improving their speed and effective capability. Another vendor approach is to introduce new data protection appliances combining software and hardware to simplify operations. Whether these are long-term solutions or merely incremental improvements depends on the specific environment.

Another approach evolving with vendors is to include data protection as an integral part of a storage system. This involves adding a set of policy controls for protection and data movers for automated data protection. These come in the form of block storage systems with the ability to selectively replicate delta changes to volumes and in network attached storage systems that can migrate or copy data based on rules to another storage system. Implementing this type of protection requires software to manage recovery and retention of the protected data.
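
As a purely illustrative sketch (the field names, volumes and targets below are invented and do not correspond to any particular vendor's interface), this kind of built-in protection amounts to a policy table that the array's data movers act on:

```python
# Hypothetical policy table for array-based data protection; names are invented.
# A block volume replicates delta changes to a DR array on a schedule, while a
# NAS share copies rule-matched files to a secondary system. Retention is part
# of the policy so recovery software knows how long protected copies are kept.
protection_policies = [
    {"source": "vol_finance_db",   "kind": "block",
     "action": "replicate_deltas", "target": "dr_array_01",
     "interval_minutes": 15,       "retention_days": 35},
    {"source": "/shares/projects", "kind": "nas",
     "action": "copy_if_modified", "target": "secondary_nas",
     "interval_minutes": 240,      "retention_days": 365},
]

class DataMover:
    """Stand-in for the array's data mover; a real system moves blocks or files."""
    def protect(self, source, action, target, retention_days):
        print(f"{action}: {source} -> {target} (keep {retention_days} days)")

def run_cycle(policies, data_mover):
    """Hand each policy to the system's data mover on its schedule."""
    for policy in policies:
        data_mover.protect(policy["source"], policy["action"],
                           policy["target"], policy["retention_days"])

run_cycle(protection_policies, DataMover())
```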

A change must be made to continue the IT mandate for protecting information. However, the fundamental problem with data protection addressing capacity demand is economics. For most IT operations, the solution cannot represent a major investment and it must be administratively cost-neutral to a great extent. Current data protection solutions that meet those requirements are hard to find.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


February 24, 2012  5:23 PM

Backup vendors take cloud cover

Dave Raffo

Backup vendors are paying especially close attention to the cloud these days. Asigra, Imation and CA launched backup products and services this week with various degrees of cloud connectivity.

Asigra has sold backup software to service and cloud providers for years. With its Cloud Backup 11.2 software, it is adding the NetApp-Asigra Data Protection as a Service (DPaaS) bundle for providers.

The Asigra-NetApp deal is a meet-in-the-channel relationship that Asigra director of strategic alliances Doug Ko said can help service providers and telcos get cloud backup services up and running faster. Ko said engineers from the vendors have worked together to ensure compatibility. Cloud Backup 11.2 supports deeper snapshot integration with NetApp arrays.

Asigra also added support for VMware vSphere 5.0, Apple iOS 5 and Google Android 4.0 to beef up virtual machine and mobile device protection in version 11.2.

Imation launched two new models of a DataGuard SMB backup appliance that use standard hard disk drives and RDX removable hard drives. Built-in replication lets a DataGuard appliance serve as a gateway to the cloud.

Imation says the DataGuard R4 and DataGuard T5R appliances are compatible with cloud storage APIs, including those from Amazon S3, Dropbox and OpenStack-based cloud providers. Imation doesn’t have deduplication in its DataGuard software yet, so customers need to use seed units for initial backups. After that, customers can set the appliances to automatically replicate data to the cloud.
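
As a rough present-day sketch of what replication to an S3-compatible target looks like programmatically (this is not Imation's DataGuard code; the endpoint, bucket and file names are placeholders), an appliance-side job might do little more than the following:

```python
# Sketch of pushing a local backup image to an S3-compatible cloud target.
# Not Imation's implementation; requires the boto3 library and valid
# credentials, and the endpoint, bucket and key names are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-cloud-provider.com",  # any S3-compatible API
)

def replicate_backup(local_path: str, bucket: str, key: str) -> None:
    """Upload one backup image; boto3 handles multipart transfer for large files."""
    s3.upload_file(local_path, bucket, key)

replicate_backup("/backups/2012-02-24-full.img",
                 "offsite-backups",
                 "dataguard/2012-02-24-full.img")
```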

Bill Schilling, director of scalable storage marketing for Imation, said the early target market is Imation’s SMB tape customers. “That’s the low-hanging fruit,” he said. “And maybe SMBs that aren’t backing up or need to be backing up better.”

Pricing ranges from $2,000 to $5,000 for the new DataGuard appliances.

CA launched ARCserve D2D On Demand with a connector to Microsoft’s Windows Azure cloud service. ARCserve D2D On Demand lets SMBs or cloud service providers keep data on premises while using Azure for backup and bare metal restore. A subscription includes 25 GB of Windows Azure cloud storage per protected machine.


February 24, 2012  10:48 AM

Dell expands its backup plan with AppAssure

Dave Raffo

Dell made its fifth storage acquisition in a little over four years today when it acquired AppAssure and its backup and replication software for virtual and physical machines.

Dell did not disclose the price, which obviously wasn’t close to what the vendor paid for EqualLogic ($1.4 billion) in 2007 or Compellent ($820 million) last year. Dell also acquired the IP of scalable NAS vendor Exanet and primary data reduction software specialist Ocarina in 2010.

In a blog explaining the deal, Dell storage GM Darren Thomas wrote that Dell acquired AppAssure to “help Dell customers modernize their data protection strategies …”

AppAssure’s Backup and Replication is application-aware with continuous data protection (CDP) technology that facilitates quick recovery of a file or an application. Thomas wrote that Dell will sell AppAssure as standalone software and eventually expand its IP into Dell storage products.

The software could help link Dell’s EqualLogic and Compellent SANs. Thomas pointed out that customers can use it to back up an EqualLogic array to a Compellent system and recover it from anywhere. We expect further connection between the platforms as Dell completes its integration of Exanet’s file system, Ocarina’s deduplication and compression, and AppAssure data protection with its storage arrays.

There had been a lot of speculation that Dell’s storage acquisition strategy would lead it to buy a backup vendor. CommVault, a close Dell partner, was often mentioned as a possible target. But CommVault would cost billions of dollars. Dell is attacking backup in a different way, integrating AppAssure’s technology with its hardware and using Ocarina software in a recently launched DR4000 disk backup appliance.

We’ll be watching to see how the acquisition affects Dell’s relationships with CommVault and Symantec. Dell resells both vendors’ backup software in bundles with its hardware. As a standalone company, AppAssure wasn’t considered much of a threat to Symantec or CommVault. But that could change now that AppAssure is part of Dell. AppAssure gives Dell its own data protection IP to go with its new dedupe backup appliance. Dell ended its long-time OEM relationship with EMC so it could make more money selling its own storage arrays. It looks like Dell will move in the same direction with backup.


February 23, 2012  9:00 AM

For HP storage, there’s 3PAR and subpar

Dave Raffo

Despite good results with its 3PAR SAN arrays, Hewlett-Packard (HP)’s storage sales are sinking.

HP’s earnings report Wednesday was filled with poor results, and storage was no exception. HP said its 3PAR sales nearly doubled but the rest of the business stumbled, especially the midrange EVA platform. HP’s overall storage revenue was $955 million in the quarter, down 6% from a year ago and nearly 12% from the previous quarter.

HP acquired 3PAR for $2.35 billion in September 2010 after winning a bidding war against Dell. 3PAR is HP’s flagship storage product, but it will take more than just 3PAR to turn around HP’s storage business.

HP’s 6% drop compares to a 12% revenue increase for the overall storage industry in the fourth quarter of 2011, according to Aaron Rakers, the enterprise hardware analyst for Stifel Nicolaus Equity Research. In a research note published today, Rakers wrote that EMC’s fourth-quarter results were up 10% year-over-year (5% without Isilon numbers) and 14% from the previous quarter. NetApp revenue increased 26% year-over-year (4% without Engenio) and Hitachi Data Systems revenue grew 15% year-over-year and 13% sequentially.

Dell last week reported storage revenue of $500 million. That was down 13% from last year, reflecting the loss of EMC storage that Dell used to sell through an OEM deal. Revenue from Dell-owned storage increased 33% from last year and 19% sequentially.


February 22, 2012  4:19 PM

Symantec set to hop on Hadoop, tackle ‘big data’

Dave Raffo

Symantec will jump into the “big data” market this year with an application designed to make Hadoop MapReduce more enterprise-ready.

Don Angspatt, VP of product management for Symantec’s storage and availability management group, said the vendor has a prototype of the application working and he expects the product to ship in 2012. He won’t provide many details yet, but said the concept is similar to what EMC is doing with its integration of Isilon scale-out NAS and Greenplum analytics file system. The difference is that the Symantec app will work across heterogeneous storage.

Last month, EMC gave its Isilon OneFS operating system native support for the Hadoop Distributed File System (HDFS) and released the EMC Greenplum HD on Isilon.

The idea is to remove limitations such as a single point of failure and lack of shared storage capabilities that prevent Hadoop from working well in enterprises.

Angspatt said the Hadoop product will be sold separately from Symantec’s Storage Foundation storage management suite, although it will work in a storage environment. He said the application will compete with Cloudera – which has a partnership with NetApp – and MapR Technologies software that EMC uses as part of its Greenplum HD.

“We want to make sure Hadoop and MapReduce work well in an enterprise environment,” Angspatt said. “We’re not going to do business intelligence. We’ll be involved in the infrastructure behind it to make sure it’s enterprise-ready. Our application will talk directly to Hadoop, similar to the EMC Greenplum-Isilon integration. But with us you don’t get locked into a specific hardware.”


February 21, 2012  9:07 AM

Organization structures need optimization

Randy Kerns

One of the most important parts of optimizing the data center involves improving storage efficiency. And that requires more than implementing the latest technologies. While working with IT operations in developing strategies to increase storage efficiency, it has become clear to me that organizational structure must change in order to expedite data center optimization.

Like storage systems, IT organizations tend to get more complex over time. The complexity affects the decision-making process involving the storage architects/administrators and the business owners responsible for the applications and information. There may be layers of groups with varying responsibilities between the staff that needs to develop and implement the storage technologies, and those who truly understand the requirements.

Having one or two levels of filtering makes it much more difficult to understand the needs of the “customer,” who in this case are the business owners or their staffs. Much of the technology optimization process involves understanding what is required and includes interplay between the architects and the actual customer. The lack of that interaction and base understanding of customer needs often results in a solution that fails to address key needs and to plan for the changes that will occur.

Optimizing organizational structure is perceived to be a difficult task. The changes affect many influential people and groups. The need is obvious, but the complex organization structures that have developed over time may be deep-rooted and require a commitment and direction from the most senior levels in IT. Working towards data center optimization and improved storage efficiency is much more difficult without this commitment and direction. Often, the process brings compromises that reduce the project’s effective value.

Understanding the problems in working toward an optimized environment means understanding the organization structure and its limitations. Any storage optimization strategy must take into account the structure as well as technology and products. This brings about more work, but it is the current situation in many IT operations.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


February 17, 2012  5:26 PM

HDS enhances non-disruptive data migration to VSP

Sonia Lelii

Hitachi Data Systems Corp. has added a non-disruptive data migration capability to its flagship Virtual Storage Platform (VSP) that minimizes downtime for hosts and applications when moving data to the VSP from other storage arrays.

Hitachi Nondisruptive Migration moves data from HDS’ older Universal Storage Platform (USP) to the VSP. The target array spoofs the host operating system so it thinks it’s talking to the original system even as data is moved to the target. That allows applications to continue running during migration.

Patrick Allaire, an HDS senior product marketing manager, said this capability is based on three components: a new persistent virtual Logical Device Identity (LDEV ID) that is used to manage multiple hosts, a logical identity takeover function, and Hitachi’s Universal Volume Manager (UVM).

The virtual LDEV ID is embedded in the SCSI layer of the Fibre Channel protocol. A virtual LDEV ID is created and mapped from the source to the target, then the takeover function spoofs the operating system. Finally, the Hitachi UVM copies or migrates the volume from the source to the VSP target.

Typically, a physical LDEV ID is used for this process, but that makes it more manually intensive. “If you don’t have a virtual persistent ID, it forces you to do a shutdown of the application when changing the data path from the old storage system to the new one,” Allaire said.

With the new virtual LDEV ID and takeover function, Hitachi claims to maintain the quality of service levels for applications. Hitachi now supports host clustering configurations by synchronizing persistent group reserves on source volumes and keeping the host cluster operational. It also supports parallel migration of up to eight source systems to one Hitachi VSP.
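
Put as a sequence (a hypothetical sketch only; the object model and function names below are invented and do not correspond to Hitachi's actual management interface), the flow described above orders the three components like this:

```python
# Hypothetical outline of the migration flow described above; method names are
# invented for illustration and are not Hitachi's management API.

def nondisruptive_migrate(source_volume, vsp):
    # 1. Give the target a persistent virtual LDEV ID that mirrors the source
    #    volume's identity, so hosts keep seeing the same SCSI-level identifier.
    virtual_id = vsp.create_virtual_ldev(identity_of=source_volume)

    # 2. Identity takeover: the VSP answers for that ID, so hosts can move their
    #    data paths to the new array without shutting down applications.
    vsp.take_over_identity(virtual_id)

    # 3. Universal Volume Manager copies or migrates the volume behind the
    #    scenes while I/O continues against the virtualized volume.
    vsp.uvm_migrate(source=source_volume, destination=virtual_id)

    return virtual_id
```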

