Storage Soup


March 20, 2012  2:57 PM

Which storage cloud is fastest?

Dave Raffo

Do you ever wonder how long it would take to move a dozen terabytes from one cloud provider to another, or even between two accounts in the same cloud?

Probably not, if you’re sane. But maybe you do if you have data in the cloud and think you might want to switch one day for performance or pricing reasons. And you definitely do if you’re a cloud storage vendor that promises service levels that might require non-disruptive cloud-to-cloud migration.

Nasuni fits in that last category, so the vendor extensively tested what it considers the top three cloud providers, based on the stress testing it conducted last year. The latest results appear in its Bulk Data Migration in the Cloud report, issued today.

In case you were wondering, here’s how long Nasuni estimates it would take to migrate a 12 TB volume:

• Amazon S3 to another Amazon S3 bucket: Four hours
• Amazon S3 to Microsoft Windows Azure: 40 hours
• Amazon S3 to Rackspace: Just under one week
• Microsoft Windows Azure to Amazon S3: Four hours
• Rackspace to Amazon S3: Five hours
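
For a rough sense of what those times imply, the sustained throughput can be back-calculated from the volume size. The figures below are derived from the published estimates above (treating “just under one week” as seven days); they are not taken from Nasuni’s report itself:

```python
# Back-of-the-envelope throughput implied by Nasuni's 12 TB migration
# estimates. Derived from the times above, not from the report's raw data.

TB = 10**12                 # decimal terabyte, in bytes
volume_bytes = 12 * TB

estimates_hours = {
    "S3 -> another S3 bucket": 4,
    "S3 -> Windows Azure":     40,
    "S3 -> Rackspace":         7 * 24,   # "just under one week"
    "Azure -> S3":             4,
    "Rackspace -> S3":         5,
}

for path, hours in estimates_hours.items():
    gbits_per_sec = volume_bytes * 8 / (hours * 3600) / 10**9
    print(f"{path}: ~{gbits_per_sec:.1f} Gbit/s sustained")
```

The asymmetry (roughly 6.7 Gbit/s into S3 versus about 0.2 Gbit/s into Rackspace) matches Rodriguez’s point below that write capability is where the providers differ most.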

Nasuni CEO Andres Rodriguez said transmission speeds vary depending on the time of day, but the biggest difference lies in the cloud providers’ write capabilities; S3’s superior writes gave it by far the best transfer times.

Nasuni determines the best back-end cloud for its customers, and usually selects S3 with Azure as the second choice. Nasuni’s competitors sell storage appliances and let customers pick their cloud provider, but Rodriguez said Nasuni picks the cloud provider to meet its SLAs.

“Our enterprise customers using storage in their data centers let Nasuni be the one to move data,” he said. “All customers want from Nasuni is storage service. They don’t care about which cloud it’s on unless they want data in a specific geographic location. But that’s a location issue, not a provider issue.”

That means Nasuni customers can’t decide to switch providers based on pricing changes, but Rodriguez said he doesn’t recommend that practice.

“This is not an operation you want to be doing dynamically daily so you can save a few cents here and there,” he said. “You do it to take advantage of better features and performance.”

March 19, 2012  7:57 AM

TCO vs. ROI: Remember transition costs

Randy Kerns

While talking to value added resellers (VARs) recently about selling storage systems, I noticed their presentations about vendor products featured return on investment (ROI) calculations.

These ROI calculations focused on cost of the solution, savings in maintenance, floor space, power, and cooling, performance gains that enabled business expansion or consolidation, and savings in day-to-day administration.

But limiting the economic view of buying new storage technology to ROI does not capture the true financial impact of the transition. Investment in technology also has a time element that must be considered. A specific technology has a lifespan dictated by outside factors such as warranty periods (and associated service costs), technology replacement and the subsequent unavailability of the earlier technology.

For these reasons, information technology professionals generally focus on total cost of ownership (TCO) when evaluating storage. TCO includes the time element and the transition costs. Notable factors included in TCO calculations include product costs divided by the number of years the product will be in service, data migration costs and operational and administrative costs over the lifespan of the product.

TCO gives a more accurate picture when evaluating a technology deployment. For example, one product may cost less than another, but transition costs may actually make it the more expensive choice. The lifespan is a big factor: if the lifespan is relatively short, basing a decision on product cost alone becomes risky.
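
To make the point concrete, here is a minimal annualized-TCO sketch. All of the prices, migration costs and lifespans below are hypothetical placeholders, not figures from any vendor; the sketch only shows how a lower sticker price can lose once transition costs are spread over a short service life:

```python
# Minimal annualized-TCO sketch with hypothetical numbers: a cheaper product
# can still carry the higher total cost once transition (data migration)
# and operating expenses are spread over its expected lifespan.

def annual_tco(product_cost, migration_cost, annual_opex, lifespan_years):
    """Total cost of ownership per year of service."""
    return (product_cost + migration_cost) / lifespan_years + annual_opex

# System A: lower sticker price, costly migration, short lifespan.
a = annual_tco(product_cost=200_000, migration_cost=80_000,
               annual_opex=40_000, lifespan_years=3)

# System B: higher sticker price, cheap built-in migration, longer lifespan.
b = annual_tco(product_cost=260_000, migration_cost=10_000,
               annual_opex=35_000, lifespan_years=5)

print(f"System A: ${a:,.0f} per year")  # ~$133,333 per year
print(f"System B: ${b:,.0f} per year")  # ~$89,000 per year
```

System A wins on product cost alone, but System B wins on TCO; that reversal is exactly why transition costs and lifespan belong in the evaluation.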

Vendors recently began trying to address transition cost in storage systems by adding a built-in capability to non-disruptively migrate data to the new technology system. This has become a differentiating characteristic of primary disk storage systems but its impact is limited in archiving and non-existent in tape systems.

Evaluating storage technology must go beyond the simple cost of the solution and a purely economic measure such as ROI. A more detailed evaluation that weighs additional factors is required. Looking at the past rate of technology change can be a good predictor when assigning a longevity expectation. TCO, with the correct elements included, can reflect the real cost differences between storage technologies and assist in making a more informed decision.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


March 14, 2012  8:43 AM

Dell offers deals for EMC, NetApp customers

Dave Raffo

Dell is trying to bolster flagging storage sales with a trade-in program that offers cash credits and improved lease terms to EMC and NetApp customers.

The Dell Storage Swap program launched today promises price breaks for organizations willing to retire EMC VNX, Clariion and Celerra and NetApp FAS arrays to move to Dell Compellent and EqualLogic storage before July 31. Dell pledges “specialized migration services,” support and “other financial incentives” for customers who switch.

When a vendor offers such a swap program, you can bet sales are not strong. That is the case with Dell, which has lost market share since ending its OEM relationship with EMC after acquiring Compellent last year. The Compellent deal followed Dell’s 2008 purchase of EqualLogic. Now Dell is banking on customers switching for the incentives and becoming happy enough with the results to stay with Dell long-term.

Even if Dell’s formal swap program is new, the strategy isn’t.

Christopher Patti, director of technology for AccuWeather, said Dell gave him a good enough price to switch from EMC Clariion and Hewlett-Packard EVA storage to EqualLogic in 2008. That was mere months after Dell bought EqualLogic, while Dell was still an EMC OEM partner. Patti said upgrading his older Fibre Channel arrays from EMC and HP would have cost in the high six figures. He bought two EqualLogic iSCSI SANs and has since added four more EqualLogic arrays.

“The Clariions and EVAs were ridiculously expensive, especially with day-to-day maintenance, upgrades and extra costs for replication, snapshotting and other things here and there,” Patti said. “Dell gave us a good price point.”

He said he also likes that EqualLogic data protection and management features are part of the base price and not add-on licenses. “Dell’s software makes it easy to manage the array, see where bottlenecks are and know when you have to purchase additional capacity,” he said.

Server vendors Dell, HP and IBM are losing share in the storage market to pure-play storage vendors EMC, NetApp and Hitachi Data Systems (HDS). According to Gartner’s worldwide external disk storage revenue report released this week, Dell’s storage revenue dipped 0.2% from 2010 to 2011 while the market grew 9.8%. Dell’s market share slipped from 8.2% to 7.4% during the year and it stands sixth behind EMC, IBM, NetApp, HP and HDS.

The targets of Dell’s trade-in program, EMC and NetApp, ranked first and second in revenue growth for last year.


March 12, 2012  9:56 AM

Amazon, Google, Microsoft slash storage cloud prices

Dave Raffo

When Microsoft Windows Azure dropped pricing for its cloud service last Friday, it marked the third cloud price cut of the week. Google and Amazon also dropped prices for storing data on their clouds earlier in the week.

All of this price slashing shows these companies are serious about getting enterprise data into their clouds. But SearchCloudStorage.com assistant site editor Rachel Kossman reports that customers need to do more than just look at a provider’s published price list when cloud-shopping. They need to match their use cases to the way the providers price specific transactions, or they could be in for a surprise when the bill comes.
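
As a simple illustration of why the published per-GB rate alone can mislead, a realistic estimate has to combine storage, request and egress charges. The rates in this sketch are placeholder assumptions, not any provider’s actual price list:

```python
# Illustrative monthly-bill estimate. All rates are placeholder assumptions,
# not any provider's actual prices; the point is that request and egress
# charges can dominate some use cases even when per-GB storage looks cheap.

def monthly_bill(gb_stored, put_requests, get_requests, gb_egress,
                 rate_storage=0.10,       # $ per GB-month stored
                 rate_put=0.01 / 1000,    # $ per PUT request
                 rate_get=0.001 / 1000,   # $ per GET request
                 rate_egress=0.12):       # $ per GB transferred out
    return (gb_stored * rate_storage
            + put_requests * rate_put
            + get_requests * rate_get
            + gb_egress * rate_egress)

# Archive use case: lots of data at rest, few requests, little egress.
print(f"Archive:  ${monthly_bill(10_000, 1e4, 1e5, 50):,.2f}")
# Web-serving use case: modest data, heavy reads and egress.
print(f"Web tier: ${monthly_bill(500, 1e6, 5e8, 8_000):,.2f}")
```

In the second case most of the bill comes from GET requests and egress rather than stored capacity, which is exactly the kind of surprise the story warns about.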

Check out her story for details on all the new prices and more tips on getting the best price from cloud storage providers.


March 9, 2012  8:33 AM

Western Digital, Hitachi GST make it official

Dave Raffo

Western Digital’s $4 billion-plus acquisition of Hitachi Global Storage Technologies (HGST) officially closed today – a year and two days after the hard drive vendors first declared their intention to merge.

Western Digital is paying $3.9 billion in cash and 25 million shares of its common stock currently valued at $900 million for HGST, the world’s second largest enterprise drive vendor.

The deal had to clear regulatory hurdles around the world because it makes the combined company the largest hard drive vendor, with 47% of the market, surpassing Seagate’s 32% share.

HGST owns 9% of the enterprise market compared to Seagate’s 56%. Western Digital has only 1% of the enterprise market and 30% of the overall hard drive market without HGST.

Western Digital said it will operate HGST as a subsidiary, and it will maintain the HGST brand and separate product lines.

HGST’s enterprise products include solid-state drives (SSDs), and it scored a big win this week when it revealed that EMC is shipping Hitachi Ultrastar SSD400S single-level cell (SLC) 2.5-inch SAS SSDs in its VNX unified storage arrays.

Mitch Abbey, HGST’s senior enterprise product line manager, said he expects more SSD qualifications with EMC and other storage vendors. He also said HGST has cheaper multi-level cell (MLC) drives in the works and is reviewing the PCIe market for server-based flash to determine if it’s worth putting out that type of product.


March 7, 2012  11:04 AM

Risks from IT changes are real

Randy Kerns

IT departments show great hesitancy about data center optimization initiatives because of the risks involved. IT pros are aware of the problems that can occur, and almost everyone has painful stories to tell about things that have gone wrong. Even with the great benefits of data center optimization, hard-earned experience prompts IT pros to consider the risks carefully.

The risks are the negative outcomes, and their consequences, that can occur with any project. Unforeseen issues are the most feared. These include factors beyond IT’s influence, such as delays in construction and power upgrades.

Another area of risk for IT comes from the vendor solutions deployed for data center optimization projects. The vendor product may not work as advertised. For IT, there is a “this has happened before” sensitivity. For vendors, unique usage environments or changes to the original plan can leave the system less than optimal, forcing IT and the vendor to scramble to address the new situation. Another risk that can have an impact is the loss of key staff at an inopportune time.

The positives of data center optimization initiatives include economic gains from greater efficiencies and capacity expansion. The efficiencies come from better use of server and storage resources, greater automation, and simplification that can lower administrative costs. Server utilization is addressed primarily with server virtualization and storage systems capable of supporting increased demand from the virtual machines running on a physical server. Storage efficiency is addressed with thin provisioning, data reduction, and tiering. Other efficiencies include better storage management and data protection.

The risks and the potential impacts from IT projects are real. We saw an example of that last weekend when a merger of major airlines (really a takeover, but merger sounds nicer) led to a transition to a single reservation system and a single scheduling system. Even with more than a year for IT to prepare, big problems occurred, causing delays and cancellations. Just to give a personal perspective on the problem:

• For my flight the morning after the switchover, I could not check in because there was a problem with my reservation. I was supposed to call a customer service number to have it straightened out.

• There weren’t enough agents to handle the number of IT-related problems, resulting in a long hold time.

• The time to fix the problem was longer because the agent said she was learning a new system and the system seemed slow.

• I learned of a possible future problem because my mileage accumulated over years of flying on the airline was not transferred to the new system. A note said that this would get fixed in the next three days.

The person on the phone recited a scripted statement at least three times, saying that everything was going well. Hearing that while the problems continued was at first amusing, then irritating. It did not take long to reach the infuriating stage.

These IT problems were significant and appear to be continuing, although the number of flight problems decreased between Saturday and Sunday. The risks were great and problems — expected and unexpected — occurred. These results show that the hesitancy of IT to make changes because of risk is well justified.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


March 5, 2012  4:34 PM

Storage sales growth slows, perhaps because of cloud, dedupe

Dave Raffo

Although storage systems revenue grew during the fourth quarter of 2011 and for the entire year, that growth slowed compared to previous periods.

According to IDC’s worldwide quarterly disk storage systems tracker, external disk systems (networked storage) increased 7.7% year-over-year to $6.6 billion in the last quarter of 2011. That compares to 16.2% year-over-year growth in the fourth quarter of 2010, and 10.8% growth in the third quarter of 2011.

For the full year, external disk storage revenue increased 10.6% in 2011 compared to 18.3% growth in 2010.

The rate of growth slowed across categories that IDC tracks – open SAN, NAS and iSCSI. SAN storage revenue grew 14.1% and iSCSI storage revenue increased 16.6% year-over-year for the fourth quarter of 2011, while NAS disk storage revenue actually declined by 1.2%. In the fourth quarter of 2010, SAN revenue grew 15.1%, iSCSI increased 42.1% and NAS grew 21.7%.

IDC senior research analyst Amita Potnis said growth slowed after picking up sharply in 2010 as the industry moved out of the recession. That significant increase in late 2010 made for difficult comparisons in 2011. Also, the cloud and storage efficiency technologies probably tempered sales. She said the hard drive shortage caused by the Thailand floods had little impact during the end of 2011, but is hurting sales this year.

“Two technology trends have had a big impact on the market,” she said. “If a storage system is sold to cloud service providers, we count it. But a significant amount of cloud storage capacity does not come from external storage system purchases, so that has an impact. Also, storage efficiency technologies such as deduplication, compression, virtualization and thin provisioning have a significant impact on the market. End users can adjust their buying strategies and use what they have more efficiently.”

Potnis said solid-state drives (SSDs) remain at below 10% of external storage system revenue.

NAS revenue declined despite research showing file storage is outpacing block storage growth. Potnis said NAS grew more than 40% in every quarter of 2010 and that growth rate was difficult to sustain in 2011.

“Also, the types of data on NAS devices – backup and archive – are main candidates for data deduplication and transfer to the cloud,” she said. “So the impact from cloud and storage efficiency is greater on NAS than block data. But we expect file data will continue to grow faster than application or block storage.”

IDC also reported the high-end segment – systems selling for $250,000 and up – had the highest growth rate of all price segments. High-end systems’ share of revenue increased to 30% in the fourth quarter of 2011, up from 28.2% a year earlier.

EMC extended its lead in external storage market share during the fourth quarter of 2011, growing 22.4% — nearly triple the overall growth rate. EMC’s revenue share was 29.4%, compared to 25.9% in the fourth quarter of 2010. IBM retained second place with flat revenue year-over-year, but its share dropped from 16.4% a year earlier to 15.2%. NetApp edged slightly ahead of Hewlett-Packard into third place, although IDC lists them as tied because they are less than one percentage point apart. NetApp grew 16.6% in the quarter for an 11.2% market share. HP revenue fell 3.8% in the quarter for a 10.3% share. In the fourth quarter of 2010, HP had an 11.6% share with NetApp at 10.3%. Hitachi Data Systems (HDS) was fifth last quarter, growing 11.6% for a 9.2% share.

For the entire year, EMC increased external disk systems revenue 23.6% and increased its market share 3% in 2011 to 28.5%. IBM grew 8.9% over the year and stands second with 13.5%. NetApp grew the most for 2011 following an acquisition of LSI’s Engenio storage business, increasing revenue 23.7% to take 12.4% market share. HP grew 7.7% and slipped from a statistical tie with NetApp in 2010 to fourth with 10.7% in 2011. HDS grew 18.8% for the year and held 8.8% of the market. HDS overtook Dell for fifth for the year, as Dell’s revenue tumbled following the end of its OEM partnership with EMC.


March 1, 2012  5:03 PM

Storage users spared downtime from Microsoft Azure crash

Dave Raffo

The good news for Microsoft Windows Azure cloud storage customers was found in the last sentence of the third paragraph of the blog update about its “Leap Year outage” Wednesday:

“Windows Azure Storage was not impacted by this issue.”

That doesn’t mean cloud storage won’t be impacted in the future, though. A high-profile cloud outage will have people thinking twice about moving important data to the cloud.

“Every time one of these things happens, the umbrella of the cloud gets tarnished,” said Andres Rodriguez, CEO of cloud NAS vendor Nasuni. “It hurts. Our customers know what they have, it’s the prospects that I’m worried about. Our sales guys get many more questions in the field because of it.”

Nasuni stores its customers’ data on the Azure and Amazon S3 clouds. Amazon’s compute cloud, you may remember, had two outages last year. Cloud outages are one reason Nasuni bills its hardware and software NAS appliances as storage service systems, not cloud devices. Rodriguez said Nasuni treats the cloud as a hard drive, but uses the same architecture as mainstream storage vendors. And he wishes cloud providers would treat storage and compute as separate entities, just as data centers do.

“This would not happen if people separated compute and storage in the cloud,” he said. “Compute and storage are totally different things in the data center, and people somehow bundle them in the cloud. They’re not bundleable. They’re two different systems with different characteristics. Azure did not have any issues in its storage layer. The storage piece of Azure has been highly available for the last 48 hours.”

Microsoft said the Azure issue was resolved a little after 1 PM ET today.


February 29, 2012  5:55 PM

Amplidata gets funding from Intel, others for object storage

Dave Raffo

Object-storage startup Amplidata picked up $8 million in funding and a new strategic investor today.

New investor Intel Capital joins previous Amplidata investors Swisscom Ventures, Big Bang Ventures and Endeavour Vision to bring the vendor’s total funding to $14 million. Amplidata CEO Wim De Wispelaere said the funding will be used to beef up sales and marketing for the AmpliStor Optimized Object Storage system that has been making its way into cloud and “big data” implementations.

The vendor’s headquarters are in Belgium and it also has a Redwood City, Calif. office. Most of its early customers are in Europe, so you can expect to see a big marketing push in the U.S. now.

Object storage is considered one of the hottest emerging technologies and is used for dealing with large data stores. AmpliStor features an erasure code technology called BitSpread to store data redundantly across a large number of disks, and its BitDynamics technology handles data integrity verification, self-monitoring, and automatic data healing.
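
BitSpread’s algorithm is proprietary, but a toy example shows the general idea behind erasure coding: extra computed chunks let the system rebuild data after a disk failure without keeping full replicas. The sketch below uses simple XOR parity, which survives only a single failure (schemes like Amplidata’s tolerate many more); it is a generic illustration, not Amplidata’s code:

```python
# Toy single-parity erasure code. This illustrates the general concept
# behind erasure coding, NOT Amplidata's BitSpread algorithm, which spreads
# data more widely and tolerates multiple simultaneous disk failures.

def xor_chunks(chunks):
    """XOR a list of equal-length byte chunks together."""
    result = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, byte in enumerate(chunk):
            result[i] ^= byte
    return bytes(result)

data = [b"stor", b"age ", b"obj!"]      # three data chunks on three disks
parity = xor_chunks(data)               # a fourth disk holds the parity chunk

# One disk fails: XOR-ing the survivors with the parity rebuilds the loss.
lost = data[1]
rebuilt = xor_chunks([data[0], data[2], parity])
assert rebuilt == lost
print("recovered:", rebuilt)
```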

De Wispelaere said Amplidata’s customers generally fall into two use case categories that require scalable storage. “The first use case is what we call online applications,” he said. “Customers have written their own application and need to scale out their storage to store photos, videos or files. Another big market is media and entertainment. We’re used as a nearline archive for a postproduction system, so data is readily available whenever it’s needed.”

Amplidata faces stiff competition in both areas. For the cloud, it’s going against startups such as Scality, Cleversafe and Mezeo as well as established players Hitachi Data Systems HCAP, EMC Atmos and Dell DX, and API-based service providers such as Rackspace OpenStack and Amazon S3. In scale-out storage, its competition includes EMC Isilon, HDS BlueArc, and NetApp.

Amplidata is part of Intel’s Cloud Builders alliance, and last fall it demonstrated its system at the Intel Developer Forum. That relationship – and Intel’s investment – should ensure that Amplidata will be kept current on the Intel roadmap.

It’s possible that Amplidata is benefitting from its relationship with Swisscom as well. Swisscom offers cloud services, but De Wispelaere could not say if it uses Amplidata storage. “I have a strict NDA with Swisscom,” he said.


February 28, 2012  9:22 AM

Data protection in transition

Randy Kerns

The increase in data capacity demand makes it difficult for IT to continue with existing data protection practices. Many organizations have realized their protection methods are unsustainable, mainly because of the impact of the increased capacity demand and budget limitations.

The increases in capacity demand come from many sources. These include business expansion, the need to retain more information for longer periods of time, data types such as rich media that are more voluminous than in the past, and an avalanche of machine-to-machine data used in big data analytics.

The data increase requires more storage systems, which are usually funded through capital expense. Often these are paid for as part of a project with one-time project funds.

The increase in data also changes the backup process. The amount of time required to protect the information may extend beyond what is practical from a business operations standpoint. The amount of data to protect may require more backup systems than can physically be accommodated.
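
The backup-window arithmetic behind that statement is straightforward. With assumed (hypothetical) figures for throughput and growth, a full backup can outgrow a fixed nightly window within a year or two:

```python
# Hypothetical backup-window arithmetic: steady data growth outruns a
# fixed nightly window even when backup throughput stays constant.

window_hours = 8           # nightly backup window (assumed)
throughput_mb_s = 400      # sustained backup throughput in MB/s (assumed)

fits_tb = throughput_mb_s * 3600 * window_hours / 1e6
print(f"Window holds a full backup of up to {fits_tb:.1f} TB")   # ~11.5 TB

capacity_tb = 10.0         # starting capacity (assumed)
for year in range(1, 6):
    capacity_tb *= 1.4     # 40% annual data growth (assumed)
    hours = capacity_tb * 1e6 / (throughput_mb_s * 3600)
    flag = "over window!" if hours > window_hours else "ok"
    print(f"Year {year}: {capacity_tb:5.1f} TB -> {hours:4.1f} h  {flag}")
```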

It is common for new projects to budget for the required capital expenses. Unfortunately, the budgeted increase in operational expenses is rarely enough to cover the data protection impact. The administrative expense of the extra staff time spent handling the data can be estimated, but it is difficult to add to a project budget because it is an ongoing expense rather than a one-time one.

Unexpected data growth can exceed capacity-based licensing thresholds and turn into an unpleasant budget-buster. Even expenses related to external resources such as disaster recovery copies of information may ratchet up past thresholds.

New approaches to data protection exist. However, there is usually not enough funding available to implement them. Changing IT procedures is also difficult because of the training required and the amount of risk introduced.

Vendors see the opportunities, and address them with approaches that make the most economic sense for them. The most common approach is to enhance existing products, improving their speed and effective capability. Another vendor approach is to introduce new data protection appliances combining software and hardware to simplify operations. Whether these are long-term solutions or merely incremental improvements depends on the specific environment.

Another approach evolving with vendors is to include data protection as an integral part of a storage system. This involves adding a set of policy controls for protection and data movers for automated data protection. These come in the form of block storage systems with the ability to selectively replicate delta changes to volumes and in network attached storage systems that can migrate or copy data based on rules to another storage system. Implementing this type of protection requires software to manage recovery and retention of the protected data.
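
Here is a minimal sketch of the delta-replication idea described above, assuming a volume tracked as fixed-size blocks with a dirty-block set; it is a generic illustration of the concept, not any particular vendor’s implementation:

```python
# Generic sketch of delta replication: only blocks written since the last
# cycle are copied to the target. Illustrates the concept, not any
# specific vendor's product.

BLOCK_SIZE = 4096

class Volume:
    def __init__(self, num_blocks):
        self.blocks = [bytes(BLOCK_SIZE)] * num_blocks
        self.dirty = set()      # indexes of blocks written since the last sync

    def write(self, index, data):
        self.blocks[index] = data.ljust(BLOCK_SIZE, b"\0")[:BLOCK_SIZE]
        self.dirty.add(index)

def replicate_deltas(source, target):
    """Copy only the changed blocks to the target, then clear the change log."""
    for index in sorted(source.dirty):
        target.blocks[index] = source.blocks[index]
    sent = len(source.dirty)
    source.dirty.clear()
    return sent

src, dst = Volume(1024), Volume(1024)
src.write(7, b"new data")
src.write(42, b"more data")
print(replicate_deltas(src, dst), "blocks sent")   # 2 blocks, not all 1024
```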

A change must be made if IT is to continue meeting its mandate to protect information. However, the fundamental problem with making data protection address capacity demand is economics. For most IT operations, the solution cannot represent a major investment, and it must be administratively cost-neutral to a great extent. Current data protection solutions that meet those requirements are hard to find.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

