Storage Soup


January 11, 2012  10:53 AM

Dell upgrades Compellent’s capabilities

Dave Raffo

Along with launching a new backup deduplication appliance, Dell made other storage additions and enhancements today in London at its first Dell Storage Forum in Europe. The biggest rollout, besides the DR4000 backup box, was an upgrade to Compellent Storage Center 6.0 software with new 64-bit support that doubles the available memory.

The upgrade – along with extended VMware support – is part of Dell’s strategy to make the Compellent Fibre Channel SAN platform a better fit for the enterprise. The 64-bit support is a precursor to the addition of Ocarina Networks’ primary data reduction technology to Compellent systems, because deduplicating and compressing data will require more processing power. The 64-bit support also lets Compellent 6.0 tier data in smaller block sizes, making automated tiering more efficient.

Compellent also now supports the full copy offload and hardware-assisted locking features that are part of the VMware vSphere Storage APIs for Array Integration (VAAI). The storage vendor also added a Dell Compellent Storage Replication Adapter (SRA) for VMware’s Site Recovery Manager 5, along with a vSphere 5 Client Plug-in and Enterprise Manager support to help manage virtualized storage pools with the latest version of vSphere.

Randy Kerns, senior strategist at Evaluator Group, an IT consulting firm, said the 64-bit support will enable Compellent to better take advantage of next-generation Intel chip advances. He said that’s a nice benefit because of Compellent’s architecture and licensing model. “People underestimate the importance of this, but Compellent is about storage as an application and the applications are loaded on powerful Intel servers,” he said. “With Compellent, you buy a license and you don’t have to re-buy a license when you upgrade. This also lets them track the new technology brought out by Intel and leverage Intel’s research and development.”

Before the Dell acquisition, Compellent sold into the midrange. Dell already has the EqualLogic platform for the midrange and is looking for something more competitive with EMC VMAX, Hewlett-Packard 3PAR, IBM DS8000, Hitachi Data Systems Universal Storage Platform and NetApp FAS6200 systems. But to become a true enterprise option, Compellent may have to scale beyond its current two-controller limit.

“When Dell did not get 3PAR, Compellent was the only option left worth looking at, but it doesn’t go high enough,” said Arun Taneja, consulting analyst for the Taneja Group. “Dell is feverishly working on taking Compellent upstream. One of the elements needed is 64-bit support. But to compete with the likes of 3PAR and VMAX, Compellent has to go to more than two controllers. What if Dell cannot take it to four or eight controllers, what are they going to do? The next 12 months will be telling. For five years, the Compellent people have been telling me they can go beyond two controllers. We’ll find out if they were telling the truth.”

Dell also added support for 10-Gigabit Ethernet Force10 switches on its EqualLogic iSCSI SAN platform and support for Brocade 16-Gbps Fibre Channel switches on Compellent.

January 10, 2012  9:08 AM

OCZ grabs Sanrad for PCIe caching software

Dave Raffo

Not surprisingly, the first storage acquisition of 2012 involved solid-state flash. That technology figured prominently in 2011 acquisitions, and the trend is certain to accelerate this year with larger companies buying technology from smaller vendors.

OCZ Technology kicked off the year’s M&A Monday by dropping $15 million on privately held Sanrad. The acquisition is part of OCZ’s push into enterprise flash, specifically PCIe cards.

Sanrad has been around since 2000. It started off selling iSCSI SAN switches, and then adapted those switches for storage and server virtualization. But OCZ is most interested in the software that runs on those switches. Sanrad last September launched VXL software that caches data on flash solid-state storage.

VXL runs as a virtual appliance and distributes data and flash resources to virtual machines. The software makes caching more efficient and lets customers distribute flash across more VMs without a performance hit. VXL does not require an agent on each VM and supports the VMware vSphere, Microsoft Hyper-V and Citrix Xen hypervisors.

Sanrad’s StoragePro software lets administrators manage storage across servers or storage devices as a single pool. Sanrad sold StoragePro with its V Series virtualization switches.

During OCZ’s earnings call Monday evening, CEO Ryan Peterson said the Sanrad software will be packaged with OCZ’s Z-Drive PCIe SSDs. The move can be considered competitive to Fusion-io’s acquisition of caching software startup IO Turbine last year.

Peterson said the Sanrad acquisition is part of OCZ’s strategy to see PCIe “as more than simply a component and truly as a storage system, which includes things like having VMware, virtualization capability, and support for vMotion, where there is mobility among the virtual machines of the cache …”

Peterson didn’t mention any plans for Sanrad’s switches. He said Sanrad’s revenue was in the “low single-digit millions” over the past few years, indicating low sales despite OEM deals with Brocade and Nexsan.

OCZ also revealed a new PCIe controller platform developed with chip maker Marvell. The new Kilimanjaro platform will be used in the next version of the Z-Drive, R5. That card will have a PCIe 3 interface. It can deliver about 2.4 million 4K file size IOPS per card and approximately 7 GBps of bandwidth, according to OCZ and Marvell. OCZ is demonstrating the R5 at CES and Storage Visions with an IBM server this week in Las Vegas.

It is also demonstrating new 6 Gbps SATA-based SSD controllers based on its 2011 acquisition of Indilinx.

OCZ’s push to the enterprise is beginning to pay off. Peterson said OCZ’s enterprise-class SSD revenue increased approximately 50% year over year last quarter and now makes up approximately 21% of its SSD sales.


January 5, 2012  9:13 AM

Life after RAID

Randy Kerns

Recent developments point to a change in how we protect against the loss of a data element on a failed disk. RAID is the venerable method used to guard against damage from a lost disk, but RAID has limitations – especially with large-capacity drives that can hold terabytes of data. New developments address RAID’s limitations and provide protection that is not specific to disk drives.

The new protection technology goes by several names. The name most associated with university research is information dispersal algorithms, or IDA. The more accurate term for how it has been implemented is probably forward error correction, or FEC. Another name, based on implementation details, is erasure codes.

The technology addresses the loss of a disk drive, which is what RAID was designed to protect against. It can also prevent the loss of a data element when data is distributed across geographically dispersed systems. The implementation lets you select how much protection coverage to apply across the data. A commonly used example is a 12-of-16 protection setting, which means only 12 of 16 data elements are needed to recreate the data from a lost disk drive.
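To make the k-of-n idea concrete, here is a minimal sketch – my own illustration, not any vendor’s implementation – of an erasure code built from polynomial evaluation over a small prime field. Any k of the n encoded fragments are enough to recover the original k data symbols. Production systems use Reed-Solomon coding over GF(2^8) on large blocks, but the recovery property is the same; the field size, symbol handling and fragment counts here are assumptions chosen for brevity.

```python
# Toy k-of-n erasure code over the prime field GF(257).
# Encode: treat the k data symbols as polynomial coefficients and evaluate
# the polynomial at n distinct points. Decode: Lagrange interpolation from
# any k surviving fragments recovers the coefficients. Requires Python 3.8+
# for pow(x, -1, p) modular inverses.

P = 257  # prime modulus; each data symbol is one byte (0..255)

def encode(data_symbols, n):
    """Produce n (x, y) fragments from k data symbols."""
    fragments = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, P) for i, c in enumerate(data_symbols)) % P
        fragments.append((x, y))
    return fragments

def decode(fragments, k):
    """Recover the k data symbols from any k surviving fragments."""
    points = fragments[:k]
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        basis = [1]   # i-th Lagrange basis polynomial, built incrementally
        denom = 1
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            # multiply the basis polynomial by (x - xj)
            new = [0] * (len(basis) + 1)
            for d, c in enumerate(basis):
                new[d] += (-xj) * c
                new[d + 1] += c
            basis = [c % P for c in new]
            denom = (denom * (xi - xj)) % P
        scale = yi * pow(denom, -1, P) % P
        for d, c in enumerate(basis):
            coeffs[d] = (coeffs[d] + scale * c) % P
    return coeffs

if __name__ == "__main__":
    data = [ord(c) for c in "RAID?"]    # k = 5 data symbols
    shares = encode(data, n=8)          # a 5-of-8 protection setting
    survivors = shares[2:7]             # lose any 3 of the 8 fragments
    assert decode(survivors, k=len(data)) == data
    print("recovered:", "".join(chr(c) for c in decode(survivors, k=len(data))))
```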

Vendors with products that use FEC/erasure codes include Amplidata, Cleversafe, and EMC Isilon and Atmos. Each uses a slightly different implementation, but they are all a form of dispersal and error correction.

The main reason to use erasure codes is protection from multiple failures. Multiple drives in a disk storage system could fail before data loss would occur. If data is stored across different geographic locations, several locations can be unavailable and the data can still be recovered. This makes erasure codes a good fit for cloud storage.

Other advantages include shorter rebuild times after a data element fails and less performance impact during a rebuild. A disadvantage of erasure codes is that they can add latency and require more compute power for small writes.

One of the most potentially valuable benefits of erasure codes is the reduction in service costs for disk storage systems. With a protection ratio whose coverage probability holds over the long term – meaning enough failures to cause data loss are unlikely for an extended period – a storage system may never need a failed device replaced over its economic lifespan. That would reduce the service cost, and for a vendor it reduces the warranty reserve.
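As a rough illustration of why that coverage probability matters, the sketch below is my own back-of-the-envelope calculation, with an assumed per-drive failure probability, comparing the chance of data loss for a dual-parity 14-of-16 layout (RAID 6-like fault tolerance) against a 12-of-16 erasure code, assuming independent drive failures and no replacements – the "no service call" scenario described above.

```python
# Probability of data loss for a k-of-n scheme: data is lost only if more
# than n - k of the n devices fail. p is an assumed, illustrative per-drive
# failure probability over the period of interest.

from math import comb

def loss_probability(n, k, p):
    """Chance that fewer than k of the n fragments survive."""
    max_tolerated = n - k
    survive = sum(comb(n, f) * p**f * (1 - p)**(n - f)
                  for f in range(max_tolerated + 1))
    return 1 - survive

if __name__ == "__main__":
    p = 0.05  # assumed per-drive failure probability, not a measured value
    print("14 of 16 (tolerates 2 failures): %.6f" % loss_probability(16, 14, p))
    print("12 of 16 (tolerates 4 failures): %.6f" % loss_probability(16, 12, p))
```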

This form of data protection is not prevalent today and it will take time before a large number of vendors offer it. There are good reasons for using this type of protection and there are circumstances when it is not the best solution. Storage pros should always consider the value it brings to their environment.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


December 29, 2011  10:25 AM

Storage wrap: take a last glance at 2011, early peek at 2012

Dave Raffo

In case you weren’t paying attention to the storage world for most of 2011, don’t worry, we have you covered.

Our series of stories looking back at the highlights of 2011 will catch you up on what you may have missed:
Solid state, cloud make mark on storage in 2011
Hard drive, SSD consolidation highlights 2011 storage acquisitions
Top cloud storage trends of 2011
Compensation rose for storage pros in 2011

And if you want to get a jump on 2012, we have your back there too. These look-ahead stories highlight the key storage issues and technologies for the coming year:
What 2012 has in store for storage
2012 preview: more flash with auto tiering, archiving, FCoE

We’ve also gathered useful information to help do your job more efficiently if you work with data:
Popular storage tips of 2011
Top data deduplication tips of 2011
Top remote data replication tips of 2011
Top disaster recovery outsourcing tips of 2011
Top SMB backup tips of 2011


December 28, 2011  2:06 PM

Rear view mirror metrics don’t tell full story

Randy Kerns

I read all the reports on how the storage industry is doing. These cover many segments of storage hardware and software, sometimes in great detail. The reports are often based on data that vendors self-report about their product shipments.

They draw comparisons with the previous quarter, the same quarter of the previous year and through the calendar year. These give us an idea of where we’ve been and how the different segments have fared.

But, these results look in the rear view mirror. They do not tell us how any of these vendors or the industry will do in the future. Determining future performance requires looking out the windshield.

A forecast is usually based on a projection of the trends that have occurred in the past. This indicator is often used in planning and estimating around investments, ordering, staffing, and other elements critical to making business decisions that have tremendous financial implications.

Even forecasts that are meant to look through the windshield are usually based on past trends. One technique for projecting future trends is to look at what occurred in recent years and assume the pattern will continue. That may be a bad assumption, and it can bring serious consequences.

Others use surveys to predict the opportunity, but surveys can also mislead. A survey’s accuracy depends on how the questions are asked and who is responding to them. There is another factor I can relate to from personal experience: the quality of the answers depends on when the questions are answered. There can be bad days…

I’ve found that conversations with IT professionals lead to a deeper understanding of what their problems are, and what they are doing. With enough of these conversations, a general direction emerges that can be used as guidance in a particular area with much greater confidence. There’s no sure-fire means, however. The best that can be done is to understand the limitations of the input you receive and use multiple inputs.

Another measure for me is gauging what the vendors believe the storage market is doing. This is much easier because the briefings, product launches, and press releases represent investments that are evidence of their belief in the opportunity. Lately, the briefings and announcements have increased – even with the holidays and year-end distractions approaching. Things do look good in the storage world – out through the windshield.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


December 22, 2011  10:01 AM

Solid state, server-based storage have staying power

Randy Kerns

Many new storage technologies show great promise. They have useful capabilities such as transferring data faster, storing information for less cost, migrating data with less disruption and administration, and utilizing storage resources more efficiently. Technologies get evaluated on the value they bring and are introduced into the mainstream over time.

New technologies also spawn start-up companies with investments from venture capital firms. Invariably there is promotion of the technology, paid for by the developing companies and others with a stake in the technology’s success. There is even an industry that grows up around educating people on the new technology.

But few new technologies actually have staying power beyond five years or so. A transient technology that had such great promise and promotion often leads to great disappointment, especially if you are involved in a start-up or the supporting industries. Investors may lose their money completely or see it reduced to an asset sale. The technology’s failure also colors the industry for a period, making new investments and career plans more limited.

Everyone in the storage industry seems to have their favorite technologies that crashed and burned. Remember bubble memories? Remember IPI-3? Many of us, me included, invested much of our time (way more than we should have) in developing products in the exciting new technology areas.

There are currently two technology trends to highlight that have staying power and are likely to continue evolving, justifying the investment and commitment. One is solid-state technology used as a storage medium. The other is the use of server technology (multi-core processors, bus technology, interface adapters, etc.) as the foundation for storage systems.

Solid-state usage has created many opportunities with more than one form of storage device. Currently, NAND Flash is the solid-state technology of choice. Eventually, that will be replaced with the next solid state technology, which is expected to be phase-change memory (PCM). The timing of the evolution to the next generation of solid state will probably be determined by the first company that has the technology and wants to achieve a greater competitive position than it has today. Developments in these areas may be held back by the successes and profits from NAND flash. When the price for NAND flash becomes more competitive, there may be more motivation to deliver on the next generation.

For storage systems, the transition from custom designed hardware to use of standard platforms has been underway for some time. The economics of the hardware and development costs have driven most vendors to deliver their unique intellectual property in the form of a storage application that runs on the server-based hardware. The next turn in this technology progression is utilizing the multi-core processors in the storage system more effectively by running I/O intensive applications on the storage system itself.

When a technology is in the early stages, it’s difficult to determine if it will have the staying power to justify the financial and personal investment. But, solid state and server-based storage are past those early stages and have relatively clear paths for the future. What comes next is an interesting conversation, though.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


December 20, 2011  8:47 AM

Seagate-Samsung close hard drive deal eight months later

Dave Raffo

One of two pending multibillion-dollar hard drive vendor acquisitions closed today when Seagate Technology wrapped up its $1.4 billion transaction with Samsung Electronics.

Seagate is acquiring Samsung’s M8 product line of 2.5-inch high-capacity hard drives and will supply disk drives to Samsung for PCs, notebooks and consumer devices. Samsung will supply Seagate with chips for enterprise solid-state drives (SSDs). Seagate already uses those chips for its SSDs and hybrid drives. Seagate is also acquiring employees from Samsung’s Korea design center.

The deal was first disclosed in April, but the companies had to clear regulatory hurdles. Even with the close of the deal, Seagate and Samsung face a long transition. Seagate will retain the Samsung brand for some of its hard drives for a year, and establish independent sales, product and development operations during that period.

The other blockbuster hard drive deal in the works is Western Digital’s proposed $4.3 billion takeover of Hitachi Global Storage Technology (HGST). The European Union last month gave its blessing to that deal, but ordered Western Digital to sell off some assets. The Western Digital-HGST acquisition is expected to close next March, one year after it was first announced.


December 15, 2011  3:59 PM

EMC, customers stung by hard drive shortages from Thailand floods

Dave Raffo

EMC has notified partners and customers that it will raise the list prices of its hard drives by up to 15% beginning Jan. 1 due to shortages caused by Thailand floods. The increases are expected to be temporary, depending on how long it takes damaged hard drive manufacturing plants to recover.

EMC vice chairman Bill Teuber sent an email to customers and partners stating the vendor has eaten price increases so far, but will begin to pass them along to customers after this month. He also wrote that EMC does not expect supply problems because it is the largest vendor of external storage systems, but it has to pay more for the available drives.

“EMC has absorbed the price increases that have been passed on to us and will continue to do so through the end of the month,” Teuber wrote. “Unfortunately we will not be able to sustain that practice. Beginning in Q1 2012 we will be increasing the list price of hard disk drives up to 15% for an indefinite period of time. While we hope that this increase is temporary, at this time we cannot forecast how long the flooding in Thailand will impact HDD [hard disk drive] pricing.”

Another email Teuber sent to EMC personnel said the price increases will be from 5% to 15%. He also wrote the increases will apply to all EMC product lines.

The shortage is expected to affect PC drives more than enterprise drives, but EMC enterprise storage rival NetApp lowered its revenue projection last month because of expected shortages.

Teuber referred to NetApp indirectly in his email, stating “Many of our competitors have already announced drive shortages and price increases and have stated that this will have a material impact on their ability to hit revenue expectations now and in the future.”

An EMC spokesman today said the vendor would give a full update on the supply chain issue during its earnings call in January.

The shortages are affecting vendors throughout the IT industry. Hard drive vendors Seagate and Western Digital have major manufacturing facilities in the flooded areas. Last month IT research firm IDC forecast that hard drive prices should stabilize by June 2012 and that the industry will run close to normal in the second half of next year. According to IDC, Thailand accounted for 40% to 45% of worldwide hard drive production in the first half of this year, and about half of that capacity was affected by the floods in November. Intel this week reduced its fourth-quarter revenue forecast by $1 billion because the drive shortages will cut PC sales.


December 14, 2011  2:03 PM

HDS sharpens file capabilities for the cloud

Dave Raffo

Hitachi Data Systems added a few more lanes to its cloud storage on-ramp today.

HDS brought out the Hitachi Data Ingestor (HDI) caching appliance a year ago, calling it an “on-ramp to the cloud” for use with its Hitachi Content Platform (HCP) object storage system. Today it added content sharing, file restore and NAS migration capabilities to the appliance.

Content sharing lets customers in remote offices share data across a network of HDI systems, as all of the systems can read from a single HCP namespace. File restore lets users retrieve previous versions of files and deleted files, and the NAS migration lets customers move data from NetApp NAS filers and Windows servers to HDI.

These aren’t the first changes HDS has made to HDI since it hit the market. Earlier this year HDS added a virtual appliance and a single node version (the original HDI was only available in clusters) for customers not interested in high availability.

None of these changes are revolutionary, but HDS cloud product marketing manager Tanya Loughlin said the idea is to add features that match the customers’ stages of cloud readiness.

“We have customers bursting at the seams with data, trying to manage all this stuff,” she said. “There is a lot of interest in modernizing the way they deliver IT, whether it’s deployed in a straight definition of a cloud with a consumption-based model or deployed in-house. Customers want to make sure what they buy today is cloud-ready. We’re bringing this to market as a cloud-at-your-own-pace.”


December 14, 2011  8:25 AM

Amazon S3, Microsoft Azure top performers in Nasuni stress tests

Sonia Lelii

Nasuni is in the business of connecting its customers’ cloud NAS appliances to cloud service providers in a seamless and reliable fashion. So the vendor set out to find which of those service providers work the best.

Starting in April 2009, Nasuni put 16 cloud storage providers through stress tests to determine how they handled performance, availability and scalability in real-world cloud operations.

Only six of the initial 16 showed they are ready to handle the demands of the cloud, Nasuni claims, while some of the others failed a basic API functionality test. Amazon Simple Storage Service (S3) and Microsoft Azure were the leaders, with Nirvanix, Rackspace, AT&T Synaptic Storage as a Service and Peer 1 Hosting also earning passing grades.

“You won’t believe what is out there,” Nasuni CEO Andres Rodriguez said. “Some had awful APIs that made them unworkable. Some had some crazy SOAP-based APIs that were terrible.”

Nasuni did not identify the providers that received failing grades, preferring to focus on those found worthy. Amazon and Microsoft Azure came out as the strongest across the board.

Amazon S3 had the highest availability with only 1.43 outages per month – deemed insignificant in duration – for a 100% uptime score. Azure, Peer 1 and Rackspace all had 99.9% availability.

Rodriguez described availability as the cloud providers’ ability to continue operations and receive reads and writes, even through upgrades. “If you can’t be up 99.9 percent, you shouldn’t be in this business,” he said.

For performance testing, Nasuni looked at how fast providers can write and read files. Their systems were put through multiple simultaneous threads, varying object sizes and workload types. They were tested on their read and write speeds for large (1 MB), medium (128 KB) and small (1 KB) files.
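Nasuni has not published its test harness, but a simple version of this kind of read/write timing test might look like the sketch below – my own illustration using boto3 against Amazon S3. The bucket name, thread count and object counts are placeholders, and Nasuni’s actual methodology and scoring are not reflected here.

```python
# Time uploads and downloads of small, medium and large objects against S3,
# using a pool of concurrent worker threads per object size.

import os
import time
from concurrent.futures import ThreadPoolExecutor

import boto3

BUCKET = "my-benchmark-bucket"   # placeholder; substitute your own bucket
SIZES = {"small": 1024, "medium": 128 * 1024, "large": 1024 * 1024}
THREADS = 8                      # assumed thread count, not Nasuni's
OBJECTS_PER_SIZE = 50

s3 = boto3.client("s3")

def write_then_read(key, payload):
    """Upload one object, read it back, return (write_seconds, read_seconds)."""
    t0 = time.monotonic()
    s3.put_object(Bucket=BUCKET, Key=key, Body=payload)
    t1 = time.monotonic()
    s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
    t2 = time.monotonic()
    return t1 - t0, t2 - t1

if __name__ == "__main__":
    for label, size in SIZES.items():
        payload = os.urandom(size)
        keys = [f"bench/{label}/{i}" for i in range(OBJECTS_PER_SIZE)]
        with ThreadPoolExecutor(max_workers=THREADS) as pool:
            results = list(pool.map(lambda k: write_then_read(k, payload), keys))
        writes = [w for w, _ in results]
        reads = [r for _, r in results]
        print(f"{label:>6}: avg write {sum(writes)/len(writes):.3f}s, "
              f"avg read {sum(reads)/len(reads):.3f}s")
```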

The tests found S3 provided the most consistently fast service across all file types, although Nirvanix was fastest at reading large files and Microsoft Azure wrote files of all sizes fastest.

Nasuni tested scalability by continuously writing small files with many concurrent threads for several weeks or until it hit 100 million objects. Amazon S3 and Microsoft Azure were also the top performers in these tests. Amazon had zero error rates for reads and writes. Microsoft Azure had a small error rate (0.07%) while reading objects.

The report stated: “Though Nirvanix was faster than Amazon S3 for large files and Microsoft Azure was slightly faster when it comes to writing files, no other vendor posted the kind of consistently fast service level across all file types as did Amazon S3. It had the fewest outages and best uptime and was the only CSP to post a 0.0 percent error rate in both reading and writing objects.”

