Storage Soup


January 23, 2009  10:51 PM

Storage Headlines Podcast for 01-23-09

Beth Pariseau

Stories referenced:

Last week’s headlines

January 23, 2009  5:44 PM

EMC eyes primary dedupe for NAS

Dave Raffo

EMC is preparing an upgrade to its Celerra NAS platform, including built-in deduplication for file systems. Internally, at least, EMC refers to this as primary deduplication, although it is obviously limited to files and best suited for shared folders and home directories.

EMC hasn’t yet disclosed the upgrade and declined to comment, but industry sources and EMC briefing materials obtained by SearchStorage indicate four new Celerra NS models are coming soon. Two of the new models will support solid state drives (SSDs). The dedupe is based on EMC’s Avamar host-based software with RecoverPoint compression.

The only major vendor offering dedupe for primary data is NetApp, which is also EMC’s main NAS competitor. NetApp Deduplication handles block-based data as well as files, although NetApp executives indicate it is used frequently for home directories. EMC documents claim 30% to 40% capacity savings are possible for “typical unstructured file share datasets” in primary and archive storage with the dedupe.

The Celerra dedupe will compress files with low usage activity and single-instance files to remove duplicates. EMC is also adding a compliance option to its Celerra File-Level Retention WORM software as a competitor to NetApp SnapLock. FLR-C, as EMC calls the new option, locks files to prevent file system deletions and has a non-spoofable clock to honor file retention times. To avoid competing with its Centera archiving system, EMC will recommend Celerra for archiving and locking files on its NAS systems and Centera for application data and fixed content.
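For context, the client-side interface for these retention locks tends to follow a common NAS convention: NetApp SnapLock commits a file to WORM when the client sets its last-access time to the retention expiry and then marks the file read-only, and EMC’s FLR reportedly works the same way over NFS and CIFS. A minimal sketch assuming that convention (the path and retention period here are hypothetical):

    import os
    import stat
    import time

    # Illustrative only: the atime-plus-read-only convention for committing
    # a file to WORM on a retention-enabled NAS share. The path below is
    # hypothetical; on an ordinary file system this just changes metadata.
    def lock_for_retention(path, retention_days):
        expiry = time.time() + retention_days * 86400
        mtime = os.stat(path).st_mtime
        os.utime(path, (expiry, mtime))   # atime carries the retention expiry
        os.chmod(path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)  # read-only commits the lock

    lock_for_retention("/mnt/flr_share/records/contract.pdf", retention_days=7 * 365)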

The new models include two enterprise systems – the NS G8 gateway and the NS-960 – and two midrange systems – the NS-480 and NS-120. The NS G8 replaces the NSX and NS80G, the NS-960 replaces the NS80, the NS-480 replaces the NS40, and the NS-120 replaces the NS20. All of the systems support higher capacities than their predecessors, and the NS-480 and NS-960 will support flash SSDs.


January 15, 2009  5:20 PM

Seagate 1 TB Barracuda users report failures

Beth Pariseau

More than 20 users have posted to a thread on Seagate’s official Community Forums reporting that their 1 TB, 7200 RPM Seagate Barracuda desktop drives have failed. The drives failed soon after purchase, from as little as two weeks to as long as seven months; several users reported that the failed drives had become undetectable to the BIOS. Users on the thread also said the drives had been manufactured in Thailand.

Freezing problems were previously reported for the 1.5 TB version of the 7200.11. Seagate issued a firmware fix for those drives in late November.

Seagate officials did not comment when contacted by Storage Soup today.

UPDATE: Please see the comments section below for a response from Seagate about a firmware fix.


January 15, 2009  2:09 PM

SSDs: Your mileage almost certainly will vary

Beth Pariseau

Maybe this is what happens with any brand-new technology, but there’s been such wide variability in the solid-state drives announced for the enterprise since EMC added STEC drives to Symmetrix last January that I can’t help being curious about it. Here are the specs available on some of the latest SSDs rolled out:

  • STEC (as used by EMC): 73 GB and 146 GB capacities; FC interface; 52,000 sustained random read IOPS; 17,000 sustained random write IOPS; 250 MBps sustained sequential reads; 200 MBps sustained sequential writes.
  • Intel X25-E: 32 GB and 64 GB capacities; 170 MBps sequential writes; 35,000 random read IOPS (4 KB); 3,300 random write IOPS; 10 I/O channels on the controller. Claims lower write amplification than other drives.
  • pureSilicon: SLC capacities of 256 GB and 512 GB; MLC capacity of 1 TB; up to 50,000 random read IOPS; 32 channels on the controller.
  • Samsung: 100 GB capacity; 25,000 random read IOPS; 6,000 random write IOPS; 230 MBps sequential reads; 180 MBps sequential writes. Also claims a new ‘recipe’ for the NAND Flash material itself that could boost SSD durability, but hasn’t decided when, where, or how to release it to OEMs.

This week I’ve also come across Toshiba, which announced a new SAS-interface SLC drive in a 100 GB capacity. But it’s the darndest thing: another drive, another very different claim about random read and write IOPS. For Toshiba, the two numbers are much closer together than they are for the others: 25,000 random read IOPS and 20,000 random write IOPS.
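To make the spread concrete, here’s a quick pass over the claimed figures above; it’s the read-to-write ratios, more than the absolute numbers, that vary so wildly from drive to drive:

    # Read-to-write ratios of the claimed random IOPS figures quoted above.
    # These are vendor claims, not measured results.
    drives = {
        "STEC (EMC)":  (52_000, 17_000),
        "Intel X25-E": (35_000,  3_300),
        "Samsung":     (25_000,  6_000),
        "Toshiba":     (25_000, 20_000),
    }

    for name, (read_iops, write_iops) in drives.items():
        print(f"{name:12s} read/write IOPS ratio: {read_iops / write_iops:5.1f}x")

That prints roughly 3.1x for STEC, 10.6x for Intel, 4.2x for Samsung, and 1.3x for Toshiba.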

The problem is trying to get any of these vendors to drill down further. Most of them won’t, because it gets into proprietary information, which I understand, but I wish it were otherwise. At most, I’m told it has to do with the number of channels in the controller, the Flash recipe, how many channels are dedicated to reads vs. writes, and how data streams are interleaved and/or cached with DRAM on their way into or out of the Flash capacity. That gives me some idea, but I can’t wait until users start testing these drives in volume, in real-world environments.
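One rough cut you can take from the disclosed specs alone is to divide claimed random read IOPS by controller channel count where both numbers are public (Intel’s and pureSilicon’s appear above; Samsung’s 8-channel controller is noted in the pureSilicon post below). A minimal sketch, with the caveat that it ignores caching, interleaving, and the Flash recipe entirely:

    # Implied random read IOPS per controller channel, from disclosed specs.
    drives = [
        ("Intel X25-E", 35_000, 10),
        ("Samsung",     25_000,  8),
        ("pureSilicon", 50_000, 32),
    ]

    for name, read_iops, channels in drives:
        print(f"{name:12s} ~{read_iops / channels:,.0f} read IOPS per channel")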

I also wonder whether the “biodiversity” of creatures in the SSD habitat will continue, or whether eventually enterprise SSDs will commoditize like any other storage medium. There’s some debate about this in the industry so far.


January 15, 2009  11:57 AM

Storage Headlines for 01-14-2009

Beth Pariseau

We’re starting up a new weekly podcast, hosted by yours truly, to review the top stories from the past week that you may have missed. Here’s our first installment.

Stories referenced:


January 13, 2009  4:34 PM

Pillar claims SPC-1 supremacy

Beth Pariseau

Pillar Data Systems has published its first system test benchmarks for the Axiom 600 disk array via the Storage Performance Council (SPC). It tested a system with 42 TB of total capacity, including about 10 TB used and mirrored for a total allocation of about 20 TB. The result was 64,992 SPC-1 IOPS.

These numbers come in ahead of competitive systems from IBM, NetApp and EMC. The EMC CX3-40 was tested last January by archrival NetApp, making its SPC benchmarks controversial, but as listed, a 22 TB system with 8.5 TB used and mirrored produced 24,997 SPC-1 IOPS. A NetApp 3170 with 32 TB total and 19.6 TB used and mirrored produced 60,515 SPC-1 IOPS on June 10. A 37.5 TB IBM 5300 with 13.7 TB used and mirrored produced 58,158 SPC-1 IOPS on Sept. 25.

I found it interesting that, with only about a 2,000-IOPS gap between the IBM 5300 and the NetApp 3170, the systems generally performed better the more recently they had been tested. Note also how much of an outlier EMC is, both in capacity used and total capacity. It was an outlier in free space, too, with just under 1 TB unused; IBM and NetApp both left approximately 5 TB of free space in their configurations, and Pillar had 16 TB of free space.
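One way to normalize the comparison is IOPS per used-and-mirrored terabyte, computed straight from the published figures above. A back-of-the-envelope sketch, not an official SPC metric:

    # IOPS per used-and-mirrored TB, from the published SPC-1 results above.
    results = [
        ("Pillar Axiom 600", 64_992, 10.0),
        ("NetApp 3170",      60_515, 19.6),
        ("IBM 5300",         58_158, 13.7),
        ("EMC CX3-40",       24_997,  8.5),
    ]

    for name, iops, used_tb in results:
        print(f"{name:16s} {iops:>6,} SPC-1 IOPS -> {iops / used_tb:>5,.0f} IOPS per used TB")

By that cut Pillar comes out well ahead, at roughly 6,500 IOPS per used TB versus about 3,000 to 4,200 for the others, though the configurations differ enough that it’s only a rough comparison.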

I do have to wonder how much weight users give to these industry benchmarks when selecting a product. NetApp’s submitting EMC systems to SPC, a flap last summer over server virtualization benchmark testing, and continued inconsistency among vendors as to who submits systems for benchmarking all leave good reason to take benchmarks with a grain of salt.


January 12, 2009  5:05 PM

Startup claims 50,000 random read IOPS on 1 TB SSD

Beth Pariseau

A company called pureSilicon came out of stealth last week at CES with new solid state drives in 256 GB, 512 GB and 1 TB capacities. The drives, expected to ship this summer, also include a proprietary 32-channel controller architecture that company founder and president Jason Breakstone said has been clocked at 50,000 random read IOPS.

By comparison, the drives EMC Corp. ships today from STEC are 73 GB and 146 GB. Intel’s X25-E SSDs have 10-channel controllers and are offered in 32 GB and 64 GB capacities, and recently announced enterprise SSDs from Samsung have 8-channel controllers and are offered at 100 GB capacity. This is an already crowded market, but if pureSilicon can do what it says it’s going to do, it’s found some differentiators already.

A 1 TB SSD might grab enterprise customers’ attention, but that drive is manufactured using multi-level cell (MLC) technology. The 256 GB and 512 GB sizes are single-level cell (SLC) drives, which generally have a longer lifecycle and are viewed as more reliable than MLC drives because only one bit of data is stored in each Flash cell. For now, pureSilicon offers the 1 TB MLC drive for enterprise applications with a three-year warranty. Breakstone maintains that the bigger the SSD, the more compelling its value proposition for consolidating large numbers of short-stroked hard drives.

On the other side of the ledger, however, is the expense of SSDs, even though Flash pricing has declined over the last year; cost is part of the reason SSD capacities have stayed small so far. Breakstone said pricing won’t be set for the new 1 TB behemoth until closer to its release date, but with high capacity and high I/O, “our product can perform at the level of a larger array. If you can achieve the same results using a factor of 10 or 100 fewer drives, it’s a win-win.”
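As a sanity check on that consolidation claim, assume a 15K rpm enterprise hard drive delivers somewhere around 180 random IOPS; that per-drive figure is my rule-of-thumb assumption, not pureSilicon’s. The arithmetic looks like this:

    # Rough consolidation arithmetic for the claim above. The per-HDD
    # figure is an assumed rule of thumb, not a vendor spec, and ignores
    # the capacity wasted by short-stroking.
    ssd_read_iops = 50_000   # pureSilicon's claimed random read IOPS
    hdd_read_iops = 180      # assumed for a 15K rpm enterprise hard drive

    print(f"One SSD ~= {ssd_read_iops / hdd_read_iops:.0f} hard drives, on random reads alone")

That works out to roughly 280 drives’ worth of random reads per SSD, under those assumptions.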


January 12, 2009  11:29 AM

IBM changes storage chiefs, but says it’s routine

Beth Pariseau

IBM officials confirmed rumblings from last week that IBM veteran Curtis Tearte has replaced Andy Monshaw as its storage boss.

Tearte takes over as GM of the Storage Systems and Technology Group, after previously heading IBM’s Industry Growth Initiatives, Infrastructure Solutions, Innovation Solutions, and the Industry Systems Division organizations.

The IBM spokesperson who confirmed the move said the change was “part of IBM’s normal job rotation among its senior executives.” Monshaw, who became storage GM in January of 2005, becomes general manager of sales and operations for IBM Japan.

Tearte takes over a storage division that added several pieces last year when it acquired systems startup XIV, VTL vendor Diligent Technologies, and continuous data protection (CDP) software maker FilesX. But Big Blue’s storage strategy hasn’t always been well defined since the acquisitions. 


January 12, 2009  10:26 AM

Seagate drives out its CEO

Dave Raffo

Seagate sent its top two executives packing this morning, and they’ll be followed by 10% of the U.S. staff by the end of the month.

The surprising moves are the latest sign that all is not well with the disk drive maker, which already cut its revenue forecast for last quarter from $3.05 billion to $2.85 billion.

Former CEO and current chairman Stephen Luczo is replacing CEO Bill Watkins, who will stay on as an adviser to Luczo according to Seagate’s news release. What Seagate didn’t put in its release – but added in its SEC filing – is that president Dave Wickersham resigned and will be replaced by current CTO Bob Whitmore.

The SEC filing also confirmed the layoff of 10% of the U.S. workforce, saying the cuts will “impact a broad range of departments, including research and development” and are the result of the troubled economy. Seagate will probably give more details on the executive changes and layoffs when it reports earnings Jan. 21.

Financial analyst Aaron Rakers of Stifel Financial Corp. says the changes show that things might be even worse than anybody thought at Seagate. He says while it’s a good sign that Seagate is making the tough decisions to realign the company after recent struggles, the shakeup could be “a signal that more meaningful negatives are going on within the company.”

Luczo was an investment banker with Bear, Stearns & Co. before serving as Seagate’s CEO from 1998 to 2004. During that period, Seagate went private in 2000 before re-emerging as a public company in 2002. He is also on the board of storage system vendor Xiotech, which Seagate spun out during his term as CEO.


January 8, 2009  10:50 AM

Sun to meld identity management with storage

Beth Pariseau

Sun’s Chief Identity Strategist Sachin Nayyar and I had an interesting discussion today about Sun’s plans to bring together role-based access management with storage provisioning this year.

Nayyar, who was CEO of identity management software maker Vaau when Sun acquired it in late 2007, said that Sun is now looking to integrate role-based identity management software with storage provisioning. So, for example, when a new employee joins a company, provisioning of storage on a shared device could be triggered by a call from the software registering that employee’s identity on the network. When that employee leaves the company, the identity management software could also remove the employee’s data from production storage, migrating it to archival storage or making it a part of the employee’s supervisor’s storage capacity.

Nayyar said the identity management software has some data migration capabilities of its own, so it could handle that process itself, or it could integrate with other elements in the environment. Policies could also be set to migrate an employee’s data to archival storage when a project they’re involved with finishes, or a department they’re in is restructured.
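Since this is still at the concept stage, Sun hasn’t published an API for any of it. Purely as a sketch of the workflow Nayyar describes, with every name in it hypothetical:

    from collections import namedtuple

    # Hypothetical identity lifecycle event; Sun has published no such API.
    IdentityEvent = namedtuple("IdentityEvent", "type user_id supervisor_id")

    def on_identity_event(event):
        """Route identity lifecycle events to storage actions."""
        if event.type == "EMPLOYEE_JOINED":
            provision_share(owner=event.user_id, quota_gb=10)
        elif event.type == "EMPLOYEE_LEFT":
            # Both options Nayyar mentions: archive the departed user's
            # data, or fold what's left into the supervisor's capacity.
            migrate_to_archive(owner=event.user_id)
            reassign_remainder(owner=event.user_id, new_owner=event.supervisor_id)

    def provision_share(owner, quota_gb):
        print(f"[pending storage-admin approval] provision {quota_gb} GB share for {owner}")

    def migrate_to_archive(owner):
        print(f"migrate {owner}'s production data to the archive tier")

    def reassign_remainder(owner, new_owner):
        print(f"reassign {owner}'s remaining data to {new_owner}")

    on_identity_event(IdentityEvent("EMPLOYEE_JOINED", "jdoe", "asmith"))
    on_identity_event(IdentityEvent("EMPLOYEE_LEFT", "jdoe", "asmith"))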

“It’s something we already do today with Outlook,” Nayyar said. “We’re not sure on the details with the open storage software, if it would provide some of the migration capability, but our identity software has the ability to move content.” 

There are always political ramifications within a data center’s staff when one piece of software from a certain discipline (identity management is generally part of the security infrastructure) looks to control a task or device in another (in this case, provisioning storage). However, Nayyar pointed out that users across data centers are already integrating with access management software such as Microsoft’s Active Directory. “Every provisioning process has a set of approvals, and the storage admin has to sign off before anything is triggered,” he said. “It’s similar to what’s done today when an account is created with Active Directory: the administrator has to approve it. It’s not a big jump in the identity space.”

Given the challenges facing Sun of late, and the fact that the idea is still in the “discussion phase” within Sun, as Nayyar put it, it’s probably best to take this with a grain of salt. But as a concept I found it interesting, and I wouldn’t be surprised to see similar offerings emerge from other companies with both storage and security IP, like EMC and IBM. During a conversation I had with EMC CTO Jeff Nick last month, he emphasized the importance of linking data across repositories to individual users.

I can also see this potentially playing a role in multi-tenant cloud environments, particularly in the consumer and SOHO space, where storage needs to be organized according to an individual client’s identity. The automation involved should also appeal to operators of sprawling cloud data centers. Meanwhile, Sun yesterday purchased a Belgian company called Q-Layer, whose software automates the deployment and management of public and private clouds.

