Storage Soup

October 16, 2009  6:13 PM

SNW chatter: primary dedupe, scale-out NAS, cloud offerings expanding

Dave Raffo

Heard and overheard at SNW:

Get ready for a mini-wave of block-based primary deduplication/compression products.

Perhaps the most ambitious primary dedupe product is WhipTail Technologies’ new RaceRunner solid-state disk appliances, which will ship with Exar’s Hifn BitWackr deduplication and compression cards starting around the end of the year.

RaceRunner uses Samsung NAND MLC SSDs and will soon add Intel X25-M SSDs, WhipTail CTO James Candelaria said. “We slide Exar’s layer into our interface for inline-primary deduplication,” Candelaria said.

He says testing shows a dedupe ratio of about 4:1, with higher ratios for database data. WhipTail is also adding dedupe at the same prices as its current products — $49,000 for a 1.5 TB appliance, $79,000 for 3 TB and $129,000 for 6 TB. Next up, Candelaria said, will be InfiniBand support for RaceRunner appliances.
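Some back-of-the-envelope math on what that quoted 4:1 ratio means for the quoted prices. This is just a sketch of the arithmetic, using the figures from the post; the effective-capacity and cost-per-TB functions are my own illustrative helpers, not anything WhipTail publishes.

```python
# Effective capacity and cost per effective TB at a given reduction ratio.
# Ratio (4:1) and appliance prices are the figures quoted in the post.

def effective_capacity_tb(raw_tb: float, dedupe_ratio: float) -> float:
    """Logical data that fits once dedupe/compression is applied."""
    return raw_tb * dedupe_ratio

def cost_per_effective_tb(price_usd: float, raw_tb: float,
                          dedupe_ratio: float) -> float:
    return price_usd / effective_capacity_tb(raw_tb, dedupe_ratio)

appliances = [(49_000, 1.5), (79_000, 3.0), (129_000, 6.0)]
for price, raw in appliances:
    print(f"{raw} TB raw -> {effective_capacity_tb(raw, 4):.0f} TB effective, "
          f"${cost_per_effective_tb(price, raw, 4):,.0f}/effective TB")
```

At 4:1, the $49,000 1.5 TB box effectively holds 6 TB, or roughly $8,200 per effective TB; higher ratios on database data would push that lower.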

Two players who already shrink primary data are preparing to expand their product lines. Storwize is about to go into beta with a Fibre Channel version of its current file compression product, with iSCSI to follow, Storwize CEO Ed Walsh said.

Permabit CEO Tom Cook says his company, which today sells file-based dedupe for “non-tier 1” primary storage, is working on an OEM deal for a block and file deduplication product.

NetApp is the only vendor today that offers block-based dedupe for primary storage. …
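For readers newer to the concept, the block-based dedupe these vendors are building can be illustrated in a few lines: split data into fixed-size blocks, hash each block, and store a block only the first time its hash is seen. This is a toy sketch of the principle only; shipping products (NetApp's, BitWackr, etc.) use far more sophisticated chunking, hashing and metadata handling.

```python
# Toy fixed-block deduplication: store each unique block once, keep an
# ordered "recipe" of hashes from which the original data can be rebuilt.
import hashlib

def dedupe_blocks(data: bytes, block_size: int = 4096):
    store = {}    # hash -> unique block payload
    recipe = []   # ordered hashes to reconstruct the original stream
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        h = hashlib.sha256(block).hexdigest()
        store.setdefault(h, block)   # keep only the first copy
        recipe.append(h)
    return store, recipe

data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # repeated content
store, recipe = dedupe_blocks(data)
print(len(recipe), "logical blocks,", len(store), "unique stored")  # 4 logical, 2 unique
```

Reconstructing is just joining `store[h]` for each hash in the recipe; the ratio of logical to stored blocks is the dedupe ratio vendors quote.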

Hewlett-Packard is planning tweaks to its Ibrix NAS and data deduplication products around the end of the year or early next year, HP director of marketing for unified computing Lee Johns said.

Johns said HP so far has been selling Ibrix as software-only, just as Ibrix sold the product before HP acquired the scale-out NAS vendor in July. But he says HP will announce an HP-branded Ibrix product around the end of the year. “We’ll predominantly drive Ibrix as an appliance model,” he said. “We’ll focus on packaging it with other HP solutions.”

One of those other solutions is the LeftHand iSCSI SAN platform. Johns said Ibrix partner Dell had success selling Ibrix in front of its EqualLogic iSCSI SANs, and HP will probably do the same.

On the deduplication front, Johns says HP has been successful selling its Sepaton-driven Virtual Library Systems VTL in the enterprise but “in the midrange, we’ve been a little invisible. That’s an area we will be focusing on at the end of the year.” By the midrange, he was referring to the D2D platform that runs HP’s home-grown software as part of HP’s two-tier dedupe strategy. …

If cloud storage is such a hot new technology, how come it’s been around for a decade or more?

“We’ve been doing what people today consider the cloud, since 2000,” said Bycast CEO Moe Kermani, whose company’s StorageGrid clustered NAS software is frequently mentioned as a building block for cloud providers. “I never knew anybody who got up in the morning and said, ‘I want to buy the cloud today.’”

Bycast customer Tony Langenstein, IT director of infrastructure at Iowa Health System, gave an SNW presentation called “Disaster Recovery in the Storage Cloud.” Langenstein said he’s been using Bycast software since 2005 but “we just started calling it the cloud at the beginning of this year. We had a cloud, I just didn’t know it.”

October 16, 2009  3:54 PM

CommVault fires back at EMC’s Slootman

Beth Pariseau

Former Data Domain CEO Frank Slootman, now president of EMC’s data backup and recovery division, sat down for a Q&A that’s been getting some attention from the industry, particularly from deduplication competitors.

Among those competitors, one with a contentious relationship with EMC/Data Domain is former partner CommVault, with whom Data Domain had a messy breakup after CommVault introduced its own deduplication with Simpana 8.

Here’s what Slootman had to say about them:

SearchDataBackup: Will you continue to work closely with Symantec Corp.’s OpenStorage (OST) API now that you’re EMC?

Slootman: Yes. I’m not throwing my partners under the bus. We’ll compete, but we’re all competitors and partners these days. We won’t screw them. We’ll screw other companies, like CommVault. We [Data Domain] treated them as a good partner and they came after us.

In an email to Storage Soup this week, CommVault vice president of marketing and business development Dave West had this response:

As I said back in June, I applaud Frank and Data Domain’s ability to create momentum for deduplication and a tremendous return for its shareholders. In the Dave Raffo piece, Frank calls out CommVault simply because we’re giving them a run for their money. Simpana, with built-in dedupe, works really well, and we are winning business. Now, I find it ludicrous to suggest a product vision that forces a customer to deploy 3 or more disparate products to achieve basic data protection. (Pile on more products for replication, encryption, archive and SRM).  At the end of the day, customers want less complexity, improved operational efficiency and ultimately, to spend less money. That means fewer, not more solutions. Less hardware and smarter software. EMC’s product portfolio is both complicated and costly for customers, so buyer beware. Also, in our opinion, this interview should raise some serious flags among the thousands of already nervous NetWorker customers out there looking for reassurance in the wake of the Data Domain acquisition.

I asked West to elaborate on the “red flags” about NetWorker, and he pointed to this statement by Slootman in another part of the interview:

SearchDataBackup: If Avamar is the future of data backup software, where does that leave NetWorker?

Slootman: Well, Avamar is augmenting NetWorker in a lot of places. People are moving a good part of their workload to Avamar, but not all. They’re still running applications like big, fat databases on traditional backup software. NetWorker can support conventional backup on tape and mixed media and people can integrate it with Data Domain.

“Former EMC customers are telling us that there is no real investment or innovation going into the Networker product and they’re tired of it,” West added.

This dedupe feud will get really interesting if CommVault partner Dell Inc. starts selling Data Domain, which is a likely scenario because Dell sells many of EMC’s storage products. CommVault’s Simpana is currently a big piece of Dell’s deduplication strategy.

October 16, 2009  7:25 AM

10-15-2009 Storage Headlines

Beth Pariseau

Another busy week, and I make my triumphant (if slightly raspy) return to the podcast.

(0:23) Hitachi implicated in Sidekick outage
EMC denies blog claim that its SAN was involved in Sidekick outage
Microsoft says it has recovered Sidekick data

(1:42) Storage clouds gather over Storage Networking World

(3:30) EMC’s Slootman: No data deduplication for Disk Library virtual tape library

(4:23) IBM adds STEC SSDs to its SAN Volume Controller (SVC) storage virtualization device

(6:16) 3PAR fattens its thin provisioning arsenal

October 15, 2009  1:33 PM

Microsoft says it has recovered Sidekick data

Beth Pariseau

The Sidekick data-loss debacle may be drawing to a close.

According to a post on Microsoft’s website by corporate vice president Roz Ho,

We are pleased to report that we have recovered most, if not all, customer data for those Sidekick customers whose data was affected by the recent outage. We plan to begin restoring users’ personal data as soon as possible, starting with personal contacts, after we have validated the data and our restoration plan. We will then continue to work around the clock to restore data to all affected users, including calendar, notes, tasks, photographs and high scores, as quickly as possible.

Ho also went on to provide some further details as to what caused the outage and how it was handled:

We have determined that the outage was caused by a system failure that created data loss in the core database and the back-up. We rebuilt the system component by component, recovering data along the way. This careful process has taken a significant amount of time, but was necessary to preserve the integrity of the data…we have made changes to improve the overall stability of the Sidekick service and initiated a more resilient backup process to ensure that the integrity of our database backups is maintained.

All’s well that ends well, but I do wonder if this will make people more conscientious about making local copies of important data sent to a public cloud.

October 14, 2009  8:51 PM

New standards emerge for power consumption testing and SAS connectivity

Beth Pariseau

In a week chock full of product news from Storage Networking World (SNW) and elsewhere, some new standards have slipped in under the radar that may become important once the dust settles.

The first of these is the announcement of a new Storage Performance Council (SPC) benchmark for testing the power consumption of storage devices in the data center. The new SPC-1/E spec follows the SPC-1C/E spec announced in June. Where the SPC-1C/E spec covered storage components and small subsystems (limited to a maximum of 48 storage devices in no larger than a 4U enclosure profile), the SPC-1/E spec expands that support to include larger, more complex storage configurations.

According to an SPC presentation on the new benchmark, “SPC-1/E is applicable to any SPC-1 storage configuration that can be measured with a single SPC approved power meter/analyzer.”

For more on how the SPC-1C/E and SPC-1/E benchmarks work, see our story on the SPC-1C/E announcement. Users should especially be aware of the parts of the benchmark calculation that can only be specified by vendors.

Still, even an approximate or idealized lab result for the power consumption of storage systems would be an improvement over the tools currently available to reliably spec it. Power is increasingly a key cost factor that users in economically strapped times are looking to cut.
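To see why a steady-state wattage figure matters to the data center budget, the arithmetic is simple. The wattage and electricity rate below are made-up illustrative numbers, not SPC data or any vendor's spec.

```python
# Annual electricity cost of a storage system at a given average draw.
# Inputs are hypothetical; the formula is just kWh times rate.

def annual_energy_cost(avg_watts: float, usd_per_kwh: float) -> float:
    hours_per_year = 24 * 365
    kwh = avg_watts / 1000 * hours_per_year
    return kwh * usd_per_kwh

# e.g. a subsystem drawing 500 W around the clock at $0.10/kWh:
print(f"${annual_energy_cost(500, 0.10):,.2f} per year")  # $438.00 per year
```

Cooling overhead typically adds a comparable amount on top, which is why a benchmarked watts-per-IOPS figure, even an idealized one, gives buyers something concrete to compare.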


Speaking of cutting costs, Serial Attached SCSI (SAS) devices are widely regarded as the cheaper choice of the future to replace Fibre Channel systems. With 6 Gbps SAS products now beginning to ship, the SCSI Trade Association laid out its roadmap for the future of connectivity between Serial Attached SCSI drives and other elements of the infrastructure.

3 Gbps SAS devices have connected via InfiniBand-style connectors; the Mini-SAS HD connector will be used with most 6 Gbps devices. The new roadmap laid out this week specifies that the Mini-SAS HD connector will be the hardware of choice going forward for all types of connectivity into SAS devices.

Why do you care? Because the development plans for the Mini-SAS HD connector going forward will allow it to serve optical, active and passive copper cables with one connector device, and automatically detect the type of cable it’s attached to — meaning that by the time 12 Gbps SAS rolls around, less hardware will need to be ripped and replaced to support it. Another thing the connector will support in the future is managed connections, meaning a tiny bit of memory in the connector itself that allows the devices to be queried for reporting and monitoring.

The ability to connect SAS devices over optical and active copper cables is a pretty big deal. Cable length and expandability limitations have improved significantly with SAS-2, but native cable lengths currently remain limited to 10 meters; optical cables can stretch as far as 100 meters, and active copper (so called because it contains transceivers that boost signals) to 20 meters. While SAS-2 is already making data center SAS subsystems a reality, SAS will need those more robust connectivity attributes to compete directly with Fibre Channel.

October 14, 2009  8:40 PM

Quantum: EMC customers still want us

Dave Raffo

Quantum’s chief marketing officer said it was news to her that EMC customers are swapping out Quantum’s deduplication software installed on EMC Disk Libraries, as EMC division president Frank Slootman claims. According to Quantum CMO Janae Lee, EMC customers have continued to buy Quantum software with DLs even since EMC spent $2.1 billion on Data Domain.

“We don’t have visibility to the swapouts he’s talking about,” Lee said, “but we do see their sales reports and customers are continuing to install what we’re offering. It shows a difference in our approach to Data Domain’s approach. We don’t feel deduplication should be a disrupting standalone product. We’re leveraging installed hardware. There’s a basic difference of opinion about how deduplication fits.”

EMC has sold Quantum software with its Disk Libraries as part of an OEM deal signed last year.

October 13, 2009  8:28 PM

EMC denies blog claim that its SAN was involved in Sidekick outage

Beth Pariseau

EMC officials are saying today that a new blog post, which cites an anonymous source as saying an EMC storage area network (SAN) was involved in the recent Sidekick outage, is inaccurate.

According to the blog post, which appeared at RoughlyDrafted Magazine:

To the engineers familiar with Microsoft’s internal operations who spoke with us, that suggests two possible scenarios. First, that Microsoft decided to suddenly replace Danger’s existing infrastructure with its own, and simply failed to carry this out. Danger’s existing system to support Sidekick users was built using an Oracle Real Application Cluster, storing its data in a SAN (storage area network) so that the information would be available to a cluster of high availability servers. This approach is expressly designed to be resilient to hardware failure.


Danger’s Sidekick data center had “been running on autopilot for some time, so I don’t understand why they would be spending any time upgrading stuff unless there was a hardware failure of some kind,” wrote the insider. Given Microsoft’s penchant “for running the latest and greatest,” however, “I wouldn’t be surprised if they found out that [storage vendor] EMC had some new SAN firmware and they just had to put it on the main production servers right away.”

Reached for comment today, an EMC spokesperson said no EMC products were involved.

Another blog yesterday also cited an anonymous source in saying that a SAN upgrade project allegedly involved in the outage was outsourced to Hitachi, but did not identify the brand of SAN involved. Multiple HDS spokespeople have not returned phone calls and emails seeking comment since yesterday.

A Microsoft spokesperson made the following comment for Storage Soup:

I can clarify that the Sidekick runs on Danger’s proprietary service that Microsoft inherited when it acquired Danger in 2008. The Danger service is built on a mix of Danger created technologies and 3rd party technologies. However, other than that we do not have anything else to share right now.  

It actually may not matter at the end of the day whose SAN it was — it seems it was human error (or, as the RoughlyDrafted blog goes on to speculate, possible sabotage) responsible for the outage. The RoughlyDrafted blog goes on to claim:

A variety of “dogfooding” or aggressive upgrades could have resulted in data failure, the source explained, “especially when the right precautions haven’t been taken and the people you hired to do the work are contractors who might not know what they’re doing.” The Oracle database Danger was using was “definitely one of the more confusing and troublesome to administer, from my limited experience. It’s entirely possible that they weren’t backing up the ‘single copy’ of the database properly, despite the redundant SAN and redundant servers.”

“Just because there may have been an error during a SAN upgrade doesn’t mean the guy’s an idiot or that the storage vendor’s stuff doesn’t work. The fundamental question here is where are the backups?” said backup expert W. Curtis Preston.

This remains an open question as of this hour, as a new statement issued by T-Mobile suggests there may be some data that’s recoverable: “We…remain hopeful that for the majority of our customers, personal content can be recovered.”

A New York Times report released this week cited a T-Mobile official as saying data on the Sidekick server and its backup server were corrupted.

But it also can’t be assumed that the cloud service made thorough secondary copies of data. Even slightly higher-end online PC backup services like Carbonite and SpiderOak, previously questioned about the geographic redundancy available for their services should their primary data centers fail (questions that followed a high-profile outage and data-loss lawsuit for Carbonite), have cited costs and pricing pressures as reasons for not offering that level of redundancy to consumer customers.

Another important point in all this is that users might not be losing data if they synced data to their PCs as well as the cloud. T-Mobile offers an IntelliSync service for a fee to sync data between the Sidekick and the PC; there are also free synchronization clients available online. Users would’ve had to have those services in place prior to the outage, however.

“The bottom line is that a free cloud service shouldn’t be your only copy of data,” Preston said.

October 12, 2009  7:58 PM

Hitachi implicated in Sidekick outage

Beth Pariseau

News broke this morning of an outage for users of the Sidekick mobile smartphone, in which T-Mobile warned users of the device not to power down their phones or personal data would be irretrievably lost, thanks to a server outage at Danger, a Microsoft subsidiary that supports the Sidekick.

Meanwhile, Engadget has blogged that the storage and backup infrastructure at Danger was to blame for the outage:

Alleged details on the events leading up to Danger’s doomsday scenario are starting to come out of the woodwork, and it all paints a truly embarrassing picture: Microsoft, possibly trying to compensate for lost and / or laid-off Danger employees, outsources an upgrade of its Sidekick SAN to Hitachi, which — for reasons unknown — fails to make a backup before starting. Long story short, the upgrade runs into complications, data is lost, and without a backup to revert to, untold thousands of Sidekick users get shafted in an epic way rarely seen in an age of well-defined, well-understood IT strategies. 

If confirmed, it would be the second high-profile outage Hitachi has been associated with in the last six months. An HDS SAN was also implicated when Barclays ATMs in the UK stopped working in June.

Regardless of the source of the failure, outages like this usually draw attention to the fundamental risk of cloud computing — the things that can happen when all of users’ data “eggs” are put in one service provider’s “basket.”

Requests for comment are in to Microsoft and HDS but have not yet been returned. Stay tuned.

October 12, 2009  3:32 PM

Oracle OpenWorld keynotes emphasize hardware

Beth Pariseau

Oracle OpenWorld kicked off yesterday in San Francisco (at the Moscone Center, the same place VMworld was held). Sun Microsystems Chairman and co-founder Scott McNealy and Oracle founder and CEO Larry Ellison took the stage for keynotes Sunday night, highlights of which were available on Oracle’s website this morning.

For perhaps the first time at an official public event, the word “storage” was uttered by an exec from the merging companies, who have already assured the world that server hardware development will continue.

According to McNealy,

If you think about the Sun technology that we’re bringing to the party, here, it’s the data center. It’s the servers, the storage, the networking, the infrastructure software, all the pieces, all of the executable environment within the cloud, the data center, the distributed computing environment, whatever else you want to say, and then you bring in the database, and the applications and ERP and middleware capabilities and developer tool capabilities of Oracle, and you have a very nice data center. A very robust, very scalable…enterprise data center.

This end-to-end “stack” vision would be in keeping with the other big players in the market, which are beginning to offer prepackaged product bundles and looking to be soup-to-nuts suppliers to the enterprise data center. Oracle’s competitive landscape for end-to-end stacks includes Cisco Systems Inc., IBM Corp., Hewlett-Packard Co. (HP) and Dell Inc.

There are advantages, Ellison said, in a company being able to control the engineering of both hardware and software. “We are not selling the hardware business; no part of the hardware business are we selling,” Ellison said in his keynote, though he went on to specifically discuss mostly server technologies like Sun’s SPARC chips. (Here’s where Sun might point out that it recently merged servers and storage together in terms of its engineering departments and in terms of its strategic thinking with Amber Road…)

So the biggest question for the storage hardware market with this merger still comes down to tape. Some of the competitive “stack” offerings like those from IBM include tape — in fact, with its latest Information Archive appliance, IBM is offering tape as an option managed by the GPFS global namespace, a setup highly reminiscent of the way Sun’s SAM-FS can manage data in disk repositories as well as StorageTek tape libraries.

Judging by the speeches from McNealy and Ellison, it seems no hardware product is being taken completely off the table yet, but what the newly merged entity will do with tape storage hardware specifically remains uncertain at this point.

October 9, 2009  2:06 PM

10-08-2009 Storage Headlines

Beth Pariseau

I am sick this week, with a croaky voice, so my colleague Chris Griffin kindly filled in for me on this podcast. It’s a long’un this week — plenty of news going out this time of year.

Stories referenced:

(0:21) Symantec launches file platform for cloud storage

(2:20) VMware upgrades Site Recovery Manager for disaster recovery

(3:24) Avere looks to optimize performance of tiered storage with FXT Series
Storspeed comes out of stealth with SP5000 NAS caching and monitoring appliance

(7:30) IBM offers Smart Business Storage Cloud and Information Archive for cloud storage, data archiving

(12:00) Iomega launches ix2-200 NAS desktop backup appliance with replication and iSCSI support

(14:00) i365 launches EVault Offsite Replication cloud data backup and disaster recovery service
