Storage Soup


October 20, 2009  3:35 PM

QLogic ‘swipes’ another FCoE win

Dave Raffo

Another piece of the Fibre Channel over Ethernet (FCoE) puzzle was put in place today when IBM said it will ship QLogic’s 8142 converged network adapters (CNAs) inside the Power Systems server platform for native FCoE connectivity.

IBM’s p series servers run Unix and Linux operating systems. QLogic reps are crowing over the design win because it comes in its biggest rival’s backyard: IBM ships Emulex Fibre Channel HBAs exclusively with its p series, but has gone to Emulex’s rival for FCoE adapters. Satish Lakshmanan, QLogic’s director of product marketing for host solutions, says the 8142 is the only CNA that will ship with the IBM p series.

The 8142 is part of QLogic’s 8100 Series single-chip CNA platform that handles 10-Gigabit Ethernet traffic and has an integrated FCoE offload engine.

“This gives them a foot in the door in the p series,” Enterprise Strategy Group analyst Bob Laliberte said of QLogic.

Laliberte says the long-time FC HBA rivals have taken different paths in the early days of converged networking, with QLogic racking up FCoE design wins while Emulex picks up 10-Gigabit Ethernet wins with its universal converged network adapters (UCNAs).

“QLogic was first to market with a single-chip architecture for FCoE that’s dramatically lower in power consumption, footprint and heat,” Laliberte said. “Being first out of the gate gives them a chance to get design wins in FCoE. Emulex is going in a different direction. You see Emulex getting wins for 10-gigE and down the road partners have an option to turn on FCoE.”

“This isn’t the only customer we’ve swiped away,” Lakshmanan said. “There’s more that will be coming.”

NetApp said in August that it would rebrand QLogic’s 8152 CNA as a built-in adapter for its FAS storage arrays. NetApp has also qualified Brocade’s 1020 CNA, but not yet Emulex’s.

IBM also sells QLogic’s 8100 CNAs on its System x and BladeCenter servers, and EMC qualified QLogic’s CNA on its Symmetrix, Clariion and Celerra NS storage platforms.

These are still early days for FCoE and converged adapters, though. Widespread adoption of FCoE on servers isn’t expected before 2011, and it will likely take years after that for it to show up in any volume on the storage side.

October 19, 2009  4:24 PM

Data Domain rolls out new midrange dedupe boxes

Dave Raffo

Continuing its strategy of upgrading its data deduplication appliances with faster processors and larger drives, EMC’s Data Domain rolled out bigger and faster midrange boxes today.

Data Domain’s DD610 and DD630 systems replace the DD510 and DD530 models, and the DD140 aimed primarily at remote offices replaces the DD120. The new systems have dual-core Intel Xeon processors and support 500 GB SATA drives.

Data Domain claims the new systems nearly double the performance of the DD500 series boxes they replace. The DD610 ingests data at up to 675 GB per hour, with 6 TB of raw and 3.98 TB of usable capacity. The DD630 performs at up to 1.1 TB per hour with 12 TB of raw and 8.4 TB of usable capacity. Factoring in deduplication, Data Domain claims the DD610 can protect up to 195 TB and the DD630 up to 420 TB. The DD140 ingests data at up to 450 GB per hour, holds 1.5 TB raw and 860 GB of usable data, and protects up to 43 TB.
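To put those figures in perspective, the gap between usable and protected capacity implies the cumulative data-reduction ratio Data Domain is assuming, and the ingest rates indicate how long it would take to fill a box. Here is a quick back-of-the-envelope calculation in Python (my own arithmetic on the published specs, not Data Domain’s methodology):

# Rough arithmetic on the published DD140/DD610/DD630 figures (not vendor methodology).
specs = {
    # model: (usable TB, max protected TB, ingest rate in GB/hour)
    "DD140": (0.86, 43, 450),
    "DD610": (3.98, 195, 675),
    "DD630": (8.4, 420, 1100),
}

for model, (usable_tb, protected_tb, gb_per_hour) in specs.items():
    implied_ratio = protected_tb / usable_tb          # logical data per physical TB
    hours_to_fill = (usable_tb * 1000) / gb_per_hour  # time to ingest one usable capacity
    print(f"{model}: ~{implied_ratio:.0f}:1 implied reduction, "
          f"~{hours_to_fill:.1f} hours to ingest usable capacity")

All three boxes work out to roughly 50:1, the sort of cumulative reduction vendors typically assume across many retained backups rather than for a single pass.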

It has long been Data Domain’s position that its inline deduplication is the best method for deduping data, and that faster systems will drive efficiencies. “Our bottleneck is always the CPU because of the way we do inline deduplication,” Data Domain product marketing director Shane Jackson said. “Our first system went 40 megabytes per hour, now we’re at 1.1 terabytes per hour for a midrange system.”
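Jackson’s point about the CPU follows from how inline deduplication works in general: every incoming chunk has to be fingerprinted and looked up before anything is written, so ingest speed scales with how fast the processor gets through that work. Below is a minimal sketch of the general technique in Python, using fixed-size chunks and an in-memory index; it illustrates the idea only and is not Data Domain’s actual implementation:

import hashlib
import io

def inline_dedupe(stream, chunk_size=8192, index=None, store=None):
    # Toy inline deduplication: fingerprint every chunk before it is written.
    # Real appliances use variable-size chunking and keep most of the index on
    # disk; the point here is that hashing and lookups sit in the ingest path,
    # which is why the CPU becomes the bottleneck.
    index = {} if index is None else index   # fingerprint -> location of stored chunk
    store = [] if store is None else store   # stand-in for disk: unique chunks only
    bytes_written = 0
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        fp = hashlib.sha1(chunk).digest()    # CPU-bound work on every incoming chunk
        if fp not in index:                  # only previously unseen data hits "disk"
            index[fp] = len(store)
            store.append(chunk)
            bytes_written += len(chunk)
    return bytes_written, index, store

# Back up the same data twice; the repeat backup writes nothing new.
data = b"example backup stream " * 50000
first, idx, disk = inline_dedupe(io.BytesIO(data))
second, _, _ = inline_dedupe(io.BytesIO(data), index=idx, store=disk)
print(first, second)   # the second run writes 0 bytes

Running the same stream through twice writes nothing the second time, which is the effect that lets a few terabytes of disk advertise far larger protected capacities.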

The DD610 with 3.5 TB and NFS and CIFS connectivity costs $22,000. The DD630 with 3.5 TB costs $50,000 and the DD140 with NFS, CIFS and Replicator replication software costs $13,900.

Data Domain has kept busy upgrading its systems this year, launching the highest end of its midrange family, the DD660, in March and bringing out the DD880 for the enterprise in July. The DD660 and DD880 are quad-core systems.

The $2.1 billion acquisition by EMC in July hasn’t slowed Data Domain down, although the deal may be changing the parent company’s backup strategy. Data Domain upgraded its operating system to add cascaded replication last month before pushing out this system upgrade. Financial analysts say the transition hasn’t hurt sales, either. Aaron Rakers of Stifel Nicolaus Equity Research wrote in an EMC earnings preview (EMC announces earnings Thursday) that Data Domain revenue could be as high as $80 million for the quarter, well above the $65 million forecast.

“Our checks have been very positive on EMC’s Data Domain momentum post the acquisition,” he wrote. “Checks suggest that EMC has largely left the Data Domain go-to-market strategy in place.”


October 16, 2009  6:13 PM

SNW chatter: primary dedupe, scale-out NAS, cloud offerings expanding

Dave Raffo

Heard and overheard at SNW:

Get ready for a mini-wave of block-based primary deduplication/compression products.

Perhaps the most ambitious primary dedupe product is WhipTail Technologies’ new RaceRunner solid-state disk appliances, which will ship with Exar’s Hifn BitWackr deduplication and compression cards starting around the end of the year.

RaceRunner uses Samsung NAND MLC SSDs and will soon add Intel X25-M SSDs, WhipTail CTO James Candelaria said. “We slide Exar’s layer into our interface for inline-primary deduplication,” Candelaria said.

He says testing shows a dedupe ratio of about 4:1, with higher ratios for database data. WhipTail is also adding dedupe at the same prices as its current products — $49,000 for a 1.5 TB appliance, $79,000 for 3 TB and $129,000 for 6 TB. Next up, Candelaria said, will be InfiniBand support for RaceRunner appliances.
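For context, the quoted 4:1 ratio changes the effective price per terabyte considerably. The snippet below is purely illustrative arithmetic on the list prices and ratio mentioned above, not WhipTail’s own pricing math:

# Illustrative arithmetic using the quoted list prices and the ~4:1 dedupe ratio.
appliances = [(49_000, 1.5), (79_000, 3.0), (129_000, 6.0)]  # (price in USD, raw TB)
dedupe_ratio = 4.0

for price, raw_tb in appliances:
    effective_tb = raw_tb * dedupe_ratio
    print(f"{raw_tb:g} TB appliance: ${price / raw_tb:,.0f} per raw TB, "
          f"about ${price / effective_tb:,.0f} per effective TB at 4:1")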

Two players who already shrink primary data are preparing to expand their product lines. Storwize is about to go into beta with a Fibre Channel version of its current file compression product with iSCSI to follow, Storwize CEO Ed Walsh said.

Permabit CEO Tom Cook says his company, which today sells file-based dedupe for “non-tier 1” primary storage, is working on an OEM deal for a block and file deduplication product.

NetApp is the only vendor today that offers block-based dedupe for primary storage. …

Hewlett-Packard is planning tweaks to its Ibrix NAS and data deduplication products around the end of the year or early next year, HP director of marketing for unified computing Lee Johns said.

Johns said HP so far has been selling Ibrix as software-only, just as Ibrix sold the product before HP acquired the scale-out NAS vendor in July. But he says HP will announce an HP-branded Ibrix product around the end of the year. “We’ll predominantly drive Ibrix as an appliance model,” he said. “We’ll focus on packaging it with other HP solutions.”

One of those other solutions is the LeftHand iSCSI SAN platform. Johns said Ibrix partner Dell had success selling Ibrix in front of its EqualLogic iSCSI SANs, and HP will probably do the same.

On the deduplication front, Johns says HP has been successful selling its Sepaton-driven Virtual Library Systems VTL in the enterprise but “in the midrange, we’ve been a little invisible. That’s an area we will be focusing on at the end of the year.” By the midrange, he was referring to the D2D platform that runs HP’s home-grown software as part of HP’s two-tier dedupe strategy. …

If cloud storage is such a hot new technology, how come it’s been around for a decade or more?

“We’ve been doing what people today consider the cloud, since 2000,” said Bycast CEO Moe Kermani, whose company’s StorageGrid clustered NAS software is frequently mentioned as a building block for cloud providers. “I never knew anybody who got up in the morning and said, ‘I want to buy the cloud today.’”

Bycast customer Tony Langenstein, IT director of infrastructure at Iowa Health System, gave an SNW presentation called “Disaster Recovery in the Storage Cloud.” Langenstein said he’s been using Bycast software since 2005 but “we just started calling it the cloud at the beginning of this year. We had a cloud, I just didn’t know it.”


October 16, 2009  3:54 PM

CommVault fires back at EMC’s Slootman

Beth Pariseau

Former Data Domain CEO Frank Slootman, now president of EMC’s data backup and recovery division, sat down for a Q&A with SearchDataBackup.com that’s been getting some attention from the industry, particularly other deduplication competitors.

Among those competitors, one with a contentious relationship with EMC/Data Domain is former partner CommVault, with whom Data Domain had a messy breakup after CommVault introduced its own deduplication with Simpana 8.

Here’s what Slootman had to say about them:

SearchDataBackup: Will you continue to work closely with Symantec Corp.’s OpenStorage (OST) API now that you’re EMC?

Slootman: Yes. I’m not throwing my partners under the bus. We’ll compete, but we’re all competitors and partners these days. We won’t screw them. We’ll screw other companies, like CommVault. We [Data Domain] treated them as a good partner and they came after us.

In an email to Storage Soup this week, CommVault vice president of marketing and business development Dave West had this response:

As I said back in June, I applaud Frank and Data Domain’s ability to create momentum for deduplication and a tremendous return for its shareholders. In the Dave Raffo piece, Frank calls out CommVault simply because we’re giving them a run for their money. Simpana, with built-in dedupe, works really well, and we are winning business. Now, I find it ludicrous to suggest a product vision that forces a customer to deploy 3 or more disparate products to achieve basic data protection. (Pile on more products for replication, encryption, archive and SRM).  At the end of the day, customers want less complexity, improved operational efficiency and ultimately, to spend less money. That means fewer, not more solutions. Less hardware and smarter software. EMC’s product portfolio is both complicated and costly for customers, so buyer beware. Also, in our opinion, this interview should raise some serious flags among the thousands of already nervous NetWorker customers out there looking for reassurance in the wake of the Data Domain acquisition.

I asked West to elaborate on the “red flags” about NetWorker, and he pointed to this statement by Slootman in another part of the interview:

SearchDataBackup: If Avamar is the future of data backup software, where does that leave NetWorker?

Slootman: Well, Avamar is augmenting NetWorker in a lot of places. People are moving a good part of their workload to Avamar, but not all. They’re still running applications like big, fat databases on traditional backup software. NetWorker can support conventional backup on tape and mixed media and people can integrate it with Data Domain.

“Former EMC customers are telling us that there is no real investment or innovation going into the Networker product and they’re tired of it,” West added.

This dedupe feud will get really interesting if CommVault partner Dell Inc. starts selling Data Domain, which is a likely scenario because Dell already sells many of EMC’s storage products. CommVault’s Simpana is currently a big piece of Dell’s deduplication strategy.


October 16, 2009  7:25 AM

10-15-2009 Storage Headlines

Beth Pariseau

Another busy week, and I make my triumphant (if slightly raspy) return to the podcast.

(0:23) Hitachi implicated in Sidekick outage
EMC denies blog claim that its SAN was involved in Sidekick outage
Microsoft says it has recovered Sidekick data

(1:42) Storage clouds gather over Storage Networking World

(3:30) EMC’s Slootman: No data deduplication for Disk Library virtual tape library

(4:23) IBM adds STEC SSDs to its SAN Volume Controller (SVC) storage virtualization device

(6:16) 3PAR fattens its thin provisioning arsenal


October 15, 2009  1:33 PM

Microsoft says it has recovered Sidekick data

Beth Pariseau

The Sidekick data-loss debacle may be drawing to a close.

According to a post on Microsoft’s website by corporate vice president Roz Ho,

We are pleased to report that we have recovered most, if not all, customer data for those Sidekick customers whose data was affected by the recent outage. We plan to begin restoring users’ personal data as soon as possible, starting with personal contacts, after we have validated the data and our restoration plan. We will then continue to work around the clock to restore data to all affected users, including calendar, notes, tasks, photographs and high scores, as quickly as possible.

Ho also went on to provide some further details as to what caused the outage and how it was handled:

We have determined that the outage was caused by a system failure that created data loss in the core database and the back-up. We rebuilt the system component by component, recovering data along the way. This careful process has taken a significant amount of time, but was necessary to preserve the integrity of the data…we have made changes to improve the overall stability of the Sidekick service and initiated a more resilient backup process to ensure that the integrity of our database backups is maintained.

All’s well that ends well, but I do wonder if this will make people more conscientious about making local copies of important data sent to a public cloud.


October 14, 2009  8:51 PM

New standards emerge for power consumption testing and SAS connectivity

Beth Pariseau

In a week chock full of product news from Storage Networking World (SNW) and elsewhere, some new standards have slipped in under the radar that may become important once the dust settles.

The first of these is the announcement of a new Storage Performance Council (SPC) benchmark for testing the power consumption of storage devices in the data center. The new SPC-1/E spec follows the SPC-1C/E spec announced in June. Where SPC-1C/E covered storage components and small subsystems (limited to a maximum of 48 storage devices in no larger than a 4U enclosure profile), SPC-1/E expands that coverage to larger, more complex storage configurations.

According to an SPC presentation on the new benchmark, “SPC-1/E is applicable to any SPC-1 storage configuration that can be measured with a single SPC approved power meter/analyzer.”

For more on how the SPC-1C/E and SPC-1/E benchmarks work, see our story on the SPC-1C/E announcement. Users should especially be aware of the parts of the benchmark calculation that can only be specified by vendors.

Still, even an approximate or idealized lab result for storage system power consumption would be an improvement over the tools currently available to reliably spec it. Power is increasingly a key cost factor that data centers in economically strapped times are looking to cut.

***

Speaking of cutting costs, Serial Attached SCSI (SAS) devices are widely regarded as the cheaper choice of the future to replace Fibre Channel systems. With 6 Gbps SAS products now beginning to ship, the SCSI Trade Association laid out its roadmap for the future of connectivity between Serial Attached SCSI drives and other elements of the infrastructure.

3 Gbps SAS devices connect via InfiniBand-style connectors, while the Mini-SAS HD connector will be used with most 6 Gbps devices. The new roadmap laid out this week specifies that the Mini-SAS HD connector will be the hardware of choice going forward for all types of connectivity into SAS devices.

Why do you care? Because the development plans for the Mini-SAS HD connector will allow it to serve optical, active and passive copper cables with a single connector device, and to automatically detect the type of cable it’s attached to — meaning that by the time 12 Gbps SAS rolls around, less hardware will need to be ripped and replaced to support it. The connector will also support managed connections in the future, meaning a tiny bit of memory in the connector itself that allows the devices to be queried for reporting and monitoring.

The ability to connect SAS devices over optical and active copper cables is a pretty big deal — cable length and expandability limitations have improved significantly with SAS-2, but native cable lengths currently remain limited to 10 meters. While that is already making data center SAS subsystems a reality, SAS will need more robust connectivity to compete directly with Fibre Channel. Optical cables can stretch as far as 100 meters, and active copper (so called because it contains transceivers that boost signals) to 20 meters.


October 14, 2009  8:40 PM

Quantum: EMC customers still want us

Dave Raffo

Quantum’s chief marketing officer said it was news to her that EMC customers are swapping out Quantum’s deduplication software installed on EMC Disk Libraries, as EMC division president Frank Slootman claims. According to Quantum CMO Janae Lee, EMC customers have continued to buy Quantum software with Disk Libraries even after EMC spent $2.1 billion on Data Domain.

“We don’t have visibility to the swapouts he’s talking about,” Lee said, “but we do see their sales reports and customers are continuing to install what we’re offering. It shows a difference in our approach to Data Domain’s approach. We don’t feel deduplication should be a disrupting standalone product. We’re leveraging installed hardware. There’s a basic difference of opinion about how deduplication fits.”

EMC has sold Quantum software with its Disk Libraries as part of an OEM deal signed last year.


October 13, 2009  8:28 PM

EMC denies blog claim that its SAN was involved in Sidekick outage

Beth Pariseau

EMC officials are saying today that a new blog post, which cites an anonymous source as saying an EMC storage area network (SAN) was involved in the recent Sidekick outage, is inaccurate.

According to the blog post, which appeared at RoughlyDrafted Magazine:

To the engineers familiar with Microsoft’s internal operations who spoke with us, that suggests two possible scenarios. First, that Microsoft decided to suddenly replace Danger’s existing infrastructure with its own, and simply failed to carry this out. Danger’s existing system to support Sidekick users was built using an Oracle Real Application Cluster, storing its data in a SAN (storage area network) so that the information would be available to a cluster of high availability servers. This approach is expressly designed to be resilient to hardware failure.

[..]

Danger’s Sidekick data center had “been running on autopilot for some time, so I don’t understand why they would be spending any time upgrading stuff unless there was a hardware failure of some kind,” wrote the insider. Given Microsoft’s penchant “for running the latest and greatest,” however, “I wouldn’t be surprised if they found out that [storage vendor] EMC had some new SAN firmware and they just had to put it on the main production servers right away.”

Reached for comment today, an EMC spokesperson said no EMC products were involved.

Another blog yesterday also cited an anonymous source as saying that a SAN upgrade project allegedly involved in the outage was outsourced to Hitachi, but did not identify the brand of SAN involved. Multiple HDS spokespeople have not returned phone calls and emails seeking comment since yesterday.

A Microsoft spokesperson made the following comment for Storage Soup:

I can clarify that the Sidekick runs on Danger’s proprietary service that Microsoft inherited when it acquired Danger in 2008. The Danger service is built on a mix of Danger created technologies and 3rd party technologies. However, other than that we do not have anything else to share right now.  

It actually may not matter at the end of the day whose SAN it was — it seems it was human error (or, as RoughlyDrafted speculates, possible sabotage) that was responsible for the outage. The RoughlyDrafted blog goes on to claim:

A variety of “dogfooding” or aggressive upgrades could have resulted in data failure, the source explained, “especially when the right precautions haven’t been taken and the people you hired to do the work are contractors who might not know what they’re doing.” The Oracle database Danger was using was “definitely one of the more confusing and troublesome to administer, from my limited experience. It’s entirely possible that they weren’t backing up the ‘single copy’ of the database properly, despite the redundant SAN and redundant servers.”

“Just because there may have been an error during a SAN upgrade doesn’t mean the guy’s an idiot or that the storage vendor’s stuff doesn’t work. The fundamental question here is where are the backups?” said backup expert W. Curtis Preston.

This remains an open question as of this hour, as a new statement issued by T-Mobile suggests there may be some data that’s recoverable: “We…remain hopeful that for the majority of our customers, personal content can be recovered.”

A New York Times report released this week cited a T-Mobile official as saying data on the Sidekick server and its backup server were corrupted.

But it also can’t be assumed that the cloud service made thorough secondary copies of data. Even slightly higher-end online PC backup services like Carbonite and SpiderOak, when previously questioned about geographic redundancy for their services should their primary data centers fail (this following a high-profile outage and lawsuit over data loss at Carbonite), have cited costs and pricing pressures as reasons for not offering that level of redundancy to consumer customers.

Another important point in all this is that users might not be losing data if they synced data to their PCs as well as the cloud. T-Mobile offers an IntelliSync service for a fee to sync data between the Sidekick and the PC; there are also free synchronization clients available online. Users would’ve had to have those services in place prior to the outage, however.

“The bottom line is that a free cloud service shouldn’t be your only copy of data,” Preston said.


October 12, 2009  7:58 PM

Hitachi implicated in Sidekick outage

Beth Pariseau

News broke this morning of an outage for users of the Sidekick mobile smartphone: T-Mobile warned users of the device not to power down their phones or their personal data would be irretrievably lost, thanks to a server failure at Danger, the Microsoft subsidiary that supports the Sidekick.

Meanwhile, Engadget has blogged that the storage and backup infrastructure at Danger was to blame for the outage:

Alleged details on the events leading up to Danger’s doomsday scenario are starting to come out of the woodwork, and it all paints a truly embarrassing picture: Microsoft, possibly trying to compensate for lost and / or laid-off Danger employees, outsources an upgrade of its Sidekick SAN to Hitachi, which — for reasons unknown — fails to make a backup before starting. Long story short, the upgrade runs into complications, data is lost, and without a backup to revert to, untold thousands of Sidekick users get shafted in an epic way rarely seen in an age of well-defined, well-understood IT strategies. 

If confirmed, it would be the second high-profile outage Hitachi has been associated with in the last six months. An HDS SAN was also implicated when Barclays ATMs in the UK stopped working in June.

Regardless of the source of the failure, outages like this usually draw attention to the fundamental risk of cloud computing — the things that can happen when all of users’ data “eggs” are put in one service provider’s “basket.”

Requests for comment are in to Microsoft and HDS and have not yet been returned. Stay tuned.

