Storage Soup

October 27, 2009  1:50 PM

Industry bloggers debate dedupe to tape

Beth Pariseau

It just wouldn’t be the storage industry if there weren’t technical debates popping up on a daily basis.

One that caught my eye today is an ongoing conversation between some storage bloggers about data deduplication to tape, and whether or not it’s a crazy idea. Or, more accurately, whether it’s “good crazy” or “bad crazy.”

Backup expert W. Curtis Preston got things started with a blog written after he visited CommVault’s headquarters in Oceanport, N.J., and discussed the concept of CommVault’s data deduplication to tape feature added in Simpana 8. “Dedupe to tape is definitely crazy.  But is it crazy good or crazy bad?” Preston wrote.

Everyone (including the CommVault folks) agrees that no one would want to do any significant portion of their restores from deduped tape.  But I also agree that if I typically do all my restores from within the last 30 days, and someone asks me for a 31 day-old file, it’s generally going to be the type of restore where the fact that it might take several minutes to complete is not going to be a huge deal.  (In the case that you did need to do a large restore from a deduped tape set, you could actually bring it back in to disk in its entirety before you initiate the restore.)

Now here’s the business case. Anyone who has done consulting in this business for a while has met the customer where everyone knows that 99% of the restores come from the last 30-60 days — and yet they keep their backups for 1-7 years.  What a waste of resources.  CommVault is saying, “Hey.  If you’re going to do that, at least dedupe the tapes.”  They showed me two business cases from two customers that doing this was saving them over $500K per year in their Iron Mountain bill.

Curtis made some declarative statements in that blog post, and when that happens you can expect someone in the storage blogosphere to write a post in opposition. EMC NetWorker data backup consultant Preston de Guise did the honors this time, with a response titled "Dedupe to tape is 'crazy bad' if the architecture is crazy."

Yes, it’s undoubtedly the case that the CommVault approach will reduce the amount of data stored on tape, which will result in some cost savings. However, penny pinching in backup environments has a tendency to result in recovery impacts – often significant recovery impacts. For example, NetBackup gives “media savings” by not enforcing dependencies. Yes, this can result in saving money here and there on media, but can result in being unable to do complete filesystem recoveries approaching the end of a total retention period, which is plain dumb.

The CommVault approach while saving some money on tape will significantly expand recovery times (or require large cache areas and still take a lot of recovery time). Saving money is good. Wasting a little time during longer-term recoveries is likely to be perceived as being OK – until there’s a pressing need. Wasting a lot of time during longer-term recoveries is rarely going to be perceived as being OK.

An IT admin/blogger writing at Standalone Sysadmin picked up on de Guise’s post and had this to say:

My problem with this is tape failure. If one of the 50 individual backup tapes fails, it’s no problem. Sure, you lose that particular arrangement of the data, but it’s not that big of an issue. Unfortunate, sure, but not tragic. If you lose the 1 tape that contains the deduplicated data, though, then you immediately have a Bad Day(tm).

Essentially, you are betting on one tape not failing over the course of (in the argument of Mr Preston) 7+ years. And if something does happen in that 7 years, whether it’s degaussing, loss, theft, fire, water, or aliens, you don’t lose one backup set. You lose every backup that referenced that set of data.

So I would, if I could afford one, buy a deduplicated storage array in a heartbeat for my backup needs. But I would not trust a deduplicated archival system at all. The odds of loss are too great, and it’s not worth the savings. I’d rather cut the frequency of my backups than save money by making my archives co-dependent.
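The co-dependency the Standalone Sysadmin blogger is worried about can be made concrete with a toy model. This is a minimal, hypothetical sketch of content-addressed deduplication (not CommVault's or anyone's actual implementation): each backup stores only references to shared chunks, so every backup depends on every chunk it references surviving.

```python
# Toy dedupe store: backups hold chunk references; unique chunks are stored once.
import hashlib

chunk_store = {}   # chunk hash -> chunk bytes (think of this as the one deduped tape)
backups = {}       # backup name -> ordered list of chunk hashes

def back_up(name, data, chunk_size=4):
    refs = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)   # each unique chunk stored exactly once
        refs.append(digest)
    backups[name] = refs

def restore(name):
    # A restore must resolve every reference; one missing chunk fails the whole set.
    return b"".join(chunk_store[d] for d in backups[name])

back_up("monday", b"AAAABBBBCCCC")
back_up("tuesday", b"AAAABBBBDDDD")   # shares the AAAA and BBBB chunks with Monday

assert restore("tuesday") == b"AAAABBBBDDDD"

# Lose the single stored copy of the shared AAAA chunk (the "one tape" failing)...
del chunk_store[hashlib.sha256(b"AAAA").hexdigest()]
# ...and every backup that referenced it becomes unrecoverable, not just one.
```

With full (non-deduped) backup sets, deleting Monday's tape would cost only Monday; here a single lost chunk takes down both Monday and Tuesday, which is exactly the trade the bloggers are debating.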

Of course, another user we talked to around the launch of Simpana 8 felt differently:

The global deduplication with Simpana 8 also extends to tape, making it the first product of its kind to allow for writes to physical tape libraries without requiring reinflation of deduplicated data. “That’s very appealing,” said Paul Spotts, system engineer for Geisinger Health, a network of hospitals and clinics in central Pennsylvania. “We added a VTL [virtual tape library] because we were running out of capacity in our physical tape libraries, but we lease the VTL, so we’re only allowed to grow so much per quarter.”

What say the rest of you?

October 27, 2009  10:37 AM

LiveDrive looks to hop across the pond with online data backup

Beth Pariseau

U.K.-based LiveDrive, a competitor to consumer online backup services like MozyHome and Carbonite, is getting U.S. distribution thanks to a new partnership with LifeBoat Distribution.

Online marketing manager Jamie Brown says LiveDrive has 300,000 unique accounts worldwide, 120,000 of them already located in the U.S. LiveDrive creates a network drive that shows up on users’ PCs. Any files sent to that L:\ drive will be backed up to LiveDrive’s cloud; data can be stored there for safekeeping or users can use LiveDrive to keep the L:\ drive synced and share data among multiple machines. Users can also access their data through LiveDrive’s Web portal, which also offers mini-applications that allow users to edit or play back photos and video.

The company has data center infrastructure in the United States through colocation, but currently all users access data through load balancers in the U.K. Brown said there are plans to expand the U.S. infrastructure organically, but the company won't rush; he said users aren't experiencing performance issues with the way the infrastructure is currently set up.

Brown said LiveDrive hopes to have an office in the U.S. within 12 months, and may also add a business-level service to compete with services like MozyPro and i365's EVault Small Business Edition. It currently does not offer service-level agreements or geographic redundancy for consumer users.

Despite its claims about its client base, LiveDrive was unable to provide a public customer reference before the announcement this morning.

Enterprise Strategy Group analyst Lauren Whitehouse said this is becoming an increasingly tough space for new players to differentiate themselves in. “Mozy has more than a million customers, and for Symantec’s SwapDrive the number’s even greater,” she said. “LiveDrive has plenty of formidable competition.”

One factor that might hurt LiveDrive, at least in the beginning, is the fact that data must currently be accessed through the U.K. “Anyone who has discomfort sending data out to the cloud might have more discomfort knowing there’s a geographic distance there,” Whitehouse said. Even if performance isn’t bad, “there could be ramifications if there’s a dispute.”

As for the differentiation of being able to manipulate content within the cloud, Whitehouse said LiveDrive will also face competition from players like Memeo and Ricoh's Quanp, to say nothing of photo-sharing sites like Flickr and Photobucket, both of which offer small photo-editing software suites with their services. "It's somewhat of a Wild West situation right now with different companies trying to do a 'land grab,' capturing customers and then building from there," she said, including LiveDrive in that mix.

October 23, 2009  7:38 AM

10-22-2009 Storage Headlines

Beth Pariseau

Stories referenced:

(0:22) Cloud storage provider Zetta looks to replace production network-attached storage

(3:30) Quantum launches midrange data deduplication backup appliances

(5:02) IBM unveils new flagship storage system, DS8700

(7:14) Panasas delivers clustered NAS with SSD

(7:51) Riverbed updates RiOS; Steelhead WAFS device now supports Citrix and disaster recovery

(8:22) EMC cautiously optimistic about storage spending

October 21, 2009  8:58 PM

TIP: 2010 storage spending outlook slightly optimistic

Beth Pariseau

A report released this week by TheInfoPro says 2010 storage spending will probably be an improvement over 2009, but that’s not saying much.

The numbers released this week are the result of interviews with Fortune 1000 storage professionals, according to the TIP report.

Out of 252 respondents to the ongoing Wave 13 study, 27% said they expect a decrease in spending between 2010 and 2009, 31% said their budgets would remain flat, and 42% said budgets would increase.

This is better than the earlier Wave 13 numbers from the beginning of the year in which 36% of 258 respondents expected a decrease between 2008 and 2009, 26% thought it would be flat and 38% expected an increase.

TIP also broke out the size of the increases or decreases expected next year. Nearly 20% of those who expect an increase expect it to be between 1% and 10%. Interestingly, the next largest category among those who expect an increase, close to 15%, expect an increase of 50% or more. However, of those who expect a decrease, more than 10% — the largest group — expect that decrease to be more than 25%.

Overall, 71% fall in the range from a 10% increase to a 10% decrease. That's better than a sharp decrease, but hardly the "pent-up demand" you hear about in some areas of the market (though maybe what's being referred to is that 15% who expect a 50% increase, in which case I hope those few customers have someone screening calls and guarding the doors for them…). After spending plummeted between 2008 and 2009, flat-lining into 2010 isn't necessarily good news; it's just not more bad news.

October 20, 2009  7:07 PM

Email archiving vendor sues Gartner over Magic Quadrant

Beth Pariseau

Claiming that Gartner’s Magic Quadrant vendor-ranking reports constitute “disparaging, false/misleading, and unfair statements” about its email archiving product that have done damage to its sales prospects, ZL Technologies Inc. said today that it filed suit May 29 in US District Court in San Jose, Calif., against the analyst firm. The suit seeks damages of $132 million to account for what ZL says are lost sales, as well as punitive damages.

Gartner responded with a motion to dismiss the lawsuit in July, to which ZL filed a counter-motion. A hearing on these later filings is scheduled for this Friday.

ZL alleged in its initial complaint that Gartner consistently ranks its product in the lower-left quadrant of its report, the “Niche” category, because its sales and marketing are not as strong as those of Symantec’s Enterprise Vault, which is consistently ranked in the highest, “Leader,” category. ZL’s contention is that this constitutes an unfair and defamatory means of evaluating and recommending products that has caused damage to its business. “Gartner continues to harm ZL and help entrench vastly inferior products in the American economy whose principal virtue, according to Gartner, is good sales and marketing,” the complaint alleges.

What I found surprising about this initial complaint was the arguments ZL provided about just how much some end users rely on the Magic Quadrant report for purchasing decisions. Some examples:

Purchasers of the ZL Products have consistently and uniformly raised objections to even consider purchasing the ZL Products because of the Defamatory Statements. Even Oracle Corporation, one of the largest software vendors in the world, which resells the ZL Products, complains that it gets “Gartnered” when pursuing prospective customers for the ZL Products, i.e., that a prospect would not even consider looking at the ZL Products because of the low Gartner rankings. The power of a positive ranking in Gartner is immense because it is often the case that large purchases of technology are based exclusively on the MQ Reports.

For instance, the Office of the Inspector General, Department of Veterans Affairs (VA) recently conducted an investigation into the use of the Gartner’s MQ reports in connection with the VA’s $16,000,000 purchase of certain leases and services from Dell. The Office of Inspector General reported that the VA made this large purchase based solely on the leadership rankings in the relevant Gartner MQ report.


In March 2009, ZL entered into contract to provide the email archive solution for one of the largest and most influential companies in the Silicon Valley. What makes this customer win especially relevant to this action is that: (a) the customer did not initially invite ZL to compete because of the Defamatory Statements, and (b) ZL won the contract only after beating out Symantec in an exhaustive, side-by-side “proof of concept” evaluation. Such a large customer win demonstrates that, but for the Defamatory Statements, ZL would have made many more sales than it has.

The complaint goes on to cite 10 more examples of potential sales in which ZL claims it was not invited to participate in a customer’s evaluation process, or pre-sales discussions were discontinued on the basis of the Magic Quadrant.

Gartner’s response is that the Magic Quadrant report amounts to a First-Amendment-protected statement of opinion, and that its rankings do not constitute a “false or misleading statement of fact” but rather a subjective conclusion.

ZL’s response in an opposition statement to the motion to dismiss argues, “Gartner tells the public that its research is ‘objective, defensible and credible’—it cannot now be allowed to escape the consequences of its misconduct by claiming the exact opposite, that its statements cannot be taken as anything more than its subjective opinion based on pure speculation and conjecture.”

ZL further argues that “a defendant may be held liable for statements of opinion that imply the existence of undisclosed facts. Gartner expressly stated that its statements had a factual basis, and intended that they be understood as being derived from a fact-based analysis, and can therefore be held liable even if the statements themselves are couched as opinion.”

“Try as it might, ZL cannot create a dispute where there is none,” Gartner countered further in a reply to ZL’s opposition to the motion to dismiss.

ZL alleges at great length in its Complaint (and recapitulates in its Opposition) that it has a strong product and satisfied customers. The Magic Quadrant reports do not say otherwise; the real point of contention here is not the quality of ZL’s product, but instead the subjective analytical model Gartner used to assess ZL’s market position and prospects. ZL does not contest Gartner’s basic assessments of ZL—that it has a good product but needs to expand its sales and marketing—but ZL challenges its placement on the Magic Quadrant Report because Gartner uses a “misguided analytical model” that gives “undue weight to sales and marketing.” Complaint ¶ 10. As the law makes clear, such analysis constitutes non-actionable opinion. None of ZL’s arguments dispute this bedrock principle. Instead, ZL focuses on a straw-man defamatory statement (that ZL’s product is “inferior”) that never appeared in—and is contradicted by—the plain text of the Magic Quadrant reports.

Personally, though I’m not a lawyer or legal expert, I think it’s unlikely that ZL can win its case. In one recent case in which securities rating agencies’ opinions relating to the subprime mortgage crisis were found not to be protected by the First Amendment, there were some different circumstances, such as one judge’s opinion that plaintiffs had sufficiently proven ratings agencies did not sincerely believe their own statements had basis in fact. The lawsuit between ZL and Gartner, on the other hand, seems to come down to the weight given to sales and marketing strength over technical product specifications rather than Gartner’s sincerity.

It would be easy to point out here that potential customers could also perform more of their own internal testing of products and not weigh the Gartner quadrants so heavily in their purchasing process, but it’s unclear whether that’s realistic in all cases. It also seems to be a matter of subjective opinion how important size of vendor and strength of sales are in the evaluative process of purchasing technical products. In a perfect world, maybe product evaluation would be a true meritocracy, but who among us hasn’t heard the old chestnut that “nobody ever got fired for buying IBM?”

I’m very interested in the peanut gallery’s response to this. Does ZL have a point about the weight being given to a subjective report in technical purchasing decisions? Or is this a case of impugning an evaluative process because of a disliked outcome?

October 20, 2009  3:35 PM

QLogic ‘swipes’ another FCoE win

Dave Raffo

Another piece of the Fibre Channel over Ethernet (FCoE) puzzle was put in place today when IBM said it will ship QLogic’s 8142 converged network adapters (CNAs) inside the Power Systems server platform for native FCoE connectivity.

IBM’s p series servers run Unix and Linux operating systems. QLogic reps are crowing over the design win because it comes in their biggest rival’s backyard. IBM ships Emulex Fibre Channel HBAs exclusively with its p series, but has gone to Emulex’s rival for FCoE adapters. Satish Lakshmanan, QLogic’s director of product marketing for host solutions, says QLogic’s is the only CNA that will ship with the IBM p series.

The 8142 is part of QLogic’s 8100 Series single-chip CNA platform that handles 10-Gigabit Ethernet traffic and has an integrated FCoE offload engine.

“This gives them a foot in the door in the p series,” Enterprise Strategy Group analyst Bob Laliberte said of QLogic.

Laliberte says the long-time FC HBA rivals have taken different paths in the early days of converged networking, with QLogic racking FCoE design wins while Emulex picks up 10-Gigabit Ethernet wins with its universal converged network adapters (UCNAs).

“QLogic was first to market with a single-chip architecture for FCoE that’s dramatically lower in power consumption, footprint and heat,” Laliberte said. “Being first out of the gate gives them a chance to get design wins in FCoE. Emulex is going in a different direction. You see Emulex getting wins for 10-gigE and down the road partners have an option to turn on FCoE.”

“This isn’t the only customer we’ve swiped away,” Lakshmanan said. “There’s more that will be coming.”

NetApp said in August that it would rebrand QLogic’s 8152 CNA as a built-in adapter for its FAS storage arrays. NetApp also qualified Brocade’s 1020 CNA but not Emulex’s yet.

IBM also sells QLogic’s 8100 CNAs on its System x and BladeCenter servers, and EMC qualified QLogic’s CNA on its Symmetrix, Clariion and Celerra NS storage platforms.

These are still early days for FCoE and converged adapters, though. Widespread adoption of FCoE on servers isn't expected before 2011, and it will likely take years after that for it to show up in any volume on the storage side.

October 19, 2009  4:24 PM

Data Domain rolls out new midrange dedupe boxes

Dave Raffo

Continuing its strategy of upgrading its data deduplication appliances with faster processors and larger drives, EMC’s Data Domain rolled out bigger and faster midrange boxes today.

Data Domain’s DD610 and DD630 systems replace the DD510 and DD530 models, and the DD140 aimed primarily at remote offices replaces the DD120. The new systems have dual-core Intel Xeon processors and support 500 GB SATA drives.

Data Domain claims the new systems nearly double the performance of the DD500 series boxes they replace. The DD610 ingests data at up to 675 GB per hour, with 6 TB of raw and 3.98 TB of usable capacity. The DD630 performs at up to 1.1 TB per hour with 12 TB of raw and 8.4 TB of usable capacity. Factoring in deduplication, Data Domain claims the DD610 can protect up to 195 TB and the DD630 up to 420 TB. The DD140 ingests data at up to 450 GB per hour, holds 1.5 TB raw and 860 GB of usable data, and protects up to 43 TB.
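It's worth noting what those "protected" figures imply. Dividing the claimed protected capacity by the usable capacity for each model gives the effective deduplication ratio Data Domain is assuming; a quick back-of-envelope check (using only the numbers stated above) shows all three work out to roughly 50:1:

```python
# Effective dedupe ratio implied by each model's stated figures:
# protected capacity divided by usable capacity.
systems = {
    "DD610": (3.98, 195),   # (usable TB, protected TB)
    "DD630": (8.4, 420),
    "DD140": (0.86, 43),    # 860 GB usable
}
for model, (usable, protected) in systems.items():
    ratio = protected / usable
    print(f"{model}: ~{ratio:.0f}:1 effective dedupe ratio")
```

Whether real backup workloads actually dedupe at anything near 50:1 depends, of course, on data type and retention, which is why vendors typically hedge such capacity claims.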

It has long been Data Domain's position that its inline deduplication is the best method for deduping data, and that faster systems will drive efficiencies. “Our bottleneck is always the CPU because of the way we do inline deduplication,” Data Domain product marketing director Shane Jackson said. “Our first system went 40 megabytes per hour, now we’re at 1.1 terabytes per hour for a midrange system.”

The DD610 with 3.5 TB and NFS and CIFS connectivity costs $22,000. The DD630 with 3.5 TB costs $50,000 and the DD140 with NFS, CIFS and Replicator replication software costs $13,900.

Data Domain has kept busy upgrading its systems this year, launching the highest end of its midrange family, the DD660, in March and bringing out the DD880 for the enterprise in July. The DD660 and DD880 are quad-core systems.

The $2.1 billion acquisition by EMC in July hasn’t slowed Data Domain down, although the deal may be changing the parent company’s backup strategy. Data Domain upgraded its operating system to add cascaded replication last month before pushing out this system upgrade. Financial analysts say the transition hasn’t hurt sales, either. Aaron Rakers of Stifel Nicolaus Equity Research wrote in an EMC earnings preview (EMC announces earnings Thursday) that Data Domain revenue could be as high as $80 million for the quarter, well above the $65 million forecast.

“Our checks have been very positive on EMC’s Data Domain momentum post the acquisition,” he wrote. “Checks suggest that EMC has largely left the Data Domain go-to-market strategy in place.”

October 16, 2009  6:13 PM

SNW chatter: primary dedupe, scale-out NAS, cloud offerings expanding

Dave Raffo

Heard and overheard at SNW:

Get ready for a mini-wave of block-based primary deduplication/compression products.

Perhaps the most ambitious primary dedupe product is WhipTail Technologies' new RaceRunner solid-state disk appliance, which will ship with Exar's Hifn BitWackr deduplication and compression cards starting around the end of the year.

RaceRunner uses Samsung NAND MLC SSDs and will soon add Intel X25-M SSDs, WhipTail CTO James Candelaria said. “We slide Exar’s layer into our interface for inline-primary deduplication,” Candelaria said.

He says testing shows a dedupe ratio of about 4:1, with higher ratios for database data. WhipTail is also adding dedupe at the same price as its current products — $49,000 for a 1.5 TB appliance, $79,000 for 3 TB and $129,000 for 6 TB. Next up, Candelaria said, will be InfiniBand support for RaceRunner appliances.

Two players who already shrink primary data are preparing to expand their product lines. Storwize is about to go into beta with a Fibre Channel version of its current file compression product with iSCSI to follow, Storwize CEO Ed Walsh said.

Permabit CEO Tom Cook says his company, which today sells file-based dedupe for “non-tier 1” primary storage, is working on an OEM deal for a block and file deduplication product.

NetApp is the only vendor today who offers block-based dedupe for primary storage. …

Hewlett-Packard is planning tweaks to its Ibrix NAS and data deduplication products around the end of the year or early next year, HP director of marketing for unified computing Lee Johns said.

Johns said HP so far has been selling Ibrix as software-only, just as Ibrix sold the product before HP acquired the scale-out NAS vendor in July. But he says HP will announce an HP-branded Ibrix product around the end of the year. “We’ll predominantly drive Ibrix as an appliance model,” he said. “We’ll focus on packaging it with other HP solutions.”

One of those other solutions is the LeftHand iSCSI SAN platform. Johns said Ibrix partner Dell had success selling Ibrix in front of its EqualLogic iSCSI SANs, and HP will probably do the same.

On the deduplication front, Johns says HP has been successful selling its Sepaton-driven Virtual Library Systems VTL in the enterprise but “in the midrange, we’ve been a little invisible. That’s an area we will be focusing on at the end of the year.” By the midrange, he was referring to the D2D platform that runs HP’s home-grown software as part of HP’s two-tier dedupe strategy. …

If cloud storage is such a hot new technology, how come it's been around for a decade or more?

“We’ve been doing what people today consider the cloud, since 2000,” said Bycast CEO Moe Kermani, whose company’s StorageGrid clustered NAS software is frequently mentioned as a building block for cloud providers. “I never knew anybody who got up in the morning and said, ‘I want to buy the cloud today.’”

Bycast customer Tony Langenstein, IT director of infrastructure at Iowa Health System, gave an SNW presentation called “Disaster Recovery in the Storage Cloud.” Langenstein said he’s been using Bycast software since 2005 but “we just started calling it the cloud at the beginning of this year. We had a cloud, I just didn’t know it.”

October 16, 2009  3:54 PM

CommVault fires back at EMC’s Slootman

Beth Pariseau

Former Data Domain CEO Frank Slootman, now president of EMC's data backup and recovery division, sat down for a Q&A with SearchDataBackup that's been getting some attention from the industry, particularly from deduplication competitors.

Among those competitors, one with a contentious relationship with EMC/Data Domain is former partner CommVault, with whom Data Domain had a messy breakup after CommVault introduced its own deduplication with Simpana 8.

Here’s what Slootman had to say about them:

SearchDataBackup: Will you continue to work closely with Symantec Corp.’s OpenStorage (OST) API now that you’re EMC?

Slootman: Yes. I’m not throwing my partners under the bus. We’ll compete, but we’re all competitors and partners these days. We won’t screw them. We’ll screw other companies, like CommVault. We [Data Domain] treated them as a good partner and they came after us.

In an email to Storage Soup this week, CommVault vice president of marketing and business development Dave West had this response:

As I said back in June, I applaud Frank and Data Domain’s ability to create momentum for deduplication and a tremendous return for its shareholders. In the Dave Raffo piece, Frank calls out CommVault simply because we’re giving them a run for their money. Simpana, with built-in dedupe, works really well, and we are winning business. Now, I find it ludicrous to suggest a product vision that forces a customer to deploy 3 or more disparate products to achieve basic data protection. (Pile on more products for replication, encryption, archive and SRM).  At the end of the day, customers want less complexity, improved operational efficiency and ultimately, to spend less money. That means fewer, not more solutions. Less hardware and smarter software. EMC’s product portfolio is both complicated and costly for customers, so buyer beware. Also, in our opinion, this interview should raise some serious flags among the thousands of already nervous NetWorker customers out there looking for reassurance in the wake of the Data Domain acquisition.

I asked West to elaborate on the flags he sees being raised for NetWorker customers, and he pointed to this statement by Slootman in another part of the interview:

SearchDataBackup: If Avamar is the future of data backup software, where does that leave NetWorker?

Slootman: Well, Avamar is augmenting NetWorker in a lot of places. People are moving a good part of their workload to Avamar, but not all. They’re still running applications like big, fat databases on traditional backup software. NetWorker can support conventional backup on tape and mixed media and people can integrate it with Data Domain.

“Former EMC customers are telling us that there is no real investment or innovation going into the NetWorker product and they’re tired of it,” West added.

This dedupe feud will get really interesting if CommVault partner Dell Inc. starts selling Data Domain, which is a likely scenario because Dell sells many of EMC's storage products. CommVault's Simpana is currently a big piece of Dell's deduplication strategy.

October 16, 2009  7:25 AM

10-15-2009 Storage Headlines

Beth Pariseau

Another busy week, and I make my triumphant (if slightly raspy) return to the podcast.

(0:23) Hitachi implicated in Sidekick outage
EMC denies blog claim that its SAN was involved in Sidekick outage
Microsoft says it has recovered Sidekick data

(1:42) Storage clouds gather over Storage Networking World

(3:30) EMC’s Slootman: No data deduplication for Disk Library virtual tape library

(4:23) IBM adds STEC SSDs to its SAN Volume Controller (SVC) storage virtualization device

(6:16) 3PAR fattens its thin provisioning arsenal
