Storage Soup

A SearchStorage.com blog.

May 28, 2008   11:47 AM GMT

Storage experts pan report on tape archiving TCO



Posted by: Beth Pariseau
Tags:
tape data storage

The disk vs. tape debate that has been going on for years is heating up again, given technologies like data deduplication that are bringing disk costs into line with tape.

Or, at least, so some people believe.

The Clipper Group released a report today, sponsored by the LTO Program, comparing the five-year total cost of ownership (TCO) of keeping data in tiered disk-to-disk-to-tape versus disk-to-disk-to-disk configurations. The conclusion?

“After factoring in acquisition costs of equipment and media, as well as electricity and data center floor space, Clipper found that the total cost of SATA disk archiving solutions were up to 23 times more expensive than tape solutions for archiving. When calculating energy costs for the competing approaches, the costs for disk were up to 290 times that of tape.”

Let’s see. . .sponsored by the LTO trade group. . .conclusion is that tape is superior to disk. In Boston, we would say, “SHOCKA.”

This didn’t get by “Mr. Backup,” Curtis Preston, either, who gave the whitepaper a thorough fisking on his blog today. His point-by-point criticism should be read in its entirety, but he seems primarily outraged by the omission of data deduplication and compression from the equation on the disk side.

How can you release a white paper today that talks about the relative TCO of disk and tape, and not talk about deduplication?  Here’s the really hilarious part: one of the assumptions that the paper makes is both disk and tape solutions will have the first 13 weeks on disk, and the TCO analysis only looks at the additional disk and/or tape needed for long term backup storage.  If you do that AND you include deduplication, dedupe has a major advantage, as the additional storage needed to store the quarterly fulls will be barely incremental.  The only additional storage each quarterly full backup will require is the amount needed to store the unique new blocks in that backup.  So, instead of needing enough disk for 20 full backups, we’ll probably need about 2-20% of that, depending on how much new data is in each full.

TCO also can’t be done so generally, as pricing is all over the board.  I’d say there’s a 1000% difference from the least to the most expensive systems I look at.  That’s why you have to compare the cost of system A to system B to system C, not use numbers like “disk cost $10/GB.” 
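
To put Preston's deduplication point in concrete terms, here is a minimal sketch of the arithmetic he describes. The inputs (size of a full backup, number of quarterly fulls, fraction of new unique data per quarter) are illustrative assumptions, not figures from his post or from the Clipper report:

```python
# Rough sketch of Preston's argument: with deduplication, each additional
# quarterly full adds only the blocks that are new since the previous full.
# All inputs are illustrative assumptions, not numbers from the report.

full_backup_tb = 10          # size of one full backup (assumed)
quarterly_fulls = 20         # five years of quarterly fulls, per the TCO window
new_data_per_quarter = 0.05  # fraction of each full that is unique new data (assumed)

# Without deduplication: every full backup is stored whole.
plain_disk_tb = full_backup_tb * quarterly_fulls

# With deduplication: one baseline copy, then only the new blocks per quarter.
dedupe_disk_tb = full_backup_tb + full_backup_tb * new_data_per_quarter * (quarterly_fulls - 1)

print(f"Without dedupe: {plain_disk_tb:.0f} TB")
print(f"With dedupe:    {dedupe_disk_tb:.1f} TB "
      f"({dedupe_disk_tb / plain_disk_tb:.0%} of the non-deduplicated total)")
```

At an assumed 5% of new unique data per quarter, the deduplicated tier holds roughly a tenth of what the plain disk tier would, which falls squarely inside the 2-20% range Preston cites.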

Jon Toigo isn’t exactly impressed, either:

Perhaps the LTO guys thought we needed some handy stats to reference.  I guess the tape industry will be all over this one and referencing the report to bolster their white papers and other leave-behinds, just as the replace-tape-with-disk crowd has been leveraging the counter white papers from Gartner and Forrester that give stats on tape failures and are bought and paid for by their sponsors.

Neither Preston nor Toigo disagrees with the conclusion that tape has a lower TCO than disk. But for Preston, it’s a matter of how much. “Tape is still winning — by a much smaller margin than it used to — but it’s not 23x or 250x cheaper,” he writes.
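
Preston's objection to blanket dollars-per-gigabyte figures is easier to see with a toy version of the TCO tally the Clipper study describes (acquisition cost, electricity and floor space over five years). Every number below is a placeholder invented for illustration; none of it comes from the report or from any vendor's pricing:

```python
# Toy five-year TCO tally showing why per-system comparisons matter more than
# a blanket "disk costs $X/GB" figure. All numbers are hypothetical placeholders.

def five_year_tco(acquisition, watts, sq_ft,
                  dollars_per_kwh=0.10, dollars_per_sqft_year=300):
    """Acquisition + energy + floor space over five years (simplified)."""
    energy = watts / 1000 * 24 * 365 * 5 * dollars_per_kwh
    floor_space = sq_ft * dollars_per_sqft_year * 5
    return acquisition + energy + floor_space

systems = {
    # name: (acquisition $, average draw in watts, footprint in sq ft) -- all made up
    "Disk system A":  (250_000, 4_000, 20),
    "Disk system B":  (120_000, 2_500, 12),
    "Tape library C": (80_000,    500, 30),
}

for name, specs in systems.items():
    print(f"{name}: ${five_year_tco(*specs):,.0f} over five years")
```

Plugging in actual quotes for specific systems, rather than a single per-gigabyte average, is the kind of comparison Preston is arguing for.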

For Toigo, the study overlooks what he sees as a bigger issue when it comes to tape adoption:

The problem with tape is that it has become the whipping boy in many IT shops.  Mostly, that’s because it is used incorrectly – LTO should not be applied when 24 X 7 duty cycles are required, for example…Sanity is needed in this discussion… 

Even when analysts agree in general, they argue.

5  Comments on this Post

 
  • Beth, I couldn't agree more with most of Curtis' points. There are many deeply flawed assumptions in the report. Nor is it exactly the first time that Clipper has released such flawed material. (I blogged about another one at: http://thebackupblog.typepad.com/thebackupblog/2008/05/7.html). My analysis is less thorough than Curtis', perhaps, but I didn't really need thorough analysis to see huge gaps in the analysts' logic.

    Here is another way of looking at this, just to prove that assumptions are everything. Assume you do a full backup every day for 180 days. Assume 10 TB of disk, and a 2% change rate. Let's assume the data compresses 2:1. With tape, I would need 900 TB of tape. With disk that is deduplicated, I would need 23 TB of disk. That amounts to slightly more than 2 trays of disk. With LTO4, a small SL8500, or two L700s, or a mid-sized 3584 would be required to hold all that tape. You would also need 6 or so LTO4 drives to back it up every night.

    So, without going through the numbers, I would say there is (intuitively) a pretty big disconnect when an analyst claims that 2 trays of disk and a server is 23 times more expensive than 1000 tapes, 6 drives, and a library that would be 4 racks long. But here is the assumption we made: full backups every day of everything. You would never do this with tape (it costs too much!) But with deduplicated disk, you require no more capacity to do this than a traditional backup rotation. So you have 180 recovery points rather than 15 (5 incrementals plus 4 weeklies plus 6 monthlies). Not only is the cost analysis flawed, but they deliberately ignore the strengths of disk.
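
The arithmetic in the comment above is easy to reproduce. Here is a minimal sketch using only the assumptions stated in the comment (10 TB of data, daily full backups kept for 180 days, a 2% daily change rate, 2:1 compression):

```python
# Reproduce the back-of-the-envelope numbers from the comment above.
# The assumptions (10 TB, 180 daily fulls, 2% change, 2:1 compression)
# come from the comment itself, not from the Clipper report.

data_tb = 10          # primary data set
days = 180            # daily full backups retained
change_rate = 0.02    # fraction of data that changes each day
compression = 2.0     # 2:1 compression assumed on both media

# Tape, no deduplication: every full backup is written out whole.
tape_tb = data_tb * days / compression

# Deduplicated disk: one baseline copy plus the daily unique changes.
dedupe_tb = (data_tb + data_tb * change_rate * (days - 1)) / compression

print(f"Tape, no dedupe:   {tape_tb:.0f} TB")   # roughly 900 TB
print(f"Disk, with dedupe: {dedupe_tb:.0f} TB") # roughly 23 TB
```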
  • Jon Toigo
    Beth, I am not an analyst. And never have been. I leave that to the other guys. There are a lot of flawed numbers in this industry, both from the disk community and the tape community. What is needed is for both groups to get beyond their self-serving sales and marketing antics and begin working on addressing, in a cooperative way, the problems of backup. A group was started to do just that about five years ago, called the Advanced Backup Solutions Initiative, which received a lot of consumer and vendor buy-in. But it was squashed by the Storage Networking Industry Association, which preferred that the money being spent on such an effort be spent within SNIA instead.
  • Beth Pariseau
    Hey Jon, thanks for your comment. Didn't realize you objected to the analyst title--what would you prefer to be called?
  • Being the provider of backup software that has been device agnostic for over 23 years - we'll even back up to your monitor - I'm a bit concerned by both sides of this story. As Jon Toigo mentions above, the marketing that is occurring currently is designed for one purpose: to either sell more disks or to sell more tapes. The problem is that both industries disguise the facts through claims of improved backup performance for the user in each case; I say that they are both right AND wrong.

    First, the myth of deduplication. I believe that even the venerable Mr. Preston has stated that if you only have one copy of data, you don't have a backup. However, deduplication is just that - the creation of ONE copy of given data. And not only is this one copy of given data for ONE system, it's actually one copy of given data for potentially THOUSANDS of systems. While I do not try to lessen the impact of thousands of copies of the same file over the life of a system's backup, I do not believe that one copy of the corporate financial reports from 2005 in a single, locally stored disc array is very smart. A smart compromise must be reached. Additionally, if I'm using incremental backups with tape or disc, deduplication adds nothing, as the only files backed up in my incremental backups are the files that have changed - instant deduplication, and it cost me nothing extra.

    As to performance, in the 95th percentile of environments the network bandwidth is going to be the gating factor (unless we're just backing up a locally attached volume). Otherwise, LTO-3 and LTO-4 tapes writing at 120MB/sec or better are quite capable of keeping up with the network stream. The only time disc is a real winner is in environments where the backup software and network infrastructure allow multiple, simultaneous client backups - if the software is writing to disc (vs. virtual tape), then the only limit to the number of concurrent streams coming into the backup server is the network performance; tape and VTL implementations are limited by the number of physical or virtual tape drives available. Also, with software that understands QFA and tape management, even restoring the last file on an LTO-4 tape cartridge will only require 4-5 minutes.

    Finally, for long-term storage, even high-end server discs are not designed to be used for a period of time and then stored in an unspun state for 15 years. An LTO tape, however, will happily sit in a vault for 30+ years and still give up its data when asked. Plus, you can drop an LTO tape off the back of a FedEx truck and all the bits will still be there. Try dropping a disc drive and see how long it lasts.

    Rodney King said it best: "Why can't we all just get along?" A combination of disc (including deduplication) for short-term, near-line storage, and tape for long-term secure storage and archival offers the best solution for any organization - even a home user. Plus, as mentioned above, if your backup software supports QFA on tape, even the restore performance gains touted by the disc vendors are just so much smoke.
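
The throughput claim in the preceding comment can be sanity-checked with back-of-the-envelope math. The sketch below compares a single client stream over an assumed Gigabit Ethernet link against LTO-4's roughly 120 MB/sec native write speed; the link speed and the 80% efficiency factor are assumptions for illustration, not measurements:

```python
# Sanity check of the comment's claim that the network, not the tape drive,
# is usually the gating factor. Link speed and efficiency are assumed values.

lto4_native_mb_s = 120          # approximate LTO-4 native write speed
gige_bits_per_s = 1_000_000_000 # assumed Gigabit Ethernet client link
network_efficiency = 0.8        # assumed protocol/TCP overhead factor

stream_mb_s = gige_bits_per_s * network_efficiency / 8 / 1_000_000

print(f"Single GbE stream: ~{stream_mb_s:.0f} MB/sec")
print(f"LTO-4 native rate: ~{lto4_native_mb_s} MB/sec")
if stream_mb_s < lto4_native_mb_s:
    print("One stream cannot keep the drive streaming; "
          "the network, not the tape drive, is the bottleneck.")
```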
  • Disk and tape are both going to be here for a very long time, as both have their place. The TCO discussion is vital as we are seeing explosive growth of data, and along with it the costs to manage and preserve all these 1's and 0's. I think we tend to lose sight of things when we focus simply on the media; rather, we need to focus on what it is we are trying to accomplish, then deploy the proper solutions.

    Traditional storage solutions tend to waste space by their very design: RAID groups stacked in a box, then provisioning LUNs. I have seen recent reports that somewhere between 60 and 70% of all disk space purchased is wasted. Wasted on an old methodology that carves out a "canister" into which you can then put data. When you provision a 10GB LUN and put 1GB of data into it, you have 9GB of wasted space. Then, look at the mere fact that of all the "real" data, only 20% is generally active. Yet most clients purchase expensive 15K disks to ensure performance of what is only 5% of their entire storage environment. No wonder the costs are out of hand.

    Real thin provisioning (only provided by 3PAR and Compellent) solves the first element of this problem. Automated tiered storage (Compellent only) solves the second. This type of environment allows you to create a pool of 15K drives to handle the IOPS you need, while everything else is automatically moved to SATA when it becomes inactive. This type of architecture reduces the overall problem (footprint, power consumption, runaway space, outrageous expenditures). Now, you apply deduplication where needed, and you find that your important data can remain online indefinitely. (This does not preclude intelligent management and discussion with end users as to what data should be kept and which should be discarded.) When we utilize intelligent tools to fix the underlying problems, then when we talk about methods to restore data, we begin with a clean slate.

    Paul Clifford, Davenport Group, www.davenportgroup.com
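
The provisioning math in the last comment is straightforward to illustrate. A minimal sketch with hypothetical LUN sizes and utilization figures (none of these numbers come from the comment or from the reports discussed above):

```python
# Illustrate the thick- vs. thin-provisioning point from the comment above.
# LUN sizes and used capacities are hypothetical.

luns = [
    # (provisioned GB, actually used GB)
    (10, 1),
    (50, 20),
    (100, 35),
]

provisioned = sum(size for size, _ in luns)
used = sum(u for _, u in luns)

print(f"Thick provisioning allocates {provisioned} GB to hold {used} GB "
      f"({provisioned - used} GB, or {1 - used / provisioned:.0%}, sits idle).")
print(f"Thin provisioning would draw roughly {used} GB from a shared pool "
      f"and grow allocations only as data is actually written.")
```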
