We’re talking about servers that need to be up 24/7, 365 days a year, with lots of data moving through them every day. Once you start counting your backup size in terabytes rather than gigabytes, it’s time to look at some different strategies.
VTL (virtual tape library) has always been a good idea: offload to disk first, then offload to tape later when you have more time. This way you can complete your backups at high speed within your short window. However, an often overlooked method that has started to gain serious traction is deduplication. Many vendors offer it, and some even combine it with their VTL offerings. Deduplication can achieve an average compression ratio of 20:1, and at its best can reach 50:1 or even higher.
So take a moment to look at the deduplication offerings out there. Some of them can even replicate the deduped data to an offsite location over the WAN, without using tape at all, which saves you the cost of tape entirely.
So who here has heard of deduplication? Deduplication is essentially a form of single-instance storage. It allows you to store lots of data without writing the duplicate pieces over and over again: each duplicate piece is stored once, and everything else is just a pointer back to it. The result is greatly enhanced storage capacity.
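The pointer idea above is easy to see in code. Here's a minimal toy sketch of chunk-level dedup (my own illustration, not any vendor's actual implementation): data is split into fixed-size chunks, each chunk is hashed, and a chunk is physically stored only the first time its hash is seen. Files are just lists of hashes, i.e. pointers into the chunk store.

```python
import hashlib

class DedupStore:
    """Toy single-instance store: identical chunks are kept only once."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}   # sha256 digest -> chunk bytes (stored once)
        self.files = {}    # file name -> list of digests (the "pointers")

    def put(self, name, data):
        digests = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            # Store the chunk only if we haven't seen it before.
            self.chunks.setdefault(digest, chunk)
            digests.append(digest)
        self.files[name] = digests

    def get(self, name):
        # Reassemble the file by following its pointers.
        return b"".join(self.chunks[d] for d in self.files[name])

    def stored_bytes(self):
        return sum(len(c) for c in self.chunks.values())


store = DedupStore()
backup = b"A" * 8192 + b"B" * 4096    # 12 KB, but two of its three chunks are identical
store.put("monday.bak", backup)
store.put("tuesday.bak", backup)      # second, identical backup adds no new chunks
assert store.get("tuesday.bak") == backup
print(store.stored_bytes())           # 8192: only two unique 4 KB chunks kept
```

Real products refine this with variable-size chunking and on-disk indexes, but the core trick is the same: repeated data costs almost nothing after the first copy, which is where those big ratios come from.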
Most deduplication solutions out there average a storage ratio around 20:1, reaching as high as 50:1 in best-case scenarios. A few companies can provide you with this kind of solution, such as HP or Data Domain. If you haven’t had a chance to read about it, check it out!