For years Acronis had one basic platform, Backup & Recovery, which put backup, data deduplication, disaster recovery and data protection features under one interface. Then, sensing rising demand for VMware-specific backup, it developed its vmProtect 6 product, released earlier this year at VMworld in the US.
So far, so good. Acronis seems to be a company responding well to its market.
So, how is the game stretched?
Well, there’s the VMware backup scene. This has changed rapidly over the past four or five years. At first people backed up virtual machines just as they did physical machines; then they could use VMware Consolidated Backup (VCB) and its rather awkward two-stage backup process. Since 2009, with the release of VMware’s vStorage APIs for Data Protection, backup products have been able to back up virtual machines pretty seamlessly.
So, vendors like Acronis have had to keep up with these developments and make sure their products support the latest approaches to virtual machine backup. But, in doing so, they are getting way ahead of the end-user experience and the market. SearchStorage.co.UK research earlier this year showed nearly 25% of users still use the two-stage VCB process, while 20% still back up virtual machines with traditional backup products and agents.
That’s to be expected. End users are rightly conservative. When something works and/or isn’t too painful and you only implemented it 12 months ago, you’re not going to replace it just because the vendors suddenly came up with a better way of doing things.
At the same time, however, we can’t expect VMware to be the only virtualisation game in town forever. Microsoft’s Hyper-V has the advantage of being cheap or free to buy into, and though it currently lacks the ecosystem around it that VMware has, that may well not last. There are other thoroughbreds in the hypervisor market too, like Red Hat.
So, to use the footballing phrase, the game is stretched for backup vendors in this newly virtualised world. Existing markets need to be satisfied. New products need to be anticipated and developed, and it all presents incredible amounts of opportunity and risk.
(*For those not familiar with football [soccer] terminology the game is often said to be “stretched” in its latter stages. Tiredness takes hold, and the teams can’t hold a neat formation any longer. So, instead of all 20 outfield players bunched within easy passing distance, they are stretched up and down the pitch.)
Historically, and perhaps stereotypically, tape has been seen as a cheaper alternative to disk; it consumes no power when not in use and it is easily moved to an offsite location. Disk, on the other hand, has often been considered more expensive, although it requires less administration, is unaffected by slow or small data streams and offers much faster restore times, particularly when restoring individual files.
Technological advancement, however, means that we should challenge these long-held beliefs. Perhaps the biggest change to backup strategies has come with the widespread adoption of data deduplication. Few backup vendors recommend (or support) deduplication to tape, so deduplication remains predominantly a disk technology.
Tape has long been held to be the storage medium with the densest footprint, but data deduplication is changing that. For example, take a medium-sized tape library with, say, 600 LTO4 tapes, each holding 1.2 TB. You’d expect this library to consume the majority of a rack. But if this data is deduplicated at a ratio of 20:1, the library could be replaced by just a couple of disk shelves. Since space has become as precious to data centre managers as power and cooling capacity, the potential savings of moving from tape to high-capacity deduplicated disk cannot be ignored.
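As a rough back-of-the-envelope check, the sums behind that example look something like the sketch below. The tape figures simply restate the example; the usable capacity per disk shelf is an assumption made for illustration and will vary with drive size, vendor and RAID overhead.

    import math

    # Sizing sketch for the example above: 600 LTO4 tapes at 1.2 TB each,
    # deduplicated at 20:1. The usable TB per disk shelf is an assumption.
    tapes = 600
    tb_per_tape = 1.2
    dedupe_ratio = 20
    usable_tb_per_shelf = 20   # assumed; varies by vendor and RAID scheme

    logical_tb = tapes * tb_per_tape           # 720 TB of backup data
    physical_tb = logical_tb / dedupe_ratio    # 36 TB once deduplicated
    shelves = math.ceil(physical_tb / usable_tb_per_shelf)

    print(f"{logical_tb:.0f} TB on tape -> {physical_tb:.0f} TB on disk, "
          f"roughly {shelves} disk shelves")

On those assumptions, 720 TB of tape data shrinks to around 36 TB of deduplicated disk, which is where the couple of disk shelves comes from.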
One key advantage of tape is the ease with which data can be moved to a remote location, but again data deduplication is allowing organisations to work smarter. Historically, for all but the largest companies, the cost of the network infrastructure needed to replicate large volumes of backup data between physically remote sites has been prohibitive.
Now, because deduplication-optimised replication sends only the unique data chunks between storage devices, smaller organisations that previously could not afford it can copy backup data between sites over low-bandwidth or high-latency networks.
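For those who like to see the mechanics, the sketch below shows the general idea in miniature, assuming simple fixed-size chunking and an index of chunk hashes already held at the remote site. It is not any particular vendor’s implementation; real products use far more sophisticated chunking, indexing and transport.

    import hashlib

    CHUNK_SIZE = 4096  # illustrative fixed-size chunking

    def chunks(data):
        """Split a backup stream into fixed-size chunks."""
        for i in range(0, len(data), CHUNK_SIZE):
            yield data[i:i + CHUNK_SIZE]

    def replicate(backup_data, remote_index):
        """Return only the chunks the remote site does not already hold."""
        to_send = {}
        for chunk in chunks(backup_data):
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in remote_index:   # remote already has it? skip it
                to_send[digest] = chunk
        return to_send

    # A second night's backup that mostly repeats the first transfers only
    # the new or changed chunks across the WAN.
    remote_index = set()
    night1 = replicate(b"A" * 20000, remote_index)
    remote_index.update(night1)
    night2 = replicate(b"A" * 19000 + b"B" * 1000, remote_index)
    print(len(night1), "chunks sent on night one,", len(night2), "on night two")

The second backup moves only a fraction of the data, which is exactly why replication over modest WAN links becomes viable.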
Indeed, as WAN performance increases and data deduplication technologies mature, smaller organisations may look to send backup data to the cloud, although the fear of long recovery times will still deter many IT managers.
Long-term retention of data is a cornerstone of many organisations’ compliance policies, and while many studies show disk can be cheaper than tape for short- to medium-term retention, backup data that requires long-term retention (a year upwards) is still far better placed on tape. Over these longer durations tape consumes less power, doesn’t incur an ongoing maintenance cost, and the data stored on it is less likely to become corrupt.
Disk is becoming increasingly attractive as a replacement for tape in the backup arena, and some of the long-held advantages that tape has had over disk are being challenged. However, tape still has a place in the data centre. Backups with long retention periods and archived data are still best placed on tape, and this is unlikely to change. Indeed, the roadmaps provided by tape and tape library vendors suggest that the future of tape in large organisations remains strong – technology investments of this scale would not be happening if tape were doomed.
Disasters occur. Water pipes burst, roofs leak, electricity supplies fail, network paths get dug up, and much worse. Fortunately, most IT managers will never get to see their disaster recovery plans put into practice for real. But some will, and all too often it is only at that point that a plan’s shortcomings are revealed.
Once the dust has settled and normal service has resumed, IT managers will be under pressure to fix what went wrong with the original plan. Knee-jerk reactions and over-engineered solutions are too often implemented when a measured approach would be more efficient and much less costly.
I have a customer who is in exactly this situation. A water leak resulted in the failure of the entire storage environment. Following the disaster, the customer rapidly acquired a new storage array and set about configuring it and allocating storage back to the client systems.
Fortunately, the backup environment was unaffected and all services were resumed within a couple of weeks. It could have been much worse. If the disaster had happened at month end or if their hardware suppliers had not been able to mobilise so quickly the impact of the disaster could have been catastrophic.
Valuable lessons can be learned from a disaster. As a result of this exercise my customer has a detailed knowledge of the relative importance of each application. They know exactly which components link together to provide each service, something that, in my experience, a lot of organisations don’t have a handle on. They also know exactly how long it takes to recover a service, including build, restore and configuration times. Despite being armed with that information, senior management have decreed that, going forward, a recovery time objective (RTO) of 24 hours be implemented for all services, including test and development.
There is no doubt that the outage caused severe pain to the organisation, but reacting by stipulating a single recovery tier across the organisation is going to cost a lot of money. Replacement hardware can rarely be sourced within such a timeframe, so an exact replica of the environment has to be purchased up front and will only ever be used in the event of a disaster.
A better way would be to take the hard lessons learned and align applications to defined disaster recovery service offerings. If you understand your disaster recovery requirements, you can start building a service catalogue that reflects them. Create distinct service tiers based upon RTO and RPO (recovery point objective), and align a technology configuration to each tier.
For example, the “platinum” tier might include application-level clustering and synchronous data replication, whereas your “bronze” tier would involve data recovery from tape after the procurement of replacement hardware. In this way, applications that have little impact on the organisation’s daily activities can be assigned to a lower tier, which allows sufficient time to procure replacement hardware and negates the need to have everything duplicated.
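To make the idea concrete, a service catalogue can start out as little more than a table of tiers with their recovery targets and the technology that backs each one. The sketch below is purely illustrative; the tier names, RTO/RPO figures and technology choices are assumptions, not recommendations for any particular environment.

    # Illustrative DR service catalogue; all values are example assumptions.
    SERVICE_CATALOGUE = {
        "platinum": {"rto_hours": 4,   "rpo_hours": 0,
                     "technology": "application-level clustering, synchronous replication"},
        "gold":     {"rto_hours": 24,  "rpo_hours": 1,
                     "technology": "asynchronous replication to standby hardware"},
        "bronze":   {"rto_hours": 120, "rpo_hours": 24,
                     "technology": "restore from tape once replacement hardware is procured"},
    }

    def tier_for(required_rto_hours):
        """Pick the least expensive tier whose RTO still meets the requirement."""
        qualifying = [(name, spec) for name, spec in SERVICE_CATALOGUE.items()
                      if spec["rto_hours"] <= required_rto_hours]
        if not qualifying:
            return "platinum"  # requirement tighter than any tier; take the top tier
        # The slowest qualifying tier is assumed to be the cheapest adequate one.
        return max(qualifying, key=lambda item: item[1]["rto_hours"])[0]

    print(tier_for(48))  # a test system that can tolerate 48 hours -> "gold"

The point is simply that each application is matched to the cheapest tier that still meets its agreed recovery targets, rather than everything defaulting to the most expensive option.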
Once the service catalogue is in place, solutions that meet those requirements can be investigated and purchased. Of course, a full schedule of DR testing is a vital part of any solution.
Though it takes longer, this approach will help to prevent the knee-jerk reactions that so often follow disasters.