Storage system-based asynchronous replication isn’t perfect, but for many corporations it is good enough. Having just finished researching and writing a feature on storage system-based asynchronous replication for an upcoming issue of Storage magazine, I can report that user adoption of the technology is no longer a rarity, at least if one believes the storage system vendors.
While I did not speak to every storage system vendor for this report (there are dozens), the ones I did speak to consistently said that anywhere from 30% to 50% of their users employ this technology. To a certain degree, one might expect these numbers from a storage system vendor like EqualLogic, which includes asynchronous replication as part of its storage system’s base software package. But when Hitachi Data Systems (HDS) went on the record and said it is seeing similar adoption rates among its user base, it caught my attention.
Users of HDS storage systems generally need to license asynchronous replication software separately, which gives some indication of the value users now ascribe to making copies of their data on a secondary storage system. Though it would take some time and a lot of cooperation on the part of HDS to find out what percentage of their licensed users actually use this feature and on what scale, it stands to reason that if users paid for it, a high percentage of them are probably using it.
Companies are figuring out they can repurpose money budgeted for tape and offsite storage and instead use it to buy cheaper secondary storage systems with asynchronous replication software. Companies can then take point-in-time snapshots of their production data, replicate them offsite and use them for daily backups, faster restores and, in a worst-case scenario, recovering their applications from the data copy.
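For the curious, here is a minimal sketch of that cycle in Python. The in-memory dictionaries are toy stand-ins for the two arrays; a real deployment would call the storage vendor’s snapshot and replication APIs, and the 14-day retention window below is purely an assumption for illustration.

```python
import datetime

# Toy in-memory stand-ins for the two arrays; a real deployment would call
# the storage vendor's snapshot/replication API instead.
primary_snaps: dict[str, datetime.date] = {}
secondary_snaps: dict[str, datetime.date] = {}

RETENTION_DAYS = 14  # assumed retention window, not a recommendation

def nightly_cycle(volume: str, today: datetime.date) -> None:
    # 1. Take a point-in-time snapshot of the production volume.
    snap = f"{volume}-{today.isoformat()}"
    primary_snaps[snap] = today

    # 2. Asynchronously replicate it to the secondary array: the copy
    #    trickles offsite after the fact, so production never waits on the WAN.
    secondary_snaps[snap] = today

    # 3. Expire snapshots past the retention window, so the secondary
    #    array doubles as the daily backup store.
    for snaps in (primary_snaps, secondary_snaps):
        for name, taken in list(snaps.items()):
            if (today - taken).days > RETENTION_DAYS:
                del snaps[name]

nightly_cycle("prod-db", datetime.date.today())
```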
Is this architecture perfect? No. But companies are running out of time waiting for the perfect scenario, and tape is certainly not it. At least in this scenario, recoveries happen much faster than waiting on restores from tapes residing in someone’s warehouse. Companies are looking for a cost-effective way to improve their backups and recoveries, and for a growing number of them, storage system-based asynchronous replication is a reasonable compromise between perfection and what is affordable.
Generally the drumbeat of messaging from HDS is as constant as a metronome: array-based virtualization is the answer. Storage virtualization will heal your environment, bring about peace in the Middle East, and solve global warming.
So when an HDS exec writes a piece on his blog about who might not benefit from storage virtualization, it’s definitely worth a read.
David Merrill, a storage consultant and solution architect with HDS since 1996, recently got back from what sounds like a rather thorny customer engagement in Korea. The customer, who is not named, wanted to extend its XP array’s virtualization to legacy systems (the XP line being HP’s rebranding of HDS hardware). During a TCO analysis, Merrill writes, “Total purchase cost for the virtualization solutions was, as you can guess, less than a monolithic, but the 4-year TCO costs were higher” due to power and cooling costs, plus the maintenance costs that come with legacy systems (“when virtualizing older systems, the old hardware maintenance comes along too,” notes Merrill).
The customer still went with virtualization because there was a “tipping point”: with 20% storage growth over the next three years, virtualization becomes more cost-effective. “Moral of the story, be sure to look at many factors when considering different architectures. Just because you can virtualize does not mean that every old system needs to be kept around indefinitely…Your mileage will vary,” Merrill concludes.
Wonder what Mr. T would think of that.
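For what it’s worth, here is a back-of-the-envelope sketch of the tipping-point math Merrill describes. Every dollar figure below is invented for illustration; only the shape of the comparison (lower purchase price, higher running costs) comes from his post.

```python
YEARS = 4

def tco(purchase: int, power_cooling_per_yr: int, maintenance_per_yr: int,
        years: int = YEARS) -> int:
    return purchase + years * (power_cooling_per_yr + maintenance_per_yr)

# Virtualizing the legacy arrays: cheaper to buy, but the old boxes'
# power, cooling and maintenance bills come along for the ride.
virtualized = tco(purchase=400_000, power_cooling_per_yr=90_000,
                  maintenance_per_yr=140_000)

# Monolithic replacement: a bigger up-front check, lower running costs.
monolithic = tco(purchase=900_000, power_cooling_per_yr=40_000,
                 maintenance_per_yr=50_000)

print(f"virtualized 4-yr TCO: ${virtualized:,}")  # $1,320,000
print(f"monolithic 4-yr TCO:  ${monolithic:,}")   # $1,260,000
```

Feed in 20% annual capacity growth, where adding virtualized capacity is cheaper than expanding the monolith, and the comparison eventually flips; presumably that is the tipping point the customer was betting on.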
42 man-years of work and 18 months of development. That’s the amount of time and effort CommVault put into its Simpana 7.0 Software Suite, announced on June 10th, according to Dave West, CommVault’s VP of Marketing and Business Development.
While it is encouraging that CommVault spent so much time on this release, it’s equally sobering to ponder that data protection upgrades now take this much time and effort to complete. But based on what enterprise customers have needed for the last 5 to 10 years, this is the first product that comes close to delivering on those requirements.
Consider this. Frank Albi, the president of Business Information Solutions, a records management provider in Cincinnati, OH, manages paper, tape and optical media. In this role, he is often asked to help his clients develop a records disposal policy. He can deliver one for his clients’ paper records with a high degree of certainty. Not so with tape and optical media. He does not even know where to begin: his clients can’t easily identify which files or records are on which media, so how can he develop an appropriate disposal schedule for them? Customers end up keeping it all, resulting in higher data storage costs and unnecessary exposure to future legal discovery costs.
What is compelling about CommVault’s Simpana is that it opens the door to address this dilemma that Albi and many others face.
It combines backup and archive data into one common pool and, using its newly licensed FAST search engine, allows users to search, access and retrieve archived and backed-up data stored in this new pool. Since backup and archive both use a common policy engine, Simpana can set retention and expiration schedules for any file in the pool. Simpana’s new Single Instance Store (SIS) feature only sweetens the deal, since it eliminates redundant file copies, which in turn reduces the size of data stores and expedites backups.
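To see why single-instancing shrinks data stores, consider this toy sketch of the general technique: identical files hash to the same digest, so the bytes are stored once and each file keeps only a reference. (This illustrates the idea only; it says nothing about how CommVault actually implements SIS.)

```python
import hashlib

class SingleInstanceStore:
    def __init__(self) -> None:
        self.blobs: dict[str, bytes] = {}   # digest -> file contents
        self.catalog: dict[str, str] = {}   # file path -> digest

    def ingest(self, path: str, data: bytes) -> bool:
        """Store a file; returns True if new bytes were kept, False if deduped."""
        digest = hashlib.sha256(data).hexdigest()
        is_new = digest not in self.blobs
        if is_new:
            self.blobs[digest] = data       # first copy: keep the bytes
        self.catalog[path] = digest         # every path keeps its own entry
        return is_new

    def restore(self, path: str) -> bytes:
        return self.blobs[self.catalog[path]]

store = SingleInstanceStore()
store.ingest("/hr/handbook.pdf", b"company handbook")      # True: stored
store.ingest("/legal/handbook.pdf", b"company handbook")   # False: deduped
```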
Granted, to gain Simpana’s benefits administrators need to upgrade or install backup agents on servers – something I always looked forward to as an administrator. Not. But as CommVault’s West points out, users can deploy the agents with push technologies. That may take some of the sting out of the deployment, and the value-add of shorter backups and centralized enterprise searches across archives and backups should appeal to most organizations and offset whatever concerns remain.
CommVault’s Simpana also still lacks the breadth and scope of features that data protection products such as Symantec NetBackup, EMC NetWorker and IBM Tivoli Storage Manager offer. But with disk a growing part of the backup equation and e-discovery a shadow over most companies’ futures, the features that traditional data protection products offer may not carry the same weight they once did.
Bottom line, for companies willing and able to standardize on a single data protection product, CommVault has jumped to the head of the pack and is the one by which data protection products should now be measured. It can reduce the size of data stores, expedite backup and recoveries and search across multiple data stores. Plus, CommVault offers continuous data protection, email archiving and replication products that administrators can manage through the same policy engine — making Simpana without equal in the industry. CommVault’s Simpana 7.0 Software Suite sets the mark high for data protection and is a template that other data protection products will be hard-pressed to match.
I have a hard time imagining that anyone who reads this blog isn’t already aware of The Onion, but just in case you missed it, no one in storage–particularly backup–should miss this video report on wide-scale DR from America’s Finest News Source ™.
Be sure to watch until about 1:40 for that rarest of birds: storage-related humor on a mainstream website. Even rarer: backup-specific storage-related humor.
If only The Onion could fill in the rest of what would surely follow this story: a huge swath of the US workforce left to office-chair races to pass the time; dramatic TV footage of Al Gore flying in to help troubleshoot his invention; and of course, every storage vendor in the world putting out press releases about how if the government had been backing up the InterWebs with [insert product name here], none of this would’ve happened.
Unfortunately, given the nature of the disaster, they’d probably have to start hanging their announcements up on telephone poles.
Meanwhile, however, as anyone reading this post on company time is no doubt keenly aware, there’s another very real workplace problem facing our nation right now, which leaves almost no one unaffected. For more, see this report.
Nope, not kidding. Brocade has had a boat named after it, specifically this boat:
Images courtesy of Brocade
It’s a 24-foot single-person rowboat suited for crossing oceans; its pilot, Roz Savage, has already rowed across the Atlantic solo and is now bidding to become the first woman to row across the Pacific solo in, you guessed it, The Brocade.
Brocade’s rationale for sponsoring the trip is slightly byzantine: the trip is also being done as a project of the Blue Frontier Campaign, a non-profit marine conservation organization, and is in cooperation with the National Oceanic and Atmospheric Administration’s (NOAA) National Marine Sanctuary Program. The stated goal of the voyage is to raise awareness about plastics pollution in the ocean.
Brocade, in turn, has tied its sponsorship of the boat in with its “green” data center campaign, with environmentalism the overarching theme. “Roz’s row will embody ‘efficiency and sustainability’ – a mantra at Brocade,” the company said on a website it has launched to follow Savage’s journey.
It remains unclear to this blogger what raising awareness about plastics pollution in the ocean has to do with data center energy consumption–though if there isn’t a glossy commercial spot for Brocade featuring this boat in at least one storage conference presentation this fall, I’ll eat my laptop.
But at the same time, it’s hard not to be rooting for a woman who is willing to take on such an enormous challenge for a good cause. It’s hard to read the bio she wrote for herself on her website and not come away at least a little bit inspired. And in the end, it’s hard to blame Brocade for wanting to align itself with such a compelling story–whether or not a rowboat really has anything to do with enterprise storage networking.
There are few announcements by major storage vendors that really get my juices flowing; the ones that reveal major new industry or product trends are the ones I find most thought-provoking.
Symantec’s recent NetBackup 6.5 announcement in mid-June at Symantec Visions was exactly that type of announcement. What especially piqued my interest was this slide from the press kit Symantec sent me, which illustrated how the company plans to architect NetBackup going forward.
While NetBackup obviously will do enterprise backups and restores for a very long time to come, what it lacked prior to this announcement was any overarching reason for users to get excited about the product’s future. “NetBackup can do SAN backups” or “NetBackup supports PureDisk” didn’t cut it anymore. Those were just product announcements in response to larger customer trends, which Symantec needed to make in order to remain competitive in the backup software space.
But this announcement is a plan of attack that, if Symantec can execute on it, will give the company a leg up on most other storage software vendors in the enterprise data protection space. The idea that a company can use one tool to centrally manage the functionality of multiple other vendors’ data protection products is one that companies sorely need, whether they realize it or not.
Positioning NetBackup, a product most enterprises already use in some capacity, in this role allows storage architects to build an enterprise data protection strategy around it. Enterprise companies have too many products coming in from too many sources for storage architects to get into the details and politics of whether an application should use Symantec’s NetBackup or BakBone Software’s NetVault:Backup agent for Oracle. If one product is a better fit for an application than another and the company can centrally manage either one, who cares?
This does not mean by any stretch that companies should now mark enterprise data protection as solved. The challenge of managing every vendor’s data protection product is akin to what SRM vendors promised just a few years ago: buy our product and we will manage all of your storage devices. We are all still waiting for that to happen.
Symantec’s challenge is no less daunting. Trying to manage every vendor’s replication, snapshot, CDP, VTL and tape backup product from NetBackup’s central console presents the same challenge. Then, toss in the fact that to do so will likely require some level of cooperation from their competitors. Somehow, I don’t see that happening.
Symantec has the right idea, and even good intentions, as to where to take NetBackup so that it provides the most value for its customers. But the road that lies before them is rocky and one that, when traveled before by the SRM folks, has yet to deliver on its promise. It will be interesting to see whether Symantec’s journey proves more productive.
This morning Plan B failed. I have high-speed internet access into my office, but I pay a small monthly fee to keep a pay-per-minute dial-up account just in case my high-speed internet provider ever goes offline. So this morning, when I lost my internet access, I mentally started preparing myself for 56K upload and download speeds. What I had not mentally prepared myself for was my phone lines also being down. Thankfully, I still had my cell phone and was able to reach the outside world and let some individuals know about my situation.
But it occurred to me that this was fairly typical of how disasters go. Not that losing internet access or phone service is necessarily a disaster, but disasters are rarely neat and tidy; they never happen when it is convenient, and you can generally count on them not to follow the plan you laid out.
In no way am I implying that companies should abandon either their data protection or disaster recovery planning efforts. What I am suggesting is that after you have backed up all your data, laid out your recovery plans and then tested them, introduce some reality back into the situation.
For instance, one records management provider recently suggested to me that companies should evaluate their disaster preparedness right after they have finished a disaster recovery exercise. Tapes are out of order, the recovery environment is not properly configured and people are exhausted. How quickly and how well could your company recover if a disaster happened then?
Another important aspect to include in your plan is to identify someone who knows the plan but is not afraid to think outside the box. I was once in a disaster recovery situation where an entire production database had failed and there was not enough unallocated disk in the free disk pool at that site to recover the database. The plan called for us to recover to another site, but one individual asked “Do we have a SAN?” and “Can we move some allocated but unused disk on another server over to this one?” In both cases the answer was yes, and we were able to recover the application in 2 or 3 hours instead of 8 to 12 hours.
Disaster recovery plans are just that, plans – no more, no less. But like all plans, they were created at a past point in time and may not reflect the current reality. That is where having someone around who can assess the entire situation and not just follow the script becomes imperative if one is to turn the disaster into a recovery.
Former Brocade CEO Greg Reyes went to court Monday to face the music for options backdating. His trial is the first of over 100 cases against companies accused of options backdating, and the results could signal a collapsing house of cards for the technology industry. The criminal indictment charges Reyes with conspiracy to commit securities fraud, mail fraud, making false statements in filings with the SEC and falsifying books and records. He faces decades in prison and millions of dollars in fines if convicted.
Options backdating refers to the practice of reaching back to a date when the company’s stock price was at a low and selecting that date for the option grant’s exercise price, or the price an employee will pay for the stock. The goal is to boost the potential windfall for the recipient: date a June grant back to March, when shares traded at, say, $10 instead of $30, and the recipient starts with a $20-per-share paper gain. It’s said to have been common practice during the heyday of the dotcom era to lure talented employees. The practice becomes criminal when a company hides it from its shareholders, thereby avoiding recording the true compensation expense for the options at the time they were awarded.
But the trial is not about whether Greg Reyes did this or not. His signature is all over hundreds of documents signing off on the practice. The question is whether he knew it was wrong, but did it anyway. Reyes’ defense says that he didn’t know he was doing anything wrong.
So, for the sake of argument, let’s say he was totally clueless and scribbling over financial statements and falsifying board meeting documents seemed perfectly normal to him.
Now think about this. The IRS calls to audit your taxes and discovers you claimed more than you should have. Can you say, “Sorry, officer, the rules are so complex, I must have filled out the form wrong”? From my limited knowledge of tax law, even if you get the wrong information from an IRS agent, you are still liable!
The most complex part for the government in this case is proving intent. And Reyes claims he didn’t profit personally from any options backdating. That may be so, but what about his staff and close associates? Should they be called to account as well?
I was talking with a friend in the business world about this topic, and he says that CEOs hire legal advisors and executive management to advise them on such matters, as they themselves cannot be expected to know, in detail, every aspect and loophole of the law. So where are these guys, then? Perhaps they should be in the dock? My friend also felt that with hindsight it’s clear that backdating was a nefarious practice, but at the time, it really didn’t appear to be.
In high-profile trials such as Reyes’, the defense and prosecution legal eagles will spend weeks jousting over semantics. In the end, the attorneys will wear the jury down and create enough confusion and doubt about Reyes’ actions that he will be let off, perhaps with a slap on the wrist and a fine. That’s my bet.
In the meantime, is it possible for the legal system to monitor the business world a little more closely and for the business world to try to act ethically, to prevent years of wasted legal wrangling and millions of dollars in fees? Or am I just hopelessly optimistic that things can change for the better? Anyone have any thoughts?
Welcome back to Las Vegas, where water (a precious commodity in the desert) will cost you $4 a bottle. This week, storage users moved from the Venetian to the Mandalay Bay hotel, drinking in its tropical theme and all the news they could handle on HP. A few miscellaneous items from the show:
HP CEO: “no confusion” about need to improve business
During his keynote speech Monday night to officially open the show, HP’s CEO Mark Hurd said he’s proud of the progress the company has made “over the last two and a half years” (i.e. since the “Carly Era” ended and the “Mark Era” began), but said “there isn’t any confusion about how much work we have in front of us.”
Hurd told the gathered audience of thousands in Mandalay Bay’s Event Center (where Pay-Per-View boxing matches are also held) that “we’ve invested millions in R&D, only to underdistribute our products in the market.” He estimated the market for HP’s products at $1.1 trillion, compared to HP’s revenues of $100 billion. The company has added 1000 salespeople and is working with its 140,000 channel partners “to get them deeper into the markets they cover,” Hurd said, particularly in midmarket accounts.
“The biggest complaint I get is how hard it is to do business with HP,” Hurd said. “We have to take the complexity that comes with being a big company and turn it into a capability that works for you.”
HP backup tapes climb Everest
Among the presentations at this year’s conference for media was the video-illustrated story of the brothers Clowes, who climbed Everest in 2006 carrying an HP DAT 160 tape drive and LTO-2 tape cartridges to back up photos and video they took on their ascent to the summit. HP “helped out” with the climbing trip and loaned the brothers a laptop as well as the drives, according to Thomas, the excursion’s photographer and the owner of a small residential building company in the UK. He said his two-man climbing team (consisting also of brother Benedict, a UK financial analyst) bailed out a professional BBC video team filming a documentary when that team’s laptop choked on the dust flying around the air at base camp. HP also had the brothers do temperature testing on the tapes, which reportedly survived the wintry ordeal intact.
HP consolidates data centers
What would a conference be without more keynotes? On Tuesday morning, Randy Mott, EVP and CIO of HP, took attendees inside HP’s data center consolidation, a three-year project in which the company is estimated to have invested approximately 2% of its gross revenue (gross revenue, according to Mott’s presentation: $97.18 billion) to consolidate 85 data centers into three “zones,” each with two data center locations, in Houston, Austin and Atlanta. Construction on the data centers is expected to finish by this November, with the majority completed in the next 60 days. The consolidated data centers now boast 21.6 football fields’ worth of raised floor space, 40,000 servers, 16,000 racks and 4 PB of storage. Mott said HP has halved the cost of its storage for double the capacity, though the consolidation won’t stave off storage growth for long; the company expects to be at 10 PB by the end of the three-year project.
That’s all the news that’s fit to print from where I sit–if you’re attending the HP Executive Forum, StorageWorks Americas or Technology Forum conferences this week, feel free to add your thoughts below.
One of the common questions I get from the IT people I meet with is how they can protect remote offices, typically those with no local IT staff (at least officially, anyway). It is important to accept upfront that you may need a couple of solutions to address this challenge, especially if your remote offices vary in size. There may be local databases and, more often than not, local file servers.
The first option is to eliminate the problem by eliminating the local need. Products like Citrix or Windows Terminal Server can eliminate the need for local applications, and a wide area file system (WAFS) can eliminate the need for local file servers. The Citrix/Windows Terminal Server solutions are best explained by those two companies, so we’ll spend our time on WAFS.

WAFS essentially places a cache at the remote site. At a high level, this cache is a server appliance with a small local disk that replicates changes as they happen to a central server at a primary data center. The most frequently accessed data is stored on the remote appliance, which serves up data to local users at local performance. Typically, proprietary but optimized network protocols are used to speed transfers of data that is not in the remote cache. Some WAFS companies also provide data deduplication on the network, similar to how some disk-to-disk backup products use deduplication to optimize storage. A NAS system that can deduplicate data would be an ideal central target for this environment, since there is typically a high level of file redundancy between remote offices and the primary data center.
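Here is a toy model of that edge-cache idea: an LRU cache serves hot files at local speed and writes changes through to the central server as they happen. Real WAFS appliances add file locking, protocol optimization and deduplication; none of that is modeled here.

```python
from collections import OrderedDict

class CentralServer:
    """The file server back at the primary data center."""
    def __init__(self) -> None:
        self.files: dict[str, bytes] = {}

class EdgeCache:
    """The WAFS appliance sitting at the remote office."""
    def __init__(self, center: CentralServer, capacity: int = 100) -> None:
        self.center = center
        self.capacity = capacity
        self.cache: OrderedDict[str, bytes] = OrderedDict()

    def read(self, path: str) -> bytes:
        if path in self.cache:              # hot file: local-speed hit
            self.cache.move_to_end(path)
            return self.cache[path]
        data = self.center.files[path]      # cold file: fetched over the WAN
        self._admit(path, data)
        return data

    def write(self, path: str, data: bytes) -> None:
        self._admit(path, data)
        self.center.files[path] = data      # change replicated centrally

    def _admit(self, path: str, data: bytes) -> None:
        self.cache[path] = data
        self.cache.move_to_end(path)
        if len(self.cache) > self.capacity: # evict the least-recently used
            self.cache.popitem(last=False)
```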
From a data protection standpoint, the centralized repository for all of the remote caches’ data is now a server at your primary data center and can fall under the umbrella of your normal protection scheme. Other advantages of WAFS are that it can eliminate the need to buy additional servers and storage for remote offices, delay the need for bandwidth upgrades and even enable better collaboration between offices.
Another option is to use replication. This is ideal for sites that, in addition to a remote file server, also run some remote database applications or email, especially if there are just a few of these servers present. While most data replication products are considered disaster recovery solutions, they also make an ideal remote or branch office backup solution. With these products, all data is replicated as it changes to centralized disk at the primary site. That disk can then be mounted to a backup server and backed up locally at the primary data center. Cost can be a concern if the local server count is more than just a few servers and you are not leveraging the other value points of replication (disaster recovery and failover). Also, be aware that unless your data replication solution can produce snapshots to freeze moments in time, or you replicate to storage at the primary data center that supports snapshots, any corruption at the remote site will very quickly be replicated to the primary site.
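That last caveat is worth a sketch. The safeguard is to keep periodic point-in-time snapshots at the replication target, so that when corruption arrives from the remote site you can roll back to a clean image. The seven-generation retention below is an arbitrary assumption, and the classes are illustrative stand-ins, not any vendor’s API.

```python
import copy

class ReplicationTarget:
    """Centralized disk at the primary site receiving remote changes."""
    def __init__(self) -> None:
        self.current: dict[str, bytes] = {}
        self.snapshots: list[dict[str, bytes]] = []

    def apply_change(self, path: str, data: bytes) -> None:
        # Replication faithfully copies every change - good or corrupt.
        self.current[path] = data

    def take_snapshot(self, keep: int = 7) -> None:
        # Freeze a moment in time; keep only the last few generations.
        self.snapshots.append(copy.deepcopy(self.current))
        self.snapshots = self.snapshots[-keep:]

    def roll_back(self, generations_ago: int = 1) -> None:
        # Recover a pre-corruption image instead of the corrupted replica.
        self.current = copy.deepcopy(self.snapshots[-generations_ago])
```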
If you don’t need the near-instant protection of WAFS or replication, you can also leverage some D2D solutions to perform a backup locally and use the D2D solution’s replication capability to send that data to a similar device at the primary data center. This typically makes sense when you have a fairly large data set or number of servers and maybe a small remote IT staff. It will almost certainly require a data deduplication device, as straight disk-to-disk backup will not work well for remote backups: standard backup to disk essentially produces several large backup files, which are typically created too fast and are too large to be copied across a common WAN segment within the nightly backup window. Data deduplication appliances, on the other hand, only have to send the blocks that changed between backup jobs, greatly reducing the WAN requirement. You also now have the data locally at the branch office instead of only at the primary data center, allowing for faster recovery at the branch if needed.
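A quick sketch shows why dedup matters on a WAN link: cut each backup into blocks, fingerprint them, and ship only the blocks the central appliance has never seen. The block size and data sizes below are arbitrary; real appliances use their own chunking schemes.

```python
import hashlib
import os

BLOCK_SIZE = 4096  # arbitrary fixed-size blocks; real appliances vary

def replicate_backup(backup: bytes, seen_at_target: set[str]) -> int:
    """Ship a backup to the central appliance; returns bytes sent over the WAN."""
    sent = 0
    for i in range(0, len(backup), BLOCK_SIZE):
        block = backup[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in seen_at_target:   # new block: send the data
            seen_at_target.add(digest)
            sent += len(block)
        # duplicate block: only a tiny reference crosses the wire (ignored here)
    return sent

target: set[str] = set()
backup = os.urandom(1_000_000)             # stand-in for a nightly backup
night1 = replicate_backup(backup, target)  # ~1,000,000 bytes cross the WAN
night2 = replicate_backup(backup, target)  # 0 bytes: every block already seen
```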
In the past four or five years, we have gone from hardly any options for remote office data protection to many. Which of these solutions you deploy is a function of budget and business requirements, and in some cases it may make sense to blend them. Assessing the needs of the remote offices while staying focused on the business realities at the primary data center will help you make those choices.