Storage Soup


July 10, 2007  2:06 PM

All aboard the good ship Brocade

Beth Pariseau Profile: Beth Pariseau

Nope, not kidding. Brocade has had a boat named after it, specifically this boat:


Images courtesy of Brocade

It’s a 24-foot single-person rowboat suited for crossing oceans; its pilot, Roz Savage, has already rowed across the Atlantic solo and is now bidding to become the first woman to row across the Pacific solo in, you guessed it, The Brocade.

Brocade’s rationale for sponsoring the trip is slightly byzantine–the trip is also being done as a project of the Blue Frontier Campaign, a non-profit marine conservation organization, and in cooperation with the National Oceanic and Atmospheric Administration’s (NOAA) National Marine Sanctuary Program. The stated goal of the voyage is to raise awareness about plastics pollution in the ocean.

Brocade, in turn, has tied its sponsorship of the boat in with its “green” data center campaign, with environmentalism the overarching theme. “Roz’s row will embody ‘efficiency and sustainability’ – a mantra at Brocade,” the company said on a website it has launched to follow Savage’s journey.

It remains unclear to this blogger what raising awareness about plastics pollution in the ocean has to do with data center energy consumption–though if there isn’t a glossy commercial spot for Brocade featuring this boat in at least one storage conference presentation this fall, I’ll eat my laptop. 

But at the same time, it’s hard not to be rooting for a woman who is willing to take on such an enormous challenge for a good cause. It’s hard to read the bio she wrote for herself on her website and not come away at least a little bit inspired. And in the end, it’s hard to blame Brocade for wanting to align itself with such a compelling story–whether or not a rowboat really has anything to do with enterprise storage networking.

July 2, 2007  10:11 AM

Symantec got my juices going

Maggie Wright Profile: mwright16

There are few announcements by major storage vendors that really get my juices flowing; the ones that reveal major new industry or product trends are the ones I find most thought-provoking.

Symantec’s recent NetBackup 6.5 announcement, made in mid-June at Symantec Vision, was exactly that type of announcement. What especially piqued my interest was a slide in the press kit Symantec sent me, illustrating how the company plans to architect NetBackup going forward.

While NetBackup will obviously handle enterprise backups and restores for a very long time to come, what it lacked prior to this announcement was any overarching reason for users to get excited about the product’s future. “NetBackup can do SAN backups” or “NetBackup supports PureDisk” didn’t cut it anymore. Those were just product announcements in response to larger market trends, features Symantec needed to provide in order to remain competitive in the backup software space.

But this announcement is a plan of attack that, if Symantec can execute on it, will give the company a leg up on most other storage software vendors in the enterprise data protection space. The idea that a company can use one tool to centrally manage the functionality of multiple other vendors’ data protection products is one that companies sorely need, whether they realize it or not.

Positioning NetBackup, a product most enterprises already use in some capacity, in this role allows storage architects to build an enterprise data protection strategy around it. Enterprise companies have too many products coming in from too many sources for storage architects to get into the details and politics of whether an application should use Symantec’s NetBackup or BakBone Software’s NetVault:Backup agent for Oracle. If one product is a better fit for an application than another and the company can centrally manage either one, who cares?
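
To make the “manager of managers” idea concrete, here is a minimal sketch of the adapter pattern it implies. The class and method names are invented for illustration; they are not Symantec’s or BakBone’s actual interfaces.

```python
# Sketch of one console driving many vendors' backup products through a
# common interface. All names below are hypothetical stand-ins.

from abc import ABC, abstractmethod

class BackupAdapter(ABC):
    """Common operations the central console needs from any backup product."""
    @abstractmethod
    def start_backup(self, client: str) -> str: ...
    @abstractmethod
    def job_status(self, job_id: str) -> str: ...

class NetBackupAdapter(BackupAdapter):
    def start_backup(self, client: str) -> str:
        return f"nbu-job-{client}"   # would call NetBackup's own interface
    def job_status(self, job_id: str) -> str:
        return "complete"

class NetVaultAdapter(BackupAdapter):
    def start_backup(self, client: str) -> str:
        return f"nv-job-{client}"    # would call NetVault's own interface
    def job_status(self, job_id: str) -> str:
        return "complete"

# The console neither knows nor cares which product protects which client.
fleet = {"oracle01": NetVaultAdapter(), "exchange01": NetBackupAdapter()}
jobs = {host: adapter.start_backup(host) for host, adapter in fleet.items()}
```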

This does not mean by any stretch that companies should now mark off enterprise data protection as solved. The challenge of managing every vendor’s data protection product is akin to what SRM vendors promised just a few years ago: buy our product and we will manage all of your storage devices. We are all still waiting for that to happen.

Symantec’s challenge is no less daunting. Trying to manage every vendor’s replication, snapshot, CDP, VTL and tape backup product from NetBackup’s central console presents the same challenge. Then, toss in the fact that to do so will likely require some level of cooperation from their competitors. Somehow, I don’t see that happening.

Symantec has the right idea, and even good intentions, as to where to take NetBackup so that it provides the most value for its customers. But the road that lies before them is rocky, and one that, when traveled before by the SRM folks, has yet to deliver on its promise. It will be interesting to see whether Symantec’s journey proves more productive.


June 21, 2007  10:48 AM

When Plan B fails

Maggie Wright Profile: mwright16

This morning Plan B failed. I have high-speed internet access in my office, but I pay a small monthly fee to keep a pay-per-minute dial-up account just in case my high-speed provider ever goes offline. So this morning, when I lost my internet access, I mentally started preparing myself for 56K upload and download speeds. What I had not mentally prepared myself for was my phone lines also being down. Thankfully, I still had my cell phone and was able to reach the outside world and let a few people know about my situation.

But it occurred to me that this was fairly typical of how disasters go. Not that losing internet access or phone service is necessarily a disaster, but disasters are rarely neat and tidy; they never happen when it is convenient, and you can generally count on them not to follow the plan you laid out.

In no way am I implying that companies should abandon either their data protection or disaster recovery planning efforts. What I am suggesting is that after you have backed up all your data, laid out your recovery plans and then tested them, introduce some reality back into the situation.

For instance, a concern that one records management provider recently expressed to me is that companies should evaluate their disaster preparedness after they have just finished a disaster recovery exercise. Tapes are out of order, the recovery environment is not properly configured and people are exhausted. How quickly and how well could your company recover in this situation if a disaster happened then?

Another important aspect to include in your plan is to identify someone who knows the plan but is not afraid to think outside the box. I was once in a disaster recovery situation where an entire production database had failed and there was not enough unallocated disk in the free disk pool at that site to recover the database. The plan called for us to recover to another site, but one individual asked “Do we have a SAN?” and “Can we move some allocated but unused disk on another server over to this one?” In both cases the answer was yes, and we were able to recover the application in 2 or 3 hours instead of 8 to 12 hours.

Disaster recovery plans are just that, plans – no more, no less. But like all plans, they were created at a past point in time and may not reflect the current reality. That is where having someone around who can assess the entire situation and not just follow the script becomes imperative if one is to turn the disaster into a recovery.


June 20, 2007  1:16 PM

A word on options backdating

Jo Maitland Profile: JoMaitland

Former Brocade CEO, Greg Reyes, went to court Monday to face the music for options backdating. His trial is the first of over 100 cases against companies accused of options backdating and the results could signal a collapsing house of cards for the technology industry. The criminal indictment against Reyes charges him with conspiracy to commit securities fraud, mail fraud, making false statements in filings with the SEC and falsifying books and records. He faces decades in prison and millions of dollars in fines if convicted.

Options backdating refers to the practice of reaching back to a date when the company’s stock price was at a low and selecting that date for the option grant’s exercise price, or the price an employee will pay for the stock. The goal is to boost the potential windfall for the recipient. It’s said to have been common practice during the heyday of the dotcom era as a way to lure talented employees. The practice becomes criminal when a company hides it from shareholders, thereby avoiding recording the correct compensation expense for the options at the time they were awarded.
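
To make the mechanics concrete, here is a toy calculation of how backdating inflates a grant’s value. Every number is hypothetical, chosen only to show the arithmetic.

```python
# Toy illustration of why backdating inflates an option grant's value.

shares = 10_000
price_on_actual_grant_date = 20.00  # stock price the day the grant is really made
backdated_low = 5.00                # earlier low picked as the "official" grant date

# The exercise (strike) price is set to the stock price on the grant date,
# so picking a past low lowers the strike.
honest_strike = price_on_actual_grant_date
backdated_strike = backdated_low

market_price_at_exercise = 30.00  # suppose the stock trades here later

honest_gain = shares * (market_price_at_exercise - honest_strike)        # $100,000
backdated_gain = shares * (market_price_at_exercise - backdated_strike)  # $250,000

print(f"Extra windfall from backdating: ${backdated_gain - honest_gain:,.0f}")
# -> Extra windfall from backdating: $150,000
```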

But the trial is not about whether Greg Reyes did this or not. His signature is all over hundreds of documents signing off on the practice. The question is whether he knew it was wrong, but did it anyway. Reyes’ defense says that he didn’t know he was doing anything wrong.

So, for the sake of argument, let’s say he was totally clueless and scribbling over financial statements and falsifying board meeting documents seemed perfectly normal to him.

Now think about this. The IRS calls to audit your taxes and discovers you claimed more than you should have. Can you say, “Sorry officer, the rules are so complex, I must have filled out the form wrong”? From my limited knowledge of tax law, even if you get the wrong information from an IRS agent, you are still liable!

The most complex part for the government in this case is proving intent. And Reyes claims he didn’t profit personally from any options backdating. That may be so, but what about his staff and close associates? Should they be called to account as well?

I was talking with a friend in the business world about this topic and he says that CEOs hire legal advisors and executive management to advise them on such matters, as they themselves cannot be expected to know, in detail, every aspect and legal loophole of the law.  So where are these guys then? Perhaps they should be in the dock?  My friend also felt that with hindsight it’s clear that backdating was a nefarious practice, but at the time, it really didn’t appear to be.

In high-profile trials such as Reyes’, the defense and prosecution legal eagles will spend weeks jousting over semantics. In the end, the jury will be worn down by the attorneys, who will create enough confusion and doubt in jurors’ minds about Reyes’ actions that he will be let off, perhaps with a slap on the wrist and a fine. That’s my bet.

In the meantime, is it possible for the legal system to monitor the business world a little more closely and for the business world to try to act ethically, to prevent years of wasted legal wrangling and millions of dollars in fees? Or am I just hopelessly optimistic that things can change for the better? Anyone have any thoughts?  


June 20, 2007  12:39 PM

Further notes on HP StorageWorks and Technology Forum

Beth Pariseau Profile: Beth Pariseau


HP’s impressive keynote hall…actually a boxing arena.

Welcome back to Las Vegas, where water (a precious commodity in the desert) will cost you $4 a bottle. This week, storage users moved from the Venetian to the Mandalay Bay hotel, drinking in its tropical theme and all the news they could handle on HP. A few miscellaneous items from the show: 

HP CEO: “no confusion” about need to improve business

During his keynote speech Monday night to officially open the show, HP’s CEO Mark Hurd said he’s proud of the progress the company has made “over the last two and a half years” (i.e. since the “Carly Era” ended and the “Mark Era” began), but said “there isn’t any confusion about how much work we have in front of us.”

Hurd told the gathered audience of thousands in Mandalay Bay’s Event Center (where Pay-Per-View boxing matches are also held) that “we’ve invested millions in R&D, only to underdistribute our products in the market.” He estimated the market for HP’s products at $1.1 trillion, compared to HP’s revenues of $100 billion. The company has added 1000 salespeople and is working with its 140,000 channel partners “to get them deeper into the markets they cover,” Hurd said, particularly in midmarket accounts.

“The biggest complaint I get is how hard it is to do business with HP,” Hurd said. “We have to take the complexity that comes with being a big company and turn it into a capability that works for you.”

HP backup tapes climb Everest

Among the presentations at this year’s conference for media was the video-illustrated story of the brothers Clowes, who climbed Everest in 2006 carrying an HP DAT 160 tape drive and LTO-2 tape cartridges to back up the photos and video they took on their ascent to the summit. HP “helped out” with the climbing trip and loaned the brothers a laptop as well as the drives, according to Thomas, the excursion’s photographer and the owner of a small residential building company in the UK. He said his two-man climbing team (consisting also of brother Benedict, a UK financial analyst) bailed out a professional BBC video team filming a documentary when that team’s laptop choked on the dust flying around base camp. HP also had the brothers do temperature testing on the tapes, which reportedly survived the wintry ordeal intact.

HP consolidates data centers

What would a conference be without more keynotes? On Tuesday morning, Randy Mott, EVP and CIO of HP, took attendees inside HP’s data center consolidation, in which the company is estimated to have invested approximately 2% of its gross revenue (roughly $1.9 billion against the $97.18 billion in gross revenue cited in Mott’s presentation) in a three-year project to consolidate 85 data centers into three “zones,” each with two data center locations, in Houston, Austin and Atlanta. Construction on the data centers is expected to finish by this November, with the majority completed in the next 60 days. The consolidated data centers now boast 21.6 football fields’ worth of raised floor space, 40,000 servers, 16,000 racks and 4 PB of storage. Mott said HP has halved the cost of its storage while doubling capacity, though the consolidation won’t stave off storage growth for long–the company expects to be at 10 PB by the end of the three-year project.

That’s all the news that’s fit to print from where I sit–if you’re attending the HP Executive Forum, StorageWorks Americas or Technology Forum conferences this week, feel free to add your thoughts below.


June 19, 2007  9:30 AM

Users: We need remote office data protection

Carolyn E.M. Gibney Profile: cgibney

One of the common questions I get from the IT people I meet with is how they can protect remote offices, typically those with no local IT staff (at least officially, anyway). It is important to accept upfront that you may need a couple of solutions to address this challenge. This is especially true if you have remote offices that vary in size. There may be local databases and, more often than not, local file servers.

The first option is to eliminate the problem by eliminating the local need. Products like Citrix or Windows Terminal Server can eliminate the need for local applications, and a wide area file system (WAFS) can eliminate the need for local file servers. The Citrix/Windows Terminal Server solutions are best explained by those two companies, so we’ll spend our time on WAFS.

WAFS essentially places a cache at the remote site. At a high level, this cache is a server appliance with a small local disk that replicates changes as they happen to a central server at a primary data center. The most frequently accessed data is stored on the remote appliance, which serves it up to local users at local performance. Typically, proprietary but enhanced network protocols are used to speed transfers of data that is not in the remote cache. Some WAFS companies are also providing data deduplication on the network, similar to how some disk-to-disk backup products use deduplication to optimize storage. A NAS that can deduplicate data would be an ideal central target for this environment, since there is typically a high level of file redundancy between remote offices and the primary data center.
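
For the curious, here is a minimal sketch of the caching-plus-replication idea just described. The classes and method names are invented for illustration, not any WAFS vendor’s actual API.

```python
# Toy model: a branch-office cache serves hot files locally and pushes
# changes back to a central server, skipping WAN transfers for content
# that hasn't changed (a crude stand-in for network deduplication).

import hashlib

class CentralServer:
    """Stand-in for the NAS at the primary data center."""
    def __init__(self):
        self.files = {}
    def replicate(self, path, data):
        self.files[path] = data
    def fetch(self, path):
        return self.files[path]

class RemoteOfficeCache:
    """Stand-in for the WAFS appliance at a branch office."""
    def __init__(self, central):
        self.central = central
        self.cache = {}        # frequently accessed files served locally
        self.sent_hashes = {}  # last content hash replicated per path

    def read(self, path):
        if path in self.cache:            # hot file: local-disk speed
            return self.cache[path]
        data = self.central.fetch(path)   # cold file: one WAN round trip
        self.cache[path] = data
        return data

    def write(self, path, data):
        self.cache[path] = data           # local write completes immediately
        digest = hashlib.sha256(data).hexdigest()
        if self.sent_hashes.get(path) != digest:  # skip unchanged content
            self.central.replicate(path, data)
            self.sent_hashes[path] = digest
```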

From a data protection standpoint, the centralized repository for all the remote caches’ data is now a server at your primary data center and can fall under the umbrella of your normal protection scheme. Other advantages of WAFS are that it can eliminate the need to buy additional servers and storage for remote offices, delay the need for bandwidth upgrades and even enable better collaboration between offices.

Another option is to use replication. This is ideal for sites that, in addition to a remote file server, also have some remote database applications or email, especially if there are just a few of this type of server present. While most data replication products are considered disaster recovery solutions, they also make an ideal remote or branch office backup solution. With these products, all data is replicated as it changes to centralized disk at the primary site. That disk can then be mounted to a backup server and backed up locally at the primary data center. Cost can be a concern if the local server count is more than just a few servers and you are not leveraging the other value points (disaster recovery and failover) of replication. Also, unless your replication solution can produce snapshots to freeze moments in time, or you can leverage snapshot technology by replicating to storage at the primary data center that supports it, be aware that if you experience corruption at the remote site, you will very quickly replicate that corruption to the primary site.

If you don’t need the near-instant protection of WAFS or replication, you can also leverage some D2D solutions to perform a backup locally and use the D2D solution’s replication capability to send that data to a similar device at the primary data center. This typically makes sense when you have a fairly large data set or number of servers and perhaps a small remote IT staff. It will almost certainly require a data deduplication device, as straight disk-to-disk backup does not work well for remote backups: standard backup to disk essentially produces several large backup files, which are typically created too fast and are too large to copy across a common WAN segment in time to meet the nightly backup window. Data deduplication appliances, on the other hand, only have to send the blocks that changed between backup jobs, greatly reducing the WAN requirement. You also now have the data locally at the branch office instead of only at the primary data center, allowing for faster recovery at the branch if needed.
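
Here is a toy sketch of that changed-block idea, assuming fixed-size chunking; real appliances typically use variable-size chunking and far more robust indexing.

```python
# Split a backup image into fixed-size blocks, hash each one, and send
# only the blocks the replication target has not already seen. This is
# why night-two WAN traffic shrinks to just the changed blocks.

import hashlib

BLOCK_SIZE = 4096

def blocks_to_send(backup_image: bytes, known_hashes: set) -> list:
    """Return (offset, block) pairs the target doesn't already have."""
    new_blocks = []
    for offset in range(0, len(backup_image), BLOCK_SIZE):
        block = backup_image[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in known_hashes:   # only unseen blocks cross the WAN
            known_hashes.add(digest)
            new_blocks.append((offset, block))
    return new_blocks

# Night one sends everything; night two sends only what changed.
```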

In the past four or five years, we have gone from hardly any options for remote office data protection to many. Which of these solutions you deploy is a function of budget and business requirements, and in some cases it may make sense to blend them. Assessing the needs of the remote offices while focusing on the business realities at the primary data center will help you make those choices.

For more information please email me at georgeacrump@mac.com or visit the Storage Switzerland Web site at: http://web.mac.com/georgeacrump.


June 15, 2007  6:56 AM

Symantec Vision–Dispatches from Keynote Central

Beth Pariseau Profile: Beth Pariseau

Coming to you from sunny Las Vegas, in the “Blogger’s Lounge” inside Symantec’s Vision City (their name for the show floor). Seemed apropos.

In addition to the Blogger’s Lounge, which consists of picnic tables set up on a large field of Astroturf in the middle of the floor that has been dubbed “Central Park”, Vision City has a Financial District, consisting of business partners’ exhibits, and Tech Row (no relation to Skid Row, presumably), where product demos are being held. Central Park also has Wii video games, robotics competitions, and obstacle courses for remote-control cars.



In each of this year’s seven keynotes, Symantec has shown similarly slick production values, beginning with an address given by data center group president Kris Hagerman on Tuesday.

Quite often when video rolls at a conference general session, attendees can be seen checking their watches and email, or taking advantage of the dimmed lights to catch some shuteye. But the video during Hagerman’s address, called “The Complexity Master,” was a hit with the crowd, who responded with genuine belly laughs to the Office Space stylings of the short film, which documented a day in the life of a backup administrator trying to sell his company on standardizing on the NetBackup platform (the vid was, of course, not completely without marketing).

The biggest laughs came when the Complexity Master, a Southern California comedian hired for the role, and his supporting cast (presumably of other comedic actors), were given the dialogue most backup admins either imagine or wish would happen during meetings with business units–witness the one higher-up who chews his donut thoughtfully before saying, every time he’s called upon, “that sounds…complicated.”

We can only hope it gets leaked to YouTube.

Video hijinks weren’t the end of the three-ring circus at Hagerman’s keynote, either. Senior director of product marketing Matt Fairbanks bounded onto the stage sporting one of the “Storage United” soccer jerseys (after Manchester United, DC United, et al), and was later joined by a girls’ under-10 soccer team from Silicon Valley, who handed out soccer balls and jerseys to the crowd. The talk of the conference following the session was the tiny pigtailed soccer player who flawlessly rattled off a spiel about replacing her TSM and Legato environments and achieving huge ROI when asked what she thought of the new NetBackup.  Somebody get that girl an agent.

More nuggets from the show for your reading pleasure are below the fold.



June 13, 2007  2:03 PM

Shameful disclosure

Jo Maitland Profile: JoMaitland

Word of tapes “falling off the back of trucks” comes almost monthly these days, but the way companies handle the disclosure of these admittedly embarrassing incidents is shameful.

A coworker at TechTarget told me this morning that he had just received a letter from IBM informing him that the company had lost tapes containing sensitive current and former employee data, potentially including his social security number. This is old news [May 15], but a few things struck me as interesting about it.

1) He has not worked for IBM in over 20 years, yet the company is still storing information on him. Ever heard of ILM over there guys? I think Tivoli has something…

2) IBM announced this publicly on May 15 but my friend did not receive the letter until June 7.

3) IBM lost the tapes on Feb. 23, 2007.

“Time was needed to investigate the incident, determine the nature of the information on the lost tapes, and conclude that recovery of the tapes was unlikely,” IBM said in an FAQ sheet sent to its employees.  “In order not to impede any continuing investigative efforts, we are not disclosing the numbers of individuals affected,” it added.

Come on! We weren’t born yesterday. IBM’s excuse for the delay in informing its employees, as well as for withholding the number of people affected, seems disingenuous, probably intended to avoid further embarrassment. It’s a poor response, not to mention bitterly ironic given IBM’s focus on security.

My friend was given a year’s worth of free credit reporting to help him track whether anyone is using his stolen information. If IBM thinks this is enough to rescue its relationship with its employees, it might want to take a look at this survey of people who were notified that their personal information had been lost: it found that 20% had already stopped doing business with the company at fault and another 40% were considering it.


June 12, 2007  2:28 PM

How much data deletion is enough?

Billy Hurley Profile: WHurley

We all know that deleting a file doesn’t actually “delete” anything. Deletion only marks the file’s clusters as free for re-use — the data actually remains tucked away within the sectors of each cluster until it is overwritten by new data. The conventional wisdom holds that to really destroy data, it must be overwritten multiple times, ensuring that magnetic traces of previous recordings cannot be read with advanced laboratory equipment (even once new data is on the media).

But how many times do you really have to overwrite that deleted data before it’s actually considered secure? Once? Twice? Ten times? Experts say that multiple overwrites are worthwhile — even required — noting that anywhere from 7 to 11 writing passes may be needed to fully overwrite the old data.

And there’s no shortage of tools that promise to kill your old data. Professional products like FDRERASE/OPEN from Innovation Data Processing can securely erase the magnetic disk using three to eight passes. Even end-user products like File Shredder from HandyBits.com promise to overwrite file data with random information up to 15 times, claiming that “it is practically impossible to recover the original data”.

Now there are circumstances when it pays to be extra thorough, but personally I think multi-pass wiping is overkill — a practice based on old MFM/RLL drive technologies. US DoD specification 5220.22 calls for three overwrites, while NIST standard SP 800-88, revised in 2006, calls for only one overwriting pass on modern (post-2001) hard disks.
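
For anyone who wants to experiment, here is a minimal single-file sketch of the overwrite-then-delete approach; note that it ignores filesystem journaling, extra file copies and remapped bad sectors, all of which real secure-erase tools must contend with.

```python
# Overwrite a file's contents in place for N passes, then remove it.
# Illustrates the one-pass vs. multi-pass debate only; not a complete
# secure-erase tool.

import os

def shred(path: str, passes: int = 1) -> None:
    """Overwrite a file with random data `passes` times, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # one full pass of random fill
            f.flush()
            os.fsync(f.fileno())       # push the pass out to the disk
    os.remove(path)

# NIST SP 800-88 guidance for modern drives: shred("old_data.bin", passes=1)
# DoD 5220.22-style wipe:                    shred("old_data.bin", passes=3)
```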

But I want to hear what you think. What tools are you using? How do you ensure that your old files are securely deleted? Does it even matter to you?

In the meantime, listen to this FAQ on Storage Security, where Kevin Beaver offers practical answers to the most common storage security questions he hears from storage pros today.


June 11, 2007  2:32 PM

DPM software: Makeover or coverup?

Maggie Wright Profile: mwright16

Arun Taneja and I have both focused on the topic of data protection management (DPM) in recent posts. He sees what I also see: DPM software is undergoing a significant transition in purpose. While DPM software is not new (companies like APTARE, AGITE Software, Bocada, Tek-Tools, ServerGraph and others have offered it for years), only now are vendors and customers figuring out how to use it in a larger context within organizations.

Companies tend to think of and use DPM software only in a singular context: day-to-day operations. For the most part, the software does a good job of monitoring and reporting on the successes and failures of backup jobs, identifying failed tape drives and tracking media utilization in tape libraries. But that has not raised DPM software’s value proposition much beyond the purview of the day-to-day operations staff.

Now, some DPM vendors are reworking their messaging to make their products appeal to a larger corporate audience, with capacity planners and storage architects as their primary focus. Part of the motivation for this change is that more companies want to bring disk into their backup scheme, but these organizations lack the information about their current backup environment needed to make such infrastructure changes with confidence.

For many companies, it’s a roll of the dice how well a new disk library will work in their environment. I have spoken to more than one user who purchased a disk library with a large amount of disk capacity, only to find that the library’s controllers could not keep pace with the amount of data the backup software fed them. This forced the user to purchase additional disk libraries, which created a new set of management problems.

DPM software can help address these sizing issues by quantifying how well current backup resources are being used and trending that use over time. This allows companies to select and implement appropriately sized disk libraries based on facts, not assumptions. It may also give them the facts they need to justify delaying a disk purchase, since they may identify better ways to utilize their existing tape assets.
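
As a back-of-the-envelope illustration of that kind of trending, here is a sketch that fits a line to nightly backup volumes and projects when a hypothetical library’s ingest window would be exceeded. All numbers are invented.

```python
# Fit an ordinary least-squares line to a week of nightly backup volumes
# and project when the load outgrows what a (hypothetical) disk library's
# controllers can ingest during the backup window.

nightly_tb = [4.1, 4.3, 4.2, 4.6, 4.8, 5.0, 5.1]  # TB backed up each night

n = len(nightly_tb)
mean_x = (n - 1) / 2
mean_y = sum(nightly_tb) / n

# Least-squares slope: growth in TB per night.
slope = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(nightly_tb)) \
        / sum((x - mean_x) ** 2 for x in range(n))

# Hypothetical library: 0.8 TB/hour sustained ingest, 8-hour backup window.
window_capacity_tb = 0.8 * 8
nights_left = (window_capacity_tb - nightly_tb[-1]) / slope
print(f"Growth {slope:.2f} TB/night; backup window exceeded in ~{nights_left:.0f} nights")
```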

Performing trending and capacity planning is very different from sending out an alarm that a tape drive or a backup job has failed. Companies need to make sure vendors are actually delivering a product makeover in reporting and analysis capabilities, and not simply covering up their product’s deficiencies with the latest rendition of marketing literature.

