Storage Soup


June 15, 2007  6:56 AM

Symantec Vision–Dispatches from Keynote Central

Beth Pariseau

Coming to you from sunny Las Vegas, in the “Blogger’s Lounge” inside Symantec’s Vision City (their name for the show floor). Seemed apropos.

In addition to the Blogger’s Lounge, which consists of picnic tables set up on a large field of Astroturf in the middle of the floor that has been dubbed “Central Park”, Vision City has a Financial District, consisting of business partners’ exhibits, and Tech Row (no relation to Skid Row, presumably), where product demos are being held. Central Park also has Wii video games, robotics competitions, and obstacle courses for remote-control cars.



In each of this year’s seven keynotes, Symantec has shown similarly slick production values, beginning with an address given by data center group president Kris Hagerman on Tuesday.

Quite often when video rolls at a conference general session, attendees can be seen checking their watches, checking email, or taking advantage of the dimmed lights to catch some shuteye. But the video during Hagerman’s address, called “The Complexity Master”, was a hit with the crowd, who responded with genuine belly laughs to the Office Space stylings of the short film, which documented a day in the life of a backup administrator trying to sell his company on standardizing on the NetBackup platform (the vid was, of course, not completely without marketing).

The biggest laughs came when the Complexity Master, a Southern California comedian hired for the role, and his supporting cast (presumably of other comedic actors), were given the dialogue most backup admins either imagine or wish would happen during meetings with business units–witness the one higher-up who chews his donut thoughtfully before saying, every time he’s called upon, “that sounds…complicated.”

We can only hope it gets leaked to YouTube.

Video hijinks weren’t the end of the three-ring circus at Hagerman’s keynote, either. Senior director of product marketing Matt Fairbanks bounded onto the stage sporting one of the “Storage United” soccer jerseys (after Manchester United, DC United, et al), and was later joined by a girls’ under-10 soccer team from Silicon Valley, who handed out soccer balls and jerseys to the crowd. The talk of the conference following the session was the tiny pigtailed soccer player who flawlessly rattled off a spiel about replacing her TSM and Legato environments and achieving huge ROI when asked what she thought of the new NetBackup.  Somebody get that girl an agent.

More nuggets from the show for your reading pleasure are below the fold.


June 13, 2007  2:03 PM

Shameful disclosure

Jo Maitland

Word of tapes “falling off the back of trucks” is almost a once-a-month event these days, but the way companies handle the disclosure of these admittedly embarrassing incidents is shameful.

A coworker at TechTarget told me this morning that he had just received a letter from IBM informing him that the company had lost tapes containing sensitive current and former employee data, potentially including his Social Security number. This is old news [May 15], but a few things struck me as interesting about it.

1) He has not worked for IBM in over 20 years, yet the company is still storing information on him. Ever heard of ILM over there guys? I think Tivoli has something…

2) IBM announced this publicly on May 15 but my friend did not receive the letter until June 7.

3) IBM lost the tapes on Feb. 23, 2007.

“Time was needed to investigate the incident, determine the nature of the information on the lost tapes, and conclude that recovery of the tapes was unlikely,” IBM said in an FAQ sheet sent to its employees.  “In order not to impede any continuing investigative efforts, we are not disclosing the numbers of individuals affected,” it added.

Come on! We weren’t born yesterday. IBM’s excuse for the delay in informing its employees, as well as for withholding the number affected, seems disingenuous, and probably designed to avoid further embarrassment. It’s a poor response, not to mention bitterly ironic given IBM’s focus on security.

My friend was given a year’s worth of free credit reporting to help him track whether anyone is using his stolen information. If IBM thinks this is enough to rescue its relationship with its employees, it might want to take a look at this survey of people who were notified that their personal information had been lost. It found that 20% of them had already stopped doing business with the company in question and another 40% were considering it.


June 12, 2007  2:28 PM

How much data deletion is enough?

Billy Hurley

We all know that deleting a file doesn’t actually “delete” anything. Deletion only marks the file’s clusters as free for re-use — data actually remains tucked away within the sectors of each cluster until they are overwritten by new data. To really destroy data, it must be overwritten multiple times. This ensures that the magnetic traces of previous recordings cannot be read with advanced laboratory equipment (even when new data is on the media).

But how many times do you really have to overwrite that deleted data before it’s actually considered secure? Once? Twice? Ten times? Experts say that multiple overwrites are worthwhile — even required — noting that anywhere from 7 to 11 writing passes may be needed to fully overwrite the old data.

And there’s no shortage of tools that promise to kill your old data. Professional products like FDRERASE/OPEN from Innovation Data Processing can securely erase the magnetic disk using three to eight passes. Even end-user products like File Shredder from HandyBits.com promise to overwrite file data with random information up to 15 times, claiming that “it is practically impossible to recover the original data”.

Now there are circumstances when it pays to be extra thorough, but personally I think it’s overkill — a practice based on old MFM/RLL drive technologies. US DoD specification 5220.22 calls for three overwrites, while NIST standard SP 800-88 was revised in 2006 to call for only one overwrite pass on modern (post-2001) hard disks.
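For what it’s worth, here is a minimal Python sketch of what an overwrite-then-delete pass looks like at the file level. It’s purely illustrative: the function name is mine, it isn’t one of the products mentioned above, and on SSDs, journaling or copy-on-write file systems, and drives with remapped sectors, overwriting in place may not reach every copy of the data.

import os

def overwrite_and_delete(path, passes=1):
    """Overwrite a file's contents in place, then unlink it (illustrative only)."""
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = length
            while remaining > 0:
                chunk = min(remaining, 1024 * 1024)
                f.write(os.urandom(chunk))   # fill with random data, 1 MB at a time
                remaining -= chunk
            f.flush()
            os.fsync(f.fileno())             # force this pass out to disk
    os.remove(path)

# Example: a single pass, in the spirit of the revised NIST SP 800-88 guidance
# overwrite_and_delete("old_budget.xls", passes=1)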

But I want to hear what you think. What tools are you using? How do you ensure that your old files are securely deleted? Does it even matter to you?

In the meantime, listen to this FAQ on storage security, where Kevin Beaver offers practical answers to the most common storage security questions he hears from storage pros today.


June 11, 2007  2:32 PM

DPM software: Makeover or coverup?

Maggie Wright

Arun Taneja and I have focused on the topic of data protection management (DPM) in recent posts. He sees what I also see – DPM software is undergoing a significant transition in purpose. While DPM software is not new and companies like APTARE, AGITE Software, Bocada, Tek-Tools, ServerGraph and others have offered it for years, only now are vendors and customers figuring out how to use it in a larger context within organizations.

Companies tend to think of and use DPM software only in a singular context – day-to-day operations. For the most part, this software does a good job of monitoring and reporting on the successes and failures of backup jobs, identifying failed tape drives and tracking the utilization of media in tape libraries. However, this has not raised DPM software’s value proposition much beyond the purview of the day-to-day operations staff.

Now, some DPM vendors are reworking their messaging to make their products more appealing to a larger corporate audience – with capacity planners and storage architects as their primary focus. Part of the motivation for this change is that more companies want to bring disk into their backup scheme. But these organizations lack the information about their current backup environment needed to confidently make these types of changes to their infrastructure.

For many companies, it’s a roll of the dice as to how well a new disk library will work in their environment. I have spoken to more than one user who purchased a disk library with a large amount of disk capacity only to find out that the controllers on the disk library could not keep pace with the amount of data the backup software fed to it. This required them to purchase additional disk libraries, which created a new set of management problems.

DPM software can help address these types of sizing issues by quantifying how well current backup resources are being used and trending their use over time. This allows companies to select and implement appropriately sized disk libraries based on facts, not assumptions. It may also give them the facts they need to justify waiting on a disk purchase, since they may identify better ways to utilize their existing tape assets.
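As a rough illustration of the kind of trending involved, here is a hypothetical Python sketch (the numbers and function name are invented, not taken from any DPM product) that fits a straight line to observed disk-library utilization and projects how many days remain before a library of a given size fills up.

def days_until_full(daily_used_gb, capacity_gb):
    """Linear projection of days until utilization reaches capacity_gb."""
    n = len(daily_used_gb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(daily_used_gb) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, daily_used_gb)) \
            / sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None   # utilization is flat or shrinking; no projected fill date
    return (capacity_gb - daily_used_gb[-1]) / slope

# Example: 30 days of samples growing roughly 40 GB/day toward a 20 TB library
samples = [8000 + 40 * d for d in range(30)]
print(days_until_full(samples, capacity_gb=20000))   # about 271 days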

Performing trending and capacity planning is very different from sending out an alarm that a tape drive or a backup job has failed. Companies need to be sure that vendors are actually delivering a product makeover in reporting and analysis capabilities, and not simply covering up their products’ deficiencies with the latest rendition of their marketing literature.


June 5, 2007  8:13 AM

Meet Drobo. Could it be the RAID of the future?

Beth Pariseau

Data Robotics, a new storage startup just coming out of stealth, has announced both itself and its first product today. Data Robotics was formerly working under the name Trusted Data, and is a venture led by BlueArc founder Geoff Barrall. The product is a consumer storage unit called Drobo that Data Robotics says it intends to scale up into the enterprise market.

Why should you care? Because if Data Robotics has its way, Drobo could change the concept of RAID storage.

Here’s how it works: the Drobo box, a black cube that will fit on a desk, contains four disk bays. SATA disk drives of any size, from any manufacturer, can be added and the box will automatically stripe data across them; the box uses no management software, and instead has a system of lights that show red, yellow or green. If it’s red, replace the disk. If it’s yellow, the disk is filling up. Green, and all is well.

When disks fill up, they can be swapped out for a larger size and data is restriped automatically, using RAID levels that change according to the disk capacity left over. For example, in a system with 120 GB, 250 GB, 500 GB and 750 GB drives installed, 120 GB of each disk would use RAID 5 striping in a 3+1 configuration. As the 120 GB drive fills up, the system would put the remaining capacity into a 2+1 configuration, and then finally into a mirrored pair (in case you’re doing the math at home, the final 250 GB of the 750 GB drive would remain empty in that scenario).
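For readers who want to follow the arithmetic, here is a hypothetical Python sketch of that layered scheme. It simply reproduces the example above (parity across whatever drives still have capacity, a mirrored pair when only two remain) and is not Data Robotics’ actual algorithm.

def usable_capacity(drive_sizes_gb):
    remaining = sorted(drive_sizes_gb)
    usable = 0
    while len([r for r in remaining if r > 0]) >= 2:
        active = [r for r in remaining if r > 0]
        layer = min(active)                       # each layer is limited by the smallest active drive
        if len(active) >= 3:
            usable += layer * (len(active) - 1)   # parity striping: N drives yield N-1 drives' worth
        else:
            usable += layer                       # only two drives left: mirrored pair
        remaining = [r - layer if r > 0 else 0 for r in remaining]
    return usable   # anything left on a single drive stays unused

print(usable_capacity([120, 250, 500, 750]))   # 870 GB usable; 250 GB of the 750 GB drive left empty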

Data Robotics president Dan Stevenson said the system was designed for the non-technical customer–it could perhaps even be termed the extremely non-technical customer. Hence the lights. “If you can figure out a traffic light, you can figure out how to manage this storage box,” Stevenson said.

Of course, you may have to understand a bit more than that to know that you shouldn’t take the 750 GB drive out to replace it without enough capacity left in the remaining disk slots to absorb the data while it’s missing. And Stevenson said so far Data Robotics isn’t pursuing any distribution deals with Best Buy, instead presenting Drobo as an alternative to homegrown RAID arrays for BitTorrent-addicted power users or “prosumers”, professional digital photographers, and small businesses where, to quote Stevenson, “the IT guy is Frank’s son.”

One such company is Michael Witlin Associates (MWA), a five-man company that produces corporate events in Silicon Valley. Its owner, namesake and executive producer received a free evaluation unit of Drobo and says it’s been humming along nicely in the few months he’s tested it, in contrast to a Maxtor-drive-based RAID 5 array he struggled to manage. Every so often, partitions on that array would “drop off” in Witlin’s user interface, “and I could never figure out what the problem was,” he said. Drobo plugs in via USB 2.0–something Witlin said he prefers–and “there’s nothing I have to do after that.”

Meanwhile, Data Robotics is aiming to take its approach to RAID to the big time. Stevenson envisions a Drobo-ruled utopia in which lower-paid “Tier 3” IT admins, who need only the expertise to observe and act on a red, yellow or green light, also manage Tier 3 nearline storage, automatically striped using Drobo’s RAID method.

“Their algorithms could have a huge impact on enterprise storage, as well,” wrote Brad O’Neill, senior analyst with the Taneja Group, in an email. “If you can begin to create heterogeneous arrays right down to the drive level, with no interruption of availability or performance, you’ve done something extremely disruptive to the market–capacity upgrades would become the equivalent of simply plugging in drives, waiting for a green light, then adding more. I could imagine large service providers using drive robots running swaps and upgrades not unlike tape robots do today.”

In the interest of full disclosure, O’Neill’s not just an enthusiastic supporter of Drobo’s vision, but also of their bottom line: “I have a Drobo plugged into my laptop right now via USB providing 700 gigabytes of…storage…with four different drives of varying capacities and vendors,” O’Neill confessed. “I [also] bought four of them and gave them to my friends.”

(His friends are probably used to those special Christmases with Brad…)


June 4, 2007  7:54 AM

Cross correlation engines reaching into primary storage

Nicole D'Amour

You have seen my writings on (and may even have heard me speak about) a Cross Correlation (CC) analytics engine as a necessary part of a Data Protection Management (DPM) product. DPM products make your backup and restore environment work more efficiently. Recently, I have seen the application of CC techniques to solve problems on the primary storage side. And much to my pleasure, I have also seen the technique applied to manage application performance.

Several players are delivering products in the DPM market, including Aptare, Bocada, Illuminator, Servergraph, Tek-Tools and WysDM and, most recently, Symantec with its NetBackup Reporter product. These products, as a category, are delivering real value, based on my conversations with many of you. EMC, which resells WysDM as Backup Advisor, is apparently shipping it in large quantities. All the big data protection vendors have gotten religion on this recently, and they are all scrambling to add DPM functionality via in-house R&D or through a partnership.

To be sure, not all products are created equal in terms of the strength of the CC engine (or even the existence of one), which to me is the essence of the product. Without a sound CC engine, the best a product can do is rudimentary analysis and basically report on changes.

I have seen two new and interesting uses of CC recently. First, WysDM announced WysDM for File Servers. Essentially, that means the same CC engine is being used to look at NetApp filers (primary storage) to determine if the filer is behaving as it should. Much as before, the product gathers data from the application and through all hardware and software layers that reside between it and the filer, and applies analytics to determine if the system is behaving within acceptable boundaries. Are response times to file requests deteriorating? Is capacity being utilized efficiently? Is a file system about to run out of storage? What needs to be done to solve the problem? Will an additional Gigabit Ethernet (GE) connection make a difference? You get the point.

I know you are probably saying to yourself, “I get some of that information from the filer’s built-in management tools.” Of course you do. But, just like on the data protection side, the amount and type of information about the environment that was being delivered before this kind of tool was available was rudimentary and static. Unless one steps outside the filer and looks at the entire picture end to end, it is hard to determine the root cause of a problem that exists or is in the making. That can only be done with a sophisticated CC tool. And only a sophisticated tool will give you predictive information with a high degree of confidence.
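To make the idea concrete, here is a hypothetical Python sketch (the metric names and thresholds are invented for illustration, not taken from WysDM) of the kind of cross-layer check such an engine might run: watch file-request latency against capacity utilization and flag when the two are climbing together.

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def check_filer(latency_ms, used_pct, latency_limit=20, corr_limit=0.8):
    alerts = []
    if latency_ms[-1] > latency_limit:
        alerts.append("file response time above acceptable boundary")
    if correlation(latency_ms, used_pct) > corr_limit and used_pct[-1] > 85:
        alerts.append("latency rising with utilization; capacity headroom running out")
    return alerts

# Example: response times creeping up as the file system fills
print(check_filer([8, 9, 11, 14, 19, 24], [70, 74, 79, 83, 87, 91]))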

Another company that has applied CC to primary storage is Illuminator Software, whose DPM product now includes functionality for snapshots and replication. But the product is still true to its data protection roots. In this case, the product provides information on the readiness of volumes from a data recoverability point of view. Whether the volume is protected using snapshots, replication, secondary disk or tape, its recoverability is established and reported on. The product also offers advice on the actions necessary to improve recoverability.

The third company, Akorri Networks, has applied a CC engine for an entirely different purpose: to provide insight into application performance. Of course, application recoverability is improved when application availability is improved, so there is an underlying connection here. But the overt focus is to provide insight into how storage resources are being used to deliver a certain level of performance at the application level. In other words, given a particular SLA for an application, does one have adequate or inadequate storage resources applied? Would extra resources (higher throughput storage, more storage, another pipe to storage, etc.) help bring application performance back within SLA boundaries? Or would they be a waste? What would help the most? With this kind of information, the right type and quantity of resources can be applied, saving both time and money.
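As a purely illustrative sketch (the layer names, numbers and decision rule here are mine, not Akorri’s), the question being asked looks something like this in Python:

def sla_verdict(layer_ms, sla_ms):
    """Decide whether storage is the place to add resources for a given response-time SLA."""
    total = sum(layer_ms.values())
    if total <= sla_ms:
        return "within SLA: extra storage resources would likely be wasted"
    worst = max(layer_ms, key=layer_ms.get)
    if worst == "storage":
        return "storage is the largest contributor: faster or additional storage should help"
    return "over SLA, but the '" + worst + "' layer dominates: storage upgrades will not fix this"

# Example: a 40 ms SLA where the storage layer accounts for most of the response time
print(sla_verdict({"application": 12, "network": 5, "storage": 38}, sla_ms=40))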

The progress in these areas has been truly phenomenal in the last three years, and yet we are still in the infancy stage of utilizing these tools. Most of these technologies have come from smaller companies, whose reach is limited. Given that your environment is only getting more complex, it behooves you to check these out! Send me an email if you need any help.


May 31, 2007  8:53 AM

Straight out of the 80s

Maggie Wright

Why is it still acceptable to the majority of us to protect and recover our data like we did in the 1980s? Obviously, backup software has evolved in the last 20 years to perform differentials, incrementals and synthetics, integrate with most major databases, take advantage of array-based snapshots and do SAN-based backups. But, at the end of the day, many data protection products still lag behind the widespread user desire – may I even suggest requirement – for near-instant recoveries.

It strikes me as ironic that low-tech industries like fast food can serve up a hamburger in 30 seconds or less while those of us who work in the technology industry can’t recover data for many of our company’s applications in that amount of time. The guys running the hamburger joint at some point figured out that they made more money and were more productive making hamburgers every 30 seconds than they did every 90 seconds. At a minimum, we should seek to be as productive.

The fast food guys also managed to figure out that letting you fill up your own drinks and go back for free refills was cheaper and faster than dedicating two people behind the counter to do the same thing. So, why can’t we high-tech folks figure out a way to empower our users to recover data rather than always requiring storage administrators to perform this task for them?

Now, this is not meant to diminish the value that storage administrators provide or to say that data protection is akin to serving up a hamburger. Obviously, all data is not created equal and you don’t want just any user to be able to access and restore data for a mission-critical production environment. There is still too much complexity and the ramifications – financial, political and technical – if anything goes wrong are potentially enormous. But, should recovering a file on a file server in 2007 really require a call to the help desk, a storage administrator and a wait time of 30 minutes or longer?

Near-instant recovery of data in the 21st century should no longer be reserved for just applications deemed “mission-critical”. Companies have too few employees and too many applications running on too many different servers to possibly keep track of which applications are mission-critical, and early indications are that the emerging world of virtual servers will only exacerbate this situation.

Now, I am not suggesting one immediately abandon one’s current backup software in favor of new products like CommVault’s Continuous Data Replicator, NetApp’s Topio Data Protection Suite or InMage’s DR-Scout that can deliver near real-time data replication and recovery. Everyone should be careful with their data and proceed cautiously with any of these new products, because they all take time to implement and tune for your environment.

But we should keep in mind that this is 2007, not the 1980s, and there is a risk associated with not moving forward. Just as your computing environment has changed, new data protection technologies are available that are better suited to today’s environment. Unfortunately, if your company has not changed its fundamental approach to protecting and recovering data, odds are it is operating at a disadvantage, providing users a level of service that, in this day and age, they should not have to ask for but should simply expect.


May 29, 2007  3:59 PM

Archiving action

Karen Guglielmo

Hitachi Data Systems announced a major upgrade to its Content Archive Platform, which SearchStorage.com News Director Jo Maitland reported on today. If you’re considering adopting an archiving product, you might want to check out the second chapter of the Data Retrieval Research Guide, which we published this week. It highlights the key issues involved with retrieving data from archives, with lots of information on CAS and deduplication.

We also recorded a podcast on email archiving a while back, which offers information about archiving with CAS and dealing with unstructured data.

Elsewhere, you might be interested in reading what Hu Yoshida, vice president and CTO of HDS, has to say about HDS’s Content Archive Platform in a recent blog post entitled “When is CAS not CAS?”.


May 21, 2007  1:30 PM

Working hard at EMC World…

Beth Pariseau

[Photos from the EMC World show floor]


May 21, 2007  1:15 PM

Don’t like your email archiving system? Service providers are standing by…

Beth Pariseau

Managed service provider RenewData briefed us today on the launch of a data migration service specifically for transferring email archives from one archiving product to another while maintaining a legal chain of custody.

Renew has partnerships with EMC, Symantec and CA (for the former iLumin product), which allow its proprietary data migration software to bypass the archiving application and extract data directly from the archive for quicker transfers. According to James Smith, vice president of enterprise solutions for RenewData, the company had already been offering these migrations to customers on an on-demand basis and has performed dozens of them; the formal packaging and marketing of the service is what’s new.

There were no firms willing to speak to the press about their use of the service, but the fact that Renew anticipates a market for such a service is interesting evidence of the influence that e-discovery and email archiving in particular have in the storage industry of late. It’s difficult to tell what it means at this point if there’s a large market for assisted migration between email archiving tools–would it mean that users are not making the best choice of archiving systems the first time? Or would it mean that email archiving systems are not delivering on their promises?

The bottom line is that this service anticipates at least some market, because in many ways email archiving, as well as migration between archives, can be a painful and proprietary exercise. According to Smith, the service can be used to create a “baseline” copy of data in intercustodial deduplicated format. The service can also export to “standard” formats such as HTML or XML. However, most often the service has been used to migrate from one proprietary archive to another, according to Smith.

“Very few products out there archive the pure message file,” he said. “They put it in their own format so that it’s more painful to migrate away.”

