Storage Soup

A SearchStorage.com blog.


February 8, 2008  12:30 PM

Pillar’s funny, but will storage admins take it seriously?



Posted by: Dave Raffo
Storage

For a company known primarily for spending hundreds of millions of Larry Ellison’s Oracle bucks, the folks at Pillar Data have a good sense of humor. Take this video Pillar put together for its Application-Aware Storage release this week: http://www.youtube.com/watch?v=b0Kx0w7fYx4

Funny. But I have a feeling a lot of storage administrators will react much the way the people at the malls and McDonald’s did to Pillar’s claim that it’s the first to offer application-aware storage. Application awareness is helpful, but it’s not new in storage, let alone “game-changing,” as Pillar called it in this week’s announcement.

“Is this a new feature? Well, not for the industry, but certainly for Pillar,” said analyst Greg Schulz of The StorageIO Group. “Others have tried, including Sun. So for Pillar, it’s new and game-changing. For the industry, well, maybe game-changing for those who have not seen it before.”

But is it even new for Pillar? What Pillar describes in its release (writing scripts that assign an application to the outside, middle or inside tracks of the disks in a volume) was supposedly in its product from the start.

Here’s how Pillar CEO Mike Workman describes it in the blog post explaining Pillar’s application-aware storage: “. . .application-awareness implies configuration of disk, but in the case of Pillar’s Axiom it also implies things like cache configuration, network bandwidth, CPU priority, and layout of data on the disk platters. In other words, all the system resources are tailored to the application — set up to make the application see the best possible disk attributes out of the resources in the array.”
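Pillar didn’t publish its scripting syntax in the release, so just to make the concept concrete, here’s a purely hypothetical sketch (not Pillar’s Axiom interface; every name in it is invented) of the kind of per-application profile Workman is describing, where platter placement, cache behavior and resource priority hang off the application rather than the volume:

# Hypothetical "application-aware" profile; field names are invented for
# illustration and are not Pillar Axiom syntax.
from dataclasses import dataclass

@dataclass
class AppStorageProfile:
    app_name: str
    platter_zone: str       # "outer" (fastest tracks), "middle" or "inner"
    cache_policy: str       # e.g. "write-back" vs. "write-through"
    network_priority: int   # relative share of array bandwidth, 1-10
    cpu_priority: int       # relative share of controller CPU, 1-10

profiles = [
    # The OLTP database gets the outer tracks and top priority...
    AppStorageProfile("oltp_db", "outer", "write-back", 9, 9),
    # ...while the archive share settles for the inner tracks.
    AppStorageProfile("archive", "inner", "write-through", 2, 2),
]

for p in profiles:
    print(f"{p.app_name}: zone={p.platter_zone}, cache={p.cache_policy}, "
          f"net priority={p.network_priority}, cpu priority={p.cpu_priority}")

The point isn’t the syntax; it’s that the unit being tuned is the application, not the LUN.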

Workman also writes that this is the approach Pillar took when it started shipping its Axiom systems 2-1/2 years ago.

Pillar customer Greg Thayer, director of IT at voice data network provider Comm-Works, says application awareness was a key part of why he bought a Pillar system last September. “It was a compelling reason for us,” he said. “I can characterize my data by what is the most important information that users access, and that goes on the outside of the disk where things are spinning more often.”

But why is Pillar trumpeting a feature it has had from the start? Cynics in the industry say the company is trying to generate buzz because of stalled sales. Pillar has watched less-funded storage system vendors Compellent and 3Par go public and Dell scoop up EqualLogic for $1.4 billion. Against Ellison’s $300 million or so investment, Pillar claims 300 customers — which works out to at least $1 million spent per customer.

Still, let’s hope Pillar sticks around. No other storage company is running videos on YouTube that are nearly as interesting.

It certainly beats watching this guy carry on about server virtualization conspiracies.

February 7, 2008  3:41 PM

Vengeful militant dolphins…and the Internet



Posted by: Beth Pariseau
Data center disaster recovery planning

That, friends, is without a doubt the best headline I’ve ever written.

As many of you are surely aware, underwater Internet cables in Asia were cut last week, one of them by an errant ship’s anchor, and another two or three (I’ve seen stories that put the total at three cut cables, and stories that say four)…unexplained.

It all happened last week, but repairs are still ongoing in the region. The cable cut by the anchor has been fixed, and reportedly most of the region of Asia, the Middle East and North Africa that was Net-less has come back online (all those Saharan nomads are surely relieved wireless is back on their laptops again). Fixes to the other cables should be done by Sunday, according to authorities.

As always when human beings encounter the unknown, their immediate instinct is to fill it in with knowledge or theory as quickly as possible. This story is no exception, and according to this AFP piece, the conspiracy theories are flying fast and furious. Many suspect terrorism, yet no one knows how it would have been accomplished.

All of which leads to the following paragraph, which I will now quote verbatim:

Bloggers have speculated that the cutting of so many cables in a matter of days is too much of a coincidence and must be sabotage. Theories include a US-backed bid to cut off arch-foe Iran’s Internet access, terrorists piloting midget submarines or “vengeful militant dolphins.”

If this blog were the Daily Show, that right there would be your Moment of Zen. 

But in all seriousness: while all this is happening, there are no doubt companies suffering complete outages, and if the repair estimates hold (personally, I apply the same projection-to-reality formula to Internet fixes as I do to cable-repair-guy appointment times), those companies will have been down for at least a week, and maybe ten days.

Helpfully, IT companies are reminding us through press releases that most companies are not equipped to survive outages longer than seven days (per Gartner). They’re also reminding everyone that had these companies been using their product(s), and presumably a sufficiently distant secondary site, they would’ve been fine. How that would work when you don’t have a WAN to replicate and restore data over, or a network through which to conduct commerce, is beyond me. But that’s really not the point; here in the trade press we expect to get press releases linking IT products to every conceivable natural or worldwide disaster, however tenuous the link.

The more I thought about it, the more I wondered: unless you’re a multinational company, how do you survive an outage that big? We’ve all heard how 9/11 taught people to expand the scope of their DR plans, and how Katrina taught them to widen the geographic area they consider potentially disaster-affected when sending tapes offsite. This type of disaster, though, is too big to be escaped by all but the largest global corporations. It raises the question: how far can DR go? How do you respond to a disaster of global or hemispheric proportions? Many companies are going through the painstaking process of broadening their DR plans beyond their local area because of Katrina. Should they start planning DR hot sites in Siberia instead?

Yet even as IT shops slowly inch toward better preparedness, disasters and the global economy wait for no man. Given our worldwide dependence on the Internet (and imagine the effect if this had happened in North America and Europe), has this disaster suggested a practical limit to technical DR? If so, what’s the contingency plan for that?


February 6, 2008  3:28 PM

HP’s new SMB system … for remote offices



Posted by: Dave Raffo
Storage

Why is it that when storage vendors hawk a system below a certain price point — say $10,000 — it automatically becomes an SMB product?

Take HP’s MSA upgrade launched today. According to HP’s press release,

“The easy-to-use, enterprise-class systems are designed for small and mid-size businesses …”

But the real use for the system comes next, after a “however”:

“… enterprises also will find the MSA2000 is an ideal solution for their remote office, departmental, secondary and tertiary storage needs.”

The real purpose for the MSA2000 is the second one listed. SMBs come first because that’s considered the hot “greenfield” market today. But just because the MSA sits at the low end of HP’s SAN portfolio doesn’t make it right for SMBs. These systems are more an option for existing HP customers to add smaller storage deployments or to hook up to blade servers. That doesn’t make them bad, but they’re smaller versions of HP’s SANs, not systems built for SMBs.

Charles Vallhonrat, MSA product manager for HP’s StorageWorks division, admits as much. When I asked about the system being for SMBs, he said yes, the price point is low and management is simple, “but we also see a large uptake with large customers putting it in remote offices and departments.”

HP says the list price starts at $4,999, but you’d better have a lot of unused storage lying around, because that price includes no disk. It covers a single iSCSI controller. If you want 4.5 TB of SATA storage, it costs $7,993. A single Fibre Channel controller costs $5,999 without disk and $8,993 with 4.5 TB of SATA drives. Dual-controller systems add $2,500 to the price. More expensive SAS drives are also available.

Vallhonrat says a single controller system is viable because the controllers include transportable cache. If one fails, you can move the cache to a new controller and recover data.  That’s useful for a storage administrator, but probably not something the person who manages systems at an SMB wants to deal with.

HP isn’t alone in labeling its small enterprise systems as SMB offerings. EMC does the same with its Clariion AX systems. The difference is HP has a real SMB system – called the All-In-One.

Vallhonrat says the MSA2000 platform is meant to compete with IBM’s DS3000, Dell’s MD3000 and the lower end of EMC’s AX4.

As for differences between the MSA2000 and the All-In-One, he compared it to using a dedicated printer or scanner as opposed to a multifunction device. The All-In-One is the multifunction device, with iSCSI and NAS for block and file storage, while the MSA2000 handles only block storage.

“All-In-One is for people who have a need for multiple storage types [file and block], but not the best performance for one type,” he said. “Like a multifunction printer, it’s not best scanner or best printer but does all. The MSA is for if you need better performance or availability, but not the ease of use or functionality of the All-In-One.”


February 6, 2008  1:42 PM

Are storage vendors going to help send us down a black hole?



Posted by: Beth Pariseau
Around the water cooler

I’m sure any number of you can come up with witty figurative responses to that, but I actually mean it literally.

Back in August I did a case study on CERN, the world’s largest physics laboratory, in Switzerland, and the petabytes of data storage that are going to support research on its Large Hadron Collider (LHC). LHC is a 12-story-high, 10-mile-wide underground system of tunnels, magnets and sensors that’s designed to do no less than recreate atomic conditions at the creation of the universe and capture particles that until now have been only theoretical.

Having spoken with CERN about their research and the way the whole system is set up, I was surprised when I logged in to my personal email this morning and got a friend request from a profile titled STOP CERN. According to the profile:

This space has been set up to spread awareness of the risks a project due to be launched at CERN next year poses to our planet. For the first time in many decades someone has built a machine that exceeds all our powers of prediction, and although they estimate the possibility of accidentally destroying the planet as extremely low, the LHC propaganda machine that ‘everything is safe’ is well funded by your tax dollars, paying large salaries to thousands of people who have much to lose financially should the LHC be unable to prove its safety. As most of them perceive the risk to be small, they are willing to take that ‘small risk’ at our expense. The actual risk cannot presently be calculated, and a Large Hadron Collider [LHC] legal defense fund has even been set up to challenge CERN on the project.

I don’t have any kind of physics background, so I don’t know if the criticisms are legit, but I was doubly surprised to find that the MySpace profile is only the tip of the iceberg of people questioning CERN. In addition to some other critical websites, an LHC Legal Defense Fund has been started with the goal of legally intervening to stop CERN from turning on LHC this May, creating a black hole within the collider and accidentally destroying the planet.

By the way, isn’t that really every geek’s dream? To be working on a machine that even theoretically could accidentally destroy the planet?

Anyway, the debate seems to be whether or not something called “Hawking evaporation” (presumably named after physicist Stephen Hawking) will neutralize the microscopic black holes that could be created by the particle collisions in LHC, or if they’ll continue to grow and, well, eat France.

According to another anti-CERN site:

If MBH’s [microscopic black holes] are created, there is a likelyhood [sic] that some could fall unimpeded to the centre of the Earth under gravity…Scientists have estimated that a stable black hole at the center of the earth could consume not only France but the whole planet in the very short time span of between 4 minutes and 30 seconds and 7 minutes.

I’m a little more inclined to believe the multiple accredited physics organizations around the world involved in the LHC project know what they’re doing than I am to believe some people I’ve never heard of from the Internet, but what do I know? The criticism has at least been strong enough to prompt CERN to post a kind of FAQ page about black holes, strangelets, and all manner of interesting potential doomsday scenarios that have been envisioned for LHC.

Despite the impressive power of the LHC in comparison with other accelerators, the energies produced in its collisions are greatly exceeded by those found in some cosmic rays. Since the much higher-energy collisions provided by Nature for billions of years have not harmed the Earth, there is no reason to think that any phenomenon produced by the LHC will do so.

Wouldn’t it just be something, though, if after centuries of war and pollution and all the other things mankind has done to compromise the planet, Armageddon was actually brought about by a bunch of guys in a physics lab?


February 5, 2008  3:25 PM

Human element will be key for Dell/EqualLogic



Posted by: Beth Pariseau
Storage

So far, I’d have to say Dell has done just about everything it can to make its acquisition of EqualLogic go right. The question now, though, is whether or not it will be enough.

Dell has said all the right things, gone through all the right motions, to address user and channel partner concerns. They’ve trotted out Michael Dell to assuage channel partners multiple times, held user forums, and demonstrated with their event yesterday that they mean to continue to develop EqualLogic’s product. What’s been most interesting to me is the way Dell has gone about handling the merger with the open admission it doesn’t have much storage expertise–hence the attempt to hire a storage analyst to supply some of that know-how.

Along these same lines, Dell also seems to recognize that retention of EqualLogic personnel is important. They held their post-acquisition event yesterday at EqualLogic’s former headquarters in Nashua, NH, which might as well be Siberia as far as a big multinational like Dell is concerned. Still, it appears EqualLogic will be staying put there, at least for now. “Dell” has been added to permanent signs around the office park campus as well as inside the offices; otherwise, the facilities look exactly as I remember from having visited there in the past.

Some familiar faces are also at least giving the Dell gig a chance–much of the EqualLogic PR and marketing staff that I’m familiar with has been re-titled and retained with Dell. The most encouraging sign I saw yesterday was that EqualLogic’s former VP of marketing, John Joseph, was front and center as VP of marketing for Dell.

All this has to be comforting for end users, who told me after the acquisition closed that what they want is for Dell to essentially leave EqualLogic’s product alone, except maybe where pricing is concerned. Yesterday, Dell tweaked EqualLogic’s chassis design a bit, standardizing both its SAS and SATA arrays on a 16-bay form factor, and said it will ship the new PS5000 at a lower price per GB than its predecessors–$19,000 starting list price for a chassis with 2 TB of storage, as opposed to $22,500 previously for 1.75 TB capacity. So far, so good.

But Dell also confirmed yesterday that Tony Asaro has left, and estimated EqualLogic CEO Don Bulens’ tenure at around three to six months now that the acquisition has closed. For execs like John Joseph to still be around is also typical of this stage of the acquisition process, and it remains to be seen how many familiar faces will remain in Nashua a year from now. There’s still a long road ahead: Dell’s attempt to add expertise through Asaro fell through, Bulens’ future is uncertain, and channel partners are still not going quietly. Those are the first rumblings of the political difficulties that could follow, and that’s to say nothing of the technical ones.

So far, Dell can leave EqualLogic largely as-is, but eventually it’s going to have to wade in and change, or at least update, EqualLogic products, if only to keep up with technology trends. And to get the bang for its 1.4 billion bucks, Dell is probably going to have to get its hands dirty spreading EqualLogic’s IP around its other product lines.

Meanwhile, Dell’s partner/competitor EMC isn’t going to sit idly by; the storage giant has already taken an indirect shot at Dell’s fledgling storage business with the AX4-5. Dell is clinging to the Fibre Channel capabilities of that array as a differentiator, but EMC officials have made it clear that the AX4-5 is an iSCSI play. In fact, with eerily similar messaging around ease of use and support for virtualized server environments, the AX4-5 and EqualLogic’s new PS5000 series seem destined to do battle. Factor in that this somewhat contentious set of product lines will be distributed by two potentially conflicting sales forces — direct and the channel — and we have all the makings of a rodeo on our hands.

When all these chickens come home to roost, Dell will have to hope that the storage expertise it picked up with EqualLogic’s remaining execs sticks around longer than Tony Asaro did.  If Dell can retain people like Joseph, as well as EqualLogic’s existing support and engineering staff, to keep the technology on a steady course, it has a good chance of sorting out the political and logistical hurdles to bring EqualLogic’s product to market. But if it can’t, well…as a certain loudmouthed NFL wide receiver would say, get your popcorn ready.


February 1, 2008  11:23 AM

Hello Tony, Goodbye Tony



Posted by: Beth Pariseau
Storage

This has to be some kind of record. I haven’t gotten hold of him directly yet, but sources close to the situation confirmed today that Tony Asaro has left Dell, less than a month after leaving analyst firm Enterprise Strategy Group (ESG) for the vendor.

Asaro’s move from ESG to Dell was followed closely throughout the storage industry, with fellow analysts and even the readers of this blog throwing in their two cents. “I give him 12-18 months,” wrote Storage Soup commenter “chameleon.”

Turns out chameleon should’ve taken the under on that bet.

As a reporter, I’m having an allergic reaction to this little tidbit, because it prompts more questions than it answers. Why did Asaro go to Dell in the first place? And once he made that move, why did he leave after only about three weeks? Why did he resign so suddenly that Dell had already sent out an invitation to an event Monday with his name on it? And where does he go from here?

At the time of his departure from ESG, we cautioned:

[Asaro] will need to be careful to avoid the fate of another former analyst, Randy Kerns, who left the Evaluator Group to become a vice president of strategy and planning at Sun in September 2005, shortly after Sun completed a blockbuster acquisition of its own. Less than a year later, he left Sun, resurfacing in October 2006 as CTO of ProStor Systems.

In retrospect, that seems laughable. Compared to Asaro’s tenure at Dell, Sun should’ve given Kerns a gold watch.

Speaking of Dell, EqualLogic and questions, there’s some head-scratching going on as to why EqualLogic CEO Don Bulens isn’t listed among the execs at Monday’s event. That has people asking how long he’ll stick around now that the deal is closed. I’ll be there reporting, and that’s one question I hope will be answered, along with its attendant followups, as soon as possible. Stay tuned to the news page for more.


February 1, 2008  9:54 AM

NetApp pulls a fast one on EMC with SPC



Posted by: Beth Pariseau
Storage

I don’t refer to many things related to data storage as humorous, but I have to admit this is a hoot.

Everybody knows by now that EMC hasn’t submitted its products to the Storage Performance Council (SPC) for performance benchmarking, saying it’s a rigged system.

So some in the storage industry feared that hell had frozen over when they saw an SPC benchmark published for Clariion this week (scroll down, it’s in the table). Actually, two SPC-1 benchmarks have been published for the Clariion CX-3 model 40, one with and one without SnapView enabled.

One little twist, however: in the “test sponsor” column next to EMC’s products is the name “Network Appliance Inc.”

Now that. is. hilarious.

Shockingly, the NetApp-submitted benchmark numbers show the Clariion CX-3 40 with lower performance than that of NetApp’s FAS3040. Or, as a NetApp press release put it:

In both cases, the NetApp FAS3040 outshined the EMC CLARiiON CX3-40, delivering 30,985.90 SPC-1 IOPs versus 24,997.48 SPC-1 IOPs (baseline result) and a robust 29,958.60 SPC-1 IOPs versus just 8,997.17 SPC-1 IOPs (baseline result with snapshots enabled). These results further validate NetApp as the high-performance leader for real-world data center deployments featuring value-add data management and data protection functionality. 

While I agree NetApp’s move is somewhat ridiculous, if there were a mom refereeing between these squabbling siblings of the storage market, NetApp could accurately say, “But he started it!”

In fact, not only did EMC start it, it did this exact thing first. The bickering goes back to the hoary days of November 2006, when NetApp released the 3000 series and published performance specs showing its new array performing far better than EMC’s newest Clariions. Although performance testing is generally against its beliefs, EMC couldn’t let that stand, so it ran its own tests on NetApp’s equipment. EMC’s internal tests showed that NetApp’s filers initially perform better than the Clariion, but that as NetApp systems fill up, the WAFL file system fragments and slows everything down. So EMC conceded NetApp’s original results but contended they weren’t reflective of how performance on the NetApp system would change over time. I waded into this whole mess back when it happened; if you want to read what analysts had to say, it’s all here.

“Many companies have access to other vendors’ equipment — competitive analysis is nothing new,” argued SPC administrator Walter Baker. “NetApp’s not the only EMC competitor to have run competitive analysis.”

“But they’re the only competitor whose analysis you’re endorsing,” I replied. Baker insisted it’s not an endorsement–that publication on SPC’s Web site among all its other specs merely serves as notification that NetApp’s results have been submitted for approval. He also pointed out that unlike, say, vendor-published white papers about another vendor’s product, there’s a redress process for EMC in this case.

There’s a 60-day review period before the result is officially accepted; until then it sits in “submitted for review” status. During that period, any member company — in this case, EMC — can challenge the result on the grounds that the testing was not compliant with the SPC-1 spec or did not represent the performance the Clariion should have attained.

Baker said EMC has not challenged yet. “Absolutely not — and they have been notified, because I spoke with them myself,” he said. He added, “As the auditor I feel the result produced by NetApp is representative.” Pressed further, Baker said his basis for that conclusion was “talking to people who are familiar with EMC equipment.”

“I understand what you’re saying,” he admitted. “At first blush it does seem to be a conflict of interest — but it really doesn’t serve NetApp’s purpose if they were to understate or undermine the performance of the EMC equipment, because it would bring about an immediate response from EMC.”

EMC hasn’t yet responded to my e-mail about this, but something tells me they’ll have something to say before the review period is up. And what does this really do for NetApp anyway? Has it accomplished anything other than casting aspersions on the very spec at the core of this latest volley against EMC? If there’s anything to be learned here, from my point of view, it’s to add an extra shake of salt when referring to SPC benchmarks.

And seriously, I would love to see the user weighing a Clariion against an FAS3040 for whom this is the tipping point in one direction or the other: the user on the verge of signing on the dotted line for a Clariion who suddenly says, “But wait! NetApp’s performance testing shows this array doesn’t perform as well as the FAS3040!”

It’s kind of like when McDonald’s tells you its fries taste better than Burger King’s, when Coke tells you more compensated blindfolded taste testers picked its soda over Pepsi’s in a carefully controlled, totally off-the-cuff, random taste test, or when a Red Sox fan walks up to you in a T-shirt that says “YANKEES SUCK!” All it really, reliably tells you is what one company thinks of its competitor. And we kind of don’t need a press release about that, especially not when it comes to NetApp and EMC.


January 31, 2008  2:39 PM

Georgens next in line at NetApp



Posted by: Dave Raffo
Storage

When Network Appliance hired Tom Georgens to run its enterprise storage systems group in 2005, many storage insiders suspected it was grooming him to succeed Dan Warmenhoven as CEO. NetApp sent a strong signal that was the case this week by promoting Georgens to president and COO.

Georgens has the right credentials to be CEO of a large storage company. He served in that capacity with LSI’s Engenio storage unit, and spent 11 years with EMC before that. He was responsible for scouting out acquisition possibilities at NetApp, so he knows the industry. But perhaps where he stands out most among NetApp execs is that he brings an outsider’s perspective. Of the top eight NetApp execs, two (Dave Hitz and James Lau) founded the company in 1992, three (Warmenhoven, Tom Mendoza and Rob Salmon) joined in 1994, another (Ed Deenihan) came aboard in 2000 and CFO Steve Gomo signed on in 2002.

That’s a lot of experience, all with one company. Yet Georgens, a relative NetApp rookie with barely two years at the vendor, was the one promoted from VP of product operations to president and COO.

To make room, NetApp made former president Mendoza vice chairman. That’s a promotion, too, and Mendoza is truly considered “an icon and a legend” inside NetApp, as Warmenhoven described him in the release announcing the management changes. But the old guard will likely make way for Georgens when Warmenhoven steps down. Now it’s a question of when.

One financial analyst who follows NetApp suspects it could be within a year, and expects it will definitely come by mid-2009.

“It is obvious that they are grooming Georgens to be the next CEO,” the analyst said. “It could be six months, or it could be 12 months. I feel confident it will be less than 18 months [before Warmenhoven steps down].”

At Engenio, Georgens dreamed of becoming CEO of a public storage company before parent LSI pulled the plug on plans to spin off Engenio with an IPO. Georgens promptly left, and NetApp scooped him up within months. Now it looks like he’ll get his wish after all.


January 31, 2008  1:20 PM

Xiotech predicts storage crisis, will tell us how to solve it later



Posted by: Beth Pariseau
disk drives, Strategic storage vendors

Xiotech put out an intriguing press release yesterday, headlined “IT MANAGERS EXPRESS CONCERN ABOUT STORAGE SCALABILITY AND CAPACITY.” It discusses survey results from end users establishing that reliability is a key feature when considering storage systems (I personally prefer the ones that crash all the time, but maybe that’s just me), and it includes an ominous quote from Xiotech’s CTO:

“The industry is on pace for a data storage crisis in the next few years,” predicts Steve Sicola, chief technology officer at Xiotech. “The cost of adding a gigabyte of storage is dropping nicely every year, but the cost of managing, protecting and servicing that storage continues to grow. Drives are the most numerous constituent of data centers, and with that have the largest probability of failure. If demand for storage continues at the expected pace and nothing is done, we may see a significant increase in data loss and accelerating cost inefficiencies.”

Thinking Xiotech may have joined Carnegie Mellon University and Google in blowing the whistle on drive reliability specs produced by manufacturers, I hopped on the phone with Sicola this morning. The call began with compelling candor. “Drive makers and systems vendors may say all drives are standard, but they don’t all talk the same way,” Sicola said. “People don’t see the problems that are already happening because big systems houses are trying to make money on both the front-end and the back-end.”

So are we saying that big systems vendors are covering for drive manufacturers even more than the Carnegie Mellon or Google reports already led us to believe? Was there a heretofore undiscovered problem with drives Xiotech wants to warn us about? “It’s not one specific problem with drives, it’s when you add up a lot of drives in one system that you have a higher potential for failures, and more time spent addressing drive failures,” Sicola clarified. Ah. Hasn’t storage growth been raising the potential for failures over the last couple of years? Isn’t this why we have RAID 6 and clustered systems and. . . .

That’s when we got to the heart of the matter. Xiotech is also coming out with an approach to addressing storage growth, built on IP acquired from Seagate’s Advanced Storage Architecture (ASA) group in November. This itself isn’t news, either. We covered that acquisition when it happened.

Now thinking the press release was a lead-in to a deeper discussion of what ASA will do and how Xiotech is developing it, I began asking questions along those lines. To my surprise, at that point you’d have thought I had initiated this call with the goal of getting Xiotech to divulge trade secrets. No comment, no comment, no comment. No comment on what, specifically, ASA will do to provide better storage reliability; no comment on when we’ll see ASA released by Xiotech, beyond “later this year.”

“So,” I asked, “until that happens, isn’t Xiotech one of those systems vendors making money on unreliable back-ends, too?”

“We see this problem really exploding in the next few years,” Sicola clarified again. “Xiotech has done a lot today to ensure balanced configurations and has limited the size of its systems. I’d be more concerned about big systems vendors like EMC and Hitachi that are packing in so many drives, which is like trying to herd 1000 cats.”

So to review, the news today seems to be that Xiotech is eventually coming to the rescue of PB-plus shops dealing with reliability issues. Sometime soon-ish. As to how they’ll rescue you. . .well, you’ll find out when they get here. Hopefully.


January 31, 2008  11:02 AM

Recovering data from a crashed drive using VMware



Posted by: Tskyers
data backup, disk drives, VMware

I was talking with a friend the other day about the prospect of multi-terabyte hard drives and how painful it would be to lose that much data. My friend — being my friend of course — countered that it’s not the amount of data, but where it resides and what the data is that’s important.

For instance, he went on, the EEPROM on your desktop motherboard holds no more than 2MB of data. Yet without it, the bazillion hours of work stored on your desktop hard drive, while safe and sound, is useless to you, because your computer won’t boot and you can’t get at it.

After conceding the point, I rephrased the statement to emphasize the loss of multiple terabytes of data residing on a platter-based spinning medium, located in a computer or computer-like device providing data storage services to said computer, group of computers, or computer-like devices (whew!).

Without blinking an eye, he said he’d started a hard drive data recovery company. He built a clean room and had been perfecting his recovery skills on hard drives purchased on, of all places, eBay. As an aside: use a hammer and nail, or a Sawzall, to properly destroy the data on any unwanted hard drives you dispose of.

A while back, I got a frantic call from a family member whose laptop hard drive had crashed. She was beside herself, because on that drive were all the digital photos she’d ever taken. . .ALL of them. She’d meant to back up her stuff to a disk but never got around to it. She wanted to know whether there was anything I could do to help.

That’s when it hit me full force: I have brilliant and baleful friends.

My friend recovered almost all the data from her hard drive for me (at a very reasonable price), and now she has the first pictures of her child, some of her wedding photos and other very important moments in her life back, and on DVD this time. The whole saga got me thinking: Am I really protected from a hard drive crash? How about the executives I support? What would I do if the array at home that holds all of my photos failed?

Seeing the look on my relative’s face when I presented her with all of her photos was priceless. But it got me thinking about all the other people out there in the SMB world with the 0.5-person IT shop who don’t even know these services exist, much less can afford the super-high cost of traditional data recovery. I don’t think today’s data protection schemes are going to be able to handle these super-sized drives making their way into those same SMB shops.

Do the math. A decent 100Mbps pipe moves roughly 45GB an hour at line rate, and closer to 34GB an hour once you knock off about 25% for packet and transmission overhead. If three people with terabyte drives decided to back up to a device on the network, they’d saturate that 100Mbps uplink for days. How are we going to back that up? The storage SaaS startups making their way to market aren’t going to be able to keep up either. Imagine backing up 400-700GB over your home Internet link, where your upstream bandwidth is only 768Kbps.
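If you want to run the numbers yourself, here’s a quick back-of-the-envelope sketch; the link speeds, the 25% overhead factor and the drive sizes are just the assumptions above, so plug in your own:

# Rough backup-window math: how long to push a given amount of data
# through a given pipe? Figures are illustrative, not benchmarks.

def backup_hours(data_gb, link_mbps, overhead=0.25):
    """Hours to move data_gb over a link_mbps link, assuming a fraction
    'overhead' of the bandwidth is lost to packet and protocol overhead."""
    effective_mbps = link_mbps * (1 - overhead)
    gb_per_hour = effective_mbps / 8 / 1000 * 3600  # Mb/s -> GB/hour
    return data_gb / gb_per_hour

# Three users backing up 1TB (1,000GB) each over a 100Mbps office uplink:
print(round(backup_hours(3 * 1000, 100), 1), "hours")    # ~88.9 hours

# One 700GB home backup over a 768Kbps (0.768Mbps) upstream link:
print(round(backup_hours(700, 0.768) / 24, 1), "days")   # ~112.5 days

Even before you argue about the overhead factor, the shape of the answer doesn’t change: big drives over small pipes means backup windows measured in days.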

I saw this coming a while back, when I got my grubby hands on the Hitachi terabyte drive, and I’ve begun using a combination of VMware Player and VMware Workstation to mitigate my issues with capacious storage at home. I essentially virtualize the machine I want to use and deploy that on top of a generic OS install (in my case, Debian Linux), replete with a pretty icon instructing the user to launch the player as their “desktop.” I’ll eventually move up from Player to Workstation for all my machines (right now cost limits me to Player for most of them), then run snapshots and back up the snaps to the same location as the original VMDK using rsync.
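For the curious, the snapshot-and-copy step scripts out to something like the sketch below. Consider it a rough outline of the idea rather than a finished tool: it assumes VMware Workstation’s vmrun command-line utility and rsync are installed, and every path in it is a placeholder to swap for your own.

import subprocess
from datetime import datetime

# Sketch of the snapshot-then-copy idea. Paths below are placeholders.
VMX = "/vmware/family-desktop/family-desktop.vmx"   # the VM to protect
VM_DIR = "/vmware/family-desktop/"                  # directory holding its VMDKs
BACKUP_DIR = "/backup/vmware/family-desktop/"       # where the copy lands

def snapshot_and_copy():
    # Take a named snapshot with Workstation's vmrun tool ("-T ws" = Workstation).
    snap_name = "backup-" + datetime.now().strftime("%Y%m%d-%H%M%S")
    subprocess.run(["vmrun", "-T", "ws", "snapshot", VMX, snap_name], check=True)

    # Copy the whole VM directory (VMDKs plus snapshot files) with rsync.
    # -a preserves attributes; --delete keeps the copy in sync with the source.
    subprocess.run(["rsync", "-a", "--delete", VM_DIR, BACKUP_DIR], check=True)

if __name__ == "__main__":
    snapshot_and_copy()

In practice you’d want the VM suspended or powered off before the copy runs, and ideally you’d point the rsync target at a different spindle than the one you’re trying to protect.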

It sounds like a lot of work, but try explaining to your wife that she’s lost all the projects she’s been working on because her drive is too big to back up quickly and you don’t have a recent copy. You’ll appreciate the effort that much more when you can say, “I’ve got you covered, hon!!”

Here’s the visual I use when I explain this concept.

1) Fold a piece of paper four times (or use a folded napkin)

1a) Imagine the paper (napkin) as your physical hard drive

2) Tear off two or three 1-inch pieces of that napkin. Put them on the table next to the napkin.

2a) Imagine those pieces as virtual hard drives or volumes.

3) Reorder those 1-inch pieces of the napkin. Easy, isn’t it?

4) Peel apart the layers of those 1-inch pieces. Now there’s 4x as much stuff to manipulate, and it takes a little longer to move things around the table, no?

4a) Imagine those layers as individual files.

Take this one step further. Blow a soft puff of air at the three 1-inch pieces before you peel them apart (this works best with the napkin as they are slightly “stuck” together). Think of that puff of air as a failure or some sort of issue with storage. Do the same when you’ve peeled apart the pieces.

Now you have a great way to envision the task of managing individual files (family photos) on a gargantuan hard drive (look how much napkin you have left!!). Multiply that out by a couple of napkins and you can see why the problem of failed drives, and how to protect against them, suddenly becomes really hard in the TB-drive world. This can open eyes at the management level: it gives managers a real and appropriate understanding of why we as storage admins freak out at times when they refuse to allocate budget.

I started out talking about the advent of huge drives and what you’re going to do to get the data back should they fail. I’ve developed my own solution to protect myself using some free and not-so-free tools from VMware, but I’m not sure it would scale well or be easily manageable. Maybe a small challenge to the hardcore virtualizers out there is in order. . . .

