Storage Soup


January 31, 2008  11:02 AM

Recovering data from a crashed drive using VMware

Tory Skyers

I was talking with a friend the other day about the prospect of multi-terabyte hard drives and how painful it would be to lose that much data. My friend — being my friend of course — countered that it’s not the amount of data, but where it resides and what the data is that’s important.

For instance, he went on, the EEPROM on your desktop motherboard holds no more than 2MB worth of data. Yet without it, the bazillion hours of work you have stored on your desktop hard drive, while safe and sound, is still useless to you, because your computer won’t boot and you can’t get to it.

After conceding the point, I rephrased the statement to emphasize the loss of multiple terabytes of data residing on a platter-based spinning medium, located in a computer or computer-like device providing data storage services to said computer, group of computers, or computer-like devices (whew!).

Without blinking an eye, he said he’d started a hard drive data recovery company. He built a clean room and had been perfecting his recovery skills on hard drives purchased on, of all places, eBay. As an aside: use a hammer and nail, or a Sawzall, to properly destroy all data on unwanted hard drives before you dispose of them.

A while back, I got a frantic call from a family member whose laptop hard drive had crashed. She was beside herself because on that hard drive were all the digital photos she’d ever taken. . .ALL of them. She’d meant to back up her stuff to a disk but never got around to it. She wanted to know whether there was anything I could do to help her.

That is when it hit me full force: I have brilliant and baleful friends.

My friend recovered almost all the data from her hard drive for me (at a very reasonable price), and now she has the first pictures of her child, some of her wedding photos and other very important moments in her life back, and on DVD this time. The whole saga got me thinking: Am I really protected from a hard drive crash? How about the executives I support? What would I do if the array at home where I keep all of my photos failed?

Seeing the look on my relative’s face when I presented her with all of her photos was priceless. But it got me thinking about all the other people out there in the SMB world with the 0.5-person IT shop who don’t even know these services exist, much less can afford the super-high cost of traditional data recovery. I don’t think today’s data protection schemes are going to be able to handle these super-sized drives when they make their way to those same SMB shops.

Do the math. A decent 100Mbps pipe can push only about 30GB to 35GB an hour (taking into account roughly 25% for packet and transmission overhead), so a single terabyte drive takes more than a day to copy over the wire. If you had three people with terabyte drives deciding to back up to a device on the network at once, you’d saturate that 100Mbps uplink for days. How are we going to back that up? The storage SaaS startups making their way to market aren’t going to be able to keep up either. Imagine backing up 400-700GB over your home Internet link where your upstream bandwidth is only 768Kbps.
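If you want to play with that back-of-the-envelope math yourself, here’s a minimal Python sketch; the link speeds, overhead factor and drive sizes are just the illustrative numbers from above, not measurements:

```python
# Back-of-the-envelope backup-time math. All numbers are illustrative.

def backup_hours(data_gb, link_mbps, overhead=0.25):
    """Hours needed to push data_gb across a link_mbps pipe,
    discounting roughly 25% for packet and transmission overhead."""
    effective_mbps = link_mbps * (1 - overhead)      # usable bandwidth
    gb_per_hour = effective_mbps / 8 * 3600 / 1000   # Mbps -> GB per hour
    return data_gb / gb_per_hour

# A 1TB drive over a 100Mbps uplink: roughly 30 hours.
print(f"1TB over 100Mbps: {backup_hours(1000, 100):.0f} hours")

# 500GB over a 768Kbps home upstream link: roughly 80 days.
print(f"500GB over 768Kbps: {backup_hours(500, 0.768) / 24:.0f} days")
```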

I saw this coming a while back when I got my grubby hands on the Hitachi Terabyte drive, and I’ve begun using a combination of VMware Player and VMware Workstation to mitigate my issues with capacious storage at home. I essentially virtualize the machine I want to use and deploy that on top of a generic OS install (in my case, Debian Linux), replete with a pretty icon instructing the user to launch the player as their “desktop.” I’ll eventually get to the point where I move up from Player to Workstation for all my machines (right now cost limits me to Player for most of them), then run snapshots and back up the snaps alongside the original VMDK using rsync.
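For what it’s worth, here’s a rough Python sketch of the kind of snapshot-then-rsync routine I’m describing. The VM path, backup target and snapshot tool (vmrun, the command-line utility that ships with Workstation) are stand-ins for whatever your own setup uses, so treat it as an outline rather than a finished script:

```python
# Sketch: snapshot a VM, then rsync its directory to a backup location.
# Paths, hostnames and VM names below are examples only.
import subprocess
from datetime import datetime

VM_VMX = "/vmstore/family-desktop/family-desktop.vmx"  # VM to protect (example)
BACKUP_TARGET = "backupbox:/backups/family-desktop/"   # rsync destination (example)

def snapshot_and_backup():
    snap_name = "backup-" + datetime.now().strftime("%Y%m%d-%H%M")
    # Take a named snapshot so the base VMDK stops changing while we copy.
    subprocess.run(["vmrun", "snapshot", VM_VMX, snap_name], check=True)
    # Copy the whole VM directory (VMDKs, snapshot deltas, config) with rsync.
    vm_dir = VM_VMX.rsplit("/", 1)[0] + "/"
    subprocess.run(["rsync", "-av", vm_dir, BACKUP_TARGET], check=True)

if __name__ == "__main__":
    snapshot_and_backup()
```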

It sounds like a lot of work, but try explaining to your wife that she’s lost all the projects she’s been working on and you don’t have a recent backup because her drive is too big to back up quickly. You’ll appreciate the effort that much more when you can say, “I’ve got you covered, hon!!”

Here’s the visual I use when I explain this concept.

1) Fold a piece of paper four times (or use a folded napkin)

1a) Imagine the paper (napkin) as your physical hard drive

2) Tear off two or three 1-inch pieces of that napkin. Put them on the table next to the napkin.

2a) Imagine those pieces as virtual hard drives or volumes.

3) Reorder those 1-inch pieces of the napkin. Easy, isn’t it?

4) Peel apart the layers of those 1-inch pieces. Now you have 4x as much stuff to manipulate, making it take a little longer to move things around the table, no?

4a) Imagine those layers as individual files.

Take this one step further. Blow a soft puff of air at the three 1-inch pieces before you peel them apart (this works best with the napkin as they are slightly “stuck” together). Think of that puff of air as a failure or some sort of issue with storage. Do the same when you’ve peeled apart the pieces.

Now you have a great way to envision the task of managing individual files (family photos) on a gargantuan hard drive (look how much napkin you have left!!). Multiply that out by a couple of napkins and you see why, all of a sudden, the problem of failed drives and how to protect against them becomes really hard in the TB-drive world. This can open eyes at the management level. It gives managers a real and appropriate understanding of why we as storage admins freak out at times when they refuse to allocate budget.

I started out talking about the advent of huge drives and what you’re going to do to get the data back should they fail. I’ve developed my own solution to protect myself using some free and not-so-free tools from VMware, but I’m not sure it would scale well, or be easily manageable. Maybe a small challenge to the hardcore virtualizers out there is in order. . . .

January 30, 2008  10:32 AM

Slowdown could push VMware into storage market

Beth Pariseau

It seems crazy, and in many ways it is. The company that essentially created the hottest market in IT has said it will grow 50% over the next year, and the company that owns it has projected $15 billion in revenues for 2008. And yet as of this morning, Company #1, VMware, has seen its stock drop 33%. Company #2, storage giant EMC, has seen its stock drop $1.02, to $15.89.

The problem, ESG analyst Brian Babineau points out, is that VMware grew 90% last year–it’s not that 50% growth is bad, it’s that 50% growth is relatively bad. “You’ll get the airbag on that stop,” is the expression I’ve heard used.

Meanwhile, the consensus is that EMC’s poor stock performance is a direct result of the VMware issue–even though EMC has achieved its goals of folding in a dizzying array of acquisitions, balancing its revenue streams across software, services and its core storage hardware business, and bringing its products up to speed with emerging technology trends. . .even getting out ahead of them with the first Tier 1 array, DMX-4, to support flash drives internally. “I don’t know that EMC’s business execution could have been much better,” is how Brian put it to me yesterday in my story on EMC’s earnings call.

And yet these companies both are in trouble on the stock market today, and more alarming is the underlying reason: a dramatic slowdown in revenue predicted for VMware. If this truly comes to pass, it could be the darkest omen yet in the chain-reaction brought about in the market by the subprime mortgage crisis, the culmination of fears about the tech market in general this year that began when Cisco revised predictions downward in November. The general vibe from the financial eggheads seems to be that if the highest-flying tech company on the market is forecasting a slowdown in spending, what’s next?

I’ll admit this scares me a little too. The way I understand it, greedy bankers gave loans to unqualified people, and then those shaky loans in turn were carved up into securities, meaning that when those unqualified debtors couldn’t manufacture money (lo and behold!) the whole house of cards started to tumble down. In a way, it’s satisfying to see the people who played on the dreams of low-income people to own homes by locking them into high-interest-rate deals, with no regard to how they were going to come up with the cash, get their comeuppance. But when it threatens the entire national economy, it’s hardly worth the last word.

But before I could step off the ledge into panic myself, this morning I had a chat with Andrew Reichman of Forrester Research, whose level-headed perspective is one I think will eventually shake out once the initial frenzy is over. Or, at least, I hope.

“This is typical of the financial world–overblown expectations,” he said. “I still see VMware’s product and outlook as very strong.” Even in a recession, Reichman pointed out, VMware still has a value proposition, since server virtualization is a consolidation and cost-cutting play. “They can still demonstrate to companies why they should spend money on their product even as they try to put the brakes on IT spending otherwise.”

Great point. Then there’s just the plain fact that 50% growth ain’t too shabby! Especially as the market braces for a spending slowdown–and especially when analysts say 60% of the market has already purchased and deployed VMware’s product. “It’s rare in the technology world to see such consensus around one piece of technology,” Reichman concurred. “They’ve built up a lot of momentum and there’s still a lot of room for them to take advantage of the ‘network effect’ and expand existing customers’ use of the product.”

However, Reichman had another suggestion about the VMware situation that piqued my interest, especially given my recent post on storage virtualization and VMware and the sometimes tense relationships between the two. “What they need to do to continue their growth is take the money they got in the IPO and they’ve built up in revenue and find the next frontier.”

Reichman’s prediction for that next frontier in the near future is business continuity/disaster recovery, which VMware has already said it’s working on. “There’s a high level of interest at a lot of companies in using VMware for BC/DR,” he said. But he added that the next horizon will probably be primary storage–or storage virtualization.

“The story of storage behind VMware has never been clear,” Reichman said. “There are a lot of issues that remain around storage virtualization, performance and compatibility and a lot of room to improve that picture.”

He continued, “They’ve played Switzerland for a long time. Now they need to get off the dime and make a call about how storage is going to work behind server virtualization.”

Of course, it’s hard to tell what the consequences of that might be. Other analysts such as the Burton Group’s Chris Wolf have pointed out that if hardware vendors don’t support VMware, end users won’t take the risk of using their product. But has VMware’s ascension into a Wall Street bellwether changed that equation? Has its ubiquity in IT shops turned the tables on the storage vendors–so that end users will instead be less inclined to use a storage technology if it’s not certified with VMware? How will this balance-of-power play out?

It’s like that expression about the old Chinese curse. We are living in interesting times.


January 30, 2008  10:26 AM

Who’s going to pay for digital preservation?

Beth Pariseau

I can hardly call myself a storage geek–I don’t know a MAC address from a Macintosh and couldn’t operate a CLI with a gun to my head. So it’s rare I take a personal interest, the way Tory does, in most of the products or trends I cover.

The one exception to that is the idea of digital preservation. This is probably because, unlike a true storage geek, I don’t have to worry about trying to fix a machine that’s broken or trying to throttle my service engineers. So I have time on my hands to think about the long-term future of data, data storage, and what we’re going to do with all the important records that are currently being converted from physical format to digital. Spinning disk still has nothing on a cave painting, data tapes have nothing on an acid-free paper book, and in 100 years, we might have an unprecedented historical problem: how to preserve our culture and our information for future generations.

That’s the kind of thing you don’t have to be able to architect a storage fabric to be affected by. Every living person has a vested interest in how the human race will pass on knowledge and information over the long haul.

The problem is, those of us with the time to think about this stuff aren’t the ones who know how to answer that question, and the ones with the know-how are too busy putting out day-to-day fires in their data centers to worry about how it’s all going to work when they’re long gone.

And who says it’s their (your) responsibility anyway? Shouldn’t institutions like the National Archives be the ones worrying about it? Shouldn’t the storage vendors be the ones developing the right media for long-term storage?

As of this week, there’s finally a publicly-funded consortium at least trying to find the answer to those questions about digital preservation, all leading up to the biggest mystery of them all: Who’s going to pay for it?

The consortium, known as the Blue Ribbon Task Force on Digital Preservation, was launched by the National Science Foundation and the Andrew W. Mellon Foundation in partnership with the Library of Congress, the Joint Information Systems Committee of the United Kingdom, the Council on Library and Information Resources, and the National Archives and Records Administration. The consortium, headed up by academics from the San Diego Supercomputer Center, will attempt to bring together testimony from a variety of sources — consumer and enterprise, vendor and end-user — to arrive at a sustainable economic model for digital preservation.

The group has been funded for a two-year project. The first year, according to Francine Berman, director of the San Diego Supercomputer Center, High Performance Computing Endowed Chair at UC San Diego and co-chair of the Blue Ribbon Task Force, will produce a report on “a survey of what we know.” The initial report will feature case studies and opinions from experts in digital preservation, and is expected to appear by the end of 2008 or early 2009. By 2010, the task force hopes to have a second report suggesting the approach to digital preservation that’s the most cost-effective and logistically feasible for the most people.

It’s all a little loosey-goosey, Berman admitted, saying, “These are open questions.” So far the group doesn’t have much idea what its direction will be. Alternatives for economic models that will be taken into consideration include an iTunes-like pay-per-use model; a privatized model relying on corporations to finance preservation; or a public-goods model that preserves digital records the same way public parks are preserved, through a collective public trust.

Further complicating matters, “there won’t be a one-size-fits-all solution to the digital preservation question,” according to Berman. Consumers will be concerned with preserving family photos, for example, which will be an entirely different process from preserving corporate and government records. Preserving digitally-recorded works of art and multimedia files will be yet another issue to resolve.

Personally, I’m a little reluctant to put much stock in a government study until I see it produce actionable results, and as a taxpayer I’m not nuts about the number of studies my hard-earned dollars go to that just tell us things we already know. But in this case, I’m just happy someone’s thinking about it. And maybe getting others to start thinking about it a little more, too.

Raising awareness is another goal for the task force, Berman confirmed. “My dry cleaner knows what global warming is, and could also probably give you a basic definition of the human genome,” she said. “What we’re looking for is that same level of understanding about digital preservation, which also affects us all.”


January 28, 2008  4:19 PM

EMC slaps its logo on the Boston Red Sox

Beth Pariseau

As a storage reporter who’s also a fanatical Red Sox fan, I’m in good position to comment on EMC’s latest marketing move: the agreement with the Boston Red Sox to place a small patch with the EMC logo on the Red Sox uniform shirt during the team’s trip to Japan in April.

The ‘work’ side of me understands why both the team and EMC would be interested in this joint venture. EMC has already sponsored an entire level of Fenway Park, and its logo is plastered about in many places at the old grounds. The Red Sox, under MLB’s mandate to expand its global reach, need to bring a $200 million team halfway around the world for Opening Day, and ticket prices are already $90. Doesn’t seem like they have a whole lot of choice.

But the sports fan in me remembers the flak when there was talk of displaying ads for Spiderman II on the bases used in games. Heck, in Boston, the sanctity of the Green Monster–historically a billboard anyway–has been cited in decrying advertisers. The problem for Red Sox management is that they are working with a very valuable, but very finicky brand.

At the end of the day, the Red Sox are a sports franchise, and a business, and entertainment. But many people in Boston have deeper feelings about the team–it’s a cultural institution for many people, and for some, even a sacred one. Putting an advertiser’s logo on one of the bases at the same park where Ted Williams played. . .well, you might have an easier time convincing a churchgoer to accept corporate sponsorship on the altar. I know some Red Sox fans who fear that baseball will eventually become like NASCAR, with jerseys so bedecked in ads you can’t tell what team they’re supposed to be for. To see this happen to the Red Sox, for many people in EMC’s home state, would be agony.

Not every Red Sox fan feels this way, and I can’t speak for everyone in Boston–and certainly not Japan, where the logos will be displayed in part to announce EMC’s sponsorship of Major League Baseball. But I will say that the popular fan blog / Boston Globe subsidiary Boston Dirt Dogs posted a photo yesterday of Larry Lucchino holding up the patch at a press conference with the headline, “Nothing’s sacred.” Even though it’s only going to be in Japan, and even though it’s just one logo, it’s the first time the Red Sox uniform has displayed any corporate logo that didn’t belong to the sporting-goods company that manufactured it. I don’t count on a lot of Red Sox fans buying that it’s not a slippery slope.

To people outside the day-to-day baseball melodrama that surrounds the Red Sox, I understand why that might seem silly. And a little hypocritical, if you think about it, because recent attempts by a Boston City Councilman to remove the giant neon Citgo sign from the roof of a building in Kenmore Square in protest of Hugo Chavez met with derision from Sox purists. I can also understand why EMC would want to become another Citgo–to have its logo become another cultural icon, particularly as they try to expand into the consumer storage space, and for the first time have a message for the consumers that fill the ballpark.

Problem is, I don’t think it’ll work. Things are different than when the Citgo sign was installed. Nowadays the sign isn’t really seen as an advertisement so much as a landmark, and its visibility just over the top of the Green Monster from inside the park has made it as much a part of the landscape there as home plate. But in general, corporations and their products are not seen as friendly companions or benevolent institutions. People are going to the ballpark in Boston for entertainment, yes, but also to reconnect with an experience that feels genuine, a throwback to a simpler time. An advertising logo on a uniform that’s barely changed in 100 years isn’t going to sit well in that context.


January 24, 2008  8:29 AM

Symantec shines spotlight on Backup Exec 12

Dave Raffo

Symantec updated its NetBackup enterprise backup product last year with version 6.5, and now is preparing to upgrade its Windows-based Backup Exec software.

During Symantec’s Wednesday night earnings conference call with analysts, CEO John Thompson said Backup Exec 12 is due out around March. Thompson didn’t go deeply into details, but said the new version would more tightly integrate Symantec’s security software with the backup product it acquired from Veritas. Backup Exec’s last major upgrade was version 11d in late 2005.

“It candidly does some of the things that we had envisioned when we brought the two companies together, where you can have a vulnerability alert trigger a more frequent backup process,” Thompson said of the upcoming 12.0. “So it’s our belief that we’re starting to see some of the real benefits that we had envisioned a few years ago in bringing security and security-related activity closer to where information is being either managed or stored.”

Thompson told analysts both Backup Exec and NetBackup had strong sales last quarter, driven by a move to disk-based backup. “Every major customer that I speak to is absolutely thinking about how do they move away from tape,” he said.

Thompson did not address – nor was he asked about – two things Symantec is late on delivering: storage software as a service (SaaS) and the integration of the continuous data protection (CDP) technology it acquired from Revivio in late 2006.


January 16, 2008  5:52 PM

If EMC releases solid-state drives in a forest…

Beth Pariseau

As soon as EMC’s announcement that it had added support for solid-state drives (SSDs) to Symmetrix crossed the wire, guess who called? If you’ve been watching the storage space, you know it had to be Hitachi Data Systems (HDS), whose high-end USP array has been do-si-doing around Symmetrix in the high-end disk array market for the last year and a half.

Turnabout’s fair play for HDS–as soon as it beat EMC to thin provisioning with the announcement of the USP-V last September, EMC went on the attack while both storage giants ignored the fact that they’d been soundly beaten to the feature by startups. I had a brief chat with HDS chief scientist Claus Mikkelsen yesterday, to see what HDS had to say about EMC’s blue-ribbon finish in the race to “tier zero.”

Generally when vendors gather to pooh-pooh one another’s products, they take one of two tacks: either poke holes in the soundness of the technology (EMC’s tactic in the earliest days of USP-V) or say there’s no market for it. In this case, HDS has taken the latter approach.

“Hitachi was in the solid-state disk business and the demand was very, very slight,” Mikkelsen began. Further questioning revealed that Hitachi’s disk division was offering standalone solid-state devices in the late ’90s…not quite the same business as flash drives embedded in an array, but I heard him out.

“Currently, flash has a limited number of writes before its memory layers wear out, and the use is limited to applications which are almost 100% very random reads,” he continued. “Even if EMC ships 10,000 solid state drives this year, it’s only .25 percent of their total shipments.”

Sour grapes? Maybe. “The drives they’re using have a SATA interface, you should be able to just pop them into any array,” Mikkelsen sniffed. “If they’ve created a market here, we’ll just jump right in.”

But out of curiosity, I also called a user at a major telecom that is a petabyte-plus EMC shop. This user and I have gotten into the nitty-gritty of performance-tuning storage before, and performance is king in his transaction-heavy environment. If this guy isn’t buying in, I thought, then who is?

Turns out he isn’t. “I think it’s great someone’s trying to make progress in this space–it’s been ignored,” he said. But even for his blue-chip company (he didn’t want it named in conjunction with his vendor), the whopping price tag for solid state drives is too much. “It has yet to get to the point where it’ll balance against savings on Tier 1 storage,” he said, though he admitted he has yet to do an in-depth analysis. “There might be certain cases…if we were less budget constricted or the timing was right, like we were going through a product refresh, we might look at it sooner, but for me this year, it’s not going to happen.”

EMC says it has the solid-state drives in beta tests at several of its “household name” customers’ shops. But did those shops pay for the drives? Did they pay full price? Will they put them into production? We don’t know right now–EMC says they’re not available for interviews.


January 16, 2008  4:54 PM

Reyes draws 21 months

Dave Raffo

Former Brocade CEO Greg Reyes received a 21-month sentence and a $15 million fine for his role in backdating options given to employees of the SAN switch vendor.

Reyes won’t do jail time yet. U.S. District Judge Charles Breyer let Reyes go free pending an appeal. And it could’ve been worse; prosecutors had recommended that Breyer sentence Reyes to at least 30 months, fine him $41 million and require him to repay Brocade for legal fees after his conviction last August.

Breyer said he was swayed by nearly 400 letters of support he received on Reyes’ behalf and took that into consideration when he sentenced the disgraced CEO.

And no, those letters didn’t all come from other CEOs whose companies have been investigated for improperly reporting backdated options. Breyer did at least make it clear why Reyes is facing jail time despite claims from his supporters who say he should not have been convicted because he did not personally benefit financially and there was no victim in this crime.

“This offense is about honesty,” Breyer said in handing down his sentence, reminding all that it’s not all right when a CEO breaks the law to recruit talent or place his company in a better light.

For his part, Reyes apologized, said he regretted his actions and admitted “there were many things I would have done differently” if he could turn back the clock. That didn’t sound like someone who pleaded not guilty and sought a new trial after claiming a witness changed her story, as Reyes did.


January 16, 2008  11:05 AM

Storage and VMware walk virtual tightropes

Beth Pariseau

It all started with a pretty run-of-the-mill partnership agreement. FalconStor announced its storage virtualization, snapshot and replication software will support Virtual Iron’s virtual servers last week, and my SearchServerVirtualization.com colleague Alex Barrett and I agreed to take briefings.

FalconStor is walking a tightrope here, because it’s also partnered with Virtual Iron’s large rival VMware. But FalconStor has to come up with reasons to use Virtual Iron over VMware (i.e., ways to promote that partnership). This led Alex to begin an interesting conversation with FalconStor’s vice president of business development, Bernie Wu, about the pros and cons of virtualizing storage with VMware vs. Virtual Iron. Wu pointed out what he’d later reprise in a separate call with me: that the use case for FalconStor’s IPStor storage virtualization software is in many ways stronger with Virtual Iron, because Virtual Iron doesn’t have its own file system, as VMware does.

As Burton Group senior analyst Chris Wolf patiently explained to me later, VMware’s file system means that its hypervisor (the software layer that controls the host server, guest OSes, and their interaction with the rest of the network) is handling virtual hard disk mapping on back-end storage systems. You can use VMware with raw device mapping (RDM), but then you turn off many of the features VMware users have come to like about VMware, such as VMotion. (RDM also has a slightly limited “virtual mode” as of 3.0, but that’s a tangential discussion.) This makes virtual hard disk mapping performed by storage virtualization products, whether appliances or software, at least somewhat redundant.

So I asked Wu, “What are users missing out on if they can’t use your storage virtualization software with VMware?” His first answer was large-scale data migrations.

Up until VMware’s Virtual Infrastructure 3.5, VMware had no ability to move the data it managed in its virtual hard disks on back-end storage; hence storage virtualization devices stepped in to fill the gap. With Storage VMotion in 3.5, that gap was at least partially closed. Storage VMotion is still a difficult way to do a large-scale migration, however, because it migrates data one host at a time. So storage virtualization devices, which perform migrations en masse, still have that advantage. At least, until and unless Storage VMotion adds that capability.

Aside from large-scale migrations, Wu also told me that thin provisioning is another capability IPStor offers that VMware doesn’t. That’s a big deal–VMware’s best practices recommend that users allot twice the amount of disk space they actually plan to write to; the ability to expand capacity on the fly helps everyone avoid buying 2x the amount of storage they need.
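To put rough numbers behind that 2x figure, here’s a tiny Python sketch; the VM count and per-VM sizes are invented purely for illustration:

```python
# Illustrative only: thick vs. thin provisioning for a hypothetical VM farm.
vms = 20               # number of virtual machines (made-up figure)
planned_write_gb = 50  # space each VM is expected to write eventually
used_today_gb = 15     # what a typical VM actually consumes early on

thick_gb = vms * planned_write_gb * 2  # 2x headroom allocated up front
thin_gb = vms * used_today_gb          # thin provisioning grows with real usage

print(f"Thick-provisioned allocation: {thick_gb} GB")   # 2000 GB
print(f"Thin-provisioned consumption: {thin_gb} GB")    # 300 GB
print(f"Capacity you can defer buying: {thick_gb - thin_gb} GB")
```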

The Burton Group’s Wolf pointed out plenty more gaps in VMware’s storage capabilities: heterogeneous array support; heterogeneous multiprotocol support (Storage VMotion doesn’t support iSCSI yet); I/O caching; and heterogeneous replication support.

Some of these gaps will likely be filled by VMware or the storage industry. For instance, when it comes to multiprotocol support, VMware’s MO with new features has always been to support Fibre Channel first and they usually get around to iSCSI soon after. And what happens to the need for heterogeneous multiprotocol support if FCoE ever takes off? What of I/O caching, when and if everybody’s working with 10 Gigabit Ethernet pipes? And VMware’s launching its own management software for heterogeneous replication support (even if it’s not doing the replication itself).

So it seems that storage virtualization players will have to start coming up with more value-adds for VMware environments as time goes on.

VMware has its own tightrope to walk, too. Take replication for example–VMware supports replication from its partners, saying it doesn’t want to reinvent the wheel. But that’s the kind of thing it said when users were asking for Storage VMotion back in 2006, too.

“Deep down, I believe that VMware isn’t going to push its partners out,” Wolf said. And indeed, VMware did make a good-faith gesture last fall with the announcement of a certification program for storage virtualization partners. Wolf also pointed out, “A lot of organizations are afraid to deploy something in the data path unless their hardware vendors will support and certify it–without the support of their partners, VMware would have a tough time playing ball.”

But that might not be the case as much now as back when EMC first bought VMware in 2003 and everybody in the storage world scratched their heads and wondered why. Now, VMware has its own muscles to flex, as its billion-dollar 2007 IPO for 10 percent of the company proved.

More and more analysts are telling me that the hypervisor will become the data center operating system of the future. Over in the server virtualization world, Wolf says VMware competitors argue that the hypervisor is a commodity, and VMware says it isn’t. “In order to keep the hypervisor from becoming commoditized, they have to keep adding new features,” he said.

Which suggests to me that storage virtualization vendors should probably be working on new features, too.


January 11, 2008  10:18 AM

The eBay effect on storage

Tory Skyers

Have you ever heard of the “butterfly effect”? In essence, it’s a way to conceptualize the possibility that the flapping of a butterfly’s wings in the Amazon jungle can be the catalyst for a hurricane in Texas.

I think–now, mind you, this is just a thought–the same is going to occur on eBay with storage.

All the innovation currently going on in the enterprise storage arena — 10 Gigabit Ethernet (10GbE) and iSCSI come to mind — is going to be a catalyst that makes businesses retire storage technology faster than usual, filling the secondary market with great usable stuff.

So if you’re a budding storage geek, or an established one looking for the next challenge, eBay is going to become the place to shop. It is not for the faint of heart–that array you spent $200,000 on? Well, $1000, “Buy It Now” and nominal shipping may be able to snag it!! (A slight exaggeration, but I DID find a StorageTek Flex D280 for sale for $2000, with disks!!)

The butterfly here is progress. The tropical storm will blow up when these folks enter the workforce, or bring to work the ideas they’ve tried out at home, without limits on uptime or intervention from management. That will spark a fresh round of grassroots innovation from people who can tinker untrammeled (the English language never ceases to amaze me; thanks for this submission!).

The Linksys WRT54GL is a great example of how a group of tinkerers can influence a large company. Check out DD-WRT. Ever wonder why so many wireless routers come with USB or eSATA ports on them nowadays? That started with some hackers wanting to add storage.

eBay has allowed me to build what I otherwise couldn’t find a solid business justification for, or create an ROI schedule around, at work. I have the ability to test various scenarios and provide services to my toughest IT customers: my wife and 9-year-old son! Not to mention I have a really geeky conversation piece.

Using myself as an example, I’ve been able to build three different versions of a home SAN using technology I purchased from eBay. The first was 1Gbps Fibre Channel from Compaq. I picked up a disk shelf for $29 plus $40 for shipping. The disks cost a whopping $100 for 14, shipping included. The Fibre cards were $3 a pop. From that, I learned that proprietary SAN technology stinks. Open is the only way to go, so … I started the second iteration, which cost a bit more to build.

The second iteration of my SAN was a bit more of an investment in both time and money, but a tenth the cost of buying new. I bought a new Areca eight-port SATA array controller with onboard RAM (the only new part in my SAN), plugged it into a dual-Opteron motherboard with guts from eBay as well, and bought a lot of 10 (two for spares) 250GB SATA drives. The drives were the deal of a lifetime: for just about $300, I got 2 terabytes of storage!! Apparently they came out of a Promise array and the seller had upgraded to 500GB drives.

At the time, I didn’t have any Gigabit Ethernet ports, so I opted to buy used 2Gbps Fibre Channel cards and a used fibre switch. This was a bit more costly than the first SAN, so I put some of my old goodies up on eBay (!) to foot the bill.

The third iteration is the one I’m currently constructing. The second SAN, or Second of Many as I like to call it (an ode to Star Trek: Voyager), is still “in production” and servicing my VMware and file-sharing needs, but I felt the need to make it more modular. So far, I’ve gotten an unmanaged Gig-E switch, first-generation TOE (TCP Offload Engine) NICs, the controller “head” and a couple of SAS disks. I’m making the switch from a SATA controller to a SAS controller to allow mixing and matching of speed and capacity on the same bus. I’ve sold some of my fibre gear and am going to try iSCSI this time.

The hurricane blows when progress has put 8Gbps Fibre Channel and 10 Gigabit Ethernet in the data center en masse. This will push managed Gig-E components and 4Gbps Fibre Channel components out into the secondary market, and make folks like me and value-conscious SMB buyers VERY happy!

How long this is going to take to appear, I’m not so sure. But if this year is truly the year of iSCSI, I would suggest you open the Web filters to allow a little eBaying at lunch time. Type “SAN” into eBay and see what you come up with.


January 10, 2008  9:33 AM

What’s up with CDP for 2008

Maggie Wright

Some analysts touted CDP as the dark horse technology for corporate adoption in 2007. As we all know, that didn’t occur; the multitude of CDP technologies ended up confusing analysts, press and IT alike as they tried to sort out the differences between available CDP products and what CDP’s true value proposition was. All of these factors spoiled CDP’s debut.

However, I anticipate CDP will make a comeback in 2008 for two reasons: corporate needs for data replication and higher availability. Data replication has been around for a long time (only recently under the moniker of continuous), so it is a mature technology and well understood by storage professionals in the field.

“Higher availability” is the more important feature of CDP. Companies now must choose between high availability and semi-availability. High availability is associated with synchronous replication software and provides application recoveries in seconds or minutes, but at an extremely high cost. At the other extreme is backup software, which delivers only semi-availability, so it can take hours, days or even weeks to recover data. CDP delivers higher availability, an acceptable compromise between these two extremes: it can quickly recover data (typically in under 30 minutes) to any point in time, at a price that is competitive with backup software.

CDP also complements deduplication. While some may view CDP and deduplication as competing technologies (and in some respects they are), the real goal of data protection is data recovery.

This is where CDP and deduplication part ways. CDP captures all changes to data but keeps that data for shorter periods of time, typically 3 to 30 days, to minimize data stores. Deduplication’s primary objective is data reduction, not data recovery. Faster recoveries may be a byproduct of deduplication, since the data is kept on disk, but they are not its focus, so recoveries from deduplicated data do not approach the granularity that CDP provides.

So what’s in store for CDP in 2008? The staying power of a new data protection technology is now largely determined by whether it is adopted by small and midsize businesses. If it’s practical and works there, it will find its way into the enterprise, because more and more enterprises operate as a conglomeration of small businesses despite corporate consolidations. So it is not a matter of if CDP will gain momentum in 2008; it is a question of how quickly it will become the predominant technology that companies use to protect all of their application data.

