Storage Soup

January 30, 2008  10:32 AM

Slowdown could push VMware into storage market

Beth Pariseau

It seems crazy, and in many ways it is. The company that essentially created the hottest market in IT has said it will grow 50% over the next year, and the company that owns it has projected $15 billion in revenues for 2008. And yet as of this morning, Company #1, VMware, has seen its stock drop 33%. Company #2, storage giant EMC, has seen its stock drop $1.02, to $15.89.

The problem, ESG analyst Brian Babineau points out, is that VMware grew 90% last year–it’s not that 50% growth is bad, it’s that 50% growth is relatively bad. “You’ll get the airbag on that stop,” is the expression I’ve heard used.

Meanwhile, the consensus is that EMC’s poor stock performance is a direct result of the VMware issue–even though EMC has achieved its goals of folding in a dizzying array of acquisitions, balancing its revenue streams across software, services and its core storage hardware business, and bringing its products up to speed with emerging technology trends…even getting out ahead of them with the first Tier 1 array, DMX-4, to support flash drives internally. “I don’t know that EMC’s business execution could have been much better,” is how Brian put it to me yesterday in my story on EMC’s earnings call.

And yet these companies both are in trouble on the stock market today, and more alarming is the underlying reason: a dramatic slowdown in revenue predicted for VMware. If this truly comes to pass, it could be the darkest omen yet in the chain-reaction brought about in the market by the subprime mortgage crisis, the culmination of fears about the tech market in general this year that began when Cisco revised predictions downward in November. The general vibe from the financial eggheads seems to be that if the highest-flying tech company on the market is forecasting a slowdown in spending, what’s next?

I’ll admit this scares me a little too. The way I understand it, greedy bankers gave loans to unqualified people, and then those shaky loans in turn were carved up into securities, meaning that when those unqualified debtors couldn’t manufacture money (lo and behold!) the whole house of cards started to tumble down. In a way, it’s satisfying to see the people who played on the dreams of low-income people to own homes by locking them into high-interest-rate deals, with no regard to how they were going to come up with the cash, get their comeuppance. But when it threatens the entire national economy, it’s hardly worth the last word.

But before I could step off the ledge into panic myself, this morning I had a chat with Andrew Reichman of Forrester Research, whose level-headed perspective is one I think will eventually shake out once the initial frenzy is over. Or, at least, I hope.

“This is typical of the financial world–overblown expectations,” he said. “I still see VMware’s product and outlook as very strong.” Even in a recession, Reichman pointed out, VMware still has a value proposition, since server virtualization is a consolidation and cost-cutting play. “They can still demonstrate to companies why they should spend money on their product even as they try to put the brakes on IT spending otherwise.”

Great point. Then there’s just the plain fact that 50% growth ain’t too shabby! Especially as the market braces for a spending slowdown–and especially when analysts say 60% of the market has already purchased and deployed VMware’s product. “It’s rare in the technology world to see such consensus around one piece of technology,” Reichman concurred. “They’ve built up a lot of momentum and there’s still a lot of room for them to take advantage of the ‘network effect’ and expand existing customers’ use of the product.”

However, Reichman had another suggestion about the VMware situation that piqued my interest, especially given my recent post on storage virtualization and VMware and the sometimes tense relationships between the two. “What they need to do to continue their growth is take the money they got in the IPO and they’ve built up in revenue and find the next frontier.”

Reichman’s prediction for that next frontier in the near future is business continuity/disaster recovery, which VMware has already said it’s working on. “There’s a high level of interest at a lot of companies in using VMware for BC/DR,” he said. But he added that the next horizon will probably be primary storage–or storage virtualization.

“The story of storage behind VMware has never been clear,” Reichman said. “There are a lot of issues that remain around storage virtualization, performance and compatibility and a lot of room to improve that picture.”

He continued, “They’ve played Switzerland for a long time. Now they need to get off the dime and make a call about how storage is going to work behind server virtualization.”

Of course, it’s hard to tell what the consequences of that might be. Other analysts such as the Burton Group’s Chris Wolf have pointed out that if hardware vendors don’t support VMware, end users won’t take the risk of using their product. But has VMware’s ascension into a Wall Street bellwether changed that equation? Has its ubiquity in IT shops turned the tables on the storage vendors–so that end users will instead be less inclined to use a storage technology if it’s not certified with VMware? How will this balance-of-power play out?

It’s like that expression about the old Chinese curse. We are living in interesting times.

January 30, 2008  10:26 AM

Who’s going to pay for digital preservation?

Beth Pariseau

I can hardly call myself a storage geek–I don’t know a MAC address from a Macintosh and couldn’t operate a CLI with a gun to my head. So it’s rare I take a personal interest, the way Tory does, in most of the products or trends I cover.

The one exception to that is the idea of digital preservation. This is probably because, unlike a true storage geek, I don’t have to worry about trying to fix a machine that’s broken or trying to throttle my service engineers. So I have time on my hands to think about the long-term future of data, data storage, and what we’re going to do with all the important records that are currently being converted from physical format to digital. Spinning disk still has nothing on a cave painting, data tapes have nothing on an acid-free paper book, and in 100 years, we might have an unprecedented historical problem: how to preserve our culture and our information for future generations.

That’s the kind of thing you don’t have to be able to architect a storage fabric to be affected by. Every living person has a vested interest in how the human race will pass on knowledge and information over the long haul.

The problem is, those of us with the time to think about this stuff aren’t the ones who know how to answer that question, and the ones with the know-how are too busy putting out day-to-day fires in their data centers to worry about how it’s all going to work when they’re long gone.

And who says it’s their (your) responsibility anyway? Shouldn’t institutions like the National Archives be the ones worrying about it? Shouldn’t the storage vendors be the ones developing the right media for long-term storage?

As of this week, there’s finally a publicly-funded consortium at least trying to find the answer to those questions about digital preservation, all leading up to the biggest mystery of them all: Who’s going to pay for it?

The consortium, known as the Blue Ribbon Task Force on Digital Preservation, was launched by the National Science Foundation and the Andrew W. Mellon Foundation in partnership with the Library of Congress, the Joint Information Systems Committee of the United Kingdom, the Council on Library and Information Resources, and the National Archives and Records Administration. The consortium, headed up by academics from the San Diego Supercomputing Center, will attempt to bring together testimony from a variety of sources — consumer and enterprise, vendor and end-user — to arrive at a sustainable economic model for digital preservation.

The group has been funded for a two-year project. According to Francine Berman–director of the San Diego Supercomputer Center, High Performance Computing Endowed Chair at UC San Diego, and co-chair of the Blue Ribbon Task Force–the first year of the project will produce a report on “a survey of what we know.” The initial report will feature case studies and opinions from experts in digital preservation, and is expected to appear by the end of 2008 or early 2009. By 2010, the task force hopes to have a second report suggesting the approach to digital preservation that’s the most cost-effective and logistically feasible for the most people.

It’s all a little loosey-goosey, Berman admitted, saying, “These are open questions.” So far the group doesn’t have much idea what its direction will be. Alternatives for economic models that will be taken into consideration include an iTunes-like pay-per-use model; a privatized model relying on corporations to finance preservation; or a public-goods model that preserves digital records the same way public parks are preserved, through a collective public trust.

Further complicating matters, “there won’t be a one-size-fits-all solution to the digital preservation question,” according to Berman. Consumers will be concerned with preserving family photos, for example, which will be an entirely different process from preserving corporate and government records. Preserving digitally-recorded works of art and multimedia files will be yet another issue to resolve.

Personally, I’m a little reluctant to put much stock in a government study until I see it produce actionable results, and as a taxpayer I’m not nuts about the number of studies my hard-earned dollars go to that just tell us things we already know. But in this case, I’m just happy someone’s thinking about it. And maybe getting others to start thinking about it a little more, too.

Raising awareness is another goal for the task force, Berman confirmed. “My dry cleaner knows what global warming is, and could also probably give you a basic definition of the human genome,” she said. “What we’re looking for is that same level of understanding about digital preservation, which also affects us all.”

January 28, 2008  4:19 PM

EMC slaps its logo on the Boston Red Sox

Beth Pariseau

As a storage reporter who’s also a fanatical Red Sox fan, I’m in good position to comment on EMC’s latest marketing move: the agreement with the Boston Red Sox to place a small patch with the EMC logo on the Red Sox uniform shirt during the team’s trip to Japan in April.

The ‘work’ side of me understands why both the team and EMC would be interested in this joint venture. EMC has already sponsored an entire level of Fenway Park, and its logo is plastered about in many places at the old grounds. The Red Sox, under MLB’s mandate to expand its global reach, need to bring a $200 million team halfway around the world for Opening Day, and ticket prices are already $90. Doesn’t seem like they have a whole lot of choice.

But the sports fan in me remembers the flak when there was talk of displaying ads for Spiderman II on the bases used in games. Heck, in Boston, the sanctity of the Green Monster–historically a billboard anyway–has been cited in decrying advertisers. The problem for Red Sox management is that they are working with a very valuable, but very finicky brand.

At the end of the day, the Red Sox are a sports franchise, and a business, and entertainment. But many people in Boston have deeper feelings about the team–it’s a cultural institution for many people, and for some, even a sacred one. Putting an advertiser’s logo on one of the bases at the same park where Ted Williams played…well, you might have an easier time convincing a churchgoer to accept corporate sponsorship on the altar. I know some Red Sox fans who fear that baseball will eventually become like NASCAR, with jerseys so bedecked in ads you can’t tell what team they’re supposed to be for. To see this happen to the Red Sox, for many people in EMC’s home state, would be agony.

Not every Red Sox fan feels this way, and I can’t speak for everyone in Boston–and certainly not Japan, where the logos will be displayed in part to announce EMC’s sponsorship of Major League Baseball. But I will say that the popular fan blog / Boston Globe subsidiary Boston Dirt Dogs posted a photo yesterday of Larry Lucchino holding up the patch at a press conference with the headline, “Nothing’s sacred.” Even though it’s only going to be in Japan, and even though it’s just one logo, it’s the first time the Red Sox uniform has displayed any corporate logo that didn’t belong to the sporting-goods company that manufactured it. I don’t count on a lot of Red Sox fans buying that it’s not a slippery slope.

To people outside the day-to-day baseball melodrama that surrounds the Red Sox, I understand why that might seem silly. And a little hypocritical, if you think about it, because recent attempts by a Boston City Councilman to remove the giant neon Citgo sign from the roof of a building in Kenmore Square in protest of Hugo Chavez met with derision from Sox purists. I can also understand why EMC would want to become another Citgo–to have its logo become another cultural icon, particularly as they try to expand into the consumer storage space, and for the first time have a message for the consumers that fill the ballpark.

Problem is, I don’t think it’ll work. Things are different than when the Citgo sign was installed. Nowadays the sign isn’t really seen as an advertisement so much as a landmark, and its visibility just over the top of the Green Monster from inside the park has made it as much a part of the landscape there as home plate. But in general, corporations and their products are not seen as friendly companions or benevolent institutions. People are going to the ballpark in Boston for entertainment, yes, but also to reconnect with an experience that feels genuine, a throwback to a simpler time. An advertising logo on a uniform that’s barely changed in 100 years isn’t going to sit well in that context.

January 24, 2008  8:29 AM

Symantec shines spotlight on Backup Exec 12

Dave Raffo

Symantec updated its NetBackup enterprise backup product last year with version 6.5, and now is preparing to upgrade its Windows-based Backup Exec software.

During Symantec’s Wednesday night earnings conference call with analysts, CEO John Thompson said Backup Exec 12 is due out around March. Thompson didn’t go deeply into details, but said the new version would more tightly integrate Symantec’s security with the backup product it acquired from Veritas. Backup Exec’s last major upgrade was version 11d in late 2005.

“It candidly does some of the things that we had envisioned when we brought the two companies together, where you can have a vulnerability alert trigger a more frequent backup process,” Thompson said of the upcoming 12.0. “So it’s our belief that we’re starting to see some of the real benefits that we had envisioned a few years ago in bringing security and security-related activity closer to where information is being either managed or stored.”

Thompson told analysts both Backup Exec and NetBackup had strong sales last quarter, driven by a move to disk-based backup. “Every major customer that I speak to is absolutely thinking about how do they move away from tape,” he said.

Thompson did not address–nor was he asked about–two things Symantec is late on delivering: storage software as a service (SaaS) and the integration of the continuous data protection (CDP) technology it acquired from Revivio in late 2006.

January 16, 2008  5:52 PM

If EMC releases solid-state drives in a forest…

Beth Pariseau

As soon as EMC’s announcement that it had added support for solid-state drives (SSDs) to Symmetrix crossed the wire, guess who called? If you’ve been watching the storage space, you know it had to be Hitachi Data Systems (HDS), whose high-end USP array has been do-si-doing around Symmetrix in the high-end disk array market for the last year and a half.

Turnabout’s fair play for HDS–as soon as it beat EMC to thin provisioning with the announcement of the USP-V last September, EMC went on the attack while both storage giants ignored the fact that they’d been soundly beaten to the feature by startups. I had a brief chat with HDS chief scientist Claus Mikkelsen yesterday, to see what HDS had to say about EMC’s blue-ribbon finish in the race to “tier zero.”

Generally when vendors gather to pooh-pooh one another’s products, they take one of two tacks: either poke holes in the soundness of the technology (EMC’s tactic in the earliest days of USP-V) or say there’s no market for it. In this case, HDS has taken the latter approach.

“Hitachi was in the solid-state disk business and the demand was very, very slight,” Mikkelsen began. Further questioning revealed that Hitachi’s disk division was offering standalone solid-state devices in the late ’90s…not quite the same business as flash drives embedded in an array, but I heard him out.

“Currently, flash has a limited number of writes before its memory layers wear out, and the use is limited to applications which are almost 100% very random reads,” he continued. “Even if EMC ships 10,000 solid state drives this year, it’s only .25 percent of their total shipments.”
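Mikkelsen’s endurance point is easy to put in rough numbers. Here’s a back-of-the-envelope sketch–every figure in it is an illustrative assumption of mine, not a Hitachi or EMC spec:

```python
# Rough flash-endurance estimate. Every number below is an illustrative
# assumption, not a vendor specification.
CAPACITY_GB = 73               # assumed enterprise flash drive size, circa 2008
ENDURANCE_CYCLES = 100_000     # assumed program/erase cycles per cell (SLC-class)
DAILY_WRITES_GB = 500          # assumed sustained host writes per day
WRITE_AMPLIFICATION = 2.0      # assumed overhead from wear leveling / housekeeping

# Total data the media can absorb before cells wear out, then lifetime.
total_writable_gb = CAPACITY_GB * ENDURANCE_CYCLES / WRITE_AMPLIFICATION
lifetime_days = total_writable_gb / DAILY_WRITES_GB
print(f"Estimated media lifetime: {lifetime_days / 365:.0f} years")
```

Under those made-up numbers the drive outlives its usefulness by a wide margin, which is consistent with Mikkelsen’s caveat: the endurance math only gets scary under write-heavy workloads, which is exactly why the early pitch is for almost-100%-random-read applications.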

Sour grapes? Maybe. “The drives they’re using have a SATA interface, you should be able to just pop them into any array,” Mikkelsen sniffed. “If they’ve created a market here, we’ll just jump right in.”

But out of curiosity, I also called a user at a major telecom that is a petabyte-plus EMC shop. This user and I have gotten into the nitty-gritty about performance-tuning storage before, and performance is king in his transaction-heavy environment. If this guy isn’t buying in, I thought, then who is?

Turns out he isn’t. “I think it’s great someone’s trying to make progress in this space–it’s been ignored,” he said. But even for his blue-chip company (he didn’t want it named in conjunction with his vendor), the whopping price tag for solid state drives is too much. “It has yet to get to the point where it’ll balance against savings on Tier 1 storage,” he said, though he admitted he has yet to do an in-depth analysis. “There might be certain cases…if we were less budget constricted or the timing was right, like we were going through a product refresh, we might look at it sooner, but for me this year, it’s not going to happen.”

EMC says it has the solid-state drives in beta tests in several of its “household name” customers’ shops. But did those shops pay for the drives? Did they pay full price? Will they put them into production? We don’t know right now–EMC says they’re not available for interviews.

January 16, 2008  4:54 PM

Reyes draws 21 months

Dave Raffo

Former Brocade CEO Greg Reyes received a 21-month sentence and a $15 million fine for his role in backdating options given to employees of the SAN switch vendor.

Reyes won’t do jail time yet. U.S. District Judge Charles Breyer let Reyes go free pending an appeal. And it could’ve been worse; prosecutors had recommended that Breyer sentence Reyes to at least 30 months, fine him $41 million and order him to repay Brocade for legal fees after his conviction last August.

Breyer said he was swayed by nearly 400 letters of support he received on Reyes’ behalf and took that into consideration when he sentenced the disgraced CEO.

And no, those letters didn’t all come from other CEOs whose companies have been investigated for improperly reporting backdated options. Breyer did at least make it clear why Reyes is facing jail time despite claims from his supporters who say he should not have been convicted because he did not personally benefit financially and there was no victim in this crime.

“This offense is about honesty,” Breyer said in handing down his sentence, reminding all that it’s not all right when a CEO breaks the law to recruit talent or place his company in a better light.

For his part, Reyes apologized, said he regretted his actions and admitted “there were many things I would have done differently” if he could turn back the clock. That didn’t sound like someone who pleaded not guilty and sought a new trial after claiming a witness changed her story, as Reyes did.

January 16, 2008  11:05 AM

Storage and VMware walk virtual tightropes

Beth Pariseau

It all started with a pretty run-of-the-mill partnership agreement. FalconStor announced its storage virtualization, snapshot and replication software will support Virtual Iron’s virtual servers last week, and my colleague Alex Barrett and I agreed to take briefings.

FalconStor is walking a tightrope here, because it’s also partnered with Virtual Iron’s large rival VMware. But FalconStor has to come up with reasons to use Virtual Iron over VMware (i.e., ways to promote that partnership). This led Alex to an interesting conversation with FalconStor’s vice president of business development Bernie Wu about the pros and cons of virtualizing storage with VMware vs. Virtual Iron. Wu pointed out what he’d later reprise in a separate call with me: that the use case for FalconStor’s IPStor storage virtualization software is in many ways stronger with Virtual Iron, because Virtual Iron, unlike VMware, doesn’t have its own file system.

As Burton Group senior analyst Chris Wolf patiently explained to me later, VMware’s file system means that its hypervisor (the software layer that controls the host server, guest OSes, and their interaction with the rest of the network) is handling virtual hard disk mapping on back-end storage systems. You can use VMware with raw device mapping (RDM), but then you turn off many of the features VMware users have come to like about VMware, such as VMotion. (RDM also has a slightly limited “virtual mode” as of 3.0, but that’s a tangential discussion.) This makes virtual hard disk mapping performed by storage virtualization products, whether appliances or software, at least somewhat redundant.

So I asked Wu, “What are users missing out on if they can’t use your storage virtualization software with VMware?” His first answer was large-scale data migrations.

Up until VMware’s Virtual Infrastructure 3.5, VMware had no ability to move the data it managed in its virtual hard disks on back-end storage; hence storage virtualization devices stepped in to fill the gap. With Storage VMotion in 3.5, that gap was at least partially closed. Storage VMotion is still a difficult way to do a large-scale migration, however, because it migrates data one host at a time. So storage virtualization devices, which perform migrations en masse, still have that advantage. At least, until and unless Storage VMotion adds that capability.
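To see why one-host-at-a-time matters at scale, here’s a toy timing model. All of the numbers are made-up assumptions for illustration, not measured VMware or array figures:

```python
# Toy comparison: serial per-host migration (Storage VMotion style) vs. a
# bulk migration done once at the storage-virtualization layer.
# Every figure below is an illustrative assumption.
HOSTS = 50
DATA_PER_HOST_GB = 500
THROUGHPUT_GBPH = 100          # assumed sustained copy rate, GB per hour

# Serial: each host's data moves in its own pass, one after another.
serial_hours = HOSTS * DATA_PER_HOST_GB / THROUGHPUT_GBPH

# Bulk: the virtualization layer streams many hosts' LUNs concurrently;
# assume it is limited to 4 parallel copy streams.
STREAMS = 4
bulk_hours = HOSTS * DATA_PER_HOST_GB / (THROUGHPUT_GBPH * STREAMS)

print(f"serial: {serial_hours:.0f} h, bulk: {bulk_hours:.1f} h")
```

Even this crude sketch shows the gap: whatever parallelism the storage layer can sustain divides the migration window by roughly that factor, which is the advantage Wu is pointing at.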

Aside from large-scale migrations, Wu also told me that thin provisioning is another capability IPStor offers that VMware doesn’t. That’s a big deal–VMware’s best practices recommend that users allot twice the amount of disk space they actually plan to write to; the ability to expand capacity on the fly helps everyone avoid buying 2x the amount of storage they need.
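The mechanics behind that savings are simple enough to sketch. Here’s a toy thin-provisioning model–my own illustration, not VMware’s or FalconStor’s implementation–in which capacity is drawn from a shared pool only when a block is first written:

```python
# Toy thin-provisioning sketch: a volume is a promise of capacity, and
# physical blocks come out of a shared pool only on first write.
# Illustrative only -- not any vendor's actual implementation.

class ThinPool:
    def __init__(self, physical_blocks):
        self.free = physical_blocks
        self.volumes = {}          # volume name -> set of allocated block numbers

    def create_volume(self, name, virtual_blocks):
        # Provisioning is just bookkeeping: nothing is reserved yet.
        self.volumes[name] = set()

    def write(self, name, block):
        # Physical space is consumed only on the first write to a block.
        if block not in self.volumes[name]:
            if self.free == 0:
                raise RuntimeError("pool exhausted -- time to buy disk")
            self.free -= 1
            self.volumes[name].add(block)

pool = ThinPool(physical_blocks=100)
pool.create_volume("vm_datastore", virtual_blocks=200)   # 2x over-provisioned
for b in range(40):                                      # guest writes 40 blocks
    pool.write("vm_datastore", b)
print(pool.free)   # 60 -- only the written blocks consumed physical space
```

The flip side, of course, is that an over-provisioned pool can run dry if everyone writes at once, which is why thin provisioning always comes paired with capacity alerting.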

The Burton Group’s Wolf pointed out plenty more gaps in VMware’s storage capabilities–heterogeneous array support; heterogeneous multiprotocol support (Storage VMotion doesn’t support iSCSI yet); I/O caching; and heterogeneous replication support.

Some of these gaps will likely be filled by VMware or the storage industry. For instance, when it comes to multiprotocol support, VMware’s MO with new features has always been to support Fibre Channel first, then get around to iSCSI soon after. And what happens to the need for heterogeneous multiprotocol support if FCoE ever takes off? What of I/O caching, when and if everybody’s working with 10 Gigabit Ethernet pipes? And VMware’s launching its own management software for heterogeneous replication support (even if it’s not doing the replication itself).

So it seems that storage virtualization players will have to start coming up with more value-adds for VMware environments as time goes on.

VMware has its own tightrope to walk, too. Take replication for example–VMware supports replication from its partners, saying it doesn’t want to reinvent the wheel. But that’s the kind of thing it said when users were asking for Storage VMotion back in 2006, too.

“Deep down, I believe that VMware isn’t going to push its partners out,” Wolf said. And indeed, VMware did make a good-faith gesture last fall with the announcement of a certification program for storage virtualization partners. Wolf also pointed out, “A lot of organizations are afraid to deploy something in the data path unless their hardware vendors will support and certify it–without the support of their partners, VMware would have a tough time playing ball.”

But that might not be the case as much now as back when EMC first bought VMware in 2003 and everybody in the storage world scratched their heads and wondered why. Now, VMware has its own muscles to flex, as its billion-dollar 2007 IPO for 10 percent of the company proved.

More and more analysts are telling me that the hypervisor will become the data center operating system of the future. Over in the server virtualization world, Wolf says VMware competitors argue that the hypervisor is a commodity, and VMware says it isn’t. “In order to keep the hypervisor from becoming commoditized, they have to keep adding new features,” he said.

Which suggests to me that storage virtualization vendors should probably be working on new features, too.

January 11, 2008  10:18 AM

The eBay effect on storage

Tory Skyers

Have you ever heard of the “butterfly effect”? In essence, it is a way to conceptualize the possibility that the flapping of butterfly wings in the Amazon jungle can be the catalyst for a hurricane in Texas.

I think–now, mind you, this is just a thought–the same is going to occur on eBay with storage.

All the innovation currently going on in the enterprise storage arena–10 Gigabit Ethernet (10 GbE) and iSCSI come to mind–is going to be a catalyst that makes businesses retire storage technology faster than usual, filling the secondary market with great usable stuff.

So if you’re a budding storage geek, or an established one looking for the next challenge, eBay is going to become the place to shop. It is not for the faint of heart–that array you spent $200,000 on? Well, $1000, “Buy It Now” and nominal shipping may be able to snag it!! (A slight exaggeration, but I DID find a StorageTek Flex D280 for sale for $2000, with disks!!)

The butterfly here is progress. There will be a tropical storm from progress flapping its wings when these budding storage geeks enter the workforce, or bring to work the ideas they’ve tried out at home without limits on uptime or intervention from management. This will cause a fresh round of grassroots innovation from people who can tinker untrammeled. (The English language never ceases to amaze me. Thanks for this submission!)

The Linksys WRT54GL is a great example of how a group of tinkerers can influence a large company. Check out DD-WRT. Ever wonder why so many wireless routers are coming with USB or eSATA ports on them nowadays? That started with some hackers wanting to add storage.

eBay has allowed me to build what I otherwise couldn’t find a solid business justification for, or create an ROI schedule around, at work. I have the ability to test various scenarios and provide services to my toughest IT customers: my wife and 9-year-old son! Not to mention I have a really geeky conversation piece.

Using myself as an example, I’ve been able to build 3 different versions of a home SAN using technology I purchased from eBay. The first was 1 Gbps Fibre Channel from Compaq. I picked up a disk shelf for $29, plus $40 for shipping. The disks cost a whopping $100 for 14, shipping included. The Fibre Channel cards were $3 a pop. From that, I learned that proprietary SAN technology stinks. Open is the only way to go, so I started the second iteration, which cost a bit more to build.

The second iteration of my SAN was a bit more of an investment in both time and money, but a tenth the cost of new. I bought a new Areca 8-port SATA array controller with onboard RAM (the only new part in my SAN). I plugged it into a dual-Opteron motherboard with guts from eBay as well, and bought a lot of 10 250 GB SATA drives (2 for spares). The drives were the deal of a lifetime: for just about $300 I got 2 terabytes of storage!! Apparently they came out of a Promise array and the seller had upgraded to 500 GB drives.

At the time, I didn’t have any Gigabit Ethernet ports, so I opted to buy used 2 Gbps Fibre Channel cards and a used fiber switch. This was a bit more costly than the first SAN, so I put some of my old goodies over on eBay (!) to foot the bill.

The third iteration is the one I’m currently constructing. The second SAN, or Second of Many as I like to call it (an ode to Star Trek: Voyager), is still “in production” and servicing my VMware and file-sharing needs, but I felt the need to make it more modular. So far, I’ve gotten an unmanaged Gig-E switch, first-generation TOE (TCP Offload Engine) NICs, the controller “head” and a couple of SAS disks. I’m making the switch from a SATA controller to a SAS controller to allow for mixing and matching speed and capacity on the same bus. I’ve sold some of my fiber gear and am going to try iSCSI this time.

The hurricane blows when progress has put 8 Gbps Fibre Channel and 10 Gigabit Ethernet in the datacenter en masse. This will push managed Gig-E components and 4 Gbps Fibre Channel components out into the secondary market, and make folks like me and value-conscious SMB buyers VERY happy!

How long this is going to take I’m not so sure, but if this year is truly the year of iSCSI, I would suggest you open the Web filters to allow a little eBaying at lunch time. Type “SAN” into eBay and see what you come up with.

January 10, 2008  9:33 AM

What’s up with CDP for 2008

Maggie Wright

Some analysts touted CDP as the dark horse technology for corporate adoption in 2007. As we all know, that didn’t occur, and the multitude of CDP technologies ended up confusing analysts, press and IT alike as they tried to sort out the differences between available CDP products and what CDP’s true value proposition was. All of these factors contributed to spoiling CDP’s debut.

However, I anticipate CDP will make a comeback in 2008 for two reasons: corporate needs for data replication and higher availability. Data replication has been around for a long time (only recently under the moniker of continuous), so it is a mature technology and well understood by storage professionals in the field.

“Higher availability” is the more important feature of CDP. Companies now must choose between high availability and semi-availability. High availability is associated with synchronous replication software and provides application recoveries in seconds or minutes, but at an extremely high cost. At the other extreme is backup software, which only delivers semi-availability, so it can take hours, days or even weeks to recover data. CDP delivers higher availability, which is an acceptable compromise between these two extremes, as it can quickly recover data (typically in under 30 minutes) to any point in time and at a price that is competitive with backup software.
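The mechanism behind “any point in time” is a write journal. Here’s a toy sketch of the idea–my own illustration, not any vendor’s product, and real CDP journals at the block or I/O level rather than on key/value pairs:

```python
# Toy CDP write journal: every write is logged with a timestamp, so the
# data set can be reconstructed as of any moment. Illustrative only.

class CdpJournal:
    def __init__(self):
        self.log = []                      # (timestamp, key, value) entries

    def write(self, ts, key, value):
        # Writes are never overwritten in place; they are appended.
        self.log.append((ts, key, value))

    def recover(self, as_of):
        """Replay the journal up to `as_of` to rebuild that point in time."""
        state = {}
        for ts, key, value in self.log:
            if ts <= as_of:
                state[key] = value
        return state

j = CdpJournal()
j.write(100, "row1", "v1")
j.write(200, "row1", "corrupted!")         # say an application error hit at t=200
print(j.recover(as_of=150))                # pre-corruption state: {'row1': 'v1'}
```

Recovery granularity is whatever the journal captured, which is why CDP can roll back to the instant before a corruption rather than to last night’s backup window; the cost is that the journal grows with every write, hence the short retention periods discussed below.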

CDP also complements deduplication. While some may view CDP and deduplication as competing technologies (and in some respects they are), the real goal of data protection is data recovery.

This is where CDP and deduplication part ways. CDP captures all changes to data but keeps the data for shorter periods of time, typically 3 to 30 days, to minimize data stores. Deduplication’s primary objective is data reduction, not data recovery. Faster recoveries may be a byproduct of deduplication since the data is kept on disk but it is not the focus of deduplication so recoveries from deduplicated data do not approach the granularity that CDP provides.

So what’s in store for CDP in 2008? The staying power of a new data protection technology is now largely determined by whether it is adopted by small and midsize businesses. If it’s practical and works there, it will find its way into the enterprise, because more and more enterprises work as a conglomeration of small businesses despite corporate consolidations. So, it is not a matter of if CDP will gain momentum in 2008; it is a question of how quickly it will become the predominant technology that companies use to protect all of their application data.

January 8, 2008  9:55 AM

EMC overshoots SMBs again

Dave Raffo

If EMC set out to improve its Clariion AX150, then it succeeded with the AX4 it launched today. But if it wants to offer a system optimized for SMBs, then it still has some work to do.

The AX4 follows the blueprint that storage companies used when they first started going after SMBs a few years back: they took their larger SAN systems and scaled them down in size and features. That didn’t work, and none of the large SAN vendors has made much of a dent in the vast SMB market. EMC and its partner Dell are now in their third generation of AX systems without much to show for it.

Meanwhile, EMC’s competitors Hewlett-Packard, Network Appliance, Hitachi Data Systems, and even Dell with its PowerVault MD3000 have delivered storage systems designed from the ground up for SMBs. And they cost less than the $8,000-plus price tag EMC puts on the AX4.

EMC counters that it delivers more capacity and technology for the money with the AX4. But SMBs want simplicity; do they really care about a Fibre Channel option or SAS/SATA intermix? Those features are aimed at EMC customers who want a storage system for a department or remote office that is compatible with their larger Clariions, not SMBs looking to network their storage for the first time.

Dell is more realistic with its positioning of the new system, which it calls the AX4-5. Dell product manager Eric Cannell says the PowerVault MD3000 is for small businesses and the AX4-5 for larger SMBs, and “can scale up to the bottom of what you would consider a midrange array.” Dell also prices the system at $13,858 and up, clearly not a true SMB price point.

But the new system puts Dell in a sticky situation with its EqualLogic acquisition. Cannell declined to talk about where all of Dell’s iSCSI products fit until the EqualLogic deal closes, but customers are likely confused. If the AX4-5 is so good, why did Dell spend $1.4 billion on EqualLogic?

In the long run, Dell has more at stake than EMC. When you sell the most systems with six-figure and even seven-figure price tags as EMC does, it doesn’t hurt much to lose out on $8,000 deals. But SMBs are Dell’s main line of business, and it’s crucial for Dell to get it right.
