Storage Soup


January 16, 2008  11:05 AM

Storage and VMware walk virtual tightropes

Beth Pariseau

It all started with a pretty run-of-the-mill partnership agreement. Last week, FalconStor announced that its storage virtualization, snapshot and replication software will support Virtual Iron’s virtual servers, and my SearchServerVirtualization.com colleague Alex Barrett and I agreed to take briefings.

FalconStor is walking a tightrope here, because it’s also partnered with Virtual Iron’s much larger rival, VMware. But FalconStor has to come up with reasons to use Virtual Iron over VMware (i.e., ways to promote that partnership). This led Alex into an interesting conversation with FalconStor’s vice president of business development, Bernie Wu, about the pros and cons of virtualizing storage with VMware vs. Virtual Iron. Wu pointed out what he’d later reprise in a separate call with me: that the use case for FalconStor’s IPStor storage virtualization software is in many ways stronger with Virtual Iron, because Virtual Iron, unlike VMware, doesn’t have its own file system.

As Burton Group senior analyst Chris Wolf patiently explained to me later, VMware’s file system means that its hypervisor (the software layer that controls the host server, guest OSes, and their interaction with the rest of the network) is handling virtual hard disk mapping on back-end storage systems. You can use VMware with raw device mapping (RDM), but then you turn off many of the features VMware users have come to like about VMware, such as VMotion. (RDM also has a slightly limited “virtual mode” as of 3.0, but that’s a tangential discussion.) This makes virtual hard disk mapping performed by storage virtualization products, whether appliances or software, at least somewhat redundant.
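To make that overlap concrete, here’s a toy model of the two mapping layers. This is purely my own illustration, not VMware or FalconStor code, and every name in it is made up:

```python
# Toy model of why two mapping layers overlap: both the hypervisor's file
# system and a storage virtualization layer translate "virtual disk block"
# into "physical block somewhere on the back end".

class HypervisorFileSystem:
    """Hypervisor-side mapping: virtual disk offsets -> file extents on a LUN."""
    def __init__(self, lun):
        self.lun = lun

    def map_block(self, vmdk_offset):
        # The hypervisor's file system decides where the virtual disk
        # lives on the LUN (pretend the file starts at offset 4096).
        return (self.lun, vmdk_offset + 4096)

class StorageVirtualizationLayer:
    """Array-side mapping: presented LUN offsets -> back-end physical disks."""
    def __init__(self, backend_disks):
        self.backend_disks = backend_disks

    def map_block(self, lun, offset):
        disk = self.backend_disks[offset % len(self.backend_disks)]
        return (disk, offset)

# A guest write passes through BOTH translations -- the redundancy at issue:
vmfs = HypervisorFileSystem(lun="LUN0")
virt = StorageVirtualizationLayer(backend_disks=["diskA", "diskB"])
lun, off = vmfs.map_block(vmdk_offset=8192)
print(virt.map_block(lun, off))   # ('diskA', 12288)
```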

So I asked Wu, “What are users missing out on if they can’t use your storage virtualization software with VMware?” His first answer was large-scale data migrations.

Up until VMware’s Virtual Infrastructure 3.5, VMware had no ability to move the data it managed in its virtual hard disks on back-end storage; hence storage virtualization devices stepped in to fill the gap. With Storage VMotion in 3.5, that gap was at least partially closed. Storage VMotion is still a difficult way to do a large-scale migration, however, because it migrates data one host at a time. So storage virtualization devices, which perform migrations en masse, still have that advantage. At least, until and unless Storage VMotion adds that capability.

Aside from large-scale migrations, Wu also told me that thin provisioning is another capability IPStor offers that VMware doesn’t. That’s a big deal–VMware’s best practices recommend that users allot twice the amount of disk space they actually plan to write to; the ability to expand capacity on the fly helps everyone avoid buying 2x the amount of storage they need.
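To picture what thin provisioning buys you, here’s a minimal sketch of the idea. It assumes nothing about how IPStor actually implements it; the point is just that physical space is consumed on write, not on allocation:

```python
# Thin provisioning in miniature: present a large virtual capacity,
# but allocate physical blocks only when data is first written.

class ThinVolume:
    BLOCK = 4096  # bytes per block

    def __init__(self, virtual_size):
        self.virtual_size = virtual_size   # capacity the host sees
        self.allocated = {}                # block index -> data, filled on demand

    def write(self, offset, data):
        block = offset // self.BLOCK
        self.allocated[block] = data       # physical space is consumed only here

    def physical_bytes_used(self):
        return len(self.allocated) * self.BLOCK

vol = ThinVolume(virtual_size=2 * 1024**4)   # host sees 2 TB
vol.write(0, b"hello")
# One block allocated, not 2 TB -- the "avoid buying 2x the storage" effect:
print(vol.physical_bytes_used())             # 4096
```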

The Burton Group’s Wolf pointed out plenty more gaps in VMware’s storage capabilities: heterogeneous array support; heterogeneous multiprotocol support (Storage VMotion doesn’t support iSCSI yet); I/O caching; and heterogeneous replication support.

Some of these gaps will likely be filled by VMware or the storage industry. For instance, when it comes to multiprotocol support, VMware’s MO with new features has always been to support Fibre Channel first and get around to iSCSI soon after. And what happens to the need for heterogeneous multiprotocol support if FCoE ever takes off? What of I/O caching, when and if everybody’s working with 10 Gigabit Ethernet pipes? And VMware is launching its own management software for heterogeneous replication support (even if it’s not doing the replication itself).

So it seems that storage virtualization players will have to start coming up with more value-adds for VMware environments as time goes on.

VMware has its own tightrope to walk, too. Take replication, for example: VMware supports replication from its partners, saying it doesn’t want to reinvent the wheel. But that’s the kind of thing it said when users were asking for Storage VMotion back in 2006, too.

“Deep down, I believe that VMware isn’t going to push its partners out,” Wolf said. And indeed, VMware did make a good-faith gesture last fall with the announcement of a certification program for storage virtualization partners. Wolf also pointed out, “A lot of organizations are afraid to deploy something in the data path unless their hardware vendors will support and certify it–without the support of their partners, VMware would have a tough time playing ball.”

But that might not be as true now as it was back when EMC first bought VMware in 2003 and everybody in the storage world scratched their heads and wondered why. Now, VMware has its own muscles to flex, as its billion-dollar 2007 IPO for 10 percent of the company proved.

More and more analysts are telling me that the hypervisor will become the data center operating system of the future. Over in the server virtualization world, Wolf says VMware competitors argue that the hypervisor is a commodity, and VMware says it isn’t. “In order to keep the hypervisor from becoming commoditized, they have to keep adding new features,” he said.

Which suggests to me that storage virtualization vendors should probably be working on new features, too.

January 11, 2008  10:18 AM

The eBay effect on storage

Tory Skyers

Have you ever heard of the “butterfly effect”? In essence, it is a way to conceptualize the possibility that the flapping of butterfly wings in the Amazon jungle can be the catalyst for a hurricane in Texas.

I think–now, mind you, this is just a thought–the same is going to occur on eBay with storage.

All the innovation currently going on in the enterprise storage arena–10 Gigabit Ethernet (10 GbE) and iSCSI come to mind — is going to be a catalyst that makes businesses retire storage technology faster than usual, filling the secondary market with great usable stuff.

So if you’re a budding storage geek, or an established one looking for the next challenge, eBay is going to become the place to shop. It is not for the faint of heart–that array you spent $200,000 on? Well, $1000, “Buy It Now” and nominal shipping may be able to snag it!! (A slight exaggeration, but I DID find a StorageTek Flex D280 for sale for $2000, with disks!!)

The butterfly here is progress. There will be a tropical storm when these folks enter the workforce, or bring to work the ideas they’ve tried out at home without limits on uptime or intervention from management. This will cause a fresh round of grassroots innovation from people who can tinker untrammeled.

The Linksys WRT54GL is a great example of how a group of tinkerers can influence a large company. Check out DD-WRT. Ever wonder why so many wireless routers come with USB or eSATA ports on them nowadays? That started with some hackers wanting to add storage.

eBay has allowed me to build what I otherwise couldn’t find a solid business justification for, or create an ROI schedule around, at work. I have the ability to test various scenarios and provide services to my toughest IT customers: my wife and my 9-year-old son! Not to mention I have a really geeky conversation piece.

Using myself as an example, I’ve been able to build three different versions of a home SAN using technology I purchased from eBay. The first was 1 Gbps Fibre Channel from Compaq. I picked up a disk shelf for $29, plus $40 for shipping. The disks cost a whopping $100 for 14, shipping included. The Fibre Channel cards were $3 a pop. From that, I learned that proprietary SAN technology stinks and open is the only way to go, so … I started the second iteration, which cost a bit more to build.

The second iteration of my SAN was a bit more of an investment in both time and money, but a tenth the cost of buying new. I bought a new Areca eight-port SATA RAID controller with onboard RAM (the only new part in my SAN). I plugged it into a dual-Opteron motherboard with guts from eBay as well, and bought a lot of 10 (two for spares) 250 GB SATA drives. The drives were the deal of a lifetime: for just about $300 I got 2 terabytes of storage!! Apparently they came out of a Promise array and the seller had upgraded to 500 GB drives.

At the time, I didn’t have any Gigabit Ethernet ports, so I opted to buy used 2 Gbps Fibre Channel cards and a used Fibre Channel switch. This was a bit more costly than the first SAN, so I put some of my old goodies up on eBay (!) to foot the bill.

The third iteration is the one I’m currently constructing. The second SAN, or Second of Many as I like to call it (an ode to Star Trek: Voyager), is still “in production” and servicing my VMware and file-sharing needs, but I felt the need to make it more modular. So far, I’ve gotten an unmanaged Gig-E switch, first-generation TOE (TCP Offload Engine) NICs, the controller “head” and a couple of SAS disks. I’m making the switch from a SATA controller to a SAS controller to allow for mixing and matching speed and capacity on the same bus. I’ve sold some of my Fibre Channel gear and am going to try iSCSI this time.

The hurricane blows when progress has put 8 Gbps Fibre Channel and 10 Gigabit Ethernet in the data center en masse. This will push managed Gig-E components and 4 Gbps Fibre Channel components out into the secondary market, and make folks like me and value-conscious SMB buyers VERY happy!

How long this is going to take to appear I’m not so sure, but if this year is truly the year of iSCSI, I would suggest you open the Web filters to allow a little eBaying at lunch time. Type SAN into eBay and see what you come up with.


January 10, 2008  9:33 AM

What’s up with CDP for 2008

Maggie Wright

Some analysts touted CDP as the dark horse technology for corporate adoption in 2007. As we all know, that didn’t occur, and the multitude of CDP technologies ended up confusing analysts, press and IT alike as they each tried to sort out the differences between available CDP products and what CDP’s true value proposition was. All of these factors contributed to spoiling CDP’s debut.

However, I anticipate CDP will make a comeback in 2008 for two reasons: corporate needs for data replication and higher availability. Data replication has been around for a long time (only recently under the moniker of continuous), so it is a mature technology and well understood by storage professionals in the field.

“Higher availability” is the more important feature of CDP. Companies now must choose between high availability and semi-availability. High availability is associated with synchronous replication software and provides application recoveries in seconds or minutes, but at an extremely high cost. At the other extreme is backup software, which only delivers semi-availability, so it can take hours, days or even weeks to recover data. CDP delivers higher availability, an acceptable compromise between these two extremes: it can quickly recover data (typically in under 30 minutes) to any point in time, at a price that is competitive with backup software.
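For the curious, here’s a rough sketch of the journaling idea behind that any-point-in-time claim. It’s my own illustration, not any vendor’s design:

```python
# CDP in miniature: every write is journaled with a timestamp, and
# recovery replays the journal up to the chosen moment in time.

from bisect import bisect_right

class CDPJournal:
    def __init__(self):
        self.entries = []   # (timestamp, block, data), appended in time order

    def record_write(self, ts, block, data):
        self.entries.append((ts, block, data))

    def recover_to(self, ts):
        """Rebuild the volume image as it looked at time ts."""
        image = {}
        idx = bisect_right(self.entries, (ts, float("inf"), b""))
        for _, block, data in self.entries[:idx]:
            image[block] = data   # later writes to a block win
        return image

j = CDPJournal()
j.record_write(100, block=1, data=b"v1")
j.record_write(200, block=1, data=b"v2")   # say a corruption hits at t=200
print(j.recover_to(150))                   # {1: b'v1'} -- the pre-corruption state
```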

CDP also complements deduplication. While some may view CDP and deduplication as competing technologies (and in some respects they are), the real goal of data protection is data recovery.

This is where CDP and deduplication part ways. CDP captures all changes to data but keeps the data for shorter periods of time, typically 3 to 30 days, to minimize data stores. Deduplication’s primary objective is data reduction, not data recovery. Faster recoveries may be a byproduct of deduplication since the data is kept on disk but it is not the focus of deduplication so recoveries from deduplicated data do not approach the granularity that CDP provides.
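Here’s a bare-bones illustration of deduplication’s data-reduction goal — again my own sketch, not how any particular product works:

```python
# Dedup in miniature: store each unique chunk once, keyed by its hash,
# and keep only references (a "recipe") for each backup stream.

import hashlib

class DedupStore:
    def __init__(self):
        self.chunks = {}    # hash -> chunk bytes, stored once

    def put(self, data, chunk_size=6):
        refs = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            h = hashlib.sha1(chunk).hexdigest()
            self.chunks.setdefault(h, chunk)   # duplicate chunks cost nothing
            refs.append(h)
        return refs                            # recipe to rebuild the data

store = DedupStore()
refs = store.put(b"backup" * 3)                # three identical chunks
print(len(refs), len(store.chunks))            # 3 references, 1 stored chunk
```

Recovery means reassembling the data from those references, which is exactly why recoveries from deduplicated stores don’t approach the point-in-time granularity CDP provides.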

So what’s in store for CDP in 2008? The staying power of a new data protection technology is now largely determined by whether it is adopted by small and midsize businesses. If it’s practical and works there, it will find its way into enterprises, because more and more enterprises work as a conglomeration of small businesses despite corporate consolidations. So it is not a matter of if CDP will gain momentum in 2008; it is a question of how quickly it will become the predominant technology that companies use to protect all of their application data.


January 8, 2008  9:55 AM

EMC overshoots SMBs again

Dave Raffo

If EMC set out to improve its Clariion AX150, then it succeeded with the AX4 it launched today. But if it wants to offer a system optimized for SMBs, then it still has some work to do.

The AX4 follows the blueprint that storage companies used when they first started going after SMBs a few years back: they took their larger SAN systems and scaled them down in size and features. That didn’t work, and none of the large SAN vendors has made much of a dent in the vast SMB market. EMC and its partner Dell are now on their third generation of AX systems without much to show for it.

Meanwhile, EMC’s competitors Hewlett-Packard, Network Appliance, Hitachi Data Systems, and even Dell with its PowerVault MD3000 have delivered storage systems designed from the ground up for SMBs. And they cost less than the $8,000-plus price tag EMC puts on the AX4.

EMC counters that it delivers more capacity and technology for the money with the AX4. But SMBs want simplicity; do they really care about a Fibre Channel option or SAS/SATA intermix? Those features are aimed at EMC customers who want a storage system for a department or remote office that is compatible with their larger Clariions, not SMBs looking to network their storage for the first time.

Dell is more realistic with its positioning of the new system, which it calls the AX4-5. Dell product manager Eric Cannell says the PowerVault MD3000 is for small businesses, while the AX4-5 is for larger SMBs and “can scale up to the bottom of what you would consider a midrange array.” Dell also prices the system at $13,858 and up, clearly not a true SMB price point.

But the new system puts Dell in a sticky situation with its EqualLogic acquisition. Cannell declined to talk about where all of Dell’s iSCSI products fit until the EqualLogic deal closes, but customers are likely confused. If the AX4-5 is so good, why did Dell spend $1.4 billion on EqualLogic?

In the long run, Dell has more at stake than EMC. When you sell the most systems with six-figure and even seven-figure price tags as EMC does, it doesn’t hurt much to lose out on $8,000 deals. But SMBs are Dell’s main line of business, and it’s crucial for Dell to get it right.


January 4, 2008  4:04 PM

Storage analyst goes to the Dell Side

Beth Pariseau

The industry’s been abuzz this week with rumors that ESG analyst Tony Asaro is headed to Dell, and today Asaro confirmed that’s the plan. He’s going to join Dell as a director of product marketing; today is his last official day as an analyst.

Asaro said his new role will be creating Dell’s storage strategy and evangelizing its storage products (some would say this is the role of analysts in the market today, anyway). When asked why he’s leaving his analyst gig, Asaro said he’s excited by the position iSCSI is taking in the market and by Dell’s direction following the $1.4 billion acquisition of EqualLogic in November. In other words, a boilerplate answer.

In fairness, Asaro has focused on iSCSI during his time as an analyst and has been bullish about that market’s future. Maybe he didn’t want to sit on the sidelines anymore, watching money roll in elsewhere. In that way, it’s refreshing to see an analyst put his career where his predictions are.

However, he’ll need to be careful to avoid the fate of another former analyst, Randy Kerns, who left the Evaluator Group to become a vice president of strategy and planning at Sun in September 2005, shortly after Sun completed a blockbuster acquisition of its own. Less than a year later, he left Sun, resurfacing in October 2006 as CTO of ProStor Systems.

Still, this news, along with Dell’s acquisition of The Networked Storage Co. in December, will be welcome to EqualLogic users concerned with customer support in the wake of the acquisition. Folding in added storage expertise shows Dell’s at least trying to make the right moves.


January 4, 2008  3:34 PM

A cluster of clusters to begin the year

Beth Pariseau

Storage vendors are always looking for the next big thing, and they bang the drums loudly when they think they’ve found it — often long before customers are ready to buy. Now they are making a lot of noise around clustered systems, particularly when it comes to selling to new types of businesses such as Web-based merchants and service providers.

Sun, EMC and IBM, which have mostly slideware at this point, have all disclosed intentions to tackle this space over the past month or so. IBM backed up the talk by acquiring grid storage system vendor XIV this week in a deal reportedly worth $300 million to $350 million.

But it’s still early in the cluster game, as Isilon painfully found out. Isilon went public on the strength of its success in the clustered NAS market, but that market apparently isn’t as big as Isilon’s execs and investors expected. Its first year as a public company was marked by disappointing revenues and a plummeting stock price.

NetApp, meanwhile, is finding it hard to cluster its traditional non-clustered NAS. More than four years after it acquired cluster technology from Spinnaker, NetApp hasn’t had great success with its Data ONTAP GX product. Users report that many features they’ve come to expect from NetApp aren’t working yet with GX. NetApp positions its clustered product mainly for the small high-performance computing (HPC) market at this stage. Others, such as startup Panasas, also sell clusters mainly to HPC customers.

So it will be interesting to see how much success IBM has with XIV’s Nextra systems, and what EMC and Sun come up with. 

There is one Web 2.0 company that has successfully deployed a highly parallel compute farm in massive-scale production, but it developed the technology in-house. That is Google, which built the Googleplex with nary a single storage vendor’s salesman present. But Google’s system isn’t for sale, much to the relief of storage vendors but not their would-be customers.


January 3, 2008  10:53 AM

2008 recommendations for deduplication, encryption and VMware

Maggie Wright

As 2007 draws to a close, there are three technologies that appear near the top of many storage managers’ priority lists going into 2008.

· Deduplication

· Tape encryption

· VMware

The mix of old and new technologies is intriguing. One would think that as deduplication and VMware rise in importance, more companies would start to abandon storing data on tape. Yet that does not appear to be the case. Symantec’s director of product marketing, Marty Ward, recently told me that the new encryption capability in NetBackup 6.5 is its #2 most sought-after feature (deduplication is #1).

Don’t rush into a deduplication purchase decision. I have yet to talk to a user who doesn’t report faster backup times, and ensuing reductions in data stores, from a deduplicating backup appliance or backup software. However, I sense that users are rushing into purchasing decisions and not stepping back to look at what other options they have available.

ExaGrid Systems’ CEO, Bill Andrews, told me this past week that in 50% of its customer deals, the company sees no competition. I suspect this percentage holds true for Data Domain and Quantum as well. But storage managers should avoid rushing out and buying a deduplicating product to solve their backup problems. Taking just a few extra days to check out what other products are available, how each product adds capacity and performance, and how viable the company behind the product is can save you some management headaches.

The big cautionary note with tape encryption is to verify how encryption keys are created and managed, and I recommend using a third-party appliance to create and manage the encryption keys. Though these appliances can encrypt data themselves, more are starting to work in conjunction with backup software and tape drives to provide just the encryption keys. When companies encrypt data stored to tape, most are hoping they never need to access the data again. So managers need to think in terms of how best to manage the recovery of data in five years, not five days. Encryption appliances create highly secure encryption keys, manage the keys long-term, and give companies assurance that they can still produce the encryption keys and recover the data years later.
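As a simplified sketch of the bookkeeping discipline involved (not any product’s API; the function and field names are my own), key creation boils down to generating strong key material and keeping a durable escrow record that outlives any one backup application:

```python
# Hypothetical key-escrow bookkeeping: generate a strong key and a
# metadata record so the tape can still be decrypted years from now.

import json, secrets, uuid
from datetime import datetime, timedelta

def create_tape_key(tape_barcode, retention_years=5):
    key = secrets.token_bytes(32)              # 256-bit key material
    now = datetime.utcnow()
    record = {
        "key_id": str(uuid.uuid4()),           # goes in the tape header, not the key itself
        "tape": tape_barcode,
        "created": now.isoformat(),
        "expires": (now + timedelta(days=365 * retention_years)).isoformat(),
    }
    # In a real appliance the key is wrapped with a master key and kept in a
    # replicated database; here we only show the long-lived bookkeeping.
    return key, record

key, record = create_tape_key("TAPE0042")
print(json.dumps(record, indent=2))
```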

Companies also need to account for the very real storage problems that server virtualization creates. One of the best things you can do in 2008 to prevent VMware from negatively impacting your environment is to change the way you back up VMware virtual machines (VMs). One approach is to use the latest versions of backup software that support the VMware Consolidated Backup (VCB) framework, which backs up the VMDK files that hold the VMs’ data from outside the guests. The other is to install a lightweight host-based CDP or dedupe agent on each VM. Either way, you eliminate the overhead that traditional backup software agents introduce on each VM. I recommend using CDP: if you are going to change your backup approach anyway, choose the one that gives you the most granular recovery options.


December 31, 2007  11:01 AM

IBM to invest in grid storage

Beth Pariseau

According to a report from Israeli news source Globes, IBM is set to pay between $300 million and $350 million for XIV, an Israeli startup that is still in stealth mode and reportedly specializes in grid storage. According to the Globes report:

Since inception only $3 million dollars have been invested in the company, which came from chairman Moshe Yanai, formerly of storage solutions giant EMC, and private investors.

EMC, meanwhile, has its own plans to release a grid storage system next year, according to announcements made at its Innovation Day in Boston in November.


December 21, 2007  12:25 PM

GlassHouse ready for IPO

Dave Raffo

You can count GlassHouse Technologies among the companies expecting storage spending to increase – or at least hold steady – in 2008. The storage consulting firm filed for an IPO this week, which means it plans to go public early next year, at a time when many large storage and IT vendors are cautious because of a perceived spending slowdown.

GlassHouse said it hopes to raise $100 million, and it will likely be the first or second storage company to go public in 2008. (NAS vendor BlueArc filed for an IPO in September, but has yet to price its shares and complete the offering.)


And GlassHouse needs spending to increase in order to make it as a public company. As it points out in its SEC filing, it lost $9.6 million last year and $69.9 million since its inception in 2001, and it expects the losses to continue. So why go public now? Partly because it can use the $100 million for acquisitions and to keep its business growing, but also because it is well respected in storage circles and has steadily increased revenue. GlassHouse sees a rosy long-term future for IT and storage consultants, revolving around data protection, virtualization and green data centers. According to its S-1 filing:

  • Storage/Data Protection: These services help customers plan, integrate and manage their physical data storage and data protection technologies. According to Gartner, this market is predicted to grow from $24 billion in 2006 to $34 billion by 2011.
  • Virtualization: These services help customers plan, integrate and manage their virtualized environments. IDC forecasts that the consulting and systems integration segments of this market will grow from $1.2 billion in 2006 to $5.2 billion in 2011, a compound annual growth rate (CAGR) of 33%.
  • “Green” Data Centers: These services help customers plan, migrate and manage their data centers to reduce power needs, thereby decreasing the cost to operate their data centers. We believe this market will grow rapidly, as companies seek to reduce their energy costs. According to Gartner, “more than 70% of the world’s Global 1000 organizations will have to modify their data center facilities significantly during the next five years.”

Others are bullish on the need for storage consultants, too. Dell today said it is acquiring U.K.-based The Networked Storage Co., an IT consultant that – as you can guess from the name – specializes in networked storage.

Also today, venture capital buyout firm Garnett & Helfrich Capital purchased MTI Europe from the bankrupt MTI Technology Corp. The private equity firm will rebrand MTI Europe as MTI, and an MTI spokesperson said the company will offer its consulting services in the United States.


December 21, 2007  10:25 AM

Buffalo unleashes 100 GB Flash drive

Beth Pariseau

Even my friends who don’t normally follow the storage business are atwitter over an Engadget report that Buffalo has unleashed a 100 GB behemoth flash drive upon the world. Geeks everywhere are probably salivating to take the thing apart (yes, I’m looking at you, Tory) … unfortunately, they’ll have to wait. The catch is that Buffalo is only releasing the product for now in its home country of Japan.

According to company reps, the $1,000 asking price for the credit-card sized USB accessory makes it less than cost-effective to import right now. (If you just can’t get enough flash memory, there are 64 GB monsters roaming North America.)

The Engadget comments section also contains an interesting discussion of the merits of such a large flash drive. In the Engadget screenshot the card looks enormous, but the post says it’s about the size of a credit card. Still, it launched a spirited discussion that asks some pertinent questions, namely: would it not be more practical to just buy a $300 travel drive?

At this juncture, and at this price point, certainly. But Moore’s law waits for no man, and the price of a 100 GB card will come down. Hence the other questions this announcement raises: at what capacity and price point does a mechanical drive become more practical than a solid-state drive? How will that equation change over time? It’s something we in the storage market are going to have to examine more closely in the coming year.
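Some back-of-the-envelope math on that crossover question, using the $1,000-per-100 GB figure above and a roughly $1/GB travel drive; the annual decline rates are purely illustrative assumptions, not forecasts:

```python
# Toy crossover calculation: if flash $/GB falls faster than disk $/GB,
# in what year does flash undercut the mechanical drive?

flash_per_gb = 10.0   # ~$1,000 for 100 GB (the Buffalo drive)
disk_per_gb = 1.0     # ~$300 for a ~300 GB travel drive
year = 2008

while flash_per_gb > disk_per_gb:
    flash_per_gb *= 0.6   # assume flash price drops 40% per year
    disk_per_gb *= 0.8    # assume disk price drops 20% per year
    year += 1

print(year, round(flash_per_gb, 2), round(disk_per_gb, 2))
# With these assumed rates, flash crosses under disk around 2017.
```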

