Storage Soup

A SearchStorage.com blog.


July 3, 2008  8:59 AM

ILM seems to be MIA, but why?



Posted by: Tskyers
Data storage management

I don’t see the terms ILM or Data Lifecycle Management mentioned much anymore on the Interwebs. Odd that, with this many regulatory pressures bearing down on us, the one thing that could actually save us some money, time and stress in dealing with them hasn’t made headlines in at least a year, maybe two.

Where did ILM go? Did it morph into something else? Based on all the coverage it was getting two years ago, you’d think we would have progressed to hardware-based products supporting ILM initiatives by now. Yet the storage hardware vendors, with the most noticeable exception being Compellent, still haven’t added ILM features to their hardware. Storage software vendors, like Veritas, IBM Tivoli, CommVault, et al., still offer ILM features for their software suites, but today it’s dedupe that has the spotlight. And I’m not so sure I see the reason why.

I understand the marketing behind dedupe: “Hard economic times are ahead, so save money and don’t buy as much disk.” But if you look at the sales figures from the leading storage vendors, they are all meeting their sales estimates, and in some cases exceeding those estimates by a good margin, so businesses apparently haven’t yet gotten into the whole “save money” or the “buy less” aspect of that marketing push.

Managing one’s data seems to me the better way to spend that money: if you know when to move data to cheap disk, then to commodity tape, and finally on to destruction, you free up capacity on fast, expensive disk and reduce the effort needed to satisfy policy pressures. I distinctly remember, eons ago, sitting in a conference hall and listening to Curtis Preston for the first time, and this topic was the thrust of his talk: Manage your data, figure out where it should live and put it there.

This message holds true now more than ever. Just think: three or four years ago, 250 GB drives were the largest SATA drives certified for storage arrays. Now, with 750 GB to 1 TB in each slot, we have even more of a need to know when data was created and when it needs to be archived or destroyed. With SSDs rapidly making their way into storage arrays, data management and the data movement that follows from it become crucial cost-saving tools.
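To make that concrete, here is a minimal sketch of the kind of age-based placement rule I have in mind. It’s written in Python, and the tier names, retention thresholds and destruction window are purely hypothetical examples of my own, not any vendor’s actual ILM engine:

```python
from datetime import datetime, timedelta

# Hypothetical age-based ILM policy: tier names and thresholds are
# illustrative only, chosen to show the shape of the idea.
POLICY = [
    (timedelta(days=90),   "fast_fc_disk"),    # hot data stays on fast, expensive disk
    (timedelta(days=365),  "cheap_sata_disk"), # cooling data moves to big SATA drives
    (timedelta(days=2555), "tape_archive"),    # ~7 years on commodity tape for compliance
]

def placement_for(created: datetime, now: datetime) -> str:
    """Return the tier a piece of data belongs on, or 'destroy' once it
    has outlived every retention window in the policy."""
    age = now - created
    for max_age, tier in POLICY:
        if age <= max_age:
            return tier
    return "destroy"

print(placement_for(datetime(2008, 6, 1), datetime(2008, 7, 3)))  # fast_fc_disk
print(placement_for(datetime(2000, 7, 3), datetime(2008, 7, 3)))  # destroy
```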

The part about all this that baffles me the most is liability. You’d think that if you were going to be legally liable to either hold onto or destroy files or information, you’d probably want an automated, “people-resistant” system in place to handle all that. At another recent TechTarget event, this one on DR, Jon Toigo talked about building a data map and knowing how valuable your data is in order to best protect the most valuable data. Sounds like a straightforward, common-sense approach, but as far as I know only one vendor is doing it in hardware, and most of the software vendors have gone quiet with their marketing behind it.

The term Information Lifecycle Management conjures up thoughts of managed data, orderly storage environments, documented processes, and responsible governance for me. All these things I’ve seen brought up in blogs (some of my own included) and articles, expressed as concerns for businesses large and small. So why has ILM gone underground?

July 2, 2008  10:10 AM

More server virtualization benchmark drama



Posted by: Beth Pariseau
Storage and server virtualization

Last time on As the Benchmark Turns, we saw NetApp pull a fast one on EMC. This week’s episode brings us into the torrid realm of server virtualization and Hyper-V.

It began with a press release from QLogic and Microsoft lauding benchmark results showing “near-native throughput” on Hyper-V hosts attached to storage via Fibre Channel, to the tune of 200,000 IOPS.

Server virtualization analyst Chris Wolf of the Burton Group took the testing methodology to task on his blog:

The press release was careful to state the hypervisor and fibre channel HBA (QLogic 2500 Series 8Gb adapter), but failed to mention the back end storage configuration. I consider this to be an important omission. After some digging around, I was able to find the benchmark results here. If I was watching an Olympic event, this would be the moment where after thinking I witnessed an incredible athletic event, I learned that the athlete tested positive for steroids. Microsoft and QLogic didn’t take a fibre channel disk array and inject it with Stanzanol or rub it with “the clear,” but they did use solid state storage. The storage array used was a Texas Memory RamSan 325 FC storage array. The benchmark that resulted in nearly 200,000 IOPS, as you’ll see from the diagram, ran within 90% of native performance (180,000 IOPS). However, this benchmark used a completely unrealistic block size of 512 bytes (a block size of 8K or 16K would have been more realistic). The benchmark that resulted in close to native throughput (3% performance delta) yielded performance of 120,426 IOPS with an 8KB block size. No other virtualization vendors have published benchmarks using solid state storage, so the QLogic/Hyper-V benchmark, to me, really hasn’t proven anything.
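To put the block-size point in perspective, here’s my own back-of-the-envelope arithmetic (it isn’t part of either vendor’s benchmark report): throughput is just IOPS multiplied by block size, so a huge IOPS number at 512 bytes moves far less data than a smaller IOPS number at 8 KB.

```python
# Rough arithmetic on the figures quoted above: IOPS alone means little
# without the block size, since throughput = IOPS x block size.
def throughput_mbps(iops: int, block_bytes: int) -> float:
    """Approximate sustained throughput in MB/s for a given IOPS rate."""
    return iops * block_bytes / 1_000_000

# ~200,000 IOPS at a 512-byte block size is only about 100 MB/s...
print(throughput_mbps(200_000, 512))    # ~102.4
# ...while 120,426 IOPS at an 8 KB block size moves nearly ten times as much data.
print(throughput_mbps(120_426, 8_192))  # ~986.5
```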

I talked with QLogic’s VP of corporate marketing, Frank Berry, about this yesterday. He said that Wolf had misinterpreted the intention of the testing, which he said was only meant to show the performance of Hyper-V vs. a native Windows server deployment. “Storage performance wasn’t at issue,” he said. At least one of Wolf’s commenters pointed this out, too:

You want to demonstrate the speed of your devices, the (sic) you avoid any other bottlenecks: so you use RamSan. You want to show transitions to and from your VM do not matter, then you use a blocksize that uses a lot of transitions: 512 bytes…

But Wolf points to wording in the QLogic press release claiming the result “surpasses the existing benchmark results in the market,” which he says “implies that the Hyper-V/QLogic benchmark has outperformed a comparable VMware benchmark.” Furthermore, he adds:

Too many vendors provide benchmark results that involve running a single VM on a single physical host (I’m assuming that’s the case with the Microsoft/QLogic benchmark). I don’t think you’ll find a VMware benchmark published in the last couple of months that does not include scalability results. If you want to prove the performance of the hypervisor, you have to do so under a real workload. Benchmarking the performance of 1 VM on 1 host does not accurately reflect the scheduling work that the hypervisor needs to do, so to me this is not a true reflection of VM/hypervisor performance. Show me scalability up to 8 VMs and I’m a believer, since consolidation ratios of 8:1 to 12:1 have been pretty typical. When I see benchmarks that are completely absent of any type of real world workload, I’m going to bring attention to them.

And really, why didn’t QLogic mention the full configuration in first reporting the test results? For that matter, why not use regular disk during the testing, since that’s what most customers are going to be using?

On the other hand, QLogic and Microsoft would be far from the first to put their best testing results forward. But does anyone really base decisions around vendor-provided performance benchmarks anyway?


July 1, 2008  10:36 AM

QLogic boss says he’s too busy to quit



Posted by: Dave Raffo
SAN, Storage protocols (FC / iSCSI)

QLogic CEO H.K. Desai had planned to step down this year, but says there is too much going on in the storage industry for him to leave now.

That’s why his heir apparent left QLogic in March, and QLogic isn’t looking for a replacement. “I’ll be hanging around for a while,” Desai, who has been QLogic’s CEO since 1995 and its chairman since 1999, told me.

He wouldn’t have said that a year ago. QLogic hired Jeff Benck from IBM as COO and President in April 2007 with the understanding that he would replace Desai in a year. But Desai stayed put and Benck left QLogic – eventually signing on with its HBA rival Emulex as COO.

Desai says QLogic isn’t trying to replace Benck, because it doesn’t need to replace the CEO. “The whole plan was for us to have a long-term transition, and it didn’t work out,” Desai said. “I’ve been fully engaged the last few months and I expect to be fully engaged for a while. I won’t even talk to the board [about a COO] unless we plan a transition, which I don’t expect to do.”

Desai said his change of heart came because the storage networking world is so busy now. The industry is in a transition to 8-Gbps Fibre Channel with Fibre Channel over Ethernet (FCoE) and 16-gig FC coming up on its heels. QLogic is also trying to make inroads in InfiniBand. “There’s so much activity going on now, I don’t want to disturb anything,” he said. “Our team has been working together for so many years, and I don’t want to bring anybody in from the outside to disturb things. There’s a time to do a transition, and I don’t think this is the time.”

Of course, if Desai is waiting for a lull in emerging storage technologies before he retires, he just might work forever.


July 1, 2008  10:33 AM

Ibrix gets a new leader



Posted by: Beth Pariseau
Strategic storage vendors

Yesterday I met with Ibrix, a company I haven’t caught up with in a while. They seem to be doing well with movie production studios, and our meeting featured a screening of Wall-E, the latest picture from Ibrix customer Pixar.

Along the way, I was told that while his business cards still say VP of marketing and business development, Milan Shetti is actually the new president of Ibrix. The handover from former interim CEO Bernard Gilbert has happened over the last few days, according to Shetti. The CEO before Gilbert, Shaji John, remains chairman of the board.

“My gut is that if [Shetti] performs well over the next six months, he’ll end up in that CEO role,” Taneja Group analyst Arun Taneja said.

Generally, companies that undergo frequent reorganizations aren’t doing spectacularly. But Ibrix presented this second transition as a planned one, saying Gilbert had been focused on getting operations moving along, while Shetti has already become the public face of the company. Shetti said Gilbert planned on serving one year as CEO, and that term has been completed.

Or as StorageIO Group analyst Greg Schulz put it, “Ibrix has been shifting from development-focused to that of marketing, business development, partner/reseller/OEM recruitment and sales deployment execution.” In the past few years, Ibrix has signed channel deals with Dell and HP. Shetti played a key role in both deals, according to the company.

Shetti told me yesterday that Ibrix has been winning deals like the ones with Pixar and Disney because its clustered file system is software-only, and customers can choose their own hardware. Ibrix software can also be embedded within HP or Dell servers at the factory before shipment, so the customer doesn’t have to load software agents on every node.

Ibrix’s software is rumored to be shipping with EMC’s Hulk, but Shetti was mum on that subject.


June 27, 2008  10:50 AM

Brocade still a fan (not FAN) of software



Posted by: Dave Raffo
Data storage management

Although Brocade has a lot on its connectivity plate these days as it transitions to 8-Gbps FC switches, plots its move to FCoE and gets into the HBA game, it still has plans for its fledgling software business. Brocade’s FAN (file area network) initiative has been a bust so far, but at Brocade’s Tech Day Thursday, Max Riggsbee, CTO of the files business unit, laid out a roadmap for a refocused data management portfolio.

Nobody from Brocade used the FAN acronym, but file management remains a key piece of its software strategy, beginning with the recently released Files Management Engine (FME) product. FME is a policy engine that handles migration and virtualization of Windows files. Brocade will add SharePoint file services, disaster recovery for SharePoint and file servers, content-driven file migration and data deduplication in a series of updates through 2010.

Brocade has been fiddling with its files platform and overall software strategy for months now. It dumped its Branch File Manager WAFS product earlier this year, but kept StorageX file migration, and on Thursday it revealed plans for new replication products, including one that deduplicates and compresses files moved across the WAN.

While the new lineup looks impressive on paper, it will take time to play out. And Brocade is walking a tightrope between expanding its product line and treading on its storage system partners’ turf with file virtualization and replication. “We are interested to see how this is received by storage/server vendors,” Wachovia Capital Markets financial analyst Aaron Rakers wrote of the replication product in a note to clients.

Brocade execs say they will take great care to work with partners and avoid competing with them — something they say rival Cisco does with many of its products. Ian Whiting, Brocade’s VP of data center infrastructure, said the new products will be developed jointly with its major OEM partners. “Our business model is all around partnerships with bigger systems companies,” he said. “We believe that’s how customers will consume the technology.”

At least Brocade’s not calling its new files strategy FAN 2.0. That’s a good start.


June 26, 2008  1:53 PM

Litigation update: Sun 1, NetApp 0



Posted by: Beth Pariseau
Strategic storage vendors

According to a blog post published today by Sun’s general counsel Mike Dillon, at least one of the patent-infringement counts is off the table in court, after the US Patent Office (PTO) granted a re-examination request filed by Sun.

With regard to one NetApp patent, the ‘001 patent, the PTO has issued a first action rejecting all the claims of this patent. Based on the positive response we received from the PTO, we asked the trial court to stay a portion of the litigation. Obviously, it doesn’t make sense to go through the expense and time of litigating a patent in court if the PTO is likely to find it invalid. The court agreed with our request and at least one NetApp patent has thus far been removed from the litigation.

NetApp started all this by filing its ZFS lawsuit against Sun with great fanfare last September, but Sun has been the aggressor ever since. Sun countersued, accusing NetApp of violating Sun’s patents, and tacked on another lawsuit in March alleging patent infringement related to storage management technology NetApp acquired when it bought Onaro in January.

This is the first of the six reexamination requests filed by Sun to draw a response from the PTO; Dillon said Sun expects to hear more throughout the year.

NetApp refused comment on the latest developments and a survey of NetApp’s many executive blogs hasn’t turned up any further discussion, though some of Dave Hitz’s testimony is now being made available by Sun online.


June 25, 2008  4:01 PM

Nirvanix readying cloud-based NAS



Posted by: Beth Pariseau
data compliance and archiving, NAS, Storage managed service providers, Storage Software as a Service

Startup Nirvanix today unveiled CloudNAS, which will combine Nirvanix software agents with Linux or Windows servers at the customer site to offer standard NAS storage in the cloud. Until now, Nirvanix and most other cloud storage services such as Amazon’s S3 required API integration between applications and the cloud service.

Nirvanix has had three large companies alpha testing CloudNAS, and is now starting up an open beta program, according to chief marketing officer Jonathan Buckley. “CloudNAS can run on a laptop,” Buckley said. “We’re looking to bring dozens more companies into the mix.”

As far as I know, this is a first. Rackspace’s Mosso offers cloud-based NAS, but only for Web developers. Nirvanix says CloudNAS will use commonly available interfaces, including NFS on Red Hat Linux 5 or SUSE 10 and CIFS on Windows XP. Customers must provide the server hardware and hook it up to Nirvanix’s cloud. The company charges 25 cents per GB per month for its service, but the NAS software will be free. Customers will have the option of a $200 per month support contract.
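For a rough sense of what that pricing works out to, here’s a quick sketch using only the per-GB and support figures quoted above; the example capacities are my own, purely for illustration:

```python
# Back-of-the-envelope monthly cost for CloudNAS at the quoted rates:
# $0.25 per GB per month, plus an optional $200/month support contract.
def monthly_cost(gb_stored: float, with_support: bool = False) -> float:
    return gb_stored * 0.25 + (200 if with_support else 0)

print(monthly_cost(500))         # 500 GB: $125 per month
print(monthly_cost(2000, True))  # 2 TB plus support: $700 per month
```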

Just don’t look for CloudNAS to replace your production NAS boxes any time soon. Buckley said he expects the service will be used mainly for backup, archiving, and other applications that can tolerate latency. “We’re not going to restrict what people put on it, but cloud storage can never be as fast as local enterprise storage,” Buckley said.


June 25, 2008  12:17 PM

The enterprise and open-source storage



Posted by: Beth Pariseau
Data storage management, Strategic storage vendors

Jesse at SanGod wrote an interesting post the other day entitled “Enterprise storage vs….not.”

I have a cousin. Very well-to-do man, owns a company that does something with storing and providing stock data to other users. I don’t pretent do know the details of the business, but what I do know is that it’s storage and bandwidth intensive.

He’s building his infrastructure on a home-grown storage solution – Tyan motherboards, Areca SATA controllers, infiniband back-end, etc. Probably screaming fast but I don’t have any hard-numbers on what kind of performance he’s getting.

Now I understand people like me not wanting to invest a quarter-mil on “enterprise-class” storage, but why would someone who’se complete and total livelihood depends on their storage infrastructure rely on an open-source, unsupported architecture?

Jesse goes on to point out the resiliency and services benefits of shelling out enterprise bucks. His post sparked a conversation between me and an end user I know well, whose shops (in the two jobs I’ve followed him through) are as enterprise as they come. This guy knows his way around Symmetrix, Clariion and NetApp filers, and yet when it comes to the secondary disk storage and media servers he’s building for his beefy Symantec NetBackup environment, he’s going with Sun’s Thumper-based open-source storage.

Obviously it’s a little different from cobbling together the raw parts, and Sun offers support on this, so it’s kind of apples and oranges compared with what Jesse’s talking about. But I’ve also heard similar withering talk about Sun’s open storage in particular, and can only imagine Sun’s open-source push is making this topic timely.

This is the second person I’ve talked to from a big, booming enterprise shop who picked Thumper to support NetBackup.  The first, who had the idea more than a year ago, was a backup admin from a major telco I met at Symantec Vision.

Obviously it’s not mission-critical storage in the sense that Symmetrix or XP or USP are, but I’d venture to guess that for a backup admin, his “complete and total livelihood” does depend on this open-source storage. As for the reasons to deploy it instead of a NetApp SATA array or EMC Disk Library or Diligent VTL? Both users cited cost, and the one I talked to more recently had some pointed things to say about what enterprise-class support often really means (see also the Compellent customer I talked with last week, who found that the dollars he spent made him less appreciative of the support he got from EMC).

This ties in with a recent conversation I had with StorageMojo’s Robin Harris. He compares today’s clustered, massively parallel storage systems to the PC in the era of the minicomputer. When the PC arrived, the workstation market was dominated by minicomputer makers, the most famous being Digital. Minicomputers were proprietary, expensive and vertically integrated with applications by their vendors, much like today’s storage subsystems. Just as the PC introduced a low-cost, industry-standard workstation and the concept of a standardized OS, Harris predicts clustered NAS products built on lower-cost, industry-standard components will bring about a similar paradigm shift in enterprise storage.

While there will obviously remain use cases for all kinds of storage (after all, people still run mainframes), I suspect people are starting to think differently about what they’re willing to pay for storage subsystems in the enterprise, regardless of the support or capabilities they’d get for the extra cash. And I do think that on several fronts, whether open-source storage or clustered NAS, it is looking, as Harris put it, like the beginnings of a paradigm shift similar to those that have already happened with PCs and servers.

That’s not to say I think Sun will win out, though. For all Sun’s talk about the brave new world of open-source storage, I haven’t heard much emphasis placed on the secondary-storage use case for it. And that so far is the only type of enterprise deployment for Thumper I’ve come across in the real world.


June 23, 2008  3:48 PM

Yahoo, I hardly knew ye



Posted by: Beth Pariseau
Around the water cooler, software as a service

I’ve watched the story unfold about Microsoft and Yahoo, but from a removed perspective because it has little to do with the storage industry and when it comes to most things Web-based and search or email related, I’m a Google user. Still, it’s been a good story to sit back with some popcorn and watch develop.

Recently, though, it’s hit home a little more for me. First, I saw that the New York Times/AP reported that the co-founders of Flickr, a photo sharing service bought by Yahoo in 2005, have left the company. Then I found out that the founder of Del.icio.us is also leaving Yahoo–which was the first time I even realized Del.icio.us was a Yahoo property.

Now I wonder two things: 1) How many other staples of my Web 2.0 life are part of Yahoo without my knowing it? (One helpful resource for this question: TechCrunch has posted a big table to keep track of the Yahoo exodus.) 2) What’s going to happen to them?

It’s as close as I’ll ever come to the experience my enterprise storage audience must have regularly when dealing with the effects of mergers and acquisitions. Anxiety frequently accompanies these events, causing people to wonder how the user experience will change with the product, how support might change, how well might the company keep up with features…

It’s not like products can’t survive without their original innovators, and for the moment, Yahoo does still exist as we know it (though there’s speculation that won’t be the case for long). But I have seen in the storage industry how, after the guys who first built the machine in the garage leave the company, innovation diminishes and the company itself is more likely to move on to the next shiny object.

That’s what I’m afraid will happen now to Flickr and Del.icio.us, and then I’d have to face another nightmare common among enterprise folks: how to get my 8,000-plus photos and 2,000-plus bookmarks migrated over to another service.


June 23, 2008  10:36 AM

Symantec’s SwapDrive: $500 a year for 2 GB?



Posted by: Beth Pariseau
software as a service, Storage backup, Storage managed service providers, Storage Software as a Service, Strategic storage vendors

I wasn’t convinced at first when an alert blog reader flagged an error in my previous posts about Symantec and SwapDrive: a comment from  “kataar” pointed out that yearly, SwapDrive actually charges $500 (five hundred) for 2 GB, not $50 (fifty).

That couldn’t possibly be right, I thought. I clicked through to the site, saw the same price list, read down the column for individual users–ah! 2 GB, $50. I was all ready to post a reply when I went back and checked one more time, just to be sure. That’s when I noticed “Monthly” over the cost I was looking at. Under “Yearly” was, indeed, $500. For 2 GB of storage per year. For multi-user plans of up to 10 GB, the yearly cost is $2,800.

My bad. And thanks to kataar!

EMC, of course, is having a field day with this. Even when comparing against a relatively modest price of $49.50 a year (you’ll notice Mark Twomey made the same mistake I did), they are only too happy to point out that you can get 2 GB of storage free from Mozy (I’ll let the irony of EMC gloating about another vendor’s pricing pass for now). Meanwhile, you can get up to 5 GB free from Windows SkyDrive, Gmail will give you a 2 GB inbox for free, and Carbonite will let you back up unlimited capacity to its cloud for $49.95 per year.
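Just to spell out the per-gigabyte arithmetic behind that comparison (my own calculation from the published prices above; the 100 GB figure for Carbonite’s flat-fee “unlimited” plan is an assumption on my part):

```python
# Annual cost per GB for the services mentioned above, at list prices.
services = {
    "SwapDrive, 2 GB individual plan": (500.00, 2),
    "SwapDrive, 10 GB multi-user plan": (2800.00, 10),
    "Mozy free tier, 2 GB": (0.00, 2),
    "Carbonite, assuming 100 GB stored": (49.95, 100),  # flat fee; capacity assumed
}

for name, (dollars_per_year, gb) in services.items():
    print(f"{name}: ${dollars_per_year / gb:.2f} per GB per year")
```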

I’ve heard of some of the older data hosting services, like certain specialized deals with Iron Mountain, costing in the neighborhood of SwapDrive’s quoted price, but I haven’t heard of too many in the consumer/SOHO/SMB space charging on that scale.

When I asked Symantec about the pricing, this was the response: “SwapDrive’s current online pricing will keep pace with the market and the value derived. Our service is more robust and redundant than many others offered in the market today.” The spokesperson added that 2 GB of online storage comes included with Norton 360 for an MSRP of $79.99.

I’d really like to learn more about exactly what makes SwapDrive hundreds of dollars more robust and redundant per year.  And what makes it worth $500 standalone but worth some percentage of $80 with Norton 360? That seems like a big swing to me.

