Storage Soup


March 20, 2008  9:28 AM

Mendocino Software, R.I.P.

Dave Raffo

Not all storage startups either went public or got acquired for big bucks over the past two years. Mendocino Software sold little of its continuous data protection (CDP) software and found no takers for its intellectual property, so on Wednesday it sold whatever was left at auction.

Mendocino did have five customers through an OEM deal with Hewlett-Packard, which rebranded Mendocino’s product as HP StorageWorks CIC.

According to an email HP sent to SearchStorage.com today, “HP has assigned a task force and is working closely with each of its five HP CIC customers to understand their specific information availability requirements and to determine an appropriate plan of action.”

According to the email, HP is offering to switch CIC customers to HP Data Protector at no charge for the software and installation, and will transfer CIC support contracts to Data Protector.

March 19, 2008  12:40 PM

User response about NetApp and FC LUN snapshots – UPDATED

Beth Pariseau

Last week, I blogged about discussions I’ve recently had with NetApp and NetApp customers about the company’s messaging and products. One of the focal points of the debate was what users understood about best practices for overhead on FC LUN snapshots. A couple of users I’d talked to prior to reporting on NetApp’s analyst day event said NetApp best practices dictate at least 100% overhead on FC LUNs, but that NetApp salespeople tell them a different story before the sale.

However, when I followed up with NetApp, officials told me in no uncertain terms that their most current best practices for FC LUNs dictate the same snapshot overhead as any other type of data: 20%.
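
To make concrete what’s at stake in the 20%-versus-100% question, here’s a back-of-the-envelope sketch in Python. The flat-percentage model and the function below are my own illustration, not NetApp’s actual sizing method, which involves fractional reserve and volume guarantee settings that this deliberately ignores:

```python
# Illustrative only: what a snapshot "overhead" percentage means for
# capacity planning. This is NOT NetApp's sizing formula; real FAS
# provisioning involves fractional reserve, snap reserve, and volume
# guarantee settings that this sketch leaves out.

def provisioned_space_gb(lun_size_gb: float, overhead_pct: float) -> float:
    """Space set aside for a LUN plus its snapshot reserve, treating
    overhead as a flat percentage of the LUN size."""
    return lun_size_gb * (1 + overhead_pct / 100)

lun = 500  # a hypothetical 500 GB FC LUN

print(provisioned_space_gb(lun, 20))   # 600.0 GB at the 20% NetApp cites
print(provisioned_space_gb(lun, 100))  # 1000.0 GB at the 100% users report
```

The gap is why users care: at 100% overhead, every snapshotted FC LUN effectively costs double its size in raw capacity.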

After posting on this, I got another response from a NetApp customer disputing those statements, and it seems worth adding to the discussion. Here’s the message verbatim:



March 19, 2008  11:29 AM

Are storage vendors squeezing dedupe?

Dave Raffo

As the first vendor to make data deduplication a key piece of the backup picture, Data Domain has benefited most from the dedupe craze. And now it has the most at stake as deduplication becomes mainstream. If all the major storage vendors offer deduplication, there goes at least part of Data Domain’s edge.

That’s not lost on Data Domain CEO Frank Slootman. He sees NetApp’s decision to build deduplication into its operating system and use it for primary data, and the move by other large disk and tape vendors to put dedupe into their virtual tape libraries, as part of a strategy to marginalize the technology.

“NetApp’s and EMC’s fundamental strategy is to make deduplication go away as a separate technology,” Slootman said. “NetApp has been giving away their deduplication, and we think EMC [through an OEM deal with Quantum] will fully charge for storage but give away dedupe. They don’t want dedupe to be a separate business, or even a technology in its own right.”

Slootman says he’s not worried, though. He sees the biggest benefit of deduplication as an alternative to VTLs, and claims many new Data Domain customers use deduplication to replace virtual tape rather than enhance it. He calls deduplication for VTLs a “bolt-on” technology, whereas Data Domain built its appliances specifically for dedupe.

And he maintains that deduplication doesn’t work for primary storage. It’s not a technical issue, but a strategic one.

“Primary data lives for short periods of time, why dedupe that?” he said. “It doesn’t live long enough to get any benefit to reducing its size. If data doesn’t mutate, it should be spun off primary storage anyway. It should go to cheaper storage. It’s the stuff that doesn’t change that mounts a huge challenge for data centers. You can’t throw it away, and it’s expensive to keep online.”
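
Slootman’s argument is easier to follow with the mechanics in view: dedupe pays off when the same data keeps arriving, which is exactly what repeated full backups are. Below is a toy sketch of hash-based chunk dedupe in Python; the fixed 4 KB chunks and SHA-256 fingerprints are my simplification (shipping products, Data Domain’s included, use variable-length segmentation), but it shows why a second, nearly identical backup costs almost nothing to store:

```python
# Toy chunk-level dedupe: store each chunk exactly once, keyed by its
# content hash. Fixed 4 KB chunks and SHA-256 are simplifications;
# real products use variable-length segmentation and tuned fingerprints.
import hashlib
import random

CHUNK = 4096
store: dict[str, bytes] = {}  # fingerprint -> unique chunk


def backup(data: bytes) -> list[str]:
    """Split data into chunks, keep only unseen ones, return the recipe."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)  # no-op if this chunk was seen before
        recipe.append(fp)
    return recipe


random.seed(0)
night1 = random.randbytes(10_000_000)           # Monday's full backup
night2 = night1[:-1] + bytes([night1[-1] ^ 1])  # Tuesday's: one byte changed

backup(night1)
backup(night2)
logical = len(night1) + len(night2)
physical = sum(len(c) for c in store.values())
print(f"dedupe ratio: {logical / physical:.1f}x")  # ~2.0x: night2 is nearly free
```

Run the same experiment on short-lived primary data that never repeats and the ratio collapses toward 1x, which is the crux of Slootman’s “why dedupe that?” point.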


March 18, 2008  3:44 PM

EMC Centera exec leaves for CAS startup

Beth Pariseau

EMC’s Centera has been something of a question mark for many in the industry over the last 6-8 months. Rumors seem to continually swirl around a major overhaul or replacement for the first content-addressed storage (CAS) system to hit the market. Those rumors and speculation persist even after hardware and software refreshes, such as the introduction of CentraStar 4.0 software last week, and despite insistence from EMC officials that no further major overhauls to the system are planned.

So far Centera remains the leader in market share and the best-known CAS product in the industry, but as we all know, the archiving market is heating up like never before, and other big competitors like Hewlett-Packard and Hitachi Data Systems have been refreshing their archiving systems to compete better, to say nothing of archiving startups (or re-starts) popping up like mushrooms all over the industry.

Today, in an interesting twist, one of those startups, Caringo, revealed that Centera’s director of technology, Jan Van Riel, has left EMC to be Caringo’s VP of Advanced Technology.

Execs leave EMC all the time, often for positions of higher responsibility at newer companies. But there’s a tangled, shared history between these players in particular. The founders of Caringo were also among the co-founders of FilePool, which became Centera when EMC acquired it in 2001. Van Riel was the CTO of FilePool prior to joining EMC as part of the acquisition.

Caringo’s CAS uses standard CIFS and NFS protocols to ingest data, rather than a proprietary API as Centera does. Caringo’s product can run on clusters of virtually any kind of hardware (one example they showed me was the software running on a Mac external drive). With this product, they find themselves in the strange position of launching attacks against what they view as the proprietary, hardware-bound nature of a competitive product that they themselves created.

Who knows if it really means anything that Van Riel has joined with his old buddies again, but he also made a public statement critical of EMC in the press release Caringo put out announcing the move: “With EMC scaling down the Centera unit and the future of Centera unclear, the chance to join Caringo, which understands the potential of CAS, and partner once again with Paul Carpentier was too good of an opportunity to pass up.”

The plot thickens…


March 17, 2008  10:34 AM

QLogic president quietly slips away

Dave Raffo

QLogic opened last week by declaring it has 8-Gbit/s HBAs and switches available, and ended the week by quietly disclosing the resignation of its president and COO Jeff Benck.

The connection between the two pieces of news isn’t yet clear, but until Friday both 8-gig and Benck were considered keys to QLogic’s future.

Benck, a former IBM exec, had been considered the likely successor to CEO H.K. Desai since joining QLogic last May. He had been the vendor’s front man with Wall Street, as well as in the storage industry on 8-gig and the emerging Fibre Channel over Ethernet (FCoE) protocol. Benck was quoted in QLogic’s 8-gig press release last Monday, and he even did an interview on QLogic’s strategy later in the week with the Orange County Business Journal Online for a story that appeared today.

QLogic did not even issue a press release on Benck’s resignation. It filed a notice with the SEC on Friday afternoon – a time when companies often release news they want to slip by with little scrutiny.

“This is quite a shock,” a financial analyst who covers QLogic said in an email to SearchStorage.com after hearing the news.

A source with knowledge of QLogic said Desai told Benck he would not renew his contract, which expires May 1. The move comes as QLogic and its long-time HBA rival Emulex battle to get a leg up on 8-gig and FCoE, and as switch vendor Brocade makes its move into the HBA space. This is an important time for storage networking vendors as they prepare for a coming convergence between Ethernet and Fibre Channel networks.

Wall Street analysts take it as a bad sign for QLogic, which is also without a CFO following Tony Massetti’s resignation last November to become CFO at NCR.

Kaushik Roy of Pacific Growth Equities LLC described Benck as “sharp and pretty engaged.” “I don’t get the idea that Jeff was incompetent,” Roy said. “There must have been serious disagreements between him and H.K. They’re still searching for a CFO. All these things don’t bode well for the company. It is clear that QLogic is having some serious strategic issues that are extremely hard to overcome.”

Aaron Rakers of Wachovia expressed a similar opinion in a note to clients today.

“We view this as a negative for QLogic,” Rakers wrote, adding that Benck was seen as Desai’s probable successor. “This announcement likely puts increased strain on QLogic’s executive management team, which has been in the process of looking for a CFO.”

Benck’s resignation raises issues regarding QLogic’s future technology direction, as well as the company’s fate. There has been speculation for several years that Cisco would acquire QLogic. Cisco resold QLogic Fibre Channel switches before developing its own fabric switches, the vendors are working together on FCoE, and QLogic has pieces that Cisco lacks for the coming converged network architecture.

But Roy, who was quoted in the Orange County Business Journal story saying he “would not be surprised” if Cisco buys QLogic and Brocade acquires Emulex, said Benck’s departure probably means no deal is imminent. “Why would Jeff leave before the acquisition?” he said. “It would make more sense for him to stick around until after the deal.”


March 14, 2008  1:47 PM

Interesting tidbits from around the storage blogosphere

Beth Pariseau

The storage market is a vibrant one right now, and as social networking concepts like blogs become more popular in the corporate world, the storage industry has a lively, varied blogosphere to match. Below are examples of some of the more interesting commentary I’ve seen lately, in case you missed them.



March 13, 2008  2:26 PM

NetApp’s messaging and users’ experiences

Beth Pariseau

NetApp has been on the radar screen more than usual this week, with the announcement of its rebranding campaign followed by its Analyst Day 24 hours later.

When I’ve written about NetApp over the past few months, I’ve received feedback from its customers about discrepancies between what NetApp is claiming and what they’re seeing in their shops. A while back I heard from someone using one of NetApp’s arrays who felt there was a difference between what his NetApp sales rep told him and what actually happened after he installed the machine, specifically when it came to Fibre Channel LUNs and snapshots.

According to NetApp, its most current official best practices state that LUNs have the same snapshot overhead as other data on FAS systems, an estimated 20%. But in the course of reporting on Analyst Day, I quoted a different user in the article talking about how he’s seen overhead issues with LUNs. This user’s understanding is also that the best practice is 100% overhead for snapshots of that data.

Another comment in my Analyst Day story suggested high-end customers don’t view NetApp’s boxes as matching the reliability of Tier 1 arrays. Val Bercovici, director of competitive sales for NetApp, said that attitude is outdated because it doesn’t take into account the vendor’s more recent focus on higher-end storage. As he put it, it represents a “very rear-view mirror view of NetApp.” But he conceded that this is exactly why NetApp is trying to change its messaging.

But messaging can be a big part of the problem when there is a discrepancy between what the sales rep claims and what the hands-on engineer wants to configure. Who knows more about how the machine is actually going to perform: a sales or marketing rep, or an engineer doing the installation?

The user I quoted in the Analyst Day story, Tom Becchetti, says he was told about the 100% overhead best practice at a training class last fall, and concedes the instructor could have been going on outdated information. But if that’s the case, the lack of up-to-date information in the field about NetApp products is as big a problem as the overhead itself.

I’m hoping there are other NetApp end users floating around who can weigh in on this. What has been your experience with NetApp products? What has been your understanding of NetApp’s best practices? What would you like to see NetApp doing as it tries to reinvent itself?


March 13, 2008  1:46 PM

What’s your digital footprint?

Beth Pariseau

The other day I blogged about an update to the “Digital Universe” report EMC sponsored with IDC, which amended estimates of the size of said digital universe upward.

Today while surfing around I saw EMC blogger Chuck Hollis’s post on the report, which contained an intriguing tidbit:

By the way, there’s some new bling for your PC.  Last year, as part of the study, EMC offered up a “digital clock” that attempted to measure all information produced in aggregate. 

This year, there’s a “personal digital clock” that (after answering a few questions) will estimate just how much digital footprint you’re creating: both directly and indirectly.  It’s a bit humbling.

As an example, the personal clock estimates that I’ve created well over a terabyte of “digital shadow” this year so far.  And that’s not even counting these blog posts!

Just doing my part for the storage industry, I guess…

I was definitely interested in finding out the exact dimensions of my digital shadow (proven fact: self-absorption is a key driver of Internet traffic), so I downloaded the mini-application they’ve put together with IDC to calculate one’s digital footprint.

It asked a series of questions about surfing habits, the number of minutes you spend on the phone per week, the amount of TV you record on your TiVo, that sort of thing. I was actually a little embarrassed at some of the numbers I put in; some of them were high indeed, especially the ‘hours per week you are actively on the Internet’ one.

Once I’d answered the questionnaire, the application calculated that I generate 6.18 GB of personal digital information per day, meaning that this year I will generate 2.25 TB of digital shadow.

Hollis, meanwhile, writes that he’s already generated over 1 TB this year. Today, March 13, is the 72nd day of 2008, putting him at roughly 14 GB per day, if my calculations are accurate. Given that I spend virtually all of my working hours actively on the Internet, log five or so hours per day on the phone if you combine cell and landline usage, and have a hard drive partition bulging with over 8,000 digital photos … I have to wonder just what Hollis is doing to generate such a shadow.
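
For anyone who wants to check the math, here’s the quick sanity check behind those figures, assuming decimal units (1 TB = 1,000 GB):

```python
# Sanity-checking the "digital shadow" arithmetic above.
my_daily_gb = 6.18
print(my_daily_gb * 365 / 1000)   # ~2.26 TB/year, in line with the app's
                                  # 2.25 TB (2008 is actually a leap year,
                                  # which only nudges the total higher)

hollis_gb_so_far = 1000  # "over 1 TB" as of March 13, so this is a floor
day_of_year = 72         # March 13 is the 72nd day of 2008
print(hollis_gb_so_far / day_of_year)  # ~13.9 GB/day, and likely more
```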

You too can find out how much you’re contributing daily to the storage industry via the mini-app, which is posted for download here.


March 12, 2008  12:55 PM

8-gig era officially begins

Dave Raffo

QLogic this week became the third vendor to claim it is the first to ship 8-Gbit/s Fibre Channel equipment.

QLogic says its 8Gb PCI Express HBAs and 8-gig switches are available as a Hewlett-Packard StorageWorks 8Gb Simple SAN Connection Kit and from QLogic distributors. Although other vendors claim to have 8-gig devices, QLogic marketing vice president Frank Berry said his rivals aren’t shipping those products yet. That’s news to Brocade and Emulex: IBM, Sun and NetApp have said they are offering Brocade’s 8-gig DCX Backbone director, and Emulex lists Ingram Micro and Tech Data among the distributors selling its 8-gig HBAs.

But QLogic is the first vendor to offer up a real live 8-gig user. Managed hosting services firm InteleNet Communications has been testing QLogic 8-gig HBAs and switches, and general manager Carlos Oliviera expects to be an early adopter. InteleNet provides storage, security, networking, data backup and disaster recovery services out of a 55,000-square-foot data center in Irvine, Calif., and a smaller data center in Denver.

Oliviera said 8-gig Fibre Channel gear will help InteleNet provide better service for its customers in several ways.

“Our machines are diskless,” Oliviera said. “We started testing 8-gig equipment to enhance the speed of data transfers. We have customers with a high demand for utilization; they need to open several applications on the same machines. They need high I/O.

“And we’re seeing a lot of demand for disaster recovery where they replicate content across different disk controllers over the SAN, and they want to get that done as fast as possible.”

Besides the performance boost, Oliviera said 8-gig lets him connect more actual and virtual servers to his storage and adds redundancy. He expects to add 8-gig to his production system by mid-year when InteleNet installs its next 50-server rack.

InteleNet is probably the exception at this point for seeing value in 8-gig. Not even the storage vendors expect customers to move to 8-gig as fast as they went from 2- to 4-gig a few years back. The 8-gig HBAs and switches will cost about 15 percent more than 4-gig gear at the start. And the 8-gig ecosystem will take longer to develop. No system vendors have disclosed plans for 8-gig systems yet, and hard drive vendors probably won’t ever develop 8-gig Fibre Channel drives.

But Oliviera said the more expensive 8-gig gear makes sense for him because it lets his company add revenue through new customers. “The return on investment is still very good,” he said. “One of the things that pushed this is virtualization. Now we can sell a lot more services with the same resources, which gives us a better ROI.”


March 12, 2008  11:57 AM

Email archiving: focus or experience?

Beth Pariseau

Last week I was briefed by Internet security software company Trend Micro on its new email archive offering, dubbed the Trend Micro Message Archiver, which was launched Monday.

The product, from a storage geek’s point of view, is about as bleeding-edge as its name. It has the usual checklist items we’ve been hearing about from earlier arrivals to this market, from indexing to .pst import, plus MD5 hashing for content-addressed storage. At times it feels like the email archiving players are all ordering off the same menu, picking and choosing from a set of common features so ubiquitous in that market that it’s begun to feel commoditized.
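
That MD5 item deserves a quick unpacking for anyone new to CAS: in a content-addressed store, an object’s address is derived from a hash of its bytes, so identical messages collapse into one stored copy and any alteration changes the address. Here’s a minimal sketch of the general idea, emphatically not Trend Micro’s implementation:

```python
# A minimal content-addressed store: the MD5 digest of a message's bytes
# serves as its address. The general CAS idea only, not any vendor's code.
import hashlib

cas: dict[str, bytes] = {}


def put(message: bytes) -> str:
    """Store a message under its content address; duplicates collapse."""
    address = hashlib.md5(message).hexdigest()
    cas[address] = message
    return address


def get(address: str) -> bytes:
    message = cas[address]
    # Re-hashing on read detects tampering, the property archivers care
    # about when emails may end up as evidence in court.
    assert hashlib.md5(message).hexdigest() == address, "content altered"
    return message


addr = put(b"From: alice@example.com\n\nQuarterly numbers attached.")
# Storing the identical message again yields the same address, no new copy:
assert put(b"From: alice@example.com\n\nQuarterly numbers attached.") == addr
print(get(addr))
```

Worth noting: researchers have already demonstrated practical MD5 collisions, one reason newer systems lean toward stronger hashes for content addressing.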

What captured my attention when it came to TMMA isn’t the product but who’s offering it. Trend Micro is a 20-year-old, global, $848 million-a-year company. Since 2004, MSN Hotmail has been using Trend Micro to scan messages and attachments in its users’ accounts.

The first thing this means is that, as the product matures, it will be integrated with Trend Micro’s access controls, anti-spam and anti-virus filters, email certification and encryption features. Trend Micro isn’t alone in this kind of integration (Lucid8 and others jump to mind), but few rivals match its size and brand recognition. And when I’ve stepped out of my little storage-centric cave and spoken with people in adjacent markets (say, the e-discovery and legal compliance folks), I’ve heard many of them say the storage guys aren’t getting it in some areas, like evidentiary standards that may apply to emails in court beyond what most email archivers offer today. It might be that a little expertise from other markets is what these products need.

This might also be where the new wave of non-storage vendors making forays into the storage market, like Trend Micro, finds a way to add value. For security-conscious customers, the Trend Micro product could offer a focus on security integration, delivery from an already-trusted vendor, and the ever-popular ‘one throat to choke’ as well.

But then again, the ultimate purpose of the product is to store and protect email data. The security features are nice, but secondary to the main function of the product. And many storage admins would probably rather go with a vendor that has experience in the core feature of the product, which is data protection.  

I’m also seeing this dichotomy emerge in another hot market: storage SaaS. In that market, new offerings from experienced storage players are competing against new ‘one-stop shop’ offerings from adjacent players: EMC’s Fortress vs. new backup and hosted storage offerings from data center service providers like The Planet and Savvis.

I, for one, am curious to see which model users will find preferable as overlaps grow between the different disciplines of IT. Which will be more important: focus, meaning the existing customer relationship and consolidated vendor relationships, or experience in designing and supporting storage products?

