Storage Soup


March 25, 2008  9:32 AM

Another storage analyst defects to a vendor

Beth Pariseau

Barry Murphy, formerly of Forrester Research, has been named the new director of product marketing for Mimosa, tasked with “expanding the company’s eDiscovery and content management partner ecosystem and broadening awareness for and adoption of Mimosa Systems’ award-winning content archiving platform.”

The cynically inclined might say he already did something similar with his last major act as a Forrester analyst: the publication of two reports on message archiving products. The reports concluded that on-premise software archives (such as Mimosa’s) are gaining more traction and are more mature in their features than hosted archiving offerings.

I don’t really believe this was anything other than coincidence–the research for such a report goes on for months and the report was obviously started well in advance of this transition. It makes sense that an analyst whose expertise was in records management and archiving would go to a vendor in that sector of the market. But sometimes the appearance of a conflict of interest can be as problematic as an actual conflict of interest. At the least, from my perspective, it’s unfortunate timing.

Murphy joins Tony Asaro, who recently resurfaced as chief strategy officer for Virtual Iron after a short stint with Dell, as the most recent storage analysts to head to vendors. It has been suggested to me that most analysts wind up at vendors or doing consulting, so maybe this is a natural lifecycle we’re seeing.

Speaking of defections, it has also been announced that Dr. David Yen has left Sun for Juniper. Yen formerly headed Sun’s storage group and was shifted to the company’s chip group following the restructuring of the storage and server groups under John Fowler last year.

March 24, 2008  2:47 PM

Tape is dead, long live tape

Beth Pariseau

Ever since I started covering storage, I’ve been hearing the disk vs. tape debate, usually including proclamations that tape is dead or dying.

There are good reasons to make that assertion. Disk-based backup is catching on, particularly among SMBs, and data deduplication is evening out the cost-per-GB numbers between disk and tape for many midrange applications. Disk is preferable to tape in many ways, especially because it allows faster restore times for backup and archival data. Once again, people are starting to ask, what’s the point of using tape? Dell/EqualLogic’s Marc Farley posted a funny video on his blog to illustrate the question on Friday.

I’m not so sure we’ll ever really see the end of tape. When it comes to the high end, there’s simply too much data to keep on spinning disk. The cost of disk is often still higher per GB, depending on the type of disk and the type of application accessing it. And that doesn’t include power and cooling costs.
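To put some rough numbers on the cost-per-GB argument, here’s a back-of-envelope sketch. The prices and dedupe ratios are purely hypothetical placeholders, not vendor figures, and the comparison ignores drives, libraries, power and cooling.

```python
# Back-of-envelope comparison of effective cost per logical GB for
# deduplicated disk versus tape. All figures are hypothetical.

def effective_cost_per_gb(raw_cost_per_gb, dedupe_ratio=1.0):
    """Cost per logical GB after dedupe shrinks the physical footprint."""
    return raw_cost_per_gb / dedupe_ratio

disk_raw = 1.50   # hypothetical $/GB for a midrange SATA backup array
tape_raw = 0.30   # hypothetical $/GB for tape media (drives and library excluded)

for ratio in (1, 5, 10, 20):
    print(f"disk at {ratio}:1 dedupe -> "
          f"${effective_cost_per_gb(disk_raw, ratio):.2f}/GB "
          f"vs tape at ${tape_raw:.2f}/GB")
```

At high enough dedupe ratios the disk figure drops below the raw tape media cost, which is exactly why the debate keeps coming back; at 1:1 (no dedupe, as with already-compressed or unique data) the gap stays wide.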

I’ve also heard lots of good reasons to give up tape. And maybe in certain markets, like SMBs, tape will die — if it hasn’t already. But whenever tape is on the ropes, another trend comes along to boost it back into relevance.  When disk took over backup, the data archiving trend kicked in, and tape’s savings in power and cooling and its shelf life for long-term data preservation came to the fore. Now, as data dedupe has disk systems vendors pitching their products for archive, too, along comes “green IT” to buoy tape.

Now, I’d like to ask the same questions Farley did, because I’m just as curious to know, and because he and I may have different audiences with different opinions. Do you think tape is dead? If not, what do you use it for? Let us know the amount of data you’re managing in your shop as well.


March 24, 2008  12:09 PM

The economy and technology innovation

Beth Pariseau

I love listening to NPR. I listen to, watch and read many news sources, but I find the stories they choose and the nuances they bring to their reporting refreshing. I was listening to NPR this morning when a very rare thing happened–I heard someone being interviewed whom I’ve interviewed before myself. It’s not often that IT industry news makes a mainstream general-purpose broadcast, so I paid close attention.

The pundit in question was Rob Enderle, a technology analyst I interviewed last month when EMC acquired Pi. After hearing his brief comments on the current state of the US economy and how he predicts it will affect technology innovation in Silicon Valley, I called him up myself and dug a little deeper into the matter with him.

Continued »


March 20, 2008  9:28 AM

Mendocino Software, R.I.P.

Dave Raffo

Not all storage startups either went public or got acquired for big bucks over the past two years. Mendocino Software sold little of its continuous data protection (CDP) software and found no takers for its intellectual property, so Wednesday it sold whatever was left at auction.

Mendocino did have five customers through an OEM deal with Hewlett-Packard, which rebranded Mendocino’s product as HP StorageWorks CIC.

According to an email HP sent to SearchStorage.com today, “HP has assigned a task force and is working closely with each of its five HP CIC customers to understand their specific information availability requirements and to determine an appropriate plan of action.”

According to the email, HP is offering to switch CIC customers to HP Data Protector at no charge for the software and installation, and will transfer CIC support contracts to Data Protector.


March 19, 2008  12:40 PM

User response about NetApp and FC LUN snapshots – UPDATED

Beth Pariseau

Last week, I blogged about discussions I’ve recently had with NetApp and NetApp customers about the company’s messaging and products. One of the focal points of the debate was what users understood about best practices for overhead on FC LUN snapshots. A couple of users I’d talked to prior to reporting on NetApp’s analyst day event said NetApp best practices dictate at least 100% overhead on FC LUNs, but that NetApp salespeople tell them a different story before the sale.

However, when I followed up with NetApp, officials told me in no uncertain terms that their most current best practices for FC LUNs dictate the same snapshot overhead as any other type of data: 20%.
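To put the two figures side by side, here’s a minimal sizing sketch. The 500 GB LUN and the flat-percentage arithmetic are illustrative assumptions only; real NetApp sizing depends on the rate of change and on how the volume’s snapshot and fractional reserves are actually configured.

```python
# Rough space-provisioning arithmetic for the two snapshot overhead figures
# under debate. The LUN size and flat percentages are illustrative only.

def volume_size_needed(lun_size_gb, snapshot_overhead_pct):
    """Space to provision for a LUN plus its snapshot overhead."""
    return lun_size_gb * (1 + snapshot_overhead_pct / 100)

lun_gb = 500  # hypothetical FC LUN

print(f"20% overhead:  {volume_size_needed(lun_gb, 20):.0f} GB provisioned")
print(f"100% overhead: {volume_size_needed(lun_gb, 100):.0f} GB provisioned")
```

The difference between 600 GB and 1,000 GB provisioned for the same 500 GB LUN is why users care which best practice is the current one.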

After posting on this, I got another response from a NetApp customer disputing those statements, which seems worth adding to the discussion. Here’s the message verbatim:

Continued »


March 19, 2008  11:29 AM

Are storage vendors squeezing dedupe?

Dave Raffo

As the first vendor to make data deduplication a key piece of the backup picture, Data Domain has benefitted most from the dedupe craze. And now it has the most at stake when deduplication becomes mainstream. If all the major storage vendors offer deduplication, there goes at least part of Data Domain’s edge.

That’s not lost on Data Domain CEO Frank Slootman. He sees NetApp’s decision to build deduplication into its operating system and use it for primary data, and the move by other large disk and tape vendors to put dedupe into their virtual tape libraries as part of a strategy to marginalize the technology.

“NetApp’s and EMC’s fundamental strategy is to make deduplication go away as a separate technology,” Slootman said. “NetApp has been giving away their deduplication, and we think EMC [through an OEM deal with Quantum] will fully charge for storage but give away dedupe. They don’t want dedupe to be a separate business, or even a technology in its own right.”

Slootman says he’s not worried, though. He sees the biggest benefit of deduplication as an alternative to VTLs, and claims many new Data Domain customers use deduplication to replace virtual tape rather than enhance it. He calls deduplication for VTLs a “bolt-on” technology, whereas Data Domain built its appliances specifically for dedupe.

And he maintains that deduplication doesn’t work for primary storage. It’s not a technical issue, but a strategic one.

“Primary data lives for short periods of time, why dedupe that?” he said. “It doesn’t live long enough to get any benefit to reducing its size. If data doesn’t mutate, it should be spun off primary storage anyway. It should go to cheaper storage. It’s the stuff that doesn’t change that mounts a huge challenge for data centers. You can’t throw it away, and it’s expensive to keep online.”
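To illustrate the distinction Slootman is drawing, here’s a bare-bones, hypothetical sketch of chunk-level deduplication. It isn’t Data Domain’s algorithm (real products typically use variable-size chunking and much more engineering); it just shows why repeated backups of largely unchanged data shrink dramatically, while data written once and aged off quickly gains little.

```python
# Minimal chunk-level dedupe sketch: each unique chunk is stored once,
# so repeated backups of the same data consume little extra space.
import hashlib

CHUNK_SIZE = 4096
store = {}  # sha256 digest -> chunk; only unique chunks consume space

def backup(data: bytes):
    refs = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # duplicates are stored only once
        refs.append(digest)
    return refs

# Simulate three nightly full backups of mostly unchanged data.
dataset = b"A" * 40960
for night in range(3):
    backup(dataset)

print("logical bytes written:", 3 * len(dataset))
print("physical bytes stored:", sum(len(c) for c in store.values()))
```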


March 18, 2008  3:44 PM

EMC Centera exec leaves for CAS startup

Beth Pariseau

EMC’s Centera has been something of a question mark for many in the industry over the last 6-8 months. Rumors seem to continually swirl around a major overhaul or replacement for the first content-addressed storage (CAS) system to hit the market. Those rumors and speculation persist even after hardware and software refreshes, such as the introduction of CentraStar 4.0 software last week, and despite insistence from EMC officials that no further major overhauls to the system are planned.

So far Centera remains the leader in market share and the best-known CAS product in the industry, but as we all know, the archiving market is heating up like never before right now, and other big competitors like Hewlett-Packard and Hitachi Data Systems have been refreshing their archiving systems to compete better, to say nothing of archiving startups (or re-starts) popping up like mushrooms all over the industry.

Today, in an interesting twist, one of those startups, Caringo, revealed that Centera’s director of technology, Jan Van Riel, has left EMC to be Caringo’s VP of Advanced Technology.

Execs leave EMC all the time, often for positions of higher responsibility at newer companies. But there’s a tangled, shared history between these players in particular. The founders of Caringo were also among the co-founders of FilePool, which became Centera when EMC acquired it in 2001. Van Riel was the CTO of FilePool prior to joining EMC as part of the acquisition.

Caringo’s CAS uses standard CIFS and NFS protocols to ingest data, rather than a proprietary API as Centera does. Caringo’s product can run on clusters of virtually any kind of hardware (one example they showed me was the software running on a Mac external drive). With this product, they find themselves in the strange position of launching attacks against what they view as the proprietary, hardware-bound nature of a competitive product that they themselves created.
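For readers less familiar with CAS, here’s a generic sketch of the content-addressing idea both products are built around: an object’s address is derived from a hash of its content, so identical content maps to the same address and stored objects can be verified against their addresses. This is the general concept only, not Centera’s or Caringo’s actual implementation.

```python
# Generic content-addressed storage sketch: the object's address is the
# hash of its content. Not any vendor's actual implementation.
import hashlib

cas = {}

def put(content: bytes) -> str:
    address = hashlib.sha256(content).hexdigest()
    cas[address] = content
    return address  # the caller keeps the address to retrieve the object later

def get(address: str) -> bytes:
    content = cas[address]
    # The address doubles as an integrity check on what comes back.
    assert hashlib.sha256(content).hexdigest() == address
    return content

addr = put(b"scanned invoice, retained for seven years")
print(addr)
print(get(addr))
```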

Who knows if it really means anything that Van Riel has joined with his old buddies again, but he also made a public statement critical of EMC in the press release Caringo put out announcing the move: “With EMC scaling down the Centera unit and the future of Centera unclear, the chance to join Caringo, which understands the potential of CAS, and partner once again with Paul Carpentier was too good of an opportunity to pass up.”

The plot thickens…


March 17, 2008  10:34 AM

QLogic president quietly slips away

Dave Raffo

QLogic opened last week by declaring it has 8-Gbit/s HBAs and switches available, and ended the week by quietly disclosing the resignation of its president and COO Jeff Benck.

The connection between the two pieces of news isn’t yet clear, but until Friday both 8-gig and Benck were considered keys to QLogic’s future.

Former IBM exec Benck had been considered the likely successor to CEO H.K. Desai since joining QLogic last May. He has been the vendor’s front man with Wall Street as well as in the storage industry with 8-gig and the emerging Fibre Channel over Ethernet (FCoE) protocol. Benck was quoted in QLogic’s 8-gig press release last Monday, and he even did an interview on QLogic’s strategy later in the week with the Orange County Business Journal Online for a story that appeared today.

QLogic did not even issue a press release on Benck’s resignation. It filed a notice with the SEC on Friday afternoon – a time when companies often release news they want to slip by with little scrutiny.

“This is quite a shock,” a financial analyst who covers QLogic said in an email to SearchStorage.com after hearing the news.

A source with knowledge of QLogic said Desai told Benck he would not renew his contract, which expires May 1. The move comes as QLogic and its long-time HBA rival Emulex battle to get a leg up on 8-gig and FCoE, and as switch vendor Brocade makes its move into the HBA space. This is an important time for storage networking vendors as they prepare for a coming convergence between Ethernet and Fibre Channel networks.

Wall Street analysts take it as a bad sign for QLogic, which is also without a CFO following Tony Massetti’s resignation last November to become CFO at NCR.

Kaushik Roy of Pacific Growth Equities LLC described Benck as “sharp and pretty engaged. I don’t get the idea that Jeff was incompetent,” Roy said. “There must have been serious disagreements between him and H.K. They’re still searching for a CFO. All these things don’t bode well for the company. It is clear that QLogic is having some serious strategic issues that are extremely hard to overcome.”

Aaron Rakers of Wachovia expressed a similar opinion in a note to clients today.

“We view this as a negative for QLogic,” Rakers wrote, adding that Benck was seen as Desai’s probable successor. “This announcement likely puts increased strain on QLogic’s executive management team, which has been in the process of looking for a CFO.”

Benck’s resignation raises issues regarding QLogic’s future technology direction, as well as the company’s fate. There has been speculation for several years that Cisco would acquire QLogic. Cisco resold QLogic Fibre Channel switches before developing its own fabric switches, the vendors are working together on FCoE, and QLogic has pieces that Cisco lacks for the coming converged network architecture.

But Roy, who was quoted in the Orange County Business Journal story saying he “would not be surprised” if Cisco buys QLogic and Brocade acquires Emulex, said Benck’s departure probably means no deal is imminent. “Why would Jeff leave before the acquisition?” he said. “It would make more sense for him to stick around until after the deal.”


March 14, 2008  1:47 PM

Interesting tidbits from around the storage blogosphere

Beth Pariseau

The storage market is a vibrant one right now, and as social networking concepts like blogs become more popular in the corporate world, the storage industry has a lively, varied blogosphere to match. Below are examples of some of the more interesting commentary I’ve seen lately, in case you missed them.

Continued »


March 13, 2008  2:26 PM

NetApp’s messaging and users’ experiences

Beth Pariseau

NetApp has been on the radar screen more than usual this week, with the announcement of its rebranding campaign followed by its Analyst Day 24 hours later.

When I’ve written about NetApp over the past few months, I’ve received feedback from its customers talking about the discrepancies between what NetApp’s claiming and what they’re seeing in their shops. A while back I heard from someone using one of NetApp’s arrays who said he felt there had been a difference between what his NetApp sales rep told him and what actually happened after he installed the machine, specifically when it came to Fibre Channel LUNs and snapshots.

According to NetApp, its most current official best practices state that LUNs have the same snapshot overhead as other data on FAS systems, an estimated 20%. But in the course of reporting on Analyst Day, I quoted a different user in the article talking about how he’s seen overhead issues with LUNs. This user’s understanding is also that the best practice is 100% overhead for snapshots of that data.

Another comment in my Analyst Day story suggested high-end customers don’t view NetApp’s boxes as matching the reliability of Tier 1 arrays. Val Bercovici, director of competitive sales for NetApp, said that attitude is outdated because it doesn’t take into account the vendor’s more recent focus on higher-end storage. As he put it, it represents a “very rear-view mirror view of NetApp.” But he conceded that this is exactly why NetApp is trying to change its messaging.

But messaging can be a big part of the problem when there is a discrepancy between what the sales rep claims and what the actual hands-on engineer wants to configure. Who knows more about how the machine is actually going to perform, a sales or marketing rep or an engineer doing the installation?

The user I quoted in the Analyst Day story, Tom Becchetti, says he was told about the 100% overhead best practice at a training class last fall, and concedes the instructor could have been going on outdated information. But if that’s the case, the lack of up-to-date information in the field about NetApp products is as big a problem as the overhead itself.

I’m hoping there are other NetApp end users floating around who can weigh in on this. What has been your experience with NetApp products? What has been your understanding of NetApp’s best practices? What would you like to see NetApp doing as it tries to reinvent itself?

