Storage Soup


October 29, 2009  3:27 PM

Emerging vendors take a SaaS approach to storage reporting

Beth Pariseau

Two startups are taking a software-as-a-service (SaaS) approach to reporting on storage assets.

Storage Fusion Ltd, a UK company spun off by a private investment firm last year, claims to be reporting on storage environments of up to 60 PB, and signed a licensing agreement with GlassHouse Technologies last fall. According to managing director Graham Wood, the 15-person company is currently working with about 50 active customers, all with more than 50 TB, not counting “one-off” analytics done with some partners. The company was not able to provide an end user for an interview.

Storage Fusion’s product, Storage Resource Analysis (SRA), consists of a series of scripts that customers download and execute to collect data on the EMC, Hitachi Data Systems, Hewlett-Packard, IBM, or NetApp arrays in their environment. The data generated by the scripts is then sent back to Storage Fusion’s data center, where Storage Fusion performs the analysis and delivers results to the customer via the Web. According to Wood, if users get their data uploaded before 3 p.m., the analysis will be available the same day.
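For readers who want a concrete picture, here is a minimal Python sketch of the kind of collect, bundle and upload round trip described above. The inventory command names, output file names and upload URL are hypothetical placeholders, not Storage Fusion’s actual scripts or endpoints.

# Hypothetical sketch of a collect-and-upload workflow like the one described above.
# Command names, output files and the upload URL are illustrative assumptions only.
import os
import subprocess
import tarfile
import urllib.request
from datetime import date

# Read-only inventory commands a collection script might run per array family
# (placeholders standing in for the vendor CLI tools an admin already uses).
INVENTORY_COMMANDS = {
    "array_capacity.txt": ["array_inventory", "--capacity"],
    "array_allocation.txt": ["array_inventory", "--allocation"],
}

def collect(outdir="collected"):
    os.makedirs(outdir, exist_ok=True)
    for fname, cmd in INVENTORY_COMMANDS.items():
        try:
            with open(os.path.join(outdir, fname), "w") as fh:
                # capture the stdout of each read-only inventory command
                subprocess.run(cmd, stdout=fh, stderr=subprocess.DEVNULL, check=False)
        except OSError:
            pass  # skip tools that are not installed on this host
    return outdir

def bundle(outdir):
    archive = "sra_upload_%s.tar.gz" % date.today().isoformat()
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(outdir)
    return archive

def upload(archive, url="https://analytics.example.invalid/upload"):
    # Send the bundle to the hosted analysis service over HTTPS (SSL).
    with open(archive, "rb") as fh:
        req = urllib.request.Request(url, data=fh.read(), method="PUT")
        urllib.request.urlopen(req)

if __name__ == "__main__":
    upload(bundle(collect()))

In practice the vendor-supplied scripts would call each array family’s own reporting CLI and the portal would parse the bundle on the back end; this sketch only shows the shape of the round trip.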

The company’s Web portal provides several views: a geographical view, showing the total resources, allocation and utilization at each data center location; a consumer view, which shows the hosts connected to the storage; a provider view, which normalizes resources across heterogeneous storage providers into one report on the total storage environment; an environmental view, which provides energy consumption statistics based on vendors’ published power and cooling specs; and an optional add-on business view, which translates storage capacity data into business-relevant figures like dollars and cents. The reporting tool also looks for “exceptions,” providing error warnings and information on “orphan” reclaimable storage. It can decompose virtualization layers and supports thin-provisioned arrays under its tiering tab.

Some customers, particularly large ones, might be wary of a third party peeking into their environment or of sending data about it out of their data center. But Storage Fusion sales and operations director Peter White said “from a security perspective, our scripts are completely open — we hide nothing, and prior to running them, the user can look at them and see they’re just service log commands, the kind of command-line utilities they execute all day.” The Web portal is also accessed via an SSL connection.

On the other side of the pond, and at the other end of the customer-size spectrum, is Waltham, Mass.-based Aprigo, whose Ninja product has gotten several hundred free-version downloads since August. This first, free version of the product collects file metadata, regardless of hardware vendor, on common attributes such as name, type, size, and date modified. Aprigo compiles those attributes into a single view and presents them along with a cost calculator that shows the yearly dollar cost of storing the information.
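As a rough illustration of that kind of metadata sweep and cost calculation, here is a minimal Python sketch. The scanned path, the attribute set and the assumed storage cost of $2 per GB per year are illustrative assumptions, not Aprigo’s actual collector or pricing model.

# Hypothetical sketch of a file-metadata sweep plus a yearly storage-cost estimate.
# The scanned path and the cost-per-GB assumption are illustrative only.
import os
import time

COST_PER_GB_PER_YEAR = 2.00  # assumed dollar cost; not an Aprigo figure

def scan(root):
    """Walk a directory tree and record name, type, size and date modified."""
    records = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip files that vanish or are unreadable
            records.append({
                "name": name,
                "type": os.path.splitext(name)[1].lstrip(".").lower() or "none",
                "size_bytes": st.st_size,
                "modified": time.strftime("%Y-%m-%d", time.localtime(st.st_mtime)),
            })
    return records

def yearly_cost(records):
    # Convert total bytes to GB and apply the assumed per-GB annual cost.
    total_gb = sum(r["size_bytes"] for r in records) / (1024 ** 3)
    return total_gb * COST_PER_GB_PER_YEAR

if __name__ == "__main__":
    recs = scan(os.path.expanduser("~"))
    print("%d files scanned, estimated $%.2f/year to store" % (len(recs), yearly_cost(recs)))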

“It can be used for archiving or tiered storage business justification,” Aprigo CEO Gill Zimmerman said. Customers can also store up to 500 GB or 5 previous historical scans for trending reports. Aprigo is also working to put together a “community intelligence” report where users can compare themselves anonymously against other Aprigo customers.

Aprigo has a midmarket focus and doesn’t use the term SRM because it’s reporting on file data rather than physical devices, according to Zimmerman. It’s working on a collector for other SaaS-based file systems like Google Docs. The company plans a Nov. 15 release that will also report on access control lists for file systems.

Some analysts have said the SRM space, which has seen its share of ups and downs, won’t mature until services and help interpreting analytics results are more widely available to customers. Still, the SaaS or services-delivery model for SRM tools is not a new idea. Aptare has sold its backup and storage reporting tools to service providers for years, and IBM’s Storage Enterprise Resource Planner (SERP) storage resource management tools are deployed through IBM Global Services. CommVault began offering a SaaS-based backup reporting service in 2008; Dell has pledged a SaaS approach to services; and Continuity Software also offers a SaaS option for its disaster recovery change management tool.

October 28, 2009  5:13 PM

Thales launches key manager, gets support from Brocade and HDS

Dave Raffo

The encryption technology from the defunct Neoscale is alive and well, and – after winding its way through Europe – is now shipping in a key management appliance that Hitachi Data Systems has started reselling.

A few years back, Neoscale, Decru (now part of NetApp) and a few others were part of what was seen as an emerging market for tape encryption devices. The market never took off, and U.K. enterprise security vendor nCipher bought Neoscale’s assets for $1.9 million in late 2007. France-based Thales Group then acquired nCipher for $100 million in July 2008.

Thales still sells the CryptoStor tape encryption devices that Neoscale developed, and has brought more Neoscale IP to market in the Thales Encryption Manager for Storage (TEMS). Thales said this week that Brocade has qualified TEMS for its Encryption Switch and the FS8-18 Encryption Blade that fits into Brocade’s DCX director switch.

Thales also secured a reseller deal with HDS to sell TEMS with Brocade switches. Thales is looking for similar deals with other storage vendors as it tries to become a player in the storage security game. TEMS handles encryption keys for devices from multiple vendors.

“We are attractive to Hitachi because we are security only, and not a storage and security company,” Thales director of product marketing Kevin Bocek said.

In other words, Thales is different than its rival RSA Security, which is owned by EMC.

“We’re not affiliated with a storage vendor,” Bocek said. “We’re neutral.”

Thales is part of the Key Management Interoperability Protocol (KMIP) alliance that is developing an industry standard, which Bocek expects to be ratified next year. Once that happens, he says, it will help vendors bring encryption products to market faster.

“Within a year or two, many more encryption devices will be brought into storage systems,” he said, sounding an optimistic tone for a market that hasn’t yet amounted to much.


October 28, 2009  2:12 PM

Quantum: ‘Our strategy is working’

Dave Raffo

Quantum execs are using their results from last quarter as evidence that the vendor can survive – and perhaps thrive – without EMC as a data deduplication partner.

Quantum reported a strong increase in disk and data deduplication revenue last quarter, the first since EMC’s $2.1 billion acquisition of Quantum’s dedupe rival Data Domain.

Quantum’s $28.2 million in disk and software revenue represented a 47% increase from the previous quarter and a 36% increase from last year. That revenue includes Quantum’s StorNext clustered file system, but executives on the company’s earnings call Tuesday night said VTLs with dedupe made up a majority of the $28.2 million. They pointed to a “significant” increase in Quantum DXi disk sales, a “modest” increase in StorNext sales and a slight decline in license revenue from EMC.

Quantum also cited three customer deals of more than $1 million each for its DXi7500 in the quarter. The increased disk sales helped Quantum beat Wall Street expectations with $175 million in revenue – up 9% from the previous quarter while down 19% from last year. Quantum’s $11 million profit was its second straight quarter in the black after a string of losses.

The numbers show Quantum still has a long way to go – EMC reported Data Domain alone brought in $105 million last quarter. But CEO Rick Belluzzo said Quantum gained momentum, and he hopes to take advantage of “disruption” caused by EMC-Data Domain by scoring new OEM deals and adding channel partners to transform the company.

“This company has historically been mostly about tape,” Belluzzo told SearchDataBackup after the earnings report. “Now I suspect people are watching the deduplication segment more closely.”

Belluzzo said the split with EMC prompted Quantum to chase large enterprise deals rather than rely on EMC to land those deals and pay Quantum a licensing fee.

“The EMC change did clarify things,” he said. “We made an aggressive shift in our go-to-market focus as the EMC relationship went through a dramatic change. We always struggled with large accounts on whether we would compete with our partner there. But after EMC bought Data Domain, we said we’re going to play to win.”

On the earnings call, Quantum executives said they see opportunities to partner with other storage vendors looking to add deduplication. Afterward, Belluzzo said he’s still talking to new possible OEM partners.

“It’s too early to suggest the timing of a deal, but we expect to have another OEM partner,” he said. “We have a number of opportunities, although they could take different forms from the EMC deal.”

Belluzzo sees other possible openings for Quantum in the wake of the EMC-Data Domain deal. Quantum is already a Symantec OpenStorage (OST) partner, and Belluzzo says the vendor is looking to add resellers who feel left out in the wake of the EMC-Data Domain deal.

“There’s confusion in the reseller channel,” he said. “Some may have a relationship with EMC and not Data Domain, or the other way around, and there’s some insecurity around that. We get that feedback from a lot of people.”

Quantum is hoping its new DXi6500 midrange dedupe system can bolster sales through the channel.

“This was a critical quarter for Quantum given the economic downturn and changes in the deduplication landscape,” Belluzzo said. “Despite these factors, we were able to deliver strong results.”

Quantum’s stock price increased a whopping 21.5% today, up 35 cents to $1.98.


October 27, 2009  7:15 PM

Emulex leads its convergence strategy with Ethernet

Dave Raffo

Emulex used its analyst day today to officially roll out its OneConnect Universal Converged Network Adapters (UCNAs) and underscore its strategy of taking a 10-Gigabit Ethernet path to converged Fibre Channel and Ethernet networks.

Emulex said the OneConnect adapters it first talked about in February are available for partners to ship, and IBM has agreed to OEM its 10-GigE NIC and 16 Gbps Fibre Channel HBAs with the Power Systems server platform.

Emulex is taking a different path than its main rival QLogic, which already has several partners selling its single-chip 8100 Series CNAs. Emulex is releasing its 10-GigE adapter first with TCP/IP and TCP Chimney support, and hopes to deploy a pay-as-you-go strategy where customers will later upgrade with iSCSI and FCoE connectivity. Emulex gets its 10-GigE silicon through an OEM deal with ServerEngines.

Emulex is counting on 10-GigE and Intel’s Nehalem servers driving convergence, with storage connectivity to follow.

“We’ve taken an Ethernet approach rather than a storage-centric approach,” Emulex VP of corporate marketing Shaun Walsh told StorageSoup. He says the Emulex approach lets customers pay for only 10-GigE first, instead of having to pay for FC connectivity they might not use yet.

“Every NIC card has the potential to be an FCoE card,” Emulex CEO Jim McCluney said during his analyst day presentation.

Emulex customer Lars Linden, SVP of data center services for Royal Bank of Scotland, spoke at the analyst day to express his eagerness for a converged network. Linden said convergence will eventually help him simplify management, reduce cables and increase utilization, adding that it can’t arrive soon enough for him. He says RBS spends about $500,000 a year on cabling.

“I have people on staff who do nothing but cabling all day long,” he said. “They’re very clever people and I would like to have them do higher value activity, but this is table stakes for running a data center. As soon as there is a commercially available set of capabilities and technologies supporting convergence, there will be a rapid adoption.”

Linden compared the eventual move to consolidated networks to virtualization, calling both “game changer” technologies.

Nobody expects 16-gig FC any time soon, despite Emulex’s touted design win. Emulex executives acknowledged the market moved to 8-gig FC much more slowly than it went from 2-gig to 4-gig FC, so there’s no hurry to push out 16-gig products.

“This is a future announcement,” McCluney said. “We’re not going to see any 16-gig revenue for quite some time.”

The suspicion here is that Emulex and IBM announced the 16-gig FC design win to end speculation that QLogic might replace Emulex FC HBAs on the Power Series after securing a CNA OEM win with IBM last week. Emulex is IBM’s exclusive partner for 4-gig and 8-gig HBAs on the Power Series.

“We believe investors may have questions regarding QLogic’s recent announcement that its FCoE CNAs had been qualified within IBM’s System p (Unix) server platforms given that Emulex has long been the sole-source provider of FC HBAs into these IBM server platforms,” Stifel Nicolaus Equity Research analyst Aaron Rakers wrote in a note to clients after the Emulex analyst day.


October 27, 2009  1:50 PM

Industry bloggers debate dedupe to tape

Beth Pariseau

It just wouldn’t be the storage industry if there weren’t technical debates popping up on a daily basis.

One that caught my eye today is an ongoing conversation between some storage bloggers about data deduplication to tape, and whether or not it’s a crazy idea. Or, more accurately, whether it’s “good crazy” or “bad crazy.”

Backup expert W. Curtis Preston got things started with a blog post written after he visited CommVault’s headquarters in Oceanport, N.J., where he discussed the concept of CommVault’s data-deduplication-to-tape feature added in Simpana 8. “Dedupe to tape is definitely crazy. But is it crazy good or crazy bad?” Preston wrote.

Everyone (including the CommVault folks) agrees that no one would want to do any significant portion of their restores from deduped tape.  But I also agree that if I typically do all my restores from within the last 30 days, and someone asks me for a 31 day-old file, it’s generally going to be the type of restore where the fact that it might take several minutes to complete is not going to be a huge deal.  (In the case that you did need to do a large restore from a deduped tape set, you could actually bring it back in to disk in its entirety before you initiate the restore.)

Now here’s the business case. Anyone who has done consulting in this business for a while has met the customer where everyone knows that 99% of the restores come from the last 30-60 days — and yet they keep their backups for 1-7 years.  What a waste of resources.  CommVault is saying, “Hey.  If you’re going to do that, at least dedupe the tapes.”  They showed me two business cases from two customers that doing this was saving them over $500K per year in their Iron Mountain bill.

Curtis made some declarative statements in that blog post, and when that happens you can expect someone in the storage blogosphere to write a post in opposition. EMC NetWorker data backup consultant Preston de Guise did the honors this time, with a response titled “Dedupe to tape is ‘crazy bad’ if the architecture is crazy.”

Yes, it’s undoubtedly the case that the CommVault approach will reduce the amount of data stored on tape, which will result in some cost savings. However, penny pinching in backup environments has a tendency to result in recovery impacts – often significant recovery impacts. For example, NetBackup gives “media savings” by not enforcing dependencies. Yes, this can result in saving money here and there on media, but can result in being unable to do complete filesystem recoveries approaching the end of a total retention period, which is plain dumb.

The CommVault approach while saving some money on tape will significantly expand recovery times (or require large cache areas and still take a lot of recovery time). Saving money is good. Wasting a little time during longer-term recoveries is likely to be perceived as being OK – until there’s a pressing need. Wasting a lot of time during longer-term recoveries is rarely going to be perceived as being OK.

An IT admin/blogger writing at Standalone Sysadmin picked up on de Guise’s post and had this to say:

My problem with this is tape failure. If one of the 50 individual backup tapes fails, it’s no problem. Sure, you lose that particular arrangement of the data, but it’s not that big of an issue. Unfortunate, sure, but not tragic. If you lose the 1 tape that contains the deduplicated data, though, then you immediately have a Bad Day(tm).

Essentially, you are betting on one tape not failing over the course of (in the argument of Mr Preston) 7+ years. And if something does happen in that 7 years, whether it’s degaussing, loss, theft, fire, water, or aliens, you don’t lose one backup set. You lose every backup that referenced that set of data.

So I would, if I could afford one, buy a deduplicated storage array in a heartbeat for my backup needs. But I would not trust a deduplicated archival system at all. The odds of loss are too great, and it’s not worth the savings. I’d rather cut the frequency of my backups than save money by making my archives co-dependent.

Of course, another user we talked to around the launch of Simpana 8 felt differently:

The global deduplication with Simpana 8 also extends to tape, making it the first product of its kind to allow for writes to physical tape libraries without requiring reinflation of deduplicated data. “That’s very appealing,” said Paul Spotts, system engineer for Geisinger Health, a network of hospitals and clinics in central Pennsylvania. “We added a VTL [virtual tape library] because we were running out of capacity in our physical tape libraries, but we lease the VTL, so we’re only allowed to grow so much per quarter.”

What say the rest of you?


October 27, 2009  10:37 AM

LiveDrive looks to hop across the pond with online data backup

Beth Pariseau

U.K.-based LiveDrive, a competitor to consumer online backup services like MozyHome and Carbonite, is getting U.S. distribution thanks to a new partnership with LifeBoat Distribution.

Online marketing manager Jamie Brown says LiveDrive has 300,000 unique accounts worldwide, 120,000 of them already located in the U.S. LiveDrive creates a network drive that shows up on users’ PCs. Any files sent to that L:\ drive will be backed up to LiveDrive’s cloud; data can be stored there for safekeeping or users can use LiveDrive to keep the L:\ drive synced and share data among multiple machines. Users can also access their data through LiveDrive’s Web portal, which also offers mini-applications that allow users to edit or play back photos and video.

The company has data center infrastructure in the United States through colocation, but currently all users access data through load balancers in the UK. Brown said there are plans to expand the US infrastructure organically, but the company won’t rush, since users currently aren’t experiencing performance issues with the way the infrastructure is set up.

Brown said LiveDrive hopes to have an office in the US within 12 months, and may also add a business-level service to compete with offerings like MozyPro and i365’s EVault Small Business Edition. It currently does not offer service-level agreements or geographic redundancy for consumer users.

Despite its claims about its client base, LiveDrive was unable to provide a public customer reference before the announcement this morning.

Enterprise Strategy Group analyst Lauren Whitehouse said this is becoming an increasingly tough space for new players to differentiate themselves in. “Mozy has more than a million customers, and for Symantec’s SwapDrive the number’s even greater,” she said. “LiveDrive has plenty of formidable competition.”

One factor that might hurt LiveDrive, at least in the beginning, is the fact that data must currently be accessed through the U.K. “Anyone who has discomfort sending data out to the cloud might have more discomfort knowing there’s a geographic distance there,” Whitehouse said. Even if performance isn’t bad, “there could be ramifications if there’s a dispute.”

As for the differentiation of being able to manipulate content within the cloud, Whitehouse said LiveDrive will also face competition from players like Memeo and Ricoh’s Quanp, to say nothing of photo-sharing sites like Flickr and Photobucket, both of which offer small photo-editing software suites with their services. “It’s somewhat of a Wild West situation right now with different companies trying to do a ‘land grab’, capturing customers and then building from there,” she said, including LiveDrive in that mix.


October 23, 2009  7:38 AM

10-22-2009 Storage Headlines

Beth Pariseau

Stories referenced:

(0:22) Cloud storage provider Zetta looks to replace production network-attached storage

(3:30) Quantum launches midrange data deduplication backup appliances

(5:02) IBM unveils new flagship storage system, DS8700

(7:14) Panasas delivers clustered NAS with SSD

(7:51) Riverbed updates RiOS; Steelhead WAFS device now supports Citrix and disaster recovery

(8:22) EMC cautiously optimistic about storage spending


October 21, 2009  8:58 PM

TIP: 2010 storage spending outlook slightly optimistic

Beth Pariseau

A report released this week by TheInfoPro says 2010 storage spending will probably be an improvement over 2009, but that’s not saying much.

The numbers released this week are the result of interviews with Fortune 1000 storage professionals, according to the TIP report.

Out of 252 respondents to the ongoing Wave 13 study, 27% said they expect spending to decrease from 2009 to 2010, 31% said their budgets would remain flat, and 42% said budgets would increase.

This is better than the earlier Wave 13 numbers from the beginning of the year in which 36% of 258 respondents expected a decrease between 2008 and 2009, 26% thought it would be flat and 38% expected an increase.

TIP also broke out the size of the increases or decreases expected next year. Nearly 20% of those who expect an increase expect it to be between 1% and 10%. Interestingly, the next largest category among those who expect an increase, close to 15%, expect an increase of 50% or more. However, of those who expect a decrease, more than 10% — the largest group — expect that decrease to be more than 25%.

Overall, 71% fall in the band ranging from a 10% increase to a 10% decrease. That’s better than a sharp decrease, but hardly the “pent-up demand” you hear about in some areas of the market (though maybe what’s being referred to is that 15% who expect a 50% increase, in which case I hope those few customers have someone screening calls and guarding the doors for them…). After spending plummeted between 2008 and 2009, flat-lining into 2010 isn’t necessarily good news; it’s just not more bad news.


October 20, 2009  7:07 PM

Email archiving vendor sues Gartner over Magic Quadrant

Beth Pariseau

Claiming that Gartner’s Magic Quadrant vendor-ranking reports constitute “disparaging, false/misleading, and unfair statements” about its email archiving product that have damaged its sales prospects, ZL Technologies Inc. said today that it filed suit May 29 in US District Court in San Jose, Calif., against the analyst firm. The suit seeks $132 million in damages to account for what ZL says are lost sales, as well as punitive damages.

Gartner responded with a motion to dismiss the lawsuit in July, to which ZL filed a counter-motion. A hearing on these later filings is scheduled for this Friday.

ZL alleged in its initial complaint that Gartner consistently ranks its product in the lower-left quadrant of its report, the “Niche” category, because ZL’s sales and marketing are not as strong as those of Symantec, whose Enterprise Vault is consistently ranked in the top “Leaders” category. ZL’s contention is that this constitutes an unfair and defamatory means of evaluating and recommending products and has caused damage to its business. “Gartner continues to harm ZL and help entrench vastly inferior products in the American economy whose principal virtue, according to Gartner, is good sales and marketing,” the complaint alleges.

What I found surprising about this initial complaint was the arguments ZL provided about just how much some end users rely on the Magic Quadrant report for purchasing decisions. Some examples:

Purchasers of the ZL Products have consistently and uniformly raised objections to even consider purchasing the ZL Products because of the Defamatory Statements. Even Oracle Corporation, one of the largest software vendors in the world, which resells the ZL Products, complains that it gets “Gartnered” when pursuing prospective customers for the ZL Products, i.e., that a prospect would not even consider looking at the ZL Products because of the low Gartner rankings. The power of a positive ranking in Gartner is immense because it is often the case that large purchases of technology are based exclusively on the MQ Reports.

For instance, the Office of the Inspector General, Department of Veterans Affairs (VA) recently conducted an investigation into the use of the Gartner’s MQ reports in connection with the VA’s $16,000,0000 purchase of certain leases and services from Dell. The Office of Inspector General reported that the VA made this large purchase based solely on the leadership rankings in the relevant Gartner MQ report.

[...]

In March 2009, ZL entered into contract to provide the email archive solution for one of the largest and most influential companies in the Silicon Valley. What makes this customer win especially relevant to this action is that: (a) the customer did not initially invite ZL to compete because of the Defamatory Statements, and (b) ZL won the contract only after beating out Symantec in an exhaustive, side-by-side “proof of concept” evaluation. Such a large customer win demonstrates that, but for the Defamatory Statements, ZL would have made many more sales than it has.

The complaint goes on to cite 10 more examples of potential sales in which ZL claims it was not invited to participate in a customer’s evaluation process, or pre-sales discussions were discontinued on the basis of the Magic Quadrant.

Gartner’s response is that the Magic Quadrant report amounts to a First-Amendment-protected statement of opinion, and that its rankings do not constitute a “false or misleading statement of fact” but rather a subjective conclusion.

ZL’s response in an opposition statement to the motion to dismiss argues, “Gartner tells the public that its research is ‘objective, defensible and credible’—it cannot now be allowed to escape the consequences of its misconduct by claiming the exact opposite, that its statements cannot be taken as anything more than its subjective opinion based on pure speculation and conjecture.”

ZL further argues that “a defendant may be held liable for statements of opinion that imply the existence of undisclosed facts. Gartner expressly stated that its statements had a factual basis, and intended that they be understood as being derived from a fact-based analysis, and can therefore be held liable even if the statements themselves are couched as opinion.”

“Try as it might, ZL cannot create a dispute where there is none,” Gartner countered further in a reply to ZL’s opposition to the motion to dismiss.

ZL alleges at great length in its Complaint (and recapitulates in its Opposition) that it has a strong product and satisfied customers. The Magic Quadrant reports do not say otherwise; the real point of contention here is not the quality of ZL’s product, but instead the subjective analytical model Gartner used to assess ZL’s market position and prospects. ZL does not contest Gartner’s basic assessments of ZL—that it has a good product but needs to expand its sales and marketing—but ZL challenges its placement on the Magic Quadrant Report because Gartner uses a “misguided analytical model” that gives “undue weight to sales and marketing.” Complaint ¶ 10. As the law makes clear, such analysis constitutes non-actionable opinion. None of ZL’s arguments dispute this bedrock principle. Instead, ZL focuses on a straw-man defamatory statement (that ZL’s product is “inferior”) that never appeared in—and is contradicted by—the plain text of the Magic Quadrant reports.

Personally, though I’m not a lawyer or legal expert, I think it’s unlikely that ZL can win its case. In one recent case, in which securities rating agencies’ opinions relating to the subprime mortgage crisis were found not to be protected by the First Amendment, the circumstances were different: one judge found that the plaintiffs had sufficiently shown the ratings agencies did not sincerely believe their own statements had a basis in fact. The lawsuit between ZL and Gartner, on the other hand, seems to come down to the weight given to sales and marketing strength over technical product specifications, rather than to Gartner’s sincerity.

It would be easy to point out here that potential customers could also perform more of their own internal testing of products and not weigh the Gartner quadrants so heavily in their purchasing process, but it’s unclear whether that’s realistic in all cases. It also seems to be a matter of subjective opinion how important size of vendor and strength of sales are in the evaluative process of purchasing technical products. In a perfect world, maybe product evaluation would be a true meritocracy, but who among us hasn’t heard the old chestnut that “nobody ever got fired for buying IBM?”

I’m very interested in the peanut gallery’s response to this. Does ZL have a point about the weight being given to a subjective report in technical purchasing decisions? Or is this a case of impugning an evaluative process because of a disliked outcome?


October 20, 2009  3:35 PM

QLogic ‘swipes’ another FCoE win

Dave Raffo

Another piece of the Fibre Channel over Ethernet (FCoE) puzzle was put in place today when IBM said it will ship QLogic’s 8142 converged network adapters (CNAs) inside the Power Systems server platform for native FCoE connectivity.

IBM’s p Series servers run Unix and Linux operating systems. QLogic reps are crowing over the design win because it comes in its biggest rival’s backyard. IBM ships Emulex Fibre Channel HBAs exclusively with its p series, but has gone to its rival for FCoE adapters. Satish Lakshmanan, QLogic’s director of product marketing for host solutions, says QLogic’s is the only CNA that will ship with the IBM p series.

The 8142 is part of QLogic’s 8100 Series single-chip CNA platform that handles 10-Gigabit Ethernet traffic and has an integrated FCoE offload engine.

“This gives them a foot in the door in the p series,” Enterprise Strategy Group analyst Bob Laliberte said of QLogic.

Laliberte says the long-time FC HBA rivals have taken different paths in the early days of converged networking, with QLogic racking up FCoE design wins while Emulex picks up 10-Gigabit Ethernet wins with its universal converged network adapters (UCNAs).

“QLogic was first to market with a single-chip architecture for FCoE that’s dramatically lower in power consumption, footprint and heat,” Laliberte said. “Being first out of the gate gives them a chance to get design wins in FCoE. Emulex is going in a different direction. You see Emulex getting wins for 10-gigE and down the road partners have an option to turn on FCoE.”

“This isn’t the only customer we’ve swiped away,” Lakshmanan said. “There’s more that will be coming.”

NetApp said in August that it would rebrand QLogic’s 8152 CNA as a built-in adapter for its FAS storage arrays. NetApp also qualified Brocade’s 1020 CNA but not Emulex’s yet.

IBM also sells QLogic’s 8100 CNAs on its System x and BladeCenter servers, and EMC qualified QLogic’s CNA on its Symmetrix, Clariion and Celerra NS storage platforms.

These are still early days for FCoE and converged adapters, though. Widespread adoption of FCoE on servers isn’t expected before 2011, and it will likely take years after that for it to show up in any volume on the storage side.

