Storage Soup

Feb 1 2008   9:54AM GMT

NetApp pulls a fast one on EMC with SPC

Beth Pariseau

I don’t refer to many things related to data storage as humorous, but I have to admit this is a hoot.

Everybody knows by now that EMC hasn’t submitted its products to the Storage Performance Council (SPC) for performance benchmarking, saying it’s a rigged system.

So some in the storage industry feared that hell had frozen over when they saw an SPC benchmark published for Clariion this week (scroll down, it’s in the table). Actually, two SPC-1 benchmarks have been published for the Clariion CX-3 model 40, one with and one without SnapView enabled.

One little twist, however: in the “test sponsor” column next to EMC’s products is the name “Network Appliance Inc.”

Now that. is. hilarious.

Shockingly, the NetApp-submitted benchmark numbers show the Clariion CX-3 40 with lower performance than that of NetApp’s FAS3040. Or, as a NetApp press release put it:

In both cases, the NetApp FAS3040 outshined the EMC CLARiiON CX3-40, delivering 30,985.90 SPC-1 IOPs versus 24,997.48 SPC-1 IOPs (baseline result) and a robust 29,958.60 SPC-1 IOPs versus just 8,997.17 SPC-1 IOPs (baseline result with snapshots enabled). These results further validate NetApp as the high-performance leader for real-world data center deployments featuring value-add data management and data protection functionality. 
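
For a sense of scale, here's a quick back-of-the-envelope check on those quoted figures (a rough sketch in Python; the IOPS values come straight from the press release above, and the ratio arithmetic is mine):

    # SPC-1 IOPS figures as quoted in NetApp's press release
    baseline = {"FAS3040": 30985.90, "CX3-40": 24997.48}
    snapshots_on = {"FAS3040": 29958.60, "CX3-40": 8997.17}

    for label, result in (("baseline", baseline), ("snapshots enabled", snapshots_on)):
        ratio = result["FAS3040"] / result["CX3-40"]
        print(f"{label}: FAS3040 at {ratio:.2f}x the CX3-40 ({(ratio - 1) * 100:.0f}% higher)")

    # Prints roughly 1.24x (24% higher) at baseline and about 3.33x (233% higher)
    # with snapshots enabled, per the numbers as quoted.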

While I agree NetApp’s move is somewhat ridiculous, if there were a mom refereeing between these squabbling siblings of the storage market, NetApp could accurately say, “But he started it!”

In fact, not only did EMC start it, but it did this exact thing first. This bickering goes back to the hoary days of November 2006. NetApp released the 3000 series and published performance specs that showed its new array performing far better than EMC’s newest Clariions. Although performance testing is generally against its beliefs, EMC couldn’t let that stand, so it ran its own tests on NetApp’s equipment. EMC’s internal tests showed that NetApp’s filers initially perform better than the Clariion, but that as NetApp systems fill up, the WAFL file system causes fragmentation that slows everything down. So EMC conceded NetApp’s original results, but contended they weren’t reflective of how performance on the NetApp system would change over time. I waded into this whole mess back when it happened; if you want to read what analysts had to say, it’s all here.

“Many companies have access to other vendors’ equipment–competitive analysis is nothing new,” argued SPC administrator Walter Baker. “NetApp’s not the only EMC competitor to have run competitive analysis.”

“But they’re the only competitor whose analysis you’re endorsing,” I replied. Baker insisted it’s not an endorsement–that publication on SPC’s Web site among all its other specs merely serves as notification that NetApp’s results have been submitted for approval. He also pointed out that unlike, say, vendor-published white papers about another vendor’s product, there’s a redress process for EMC in this case.

There’s a 60-day review period before the result is officially accepted; until then it’s in “submitted for review” status. In that period, any member company (or, in this case, EMC) can challenge it on the grounds that the testing was not compliant with the SPC-1 spec or did not represent the performance the Clariion should have attained.

Baker said EMC has not challenged yet. “Absolutely not–and they have been notified, because I spoke with them myself,” he said. He added, “As the auditor, I feel the result produced by NetApp is representative.” Pressed further, Baker said his basis for that conclusion was “talking to people who are familiar with EMC equipment.”

“I understand what you’re saying,” he admitted. “At first blush it does seem to be a conflict of interest–but it really doesn’t serve NetApp’s purpose if they were to understate or undermine the performance of the EMC equipment, because it would bring about an immediate response from EMC.”

EMC hasn’t yet responded to my e-mail about this, but something tells me they’ll have something to say before the review period is up. And what about this really serves NetApp’s purposes, anyway? Have they done anything with this other than cast aspersions on the very spec at the core of this latest volley against EMC? If there’s anything to be learned from this, from my point of view it’s to add an extra shake of salt when referring to SPC benchmarks.

And seriously, I would love to see the user considering a Clariion against an FAS3040 for whom this is the tipping point in one direction or another–I would love to see the user on the verge of signing on the dotted line for a Clariion suddenly saying, “But wait! NetApp’s performance testing shows this array doesn’t perform as well as the FAS3040!” 

It’s kind of like when McDonald’s tells you its fries taste better than Burger King’s, when Coke tells you more compensated blindfolded taste testers picked its soda over Pepsi’s in a carefully controlled, totally off-the-cuff random taste test, or when a Red Sox fan walks up to you wearing a T-shirt that says “YANKEES SUCK!” All it really, reliably tells you is what one company thinks of its competitor. And we kind of don’t need a press release about that, especially not when it comes to NetApp and EMC.

11 Comments on this Post

  • Beth Pariseau
    Hi Beth -- love the post! I, too, think it's a hoot. One thing I'm not clear about, though. Right now, EMC is not a member of SPC for a variety of reasons. Does that mean we have the right to challenge? Or does EMC have to pay to join SPC, implicitly endorsing them in the process, in order to understand what was done, and offer our thoughts? That's an interesting dynamic, isn't it? Maybe Walter knows the answer to that one. And, finally, does anyone really care about all of this, other than for its unquestionable entertainment value? Maybe you (or your readers) know the answer to that one!
  • Beth Pariseau
    Well said and well overdue! Thank you Beth! Storage vendors need to come to grips with the commodity label - the real storage differentiation and advancements are coming from the small independents. Hitachi and EMC arguing over SSD - ppppfffftttt - please - won't it be great in seven years when it is affordable for the remaining 99.99%. The real bake-off we need is ease of management - but oooops the small independents would take that crown as well.
  • Beth Pariseau
    And seriously, I would love to see the user considering a Clariion against an FAS3040 for whom this is the tipping point in one direction or another–I would love to see the user on the verge of signing on the dotted line for a Clariion suddenly saying, “But wait! NetApp’s performance testing shows this array doesn’t perform as well as the FAS3040!” ...
    Doesn't work that way. Part of the RFPs/RFQs I've done in the past is to demand SPC-1 numbers. Does that totally eliminate a vendor? No. It's part of the decision matrix. Since EMC hasn't had numbers in the past, they get a big fat blank spot on the matrix. Now they have a number to go in there! The SPC benchmarks have been out for quite a while and a number of storage professionals use them. Take the time to read the full disclosures, read the specifications: http://www.storageperformance.org/specs/ You might learn something.
  • Beth Pariseau
    so rob, in that RFP process, a competitor’s numbers gleaned from a competitor’s testing environment, when that competitor has every reason to make those numbers as low as possible, would really be considered good comparative information? this is something that would fly in most environments? i agree that performance numbers are important, but in this case doesn’t the sourcing make the number a little suspect? if not, why bother to include SPC numbers at all? why not just rely on competitive white papers? think about the slippery slope aspects of this, too. turnabout’s fair play, right, so HDS could go and take a NetApp filer and run SPC benchmarks against the stuff it OEMs from BlueArc, and then have those numbers be submitted to SPC as fact? if competitors are allowed to supply what are supposed to be objective measurements, the SPC stats that go into RFPs and RFQs, as you mention, become less meaningful. that doesn’t bother you?
  • Beth Pariseau
    Beth, if one of NetApp's competitors did that, NetApp would have an opportunity, as EMC does now, to review the tests and contest them. As has been said, what is to be gained by fixing the tests, if it can be easily shown that they were fixed? It would be dishonesty on a very public and high profile stage. It's not like cooking the numbers for an internal whitepaper that you share with customers. The process here is a bit more transparent.
  • Beth Pariseau
    Even if the test results can be "tuned," then possibly real-world systems can be tuned to get similar results.
  • Beth Pariseau
    "so rob, in that RFP process, a competitors’ numbers gleaned from a competitors’ testing environment, when that competitor has every reason to make those numbers as low as possible, would really be considered good comparitive information?" Not necessarily. But if that is all there is and it satisfies an RFP process - so be it. One would think that EMC has the incentive to do better SPC numbers, but I doubt they do it. But it does seem rather strange that you see a lot of larger (volume, revenue) vendors posting numbers (IBM,HP,NetApp). "this is something that would fly in most environments? i agree that performance numbers are important, but in this case doesn’t the sourcing make the number a little suspect? if not, why bother to include SPC numbers at all? why not just rely on competitive white papers?" In my earlier post, I suggested you read what SPC-1 and 2 test. Educating yourself in what the tests are might help you to validate them as good tests. "think about the slippery slope aspects of this, too. turnabout’s fair play, right, so HDS could go and take a NetApp filer and run SPC benchmarks against the stuff it OEMs from BlueArc, and then have those numbers be submitted to SPC as fact?" Absolutely. And then BlueArc could do the right thing, run SPC and post higher numbers. HDS would then be forced to pull their numbers and look quite asinine in the process. That's why NetApp is taking somewhat of a risk here (minor) in that they risk EMC posting their own numbers. But even if EMC does, NetApp will crow they forced it and are confident their numbers (FS) will be higher anyhow. "if competitors are allowed to supply what are supposed to be objective measurements, the SPC stats that go into RFPs and RFQs, as you mention, become less meaningful. that doesn’t bother you?" Heck no. It is a self-healing process and the major reason you don't seeing this happening more often (it is a first) is: 1) It isn't cheap (purchase the equipment, dedicate the engineers, etc.) 2) Most of the major players are doing it, so it isn't exactly a target rich environment for gaming a competitor. 3) The vendors themselves knowing their own equipment would run their own SPC tests - post higher numbers and make the competition look silly in posting lower numbers. and as mentioned above, the NetApp test run sort of pressures EMC to run the benches and show how poorly NetApp did with EMC kit.
  • Beth Pariseau
    //In my earlier post, I suggested you read what SPC-1 and 2 test. Educating yourself in what the tests are might help you to validate them as good tests.//
    i think i understand what the tests are. the issue isn't whether the nuts and bolts of the tests themselves are good in concept, the issue here is the context. sometimes just the appearance of a conflict of interest is enough to invalidate some statements, or at least open them to scrutiny. i also find something a little wrong with basically saying, "if people who won't join our club want to refute the negative things our club says about them, they can just join us already." that seems pretty coercive to me, and not something that adds to SPC's credibility, either.
    //But even if EMC does, NetApp will crow they forced it and are confident their numbers (FS) will be higher anyhow.//
    exactly, so why would EMC play along with that? it's an unwinnable situation for EMC, as you yourself point out. so how can we call that fair? with all the reasons EMC has not to play along with this, it's clearly not a straightforward or automatically 'self-healing' process, and i don't think EMC's silence on the issue can necessarily be interpreted as acquiescence.
  • Beth Pariseau
    //But even if EMC does, NetApp will crow they forced it and are confident their numbers (FS) will be higher anyhow.//
    "exactly, so why would EMC play along with that? it’s an unwinnable situation for EMC, as you yourself point out. so how can we call that fair?"
    That NetApp posted EMC numbers? It isn't a matter of fairness but of participation, of getting numbers out there for all to see. The fact that NetApp posts EMC numbers isn't a unique idea, you know! Maybe they borrowed it from what goes on occasionally with SPEC benches. SPEC CPU benchmarks are well regarded and help many decide when making a purchasing decision. Here is an example of Intel doing AMD a favor and benchmarking their CPU: http://www.spec.org/cpu2006/results/res2007q3/cpu2006-20070723-01527.html Why would Intel do that? To show they have better-performing kit, so folks can make reasonable assumptions about purchasing based on performance. Maybe AMD turned around and posted better numbers (I haven't looked, but I doubt it... they do use Intel compilers and I'm sure Intel tuned it up for their AMD run ;-) ). So is SPC a legitimate test? (That's all that's left to debate, in my opinion.) The fact that a number of major players are participating and the tests are well documented lends credence to the tests.
  • Dan
    Nuff said ... HP comes clean in latest wave of storage benchmarks: http://www.theregister.co.uk/2002/12/11/hp_comes_clean_in_latest/ While it's an old article, the point about altering for a benchmark versus "real world applicability" illustrates the problems with SPC.
  • Beth Pariseau
    Nuff said …
    > HP comes clean in latest wave of storage benchmarks
    > http://www.theregister.co.uk/2002/12/11/hp_comes_clean_in_latest/
    > While it's an old article, the point about altering for a benchmark versus “real world applicability” illustrates the problems with SPC.
    What "problems"? Name one (please provide details, not hand-waving). So 6 years ago HP gets caught doing a benchmark "special," and I guess your point is the benchmark can't be legit because HP got caught? That's rather lame; Sun pulled a fast one with SPEC and cooked the "art" benchmark, and SPEC CPU had to toss it like they tossed "matrix" years before. The author of that article goes on to point out that LSI was guilty of the same stunt: "LSI switched off write cache mirroring when it tested its FastT arrays earlier this year." IBM (FastT) uses write cache on their more recent FastT tests. Perhaps HP was wise enough to realize that their stunt certainly didn't impress anybody and came clean. I'm sure more than a few get harebrained ideas about gaming a benchmark. Either that portion of the benchmark is tossed (art, matrix in SPEC CPU), a rule is changed to prevent abuse, the offender is shamed into discontinuing the practice, or the benchmark is replaced altogether (i.e. TPC-E is set to replace the much too simplistic TPC-C for OLTP testing). Benchmarks are important and will need constant revision as systems get faster and engineers game them.
