Storage Soup


July 22, 2008  1:23 PM

Amazon’s S3 crashes again; Web 2.0 goes Boom!

Beth Pariseau

The biggest difference between the last time S3 crashed and this time, in my observation, is that there was a much, much bigger chain reaction this time around. Last time, I knew of only a few companies using S3, like photo hosting site SmugMug, and startups that offer online backup services using their own interfaces on the front-end and Amazon’s hardware infrastructure on the back-end.

This time, not only were those types of Web 2.0 companies affected, but much bigger fish also felt the sting: no less than Web 2.0 microblogging phenom Twitter and some iPhone applications crashed along with S3.

The last Amazon outage was attributed to “growing pains” as the service gained popularity. I’d imagine adding popular apps like Twitter and the iPhone constituted another wave of painful growth. This is a new medium, and users of very new storage media accept some level of risk. But two major outages in six months is obviously raising some questions.

“Skype has crashed and stopped responding, Twitter, Tumblr and other major websites are barely working, most aren’t displaying images, widgets or static material that was outsourced to Amazon S3 services,” reported blogger LinkFog as the outage occurred. “It’s kinda funny how this goes against the very nature of the web, in which networks are interconnected in several ways to ensure that a major breakdown won’t happen.”

Others, like a blogger at Web Worker Daily, were not happy with Amazon’s SLAs:

Amazon does offer an SLA for the S3 service, guaranteeing 99.9% uptime or part of your money back. With .1% of a month being around 45 minutes, that means they owe people money. The requirements for claiming a refund, though, are onerous enough that no one except large users will bother (hey, Amazon, how about an automatic refund when you know your servers are down?).

Recent reports suggest that this is actually what will happen.
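
For what it’s worth, the quoted math roughly checks out. Here’s a quick back-of-the-envelope sanity check (my own arithmetic, not Amazon’s or anyone else’s):

```python
# Back-of-the-envelope check on the "0.1% of a month" figure quoted above.
minutes_per_month = 30 * 24 * 60                 # 43,200 minutes in a 30-day month
allowed_downtime = minutes_per_month * 0.001     # the 0.1% a 99.9% SLA leaves for outages
print(round(allowed_downtime), "minutes per month")   # ~43 minutes, close to the ~45 cited
```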

Clearly it’s not a major disaster for people not to be able to Twitter for a few hours. But when it comes to things like the backup services attached to S3, it might be time for people to rethink whether one cloud back-end is the same as another. Amazon’s appeal is that it’s cheap and relatively unrestricted for Web developers–but I hope the backup companies basing their hardware infrastructure on S3 at least inform their end users what the back end is, so they can make an informed decision about service provider reliability.

July 21, 2008  1:24 PM

EMC gives LifeLine consumer storage product a facelift

Beth Pariseau

In the course of observing the festivities on NetApp and EMC blogs, I came across a sneaky little blog post/announcement from EMC about its LifeLine consumer storage product. According to The Storage Anarchist (aka Barry Burke), his post will be the only official announcement from EMC about LifeLine 1.1.

I’ve given up on trying to understand software release-numbering, btw. Sometimes it’s a “dot-oh” release for new OS support. Other times it’s a “dot one” for, say, Linux, Mac and NFS support, Active Directory integration, RAID 0, an embedded search engine, and oh, yeah, drive spin-down in a consumer NAS box.

But from Burke’s perspective as a guinea pig user of the product, maybe this release isn’t so significant. At the very least, it hasn’t checked off all the items on his wish list, including integration with Mozy similar to what was announced for Iomega hard drives last week, better TiVo integration and dedupe.


July 21, 2008  12:25 PM

NetApp bug-blog flap hits Jerry Springer proportions

Beth Pariseau


In case you missed it, there’s been an entertaining exchange going on between EMC, NetApp and even IBM bloggers over a bug in NetApp’s SnapLock software.

It all started when EMC’er Scott Waterhouse of The Backup Blog got his hands on a notification from NetApp to customers urging an upgrade to OnTap 7.2.5 to resolve a vulnerability in SnapLock’s WORM functionality. Waterhouse didn’t go into much detail about exactly what the bug was and quoted selectively from the customer-notification document:

“…versions of Data ONTAP prior to 7.2.5 with SLC have been found to have vulnerabilities that could be exploited to circumvent the WORM retention capability.” They go on to say: “NetApp cannot stand by the SnapLock user agreement unless the upgrade is performed.”

Now this is a really big deal. This is not a trivial little upgrade to OnTAP. This is a big one.

Predictably, he then segued into a sales pitch–”Maybe it is time to explore an alternative?”–without giving much more information about what the problem actually was, or why exactly the upgrade between dot releases of OnTap isn’t trivial.

Waterhouse’s take was then picked up by Mark Twomey, aka StorageZilla, who led with the headline, “NetApp SnapLock Badly Broken.” Twomey also emphasized the fear angle: “Right now those who aren’t running 7.2.5 or above are not compliant and it turns out they never were,” without divulging further details about the problem.

This is about where I came in. I tried pinging Twomey to no avail; I also tried hitting up some of the folks on Toasters, the NetApp users forum, to see what they’d heard. I planned to ping NetApp as well, but if the bug was as bad as the EMC’ers were making it out to be, I didn’t expect them to be willing to talk about it.

They surprised me by contacting me before I could get to them, and last Friday chief technical architect Val Bercovici gave me NetApp’s side of the story, telling me, “We expanded our testing on SnapLock to a third class of protection from tampering with the WORM feature.”

The first two classes, which had already been tested, concerned protection against malicious end-user removal of data, as well as protection from malicious administrative actions. The third and most recent class tested against was a case “where knowledge of the source code combined with some other products that are out there could be used for data deletion” inside SnapLock. Bercovici also didn’t want to give all the gory details, saying the vulnerability had not been exploited in the field, and NetApp wanted to keep it that way.

“It’s a highly unusual case, and in any event would be an audited deletion from the system,” Bercovici said. “It’s a level of testing EMC has never done” with Centera, he added.

Not quite “not compliant and never were”. NetApp bloggers were all over the EMC bloggers last week about the tone of their blog posts. It had begun to seem like the EMC-NetApp rivalry had faded a bit, as both companies go up against new competitors and find themselves with bigger fish to fry. But this was just like old times.

Things have gotten so heated so fast that blogger Tony Pearson from NetApp big brother IBM felt the need to tell EMC to pick on someone their own size:

I was going to comment on the ridiculous posts by fellow bloggers from EMC about SnapLock compliance feature on the NetApp, but my buddies at NetApp had already done this for me, saving me the trouble. . . .The hysterical nature of writing from EMC, and the calm responses from NetApp, speak volumes about the cultures of both companies.

But wait, there’s more. Remember how I mentioned heading over to see what was being discussed about this on Toasters? While there I ran across a thread that mentioned OnTap 7.2.5, and contained another message from NetApp to its customers:

“Please be aware that we are investigating a couple of issues with quotas in Data ONTAP 7.2.5. As a precautionary measure, we have removed Data ONTAP 7.2.5 from the NOW site as we investigate the issues. We will provide an update as soon as more information is available.”

According to Bercovici, OnTap 7.2.5, issued as a bugfix for SnapLock, had its *own* bug, this time one that caused a quota-related panic in some filers. In other words, the bugfix NetApp issued for what it said was an esoteric issue spawned another bug, and this time it caused some filers to ‘blue-screen’, according to the Windows analogy Bercovici used to describe the problem to me.

Version 7.2.5.1, which purportedly fixes both bugs, has since come out. As far as I’m concerned, the whole SnapLock bug was a tempest in a teapot, but NetApp still came out of this whole thing with egg on its face, as 7.2.5 introduced a severe and immediate problem in what seems like a well-intentioned effort to protect customers from an obscure corner-case hack. Also, they wound up with multiple EMC bloggers doing the Web equivalent of throwing chairs at them, a la Springer. As they say, no good deed goes unpunished. . . .


July 18, 2008  10:27 AM

Sun reveals SSD partner, claims better durability

Beth Pariseau

Sun yesterday identified Samsung as its SSD supplier, solving at least part of the mystery around the source for its Flash drives. But Sun’s systems group senior director Graham Lovell says that Samsung will be one of several partners, not a sole source. The other partners remain unnamed.

Sun and Samsung also claim that the Flash devices they’ve collaborated on will have five times the durability of other single-level cell (SLC) enterprise Flash drives, such as the ones manufactured by STEC for EMC. Like other SSDs, the NAND devices still have a finite number of write-erase cycles, a limitation even single-level cells don’t escape. Lovell said that wear-leveling algorithms will be built into the Flash memory controller on the Samsung SSDs. A certain proportion of memory cells will also be kept in reserve by the drives for wear leveling.
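
Wear leveling, in its simplest form, just means spreading erase cycles evenly across flash blocks instead of hammering the same ones over and over, with spare blocks held back to swap in as others wear out. A minimal sketch of the idea, purely illustrative since neither Sun nor Samsung has disclosed its actual controller logic:

```python
# Toy wear-leveling allocator: write to the least-worn block and hold a few
# blocks in reserve as spares. Purely illustrative; real SSD controllers use
# far more sophisticated (and proprietary) schemes than this.
class ToyWearLeveler:
    def __init__(self, num_blocks, reserve):
        self.erase_counts = [0] * num_blocks   # erase cycles seen by each flash block
        self.reserve = reserve                 # blocks held back as spares

    def pick_block_for_write(self):
        # Choose the least-worn block among those not held in reserve.
        active = len(self.erase_counts) - self.reserve
        block = min(range(active), key=lambda i: self.erase_counts[i])
        self.erase_counts[block] += 1
        return block

leveler = ToyWearLeveler(num_blocks=1024, reserve=64)
for _ in range(5):
    print(leveler.pick_block_for_write())   # wear spreads across blocks 0..4
```

The reserve pool in the sketch plays the role of the “proportion of memory cells kept in reserve” Lovell describes: as active blocks wear out, spares take their place.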

According to Samsung, these developments mean that the lifespan of its SSDs will be up to five times longer than that of other SLC Flash devices. Unfortunately, Lovell was not able to provide details about what testing Samsung has done to substantiate that claim. He also says that Sun won’t “preannounce” the availability of the drives from Samsung. Samsung’s PR rep told me he’d have to forward those questions to Samsung headquarters in Korea. . .which means the drives might be available before I hear the answers.

There’s been a lot of talk about SSDs lately. It seems this past month has started an inevitable “trough of disillusionment” with the technology, as excitement over its advantages has been balanced by industry observers pointing out the disadvantages of first-generation products (such as their durability).

IDC’s Jeff Janukowicz doesn’t see a problem with that. “This is the kind of improvement and innovations IDC predicts you’ll be seeing as this technology comes of age,” he said. Those predictions were written up in a recent report based on lab testing of SSD performance in PCs.

Storage managers for the most part are holding back on deploying SSDs, which to me is understandable. If I’d rushed out and bought a drive from EMC only to hear that another vendor had a much-improved model, I might be kicking myself. And I’d certainly be wondering what was coming next.


July 16, 2008  3:22 PM

NetApp: Plasmon’s Trojan Horse for Enterprise Data Centers

Beth Pariseau

Since he sold Softek to IBM and was appointed CEO of British-based optical storage vendor Plasmon last November, Steven Murphy has had a tough row to hoe. Plasmon is one of the few survivors–if not the only survivor–of the optical storage market, which historically has been stunted by usability and cost concerns.

Because of optical’s past, Murphy is pitching Plasmon as an “archiving solutions provider.” He says Plasmon led too much with its blue-laser optical media in the past. “It’s important, but what’s more important for IT managers is a conversation about managing their data requirements,” he said. “This is a change for Plasmon.”

Plasmon began its transition with software for its Archive Appliance that allows applications to access the optical drives through standard CIFS and NFS interfaces, rather than requiring applications to understand the optical media management going on under the covers. That extended to its Enterprise Active Archive software, which supports multiple media libraries as a grid and offers encryption-key-based data destruction for individual files.

Last month Plasmon introduced RAID disk into its branded systems for the first time through a new partnership with NetApp. The main goal of the resulting integration, called the NetArchive Appliance, was to give users a nearline single-instance NAS-based archive for rapid recovery of archive data, since optical media typically has at least a 10 second response time, according to Plasmon chief strategy officer Mike Koclanes. 

Plasmon has several irons in the fire when it comes to a turnaround strategy under Murphy, including licensing its media for resale by other channel partners, and riding the wave of interest in archiving brought on by amendments to the Federal Rules of Civil Procedure in December 2006. But it’s still going to take some doing to get tactically-oriented IT folk thinking on as long-term a strategic scale as Plasmon’s value proposition demands–users worried about putting out fires aren’t going to be moved by talk of reducing data migrations over decades.

This is where the NetApp partnership comes in. Plasmon has a big task in front of it, trying to bring a medium back to some of the same doorsteps where it has already been passed over before. Maybe if it’s disguised as NAS, goes the current strategy, it could help get a foot in the door.

Koclanes gave the example of one customer who said he was interested in Plasmon’s archiving, but wouldn’t get funding for a new capital equipment project. When the NetArchive product was discussed, it occurred to the customer that he did have built-in funding for more NAS space. “We’re trying to focus on the ways the archive can solve other problems, and ways for users not to have to try to get an entirely new platform put in,” he said.

Plasmon’s newly-minted direct sales force will also emphasize that single-instance archiving to an optical jukebox provides backup relief by removing stagnant files from primary storage systems, has inherent WORM capabilities, and can natively be used either as an on-site part of a library or an off-site data copy for DR, Koclanes said.

Still, anybody checking purchase orders carefully at such a company might notice some unusual costs–while Plasmon systems go for anywhere from $50,000 to $800,000 (“That’s not a price range,” one of my colleagues said when given these figures, “that’s the entire pricing spectrum”), the “sweet spot” according to Murphy is around $225,000. Each optical disk costs about $33–cheap in absolute terms, but not when compared with the cost-per-GB ratios available in today’s hard-drive-based systems.


July 15, 2008  3:35 PM

Sexism or lack of execution? Rumbles continue in Greene’s wake

Beth Pariseau

Perhaps her greatest accomplishment has been to not only survive, but thrive, in the macho male world of EMC. Joe Tucci is an intelligent, honorable human being – I know firsthand – but it will take a long time before sexism no longer exists in this industry. Diane Greene beat the odds and thrived in an industry stacked against her, in one of the more good ole boy male dominated companies in a male dominated business world. –Steve Duplessie, “Kudos Diane Greene“, July 8, 2008.

As he is wont to do, Steve Duplessie ruffled a few feathers with that comment.

But no amount of indignation from EMC’s side of this story has stopped the speculation about whether Greene’s gender, amid the male egos at EMC, played a role in her dismissal. The Register’s Ashlee Vance put it this way:

We suspect that Tucci became less enamored with Greene’s style as VMware’s fortunes rose. He would have very little leverage over the firebrand in Palo Alto. She was responsible for making him look good. She wanted too much control of this VMware gem. She caused too many headaches. People kept thinking maybe she should have the EMC CEO post. Ultimately, she had to go.

Lucky for Tucci, investors, as they are wont to do, set unrealistic expectations for VMware. The company had doubled revenue every year since its birth. Why expect that party to end just because VMware’s revenue had swelled past $1bn? Why think that Microsoft entering the market with a server virtualization product tied to Windows would harm VMware’s fortunes?

So, with VMware failing to meet these insane goals, Tucci found the opportunity to justify Greene’s dismissal as a business decision when it’s anything but.

Well, is it, or isn’t it, though? Where do you begin to separate communication style from effective leadership? If what Vance calls Greene’s tendency to be “short” extended to sales calls with potential customers, for example, I can see the argument that personal style impacts business performance. The exec she’s being replaced with was once third in command at Microsoft, bolstering EMC’s argument that he’s the better candidate for steering the company at this stage of its growth.

But unfortunately, EMC has itself to blame for public perception on this issue. Last fall, news of a lawsuit brought by two female former EMC sales employees surfaced in the Wall Street Journal. And it was ugly. It may have been limited to a single sales office and sales manager, as EMC insisted in the wake of the WSJ report, but the company still faced questions about how this kind of behavior had been tolerated at all.

More recently, further discrimination allegations have come out, this time from a male employee who alleges EMC retaliated against him for trying to act on behalf of female employees who felt they were discriminated against. (EMC says he downloaded confidential information before leaving the company and hasn’t given it back.)

There are two sides to every story, and you don’t have to look very hard for sexism in the IT industry (and many others). But there’s no denying that EMC has been accused–and it’s not just a single complaint. 

EMC blogger Storagezilla responded to Duplessie’s comments with “We’re not in the ’90s anymore, Steve. Change the tune…one wonders for how long the company should apologise for behaviour it has corrected and works to ensure is never repeated every day of every year?”

But these most recent allegations about discrimination weren’t filed in the ’90s. The first two women filed their suit, according to the court documents, in 2004, and the public story only came to light 10 months ago. The male employee who has complained about discrimination did so this April, according to the Boston Business Journal. In answer to ’zilla’s question, I think it’s going to take more than a matter of months before the company can stop apologizing for, and demonstrably correcting, the behavior, especially when that behavior has to do with something as serious as discrimination on the basis of race or gender.

Fair or not, given EMC’s background, when a female CEO gets canned from the wildly successful company she founded and helped steer to its “unsatisfactory” 49% growth, and her “inexperience” and “lack of execution” are cited as the reasons, the tongues are going to wag.


July 9, 2008  9:53 AM

EMC enters no-spin zone on VMware CEO’s departure

Dave Raffo

A day after the fact, it has become clear that VMware didn’t dump CEO Diane Greene because of personal differences between her and EMC executives. The final straw was a business dispute with EMC brass.

Sources close to EMC say the issue that cost Greene her job was that she wanted EMC to spin off VMware into an independent company, while EMC CEO Joe Tucci and his team want to keep the majority stake EMC holds. EMC spun out around 15% of VMware’s shares last August in an IPO that raised $1.1 billion.

“Diane wanted a spinout of VMware, and EMC said no we won’t spin out VMware at this time because it is our golden goose,” said a financial analyst who covers EMC.

A full spinoff of VMware has been an issue ever since the first IPO. Analysts have asked Tucci on EMC’s last two earnings calls if a spinoff was in the works for next year (a spinoff would be tax-free if EMC waits until 2009). Tucci gave almost the same answer both times: He said EMC had no plans to spin off VMware “right now” because both companies were performing well, and that creates shareholder value. He added both times that management and the board are focused on creating maximum shareholder value for both sets of shareholders.

Try telling that to investors who have watched VMware’s stock price drop from $125.25 to $40.19 in eight months while EMC shares fell 11.6% Tuesday to a 52-week low of $13.39. Now some analysts are wondering if investors will demand a spinoff.

EMC has shed little light on the situation since yesterday morning’s news release revealing that Greene would be replaced by former Microsoft and EMC exec Paul Maritz. Neither EMC nor VMware held conference calls to discuss the move, although Tucci and Maritz did speak to selected media outlets. Tucci told Reuters and the Wall Street Journal that Greene lacked the experience to run a company as large as VMware. Maritz told BusinessWeek he’s not underestimating VMware competitor Microsoft, nor is he awed by his old company.

A few of EMC’s bloggers have weighed in, wondering what all the fuss is about. EMC’s VP of Technology Alliances Chuck Hollis wrote that some people are reacting irrationally to Greene’s departure.

The real surprise for me is that it was such a surprise for so many people. If you think about it, successful companies often go through different phases, and it’s not unusual for different management skills and styles to be required during the journey.

True. But I know one company that has kept the same CEO for seven years while morphing from a storage systems company to a hardware/software/security/services/server virtualization/wannabe network management behemoth. So EMC actually makes the case against change for change’s sake.

In a blog post titled “VMware Greene’s contract not renewed,” EMC’s Mark “Zilla” Twomey points out that Greene’s contract expires in late July. That explains the timing of the move, but not the reason for it.


July 8, 2008  11:41 AM

VMware CEO Greene gets pink slip

Dave Raffo

Last week we ran a story on SearchStorage.com by Beth Pariseau about friction between VMware and storage vendors as more storage gets connected to virtual machines. Because the story focused on technology and not the internal workings of VMware, EMC was not among the storage vendors listed as banging heads with VMware.

But we found out today just how much friction there was between VMware and EMC, which still owns 86% of VMware after spinning out the rest in an IPO 11 months ago. The friction came to a head this morning when VMware’s board, chaired by EMC CEO Joe Tucci, replaced CEO Diane Greene with EMC executive Paul Maritz. Maritz joined EMC in February when EMC acquired his Pi Corp. and installed Maritz as head of its Cloud Division. Before he started Pi, Maritz spent 14 years with Microsoft and was on Microsoft’s executive committee.

EMC did not say Greene was fired, but its press release did not say she left on her own. Nor did it include any comment from Greene. On behalf of the board, Tucci thanked Greene “for her considerable contributions to VMware” and then predicted that VMware would increase its market share lead in the server virtualization market.

Execs from VMware and EMC have never had a warm relationship, but Greene and her team were allowed to run the company as they wanted as long as it kept growing revenue every year at astronomical numbers. Today’s release also mentioned that VMware was “modestly” adjusting its 2008 revenue guidance below the 50% year-over-year growth it forecast. But a slight miss is hardly enough reason to fire a CEO who has had nothing but success since EMC bought the company in early 2004. Perhaps it was an opening, though, and an excuse to make a move that the board wanted to make for awhile.

One financial analyst who follows both companies attributed the move to “a fundamental difference between VMware and EMC management” more than any failures on Greene’s part. There shouldn’t be much difference now — Greene was the lone VMware executive among VMware’s eight directors. Maritz becomes the sixth EMC rep on the board.

Maritz’s Microsoft background could be another reason for the move. Microsoft’s Hyper-V is now on the market as a competitor to VMware, which may be making some investors nervous. VMware’s stock price has also tumbled from $125.25 last October to $53.19 at today’s opening, but that was largely attributed to the overall market. Today’s news only accelerated the fall: shares fell more than 24% to $40.19 at the market’s close.

Forrester Research analyst Galen Schreck said the drop before today was likely due to market conditions more than increased competition. “I don’t think the entrance of Hyper-V has made much of an impact in this short period of time,” he said.

Going forward, Schreck said Maritz might want to take more of an industry leader role than Greene did. “Diane had been characterized as a humble person, not really flashy, not a big evangelist,” he said. “Paul could take on a role with more public presence, be more of a keynote-speaker type.”

EMC hasn’t yet replaced Maritz as the head of its Cloud Division but an EMC spokesman said he expects a replacement soon.


July 3, 2008  8:59 AM

ILM seems to be MIA, but why?

Tory Skyers

I don’t see the terms ILM or Data Lifecycle Management mentioned much anymore on the Interwebs. Odd that we have this many regulatory pressures, and the one thing that could actually save us some money, time and stress when dealing with those pressures hasn’t been seen in headlines for at least a year, maybe two.

Where did ILM go? Did it morph into something else? Based on all the coverage it was getting two years ago, you’d think we would have progressed to hardware-based products supporting ILM initiatives by now. Yet the storage hardware vendors, with the most noticeable exception being Compellent, still haven’t added ILM features to their hardware. Storage software vendors, like Veritas, IBM Tivoli, CommVault, et al., still offer ILM features for their software suites, but today it’s dedupe that has the spotlight. And I’m not so sure I see the reason why.

I understand the marketing behind dedupe: “Hard economic times are ahead, so save money and don’t buy as much disk.” But if you look at the sales figures from the leading storage vendors, they are all meeting their sales estimates, and in some cases exceeding those estimates by a good margin, so businesses apparently haven’t yet gotten into the whole “save money” or the “buy less” aspect of that marketing push.

Managing one’s data seems to me the better way to spend your money, if you know when to move it to cheap disk, to commodity tape, and on through to destruction. It would free up capacity on fast, expensive disk and reduce the effort needed to satisfy policy pressures. I distinctly remember eons ago sitting in a conference hall and listening to Curtis Preston for the first time, and this topic was the thrust of his talk: Manage your data, figure out where it should live and put it there.

This message holds true now more than ever. Just think, three or four years ago, 250 GB drives were the largest SATA drives certified for storage arrays. Now, with 750 GB to 1 TB in each slot, we have even more of a need to know when the data was created and when it needs to be archived or destroyed. With SSDs rapidly making their way into storage arrays, data management and subsequent movement becomes a crucial cost saving tool.
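
To make that concrete, here’s a toy sketch of the sort of age-based placement policy I mean. The tier names and thresholds are invented purely for illustration, not taken from any vendor’s product; real ones would come from your retention requirements:

```python
# Hypothetical age-based ILM policy: map a file's age to a storage tier,
# or flag it for destruction once retention expires. Tier names and
# thresholds are made up for illustration only.
from datetime import datetime, timedelta

POLICY = [
    (timedelta(days=90),   "fast FC/SSD disk"),
    (timedelta(days=365),  "cheap SATA disk"),
    (timedelta(days=2555), "tape archive"),     # roughly seven years
]

def place(created, now):
    age = now - created
    for max_age, tier in POLICY:
        if age <= max_age:
            return tier
    return "destroy (retention expired)"

now = datetime(2008, 7, 3)
print(place(datetime(2008, 6, 1), now))   # -> fast FC/SSD disk
print(place(datetime(2000, 1, 1), now))   # -> destroy (retention expired)
```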

The part about all this that baffles me the most is liability. You’d think that if you were going to be legally liable to either hold onto or destroy files or information, you’d probably want an automated, “people-resistant” system in place to handle all that. At another recent TechTarget event on DR, Jon Toigo talked about building a data map and knowing how valuable your data is in order to best protect the most valuable data. Sounds like a straightforward, common-sense approach, but as far as I know only one vendor is doing it in hardware, and most of the software vendors have gone quiet with their marketing behind it.

The term Information Lifecycle Management conjures up thoughts of managed data, orderly storage environments, documented processes, and responsible governance for me. All these things I’ve seen brought up in blogs (some of my own included) and articles, expressed as concerns for businesses large and small. So why has ILM gone underground?


July 2, 2008  10:10 AM

More server virtualization benchmark drama

Beth Pariseau

Last time on As the Benchmark Turns, we saw NetApp pull a fast one on EMC. This week’s episode brings us into the torrid realm of server virtualization and Hyper-V.

It began with a press release from QLogic and Microsoft lauding benchmark results showing “near-native throughput” on Hyper-V hosts attached to storage via Fibre Channel, to the tune of 200,000 IOPS.

Server virtualization analyst Chris Wolf of the Burton Group took the testing methodology to task on his blog:

The press release was careful to state the hypervisor and fibre channel HBA (QLogic 2500 Series 8Gb adapter), but failed to mention the back end storage configuration. I consider this to be an important omission. After some digging around, I was able to find the benchmark results here. If I was watching an Olympic event, this would be the moment where after thinking I witnessed an incredible athletic event, I learned that the athlete tested positive for steroids. Microsoft and QLogic didn’t take a fibre channel disk array and inject it with Stanzanol or rub it with “the clear,” but they did use solid state storage. The storage array used was a Texas Memory RamSan 325 FC storage array. The benchmark that resulted in nearly 200,000 IOPS, as you’ll see from the diagram, ran within 90% of native performance (180,000 IOPS). However, this benchmark used a completely unrealistic block size of 512 bytes (a block size of 8K or 16K would have been more realistic). The benchmark that resulted in close to native throughput (3% performance delta) yielded performance of 120,426 IOPS with an 8KB block size. No other virtualization vendors have published benchmarks using solid state storage, so the QLogic/Hyper-V benchmark, to me, really hasn’t proven anything.
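
A quick bit of arithmetic (mine, not anything from either benchmark document) shows why the block size matters so much: throughput is roughly IOPS times block size, so very small blocks can rack up an impressive IOPS number while moving comparatively little data.

```python
# Rough throughput implied by the quoted IOPS figures at each block size.
def throughput_mb_per_s(iops, block_bytes):
    return iops * block_bytes / 1_000_000

print(throughput_mb_per_s(200_000, 512))      # ~102 MB/s at the 512-byte block size
print(throughput_mb_per_s(120_426, 8_192))    # ~987 MB/s at the more realistic 8 KB size
```

In other words, the headline 200,000 IOPS figure actually represents far less data movement than the lower IOPS number measured at 8 KB.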

I talked with QLogic’s VP of corporate marketing, Frank Berry, about this yesterday. He said that Wolf had misinterpreted the intention of the testing, which he said was only meant to show the performance of Hyper-V vs. a native Windows server deployment. “Storage performance wasn’t at issue,” he said. At least one of Wolf’s commenters pointed this out, too:

You want to demonstrate the speed of your devices, the (sic) you avoid any other bottlenecks: so you use RamSan. You want to show transitions to and from your VM do not matter, then you use a blocksize that uses a lot of transitions: 512 bytes…

But Wolf points to wording in the QLogic press release claiming the result “surpasses the existing benchmark results in the market,” which he says “implies that the Hyper-V/QLogic benchmark has outperformed a comparable VMware benchmark.” Furthermore, he adds:

Too many vendors provide benchmark results that involve running a single VM on a single physical host (I’m assuming that’s the case with the Microsoft/QLogic benchmark). I don’t think you’ll find a VMware benchmark published in the last couple of months that does not include scalability results. If you want to prove the performance of the hypervisor, you have to do so under a real workload. Benchmarking the performance of 1 VM on 1 host does not accurately reflect the scheduling work that the hypervisor needs to do, so to me this is not a true reflection of VM/hypervisor performance. Show me scalability up to 8 VMs and I’m a believer, since consolidation ratios of 8:1 to 12:1 have been pretty typical. When I see benchmarks that are completely absent of any type of real world workload, I’m going to bring attention to them.

And really, why didn’t QLogic mention the full configuration in first reporting the test results? For that matter, why not use regular disk during the testing, since that’s what most customers are going to be using?

On the other hand, QLogic and Microsoft would be far from the first to put their best testing results forward. But does anyone really base decisions around vendor-provided performance benchmarks anyway?

