The Virtualization Room

A SearchServerVirtualization.com and SearchVMware.com blog

Apr 18, 2008, 10:40 AM GMT

VMware white paper review: “Comparison of storage protocol performance”



Posted by: Joe Foran
Tags: Joseph Foran, Storage, VMware

Since I just love to read white papers (n.b., sarcasm), I grabbed a copy of VMware’s Comparison of Storage Protocol Performance. Actually, I found it to be a good read. It’s short and to the point. This sums it up quite nicely:

“This paper demonstrates that the four network storage connection options available to ESX Server are all capable of reaching a level of performance limited only by the media and storage devices. And even with multiple virtual machines running concurrently on the same ESX Server host, the high performance is maintained.”

The big four storage connections are:

  • Fibre Channel (2 Gb was tested)
  • Software iSCSI
  • Hardware iSCSI
  • NFS NAS

The paper implies that network file system (NFS) is perfectly valid for virtual machine (VM) storage, performing in all of the tests at a level comparable to software iSCSI, very close to hardware iSCSI, and lagging only behind 2 Gb Fibre Channel (FC). This doesn't surprise me one bit: I like NFS network-attached storage (NAS) for VM storage. I still prefer storage area network (SAN)-based storage because I like keeping VMs on a virtual machine file system, but for low-criticality VMs, NAS's price is right (well, as long as you don't count Openfiler, IET, etc.). Also, it's plausible to build out a virtual infrastructure storage architecture using nothing but Fedora Core and still be supported.

I was particularly interested in the FC vs. iSCSI performance results presented in this VMware white paper. At the lowest end of the scale, iSCSI actually beat FC. Granted, the low end of the scale isn't what you'll see in most production environments, but it's interesting data. What I liked most was that nowhere did 2 Gb FC truly outclass 1 Gb iSCSI. It was faster in most of the higher-I/O testing, but it never doubled the performance. 2 Gb FC did show a big improvement in the multiple-VM scalability test, but still not double (about 185 MB per second vs. about 117 MB per second).
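
As a back-of-envelope sanity check (my own arithmetic, not anything published in the paper; the 185 MB/s and 117 MB/s figures are my approximate readings of its charts), here is a quick sketch of how those multi-VM numbers stack up against each link's theoretical ceiling:

    # Back-of-envelope throughput comparison (my own math, not from the paper).
    # Observed values are approximate readings of the multi-VM scalability
    # results quoted above; the line-rate ceilings ignore encoding and protocol
    # overhead, so they are optimistic.
    links = {
        "2 Gb FC":       {"line_rate_mbit": 2000, "observed_mb_per_s": 185},
        "1 Gb SW iSCSI": {"line_rate_mbit": 1000, "observed_mb_per_s": 117},
    }

    for name, link in links.items():
        ceiling_mb_per_s = link["line_rate_mbit"] / 8        # bits -> bytes
        utilization = link["observed_mb_per_s"] / ceiling_mb_per_s * 100
        print(f"{name:14} ceiling ~{ceiling_mb_per_s:.0f} MB/s, "
              f"observed ~{link['observed_mb_per_s']} MB/s "
              f"({utilization:.0f}% of line rate)")

The gigabit iSCSI link is running much closer to its wire speed than the 2 Gb FC link, which reinforces the point above: the gap between them looks like a media limit, not a protocol penalty.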

On to what I didn’t like in this white paper:

  • No 4 Gb FC comparisons. 4 Gb FC is the sweet spot for high-performance enterprise SANs being put in place to support the big iron now being virtualized. It should have been covered, even if it is still a bit of a nascent technology (well, not in terms of maturity, but in terms of its market segment).
  • No 1 Gb FC connections. (There are still plenty out there.)
  • No NIC teaming comparisons. I want to know how much additional CPU overhead is involved and how much performance improves if you team NICs on your software iSCSI targets and initiators.
  • No multipathed comparisons. This should have been done. Multipathing is a way of life for anything as mission-critical as a server that hosts multiple servers.
  • No 10 Gb Ethernet iSCSI comparisons. VI 3.5 is out, and 10 Gb Ethernet support is built into it (see the HCL, page 29). Not testing this is a big oversight.
  • No internal-disk storage was tested. OK, maybe it's not reasonable for me to expect this to be tested. Maybe I'm just grouching now.

I was surprised to see software iSCSI get its tail handed to it in the CPU workload testing. I've never run this kind of testing myself, but I knew there was significant overhead involved; I just didn't expect it to be that big, especially compared to NAS, which I expected to be right there with iSCSI rather than much more CPU-efficient (FC was the 1.0 baseline, NAS scored around 1.8 to 1.9, and software iSCSI was about 2.75). This means one thing: while performance is great across all protocols, plan on extra CPU power for software iSCSI.
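
To put that "plan on extra CPU power" advice in concrete terms, here is a rough planning sketch. The ratios are my approximate readings of the paper's chart, and the 10% baseline is a hypothetical number picked purely for illustration, not a figure from the paper:

    # Rough CPU-headroom sketch using approximate relative CPU-cost ratios
    # read off the paper's chart (FC = 1.0 baseline). The 10% baseline below
    # is a hypothetical example, not a number from the paper, and the linear
    # scaling is a simplification.
    relative_cpu_cost = {
        "2 Gb FC":  1.0,
        "HW iSCSI": 1.0,    # roughly even with FC, per the next paragraph
        "NFS NAS":  1.85,
        "SW iSCSI": 2.75,
    }

    baseline_fc_cpu_pct = 10.0  # hypothetical % of host CPU spent on storage I/O over FC

    for protocol, ratio in relative_cpu_cost.items():
        print(f"{protocol:9} ~{baseline_fc_cpu_pct * ratio:.1f}% of host CPU "
              f"at the same I/O load")

Linear scaling is obviously a simplification, but the planning point stands: at the same I/O load, budget roughly two to three times the CPU for software iSCSI that you would for FC or hardware iSCSI.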

I was pleasantly surprised to see hardware iSCSI dead even with 2 Gb FC. I had expected some additional overhead even with dedicated hardware, but that wasn't the case. I would expect that in a dedicated iSCSI solution (unless you're using really cheap equipment, like hooking a couple of big drives up to that old desktop), you won't hit the CPU-use ceiling unless you fail badly at planning.

All of these protocols are perfectly valid. There could have been more meat in the paper, but it did a good job of accurately testing four of the most common storage architectures used with VMware’s products.

Overall, I give this white paper seven “pokers.” Why pokers? Because stars and 1-to-10 ratings are common; pokers are mine. And because a fireplace poker can jar you into action when you get poked with one, seven pokers means you should read this paper if you have any responsibility for virtualization.

1 Comment on this Post

 
  • Joe Foran
    Joe, nice review of the white paper. I agree with you on how narrow the focus of this research was! 2 Gb FC, when we have been deploying 4 Gb for a couple of years now and 8 Gb parts are already starting to hit the market. I also think a comparison with PCI-X and PCI-E HBAs would have been good. I also note there are no details of the storage on the other end of the FC wire, and no details of the NFS server (software iSCSI as well?) or the iSCSI device. Personally, I always do performance testing these days with IOzone (www.iozone.org), as it gives nice block-size-vs.-file-size 3D graphs which can be very illuminating. G
