Storage Soup

A SearchStorage.com blog.

Nov 3, 2008, 12:52 PM GMT

ESX and SATA: So happy together?



Posted by: Tskyers
Tags: disk drives, VMware

I’m a big fan of SAS. I’ve professed my undying love and devotion to it (at least until solid-state disk becomes just a little more affordable). So why on earth would I be writing about putting VMware’s ESX Server on SATA disks?

I was poking around on the Internet a few weeks back and came across a deal I couldn’t possibly refuse: a refurbished dual Opteron server with 4 GB of RAM and four hard drive bays (one with a 400 GB drive) with caddies and a decent warranty for $229. No, that’s not a typo!

The downside is that the server is all SATA, and ESX won’t install on some generic SATA controllers attached to motherboards, or even on some add-in SATA cards, without serious cajoling and at least some hunting around on VMware’s support forum. These hacks are not officially (or even unofficially) supported. In fact, VMFS is explicitly NOT supported on local SATA disk (scroll to the bottom of page 22). There are exceptions to this, and a SATA FAQ is in the works where you can get more info.

When my server arrived, I installed an Areca SATA RAID card in it, downloaded the beta Areca driver for ESX 3.5 Update 1, and went about the install. Areca is my favorite SATA/SAS add-in card manufacturer: their cards are stable and have proven to be the fastest in their class for the money.
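For anyone attempting the same thing, it’s worth verifying from the service console that the card and its RAID volume are actually visible before going further. Here’s a minimal sketch using standard ESX 3.x esxcfg commands; the arcmsr module name and the vmhba1 adapter name are my assumptions, so substitute whatever your driver and hardware actually report.

    # Check that the Areca driver module is loaded
    # (arcmsr is the usual module name for Areca cards; an assumption here,
    # check the driver's README for the real name)
    esxcfg-module -l | grep arcmsr

    # List HBAs and their device mappings to confirm the RAID volume is visible
    esxcfg-vmhbadevs

    # Rescan the adapter if the volume doesn't show up right away
    # (vmhba1 is a placeholder for your adapter's name)
    esxcfg-rescan vmhba1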

Why not use LSI, you may ask, especially considering that their driver is in the ESX kernel? Well, I like speed, and I usually don’t make compromises on the performance of a storage subsystem unless it is absolutely, positively required by some other dependency. (As a side note, I have an LSI card installed in my Vista x64 desktop due to a lack of x64 driver availability for my Areca card at the time of install. See my Windows on SAS blog post for more info.)

The Areca driver install went smoothly, and I currently have Exchange 2007, a Linux email server, a Windows 2003 domain controller, a Windows 2008 domain controller, a Windows XP desktop, and an OpenSolaris installation on the 1.2 TB of local SATA storage. So far, things look strong and stable with the driver in place. Normally, I try to stay away from needing a driver disk at install time to get to storage, since it makes recovery a nightmare if you’re ever forced to use a live CD to repair the system. But this install went so smoothly that I may need to add exceptions to my prejudices…
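If you want to sanity-check a local datastore from the service console, a quick look like this works; the “storage1” label below is just a placeholder for whatever name the installer gave your VMFS volume.

    # List the VMFS datastores the host has mounted
    ls /vmfs/volumes

    # Query a volume's capacity, free space, and VMFS version
    # ("storage1" is a placeholder label)
    vmkfstools -P /vmfs/volumes/storage1

    # vdf is the service console's VMFS-aware version of df
    vdf -h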

I’ve had acceptable performance (according to my subjective, non-scientific benchmarking) with the relatively small number of VMs I have deployed on this server, so all in all, the storage subsystem is performing admirably.

This leads me to the question: Is SATA in ESX’s future? I wonder, seeing that Xen natively supports a myriad of storage subsystems and Windows Hyper-V will install on SATA. I can’t see VMware holding the line on SCSI/SAS-only local VMFS for much longer.

Couple this with Storage VMotion and it raises more questions about the storage subsystems used under VMware and how they affect VMware’s already strained relationship with storage vendors. On the other side of that coin, why isn’t Microsoft or Citrix taking any heat for allowing the use of onboard SATA attached to cheap integrated controllers? Have the generic onboard storage controllers (Marvell and Silicon Image come to mind) reached the point where they can be trusted with heavy server I/O loads? I’m sure they’ve advanced, but I’m not convinced they can handle the storage I/O load that a server with 20 VMs would generate.

But I have a bigger dilemma than all of this: how would I free the terabytes of storage I’d have trapped in these servers if I used eight 1 TB SATA drives? ESX isn’t exactly the platform of choice for a roll-your-own NAS. While it is technically possible to install an NFS server daemon on an ESX host, doing so may not be such a good idea. I’ve seen posts on how to compile and install the NFS module on an ESX host, but I haven’t seen any hard documentation on how the system performs when it’s serving up VMFS via NFS and hosting guest VMs at the same time. You could also create a guest, turn it into an iSCSI or NFS target, and point your ESX server at it, but that is also a bit kludgy.
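For what it’s worth, the plumbing for the guest-as-NFS-target kludge is only a few commands. This is a sketch, not a recommendation: the guest address 192.168.1.50, the export path /export/vmstore, and the datastore label nfsstore are all made up for illustration, and it assumes the ESX host already has a VMkernel port configured for NFS traffic.

    # On the Linux guest acting as the NFS target:
    # export a directory to the ESX host's subnet, then reload exports
    # (no_root_squash is needed so ESX can write as root)
    echo '/export/vmstore 192.168.1.0/24(rw,no_root_squash,sync)' >> /etc/exports
    exportfs -ra

    # On the ESX host: mount the export as an NFS datastore
    # (-a add, -o NFS server, -s share path, followed by a label)
    esxcfg-nas -a -o 192.168.1.50 -s /export/vmstore nfsstore

    # Confirm the datastore is mounted
    esxcfg-nas -l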

In other words, just because I can run ESX on SATA disks local to the host doesn’t mean I should. I haven’t found a compelling business reason for local VMFS on SATA in these times of inexpensive NAS. The pleasant side effect of this dilemma, though, is that until I come up with a business case for using SATA as primary storage for an ESX server, I have a super-cheap host on which I can test various tweaks and changes to ESX before I roll them into a QA or production environment. Maybe that’s reason enough to have one or two ESX deployments on SATA.
