Storage Soup

Apr 10 2008   2:44PM GMT

SAS storage on a Windows Vista desktop

Tory Skyers (Tskyers)

This blog is about three months in the making.

First, a bit of background. Several posts ago, I predicted the death of SATA in favor of SAS, which is only marginally more expensive (comparing higher-end, cache-carrying SATA RAID controllers, not the dirt-cheap integrated SATA controllers) for an admittedly smaller capacity but much higher speed.

After using SAS on some of the servers and blades at work, I came home to my SATA-based desktop computer and wept silently whenever I did anything disk-intensive, because it was soooooo much slower. I have SCSI for the OS in all my server equipment, but even those machines weren’t as peppy as the SAS stuff at work. Taking these two things into account, plus the fact that the games I like to play are all disk I/O intensive, then throwing in a bit of friendly rivalry for good measure, I decided to upgrade my desktop machine to use SAS storage.

I convinced the home finance committee (my wife) to approve the purchase of a few new components for my experimental SAS-based desktop. In a previous post I mentioned my buddy Karl, who has been persistently making fun of the small/low benchmark numbers of my desktop. I quipped that his larger/higher benchmark numbers were simply to make up for deficiencies in other areas of his rig and he was overcompensating. Secretly, I was impressed and had to see what it felt like to hit the magic 200MB/sec throughput mark on my desktop. So I hit eBay, credit card in hand, in search of the components I needed for my SAS-based desktop monster.

I researched which card and what drives to purchase, and settled on a couple of 15k 72GB Seagate SAS drives and an LSI 8204ELP SAS/SATA array controller. I got everything relatively quickly and unboxed it all. It's difficult to put into words the anticipation I felt at that moment of beating Karl's benchmarks... only to feel the crushing blow of disappointment when I took a look at my old motherboard, which, in my hasty competitive blur of eBaying, I had forgotten to check for the correct PCI Express slots.

My motherboard had two PCI Express x1 slots (very short, relatively slow slots mainly used for audio and gigabit networking cards) and one PCI Express x16 slot (a much faster, much longer physical slot mainly used for high-bandwidth boards like video cards). The LSI 8204ELP RAID card is a PCI Express x4 device (quiz on this stuff in five minutes!). It doesn't fit in an x1 slot, and my x16 slot was occupied by my video card. Topping Karl's benchmark would have to wait a little longer.
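
For anyone keeping score on the slot math, here's a rough back-of-the-envelope sketch (in Python, purely illustrative) of why the slot type matters: PCI Express 1.x moves roughly 250 MB/s per lane in each direction, so an x1 slot is marginal for what two 15k SAS drives can stream (the per-drive figure below is an assumption), while the x4 connection the LSI card wants has headroom to spare.

```python
# Back-of-the-envelope PCI Express 1.x bandwidth math (nominal figures, not benchmarks).
# PCIe 1.x runs at 2.5 GT/s per lane with 8b/10b encoding, i.e. roughly 250 MB/s
# of usable bandwidth per lane in each direction.
PCIE1_MB_PER_LANE = 250

slots = {"x1": 1, "x4": 4, "x16": 16}
for name, lanes in slots.items():
    print(f"PCIe 1.x {name}: ~{lanes * PCIE1_MB_PER_LANE} MB/s per direction")

# Two 15k SAS drives streaming ~100 MB/s each (an assumed figure) need ~200 MB/s,
# so an x1 slot (~250 MB/s) would be marginal while an x4 slot (~1000 MB/s)
# has plenty of headroom.
```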

Fast-forward three months. After more research, more waiting and more eBay pouncing, I bought a motherboard with more than one big, high-bandwidth PCI-E slot that can handle my LSI card.

This is where the fun really begins.

After three days of flashing firmware, updating BIOSes and fiddling with cabling, I finally got the LSI card and associated drivers to work properly and got an operating system loaded (32-bit Vista). I ran a couple of benchmarks on this system. Success! Karl was going down! (The irony here is that I still didn't beat Karl's benchmarks. Not only that, but he's going for 300MB/sec — he's waiting for his drives to come in.)
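
If you ever want to sanity-check numbers like these without a commercial benchmarking tool, a crude sequential-read timer is easy to throw together. The Python sketch below is purely illustrative and is not the tool behind the captures in this post; the file path is hypothetical, and it assumes a large pre-existing test file on the drive you want to measure.

```python
import time

# Crude sequential-read throughput check (illustrative sketch only).
# Assumes a large pre-existing file; the path below is a placeholder.
# Note: the OS file cache can inflate results, so use a file much larger than RAM.
TEST_FILE = r"C:\temp\bigfile.bin"   # hypothetical path on the drive under test
CHUNK = 8 * 1024 * 1024              # 8 MB reads, large enough to approximate streaming

total = 0
start = time.perf_counter()
with open(TEST_FILE, "rb", buffering=0) as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        total += len(data)
elapsed = time.perf_counter() - start

print(f"Read {total / 2**20:.0f} MB in {elapsed:.1f} s (~{total / 2**20 / elapsed:.0f} MB/s)")
```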

As you can see from the stark difference in the captures below, one is clearly a more smile-inducing experience for a storage geek than the others, but the bigger story is the single SAS drive vs. the single SATA drive.

In day-to-day activities like searching my email, installing an application or even playing a game (Command and Conquer 3 missions load in seconds instead of mind-numbing hours), things are peppier, as is to be expected. But the speed at which things happen is still surprising. Vista feels fast (yes, I said it); it feels better and more responsive, and over the last few days I've found myself in my Debian install less and less, believe it or not. I'm having an okay experience with Vista (no, I haven't installed SP1 yet... I'm waiting for the first service pack for it before I take the leap!). I wonder if Microsoft can convince LSI and Dell to build a commodity SAS chip on-board for them?

Single SATA II Drive

This experience on the desktop was certainly more involved than on a server.* One would think some of the lessons these vendors learned in the enterprise would have trickled down to the desktop by now. But I guess that’s asking too much.

Single 15k SAS Drive

It also tells me that, as modular and approachable as these desktop systems have become, cutting-edge is still not somewhere the uninitiated can be. That seems obvious, but when I think of 64-bit operating systems, I don't think cutting-edge: they've been out for three to four years now, 8GB of RAM costs less than $100 on the open market, and more than half of that amount would be entirely useless in a 32-bit environment.
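
For what it's worth, the 32-bit point is just arithmetic: a 32-bit address space covers 2^32 bytes, or 4GB, and part of that is reserved for devices rather than RAM. A rough sketch of the math (the reserved amount below is an assumption; it varies by board):

```python
# Why 8 GB of RAM is mostly wasted on a 32-bit OS (rough arithmetic, not exact,
# since the size of the reserved device/MMIO window varies by system).
installed_gb = 8
address_space_gb = 2**32 / 2**30          # 4 GB of 32-bit addresses
mmio_reserved_gb = 0.75                   # assumed typical PCI/graphics reservation

usable_gb = min(installed_gb, address_space_gb - mmio_reserved_gb)
print(f"Installed: {installed_gb} GB, usable under 32-bit: ~{usable_gb:.2f} GB")
print(f"Stranded:  ~{installed_gb - usable_gb:.2f} GB")
```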

LSI RAID 0 15k SAS 64k Stripe

Was all my heartache, frustration and re-installation worth it? Heck yes... and then some! Until one of the disks in my RAID 0 set dies and I blog about what a crock the million-hour MTBF numbers are, I'll be the happiest storage geek this side of Seagate's skunk works. Scroll all the way down for some notes on Vista x64.
______________________________________________________

* I know this is a storage-oriented blog, but this process drove me so far up a wall I needed a parachute to get down. I feel the need to share the fine print, to hopefully help someone avoid the same devilish Catch-22s and gotchas that come with doing this on the 64-bit version of Windows Vista.

1) Windows Vista x64 works very well but has very limited driver support; storage devices are just about the only exception. Most storage vendors have great 64-bit drivers. Unfortunately, almost no one else does, not even Microsoft itself (try using Groove on x64).

2) While Windows Easy Transfer is great, it will not let you transfer your files and settings from a 64-bit computer to a 32-bit computer. Instead, you have to virtualize your old system using VMware and run it in a VM on the system you're migrating to. One more note about virtualizing your old machine: if you're using 64-bit Windows and try to take the hard drive out, stick it in your new machine and build a new VM from the raw disk, that won't work either.

3) If you decide to reuse your old hard drive in your new system, make sure the BIOS isn't set to boot from it, especially if you're moving from onboard controllers to add-in controllers. Most system BIOSes will put the onboard devices higher in the boot order by default.

4) Be mindful of how many add-in cards/controllers you have loading BIOSes — apparently there’s a limit to the number (memory, maybe?) you can have loaded, and once you’ve hit that limit you cannot enter the BIOS of the last card in the bootup sequence. For example, on the system board I have now there are three storage controllers (two SATA and one EIDE) plus the LSI card I’ve added. When the BIOS of the EIDE is active, I cannot enter the BIOS of the LSI card (last one to load) to configure an array. I have to disable the EIDE, configure my array and re-enable the EIDE. Why this happens in today’s systems is beyond me, but be wary, it happens in servers too. I’ve tackled a similar problem with 4 PCI-X Areca cards and a couple EIDE cards in a production server.
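
My best guess at the "(memory, maybe?)" question, based on how legacy BIOSes generally work rather than anything from the board vendor: each controller's boot-time option ROM gets copied into a small shadow region of the real-mode memory map (classically the 128KB between 0xC0000 and 0xDFFFF, shared with the video BIOS), and once that region fills up, the last ROM in the scan order never gets to run or present its setup screen. The sketch below is a hypothetical illustration with assumed ROM sizes, not values read from my hardware.

```python
# Hypothetical illustration of legacy option-ROM space running out.
# Region size is the classic C0000h-DFFFFh expansion window; the video BIOS and
# per-controller ROM sizes below are assumptions for illustration only.
REGION_KB = 128
video_bios_kb = 48
option_roms_kb = {
    "onboard SATA #1": 32,
    "onboard SATA #2": 16,
    "onboard EIDE": 16,
    "LSI 8204ELP": 32,
}

free_kb = REGION_KB - video_bios_kb
for name, size in option_roms_kb.items():
    if size <= free_kb:
        free_kb -= size
        print(f"{name}: loaded ({size} KB), {free_kb} KB left")
    else:
        print(f"{name}: no room ({size} KB needed, {free_kb} KB left) -> its BIOS never appears")
```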

11 Comments on this Post

  • FYI, a dirty secret in the industry: all storage vendors know SAS outperforms SATA, and in many cases FC, at an incredible cost per IOPS/GB. FC drives continue to exist due to the higher margins, revenue and the "perception" in the industry that FC drives are more reliable and faster performing.
  • I've always wondered about that as well. The other aspect of storage is how fast the drive's on-board controller can spit out data. Is there anything on the horizon besides SSDs that can speed up throughput from the actual drive itself? Will we ever see 250MB/s coming from ONE drive vs. a pair set up in RAID 0?
  • You've got to be joking. SATA drives are far more likely to drive out SAS drives than the other way around: SAS is just not at all cost-effective save for very specialized needs. Sure, a 15Krpm SAS drive has a bit over twice the performance (in both bandwidth and random access) that a 7200 rpm SATA drive has - at around 20x the price/GB (at least that's what a quick glance at Fry's and Newegg suggests today). Or, if you really need only 146 GB total on your system, perhaps only around 6x the cost per GB: 80 GB SATA disks are far below the per-GB sweet spot. If you want comparable bandwidth to your SAS set-up, just buy twice as many SATA drives for your RAID-0 array (and pay about 1/3 as much total while obtaining twice the capacity in the process, or perhaps 1/2 as much total while obtaining 8x the capacity). Same for parallel IOPS. Only for serial IOPS is it impossible to match the 15Krpm SAS performance with SATA drives - and even there, you pay (as noted) handsomely for your ~2x performance improvement. As for single-drive bandwidth, expect 250 MB/sec from a single SAS drive within a couple of years, and the same from a single SATA drive within 4 years or so: linear density has been improving at around that rate just about forever, and doesn't seem to be in danger of hitting a wall yet. - bill
  • Bill, I have to be honest, I was of the same mind before getting an opportunity to really hammer at SAS without production timeline worries. Why spend the extra $$ on SAS when you can bundle enough RAID 0 SATA drives together and get similar performance? The problems I ran into were failure rates, heat and power. When you increase the number of drives in a RAID 0 array you are exponentially increasing the chances of a failure that renders the array useless. Not to mention that once you get into the 500-plus GB range, rebuilds can take up to 12 hours, and then there's the density of heat and the power needed to spin the platters (~15W/disk). Granted, there are things one can do to avoid this, like buying from different drive manufacturers and batches, etc., but once I started playing with these SAS drives I'll gladly say I'd trade capacity for speed any day (well ... at least until I run out of room). I don't think the general public has any idea faster exists, at least judging by the way the system vendors are behaving; I've seen 4200rpm drives still being offered. I will admit that I may currently be in the minority when it comes to the performance/capacity debate, but I look at the WD Raptor and I'm curious as to why it is still so popular among the enthusiast crowd when it costs roughly the same as a 10k SAS drive w/ similar capacity.
  • (I realize that this topic has gone a bit stale by now, but having wandered back here accidentally I thought I'd respond anyway.) Your thinking seems to be muddled by generalities and misconceptions rather than the specifics of the situation that you described.
    1. You say "the problem I ran into" when I suspect you mean "the problem that I *imagined* I'd run into". In fact, what I suggested was using 4 SATA drives in a RAID-0 array instead of the 2 SAS drives that you described: that's hardly going to come anywhere near breaking your heat and power budgets (the processor that you're depending upon for your high-performance machine already consumes more than the disks will - and if you're running anything more than a minimal graphics card that may as well) or dramatically affect your failure rates (yeah, they'll about double, or a bit more if the individual SATA drives are noticeably less reliable than the individual SAS drives were: so what? it's a RAID-0 array, hence you need protective backups anyway even if you're not worried about other sources of error, and the probability of encountering a disk failure over the entire service life of the four drives will still only be a few percent).
    2. Increasing the number of drives in a RAID-0 array does not increase failure probability exponentially, just linearly. Perhaps you're confusing the situation with the probability of a *double* failure in a *RAID-5* array (where it increases roughly with the square of the drive count).
    3. Your rebuild time will be unchanged, unless you take advantage of the additional inexpensive data capacity that the SATA drives can provide: unlike RAID-5 arrays, a RAID-0 array doesn't require that the array be initialized prior to use (actually a smart RAID-5 implementation doesn't either, but that's a different topic) - you just lay down the data as it comes off your backup (or other) medium and leave the other disk sectors as they were, so you can stream out the data to the 4 SATA drives at about the same speed as you'd stream it out to the 2 SAS drives (that was the point of using twice as many SATA drives, after all). - bill
  • It's not dead as long as we are corresponding!! As a matter of fact, it is interactions like these that make blogging so enjoyable for me. I would disagree about the misconceptions: I have in fact run into multiple simultaneous failures on a set of 8 WD SATA drives (a mirrored RAID 0 set of 4 drives). Bad luck, maybe, but needless to say it cost me time and effort. You do have me on a point of fact: the failure rate is in fact linear and not exponential. However, the statement achieved its goal of drawing attention to the fact that RAID 0 w/ many drives increases the risk of failure by the number of drives involved. As far as failure rates go, we would certainly have to speak in generalities, as sometimes you get a bad drive out of the box. We have roughly 400 spinning disks here; about 80% of them are SCSI or SAS and the remaining 20% are SATA, and we've had at least 4x the failures in our pool of SATA drives as in our pool of SAS/SCSI, even though we have 4x as many SAS/SCSI drives. I'm sure there are numbers out there, both from users and the industry, that would back those unscientific assertions. The other aspect is that in home use drives normally see more rigorous usage, w/ many cold starts and higher ambient operating temperatures than in a controlled datacenter environment. As an aside, I'm amazed that drives are as robust as they are, SATA or otherwise, but that is another blog hahaha. I will also concede that power and heat from a hard drive are comparatively lower (by large amounts) than from a video card or CPU; however, the heat envelope is much more of an issue than the power consumption, and it really is an issue: 4 drives produce twice the heat of 2 drives, even with the difference in RPM. Your last point about rebuild times is a fact; however, the only advantage SATA has over SAS is capacity, so if I were to build an array I'd be more likely to use the full capacity, thus incurring the "Rebuilding 10%" prompts for 8 hours.
  • Hmmm. With a RAID-0 array of four individually-mirrored pairs you'd have to be unlucky to lose data with a simultaneous double or even a simultaneous triple disk failure: the chance that the second failure would be the mirror of the first is only 14%, and the chance that a third failure would be the mirror of one of the first two is only 33% of the remaining 86% (thus the chance that you'd lose data with 3 simultaneous failures in such an array would be about 43%). It's a lot worse if you set up the array as two mirrored 4-disk RAID-0 sub-arrays, which is why people don't do that. But perhaps you're just saying that replacing the failed disks cost you time and effort rather than that they resulted in any actual data loss from the array that cost time and effort to repair. In any event it's not clear how that experience would apply here, since it's the first disk failure that's significant with the simple RAID-0 array that we were discussing.
    I'm afraid that industry numbers do not support anything remotely resembling your anecdotal experience that SATA drives fail 'at least' 16x as often as SCSI or SAS drives. Not only do the manufacturers rate the difference as less than 2:1, but independent studies of large disk groups (the most recent that I'm familiar with was presented at FAST08 and of course includes references to earlier ones in its bibliography) have generally found it to be in the vicinity of 3:1 or less for similar usage patterns (whereas the manufacturer ratings often assume a lower duty cycle for the desktop drives and thus could be a bit optimistic for use in the current situation). Furthermore, at least with respect to temperature, home drives may see at least as good operating conditions as datacenter drives: another FAST08 study found that the sweet spot for drive temperature is about 30 - 35 degrees C, which is right where my current desktop drive is operating according to its SMART information.
    As for power, my Seagate Barracuda 7200.10 SATA manual states that for the largest (4-platter, 750 GB) disks in the line it varies from 9.3 W (at idle, but spinning) to 13 W (under heavy load), whereas their 300 GB Cheetah 15K.5 SAS drive (to pick the largest drive in that line for that product generation) varies between about 14 W and about 18 W across that range of conditions. So two of those 300 GB SAS drives would consume 28 W - 36 W and provide 600 GB of storage capacity, while four of the 750 GB SATA drives would consume 37 W - 52 W and provide 3 TB of storage capacity. If you kept capacity roughly constant, however, you'd be using four single-platter 160 GB SATA disks: while the manual does not give operating power figures for them, it does note that their peak spin-up current is only about 2/3 the peak spin-up current of their larger siblings, so if that holds for operating power as well (which would be reasonable: only one platter's worth of air resistance and only a single pair of heads moving back and forth) the four SATA drives would actually consume a bit *less* total power than the two SAS drives.
    I guess you didn't understand the fact that there is *no* 'rebuilding' process required for a RAID-0 array: the only question is how long it takes to stream your data to it - and thus it would be about the same with 4 SATA drives as it would be with two SAS drives if the total quantity of data was the same. Or, to look at it another way, a contemporary SATA drive can stream data at an average of around 60 MB/sec, so repopulating it (e.g., from a 'drive image' backup, which allows close-to-streaming performance unless the drive is only sparsely populated) runs at around 4 GB/minute or 240 GB/hour (i.e., worst-case time would be about 4 hours even for today's largest SATA drives filled to capacity as long as you weren't limited by the source bandwidth - and for the mere 300 GB or so that the two SAS drives in your example could hold, repopulation could theoretically complete in as little as about 20 minutes - whether to two SAS drives or to four SATA drives). In any event, in the current comparison (x SAS drives vs. 2x SATA drives) repopulation performance is affected not by the choice of drive type but by total data size (and distribution, if saved file-structured rather than as a sector stream), so it's not reasonable to penalize SATA by claiming that it will tempt you to retain more data because it's so inexpensive to do so. And even if you *do* retain significantly more data, choosing SATA will *still* cost you a lot less for storage than SAS would (so capacity is far from 'the only advantage' that it enjoys). - bill
  • Slightly off-topic.... One thing I learned a while back: check the firmware levels, especially on some Maxtor drives; drives with certain firmware levels were prone to random timeouts. In a Dell server they would fail, and if you simply re-seated the drive it would plod along for another month or so before it failed again. At my last job I had to go through about 50 servers doing flash upgrades - of course you had to do them slowly, because re-flashing the Maxtor drives at least wipes them. (I believe that's true of most drives; however, I've not had to flash any Seagate drives in so long I don't honestly know.) Multiple failures suggest a RAID-card failure as well... As to SAS vs. SATA: SATA will be in demand as long as consumer-grade motherboards have integrated SATA controllers. People buy what they have built-in support for. I've not seen any low- to middle-end motherboards on the market with SAS controllers yet, so I don't see it on the horizon.
  • Sorry for the delay in responding; it's been busy!! I've also heard stories about Western Digital SATA drives failing sequentially (lots and lots of them), and as I mentioned before, I've had my own personal experience w/ SATA drives failing. I've had issues w/ both SAS and SCSI drives failing, but not at the rate my SATA drives have. The newest batch of "Enterprise" SATA may address some of the issues, but based on what the SAN/NAS vendors are charging for them ..... I think I may take my chances w/ the non-"Enterprise" versions of the same drives! However, w/ all this said, based on what is going on in the market today, I also think Western Digital was thinking along the same lines as you all were, because they released the VelociRaptor. That drive essentially kills my argument, dead. Based on all the reviews I've read about it, it has low power usage, low heat dissipation, and really fast speeds in both access times and sustained data rates. So much so that it even beat out similar 10k SAS drives in StorageReview.com's benchmarks of a pre-production model. To your point, Jessie, this drive brings the performance of SAS to the commodity SATA chipsets on any motherboard. I've been dying to get my hands on one or two to pit them against my SAS drives, and I'll let you know what I find when I do, but until then .... I still grin when I hear the "whiirrrr click" of my SAS drives!!
  • Have you tried a VelociRaptor yet for comparison? I have one, but have yet to install it; I have to wait for an RMA replacement of a new motherboard. I'm wondering whether to try a 15k SAS drive in addition...
  • Michael, not yet; I'm in the same boat. I have one drive in hand and am waiting to get clearance from the home finance committee to get the second drive. I'd like to test against a set of RAID 0 VelociRaptors. I have a sneaking suspicion the performance will not be as high as the SAS drives, but will be close enough to make the additional expense of a SAS controller not worth it, except of course at a LAN party for bragging rights.
