As solid-state drives (SSDs) come down in price, more users are considering them to upgrade storage system performance, improve storage density and reduce power consumption. Although manufactured in hard disk drive (HDD) form-factor packages that plug into existing disk arrays, SSDs have little in common with mechanical spinning disk drives. This makes the process of comparing SSDs different from that used with HDDs.
Increasing storage system performance is usually the reason solid-state storage devices are first considered. Performance, especially IOPS, is typically at least an order of magnitude greater for SSDs than for HDDs on writes, and even better on reads. This read/write differential is due to the fact that SSDs must erase blocks of storage space ahead of each write, a process called garbage collection, which adds a significant amount of time to the write cycle. When an SSD is first used, referred to as FOB, or “fresh out of box,” it can accept writes without running this erase cycle, since it’s essentially empty. After the device has had all its NAND cells filled, it must run garbage collection before each write and is then in a “steady state” condition. A write saturation curve shows this graphically: how performance degrades as the device moves from FOB to steady state. Obviously, it’s essential to measure performance only after reaching steady state.
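To make the FOB-to-steady-state drop concrete, here is a toy model of the effect. All the numbers (block count, program and erase latencies) are illustrative assumptions, not measurements of any real drive; the point is only that adding a block erase in front of every write collapses write throughput.

```python
# Toy model of the FOB-to-steady-state transition described above.
# All latency figures are illustrative assumptions, not real-drive specs.

ERASE_BLOCKS = 1000  # erase blocks on the hypothetical device
WRITE_US = 50        # program latency per block write, microseconds
ERASE_US = 2000      # block-erase (garbage collection) latency, microseconds

def write_latency_us(blocks_written):
    """Latency of one block write at a given point in the device's life."""
    if blocks_written < ERASE_BLOCKS:
        # FOB: pre-erased blocks are still available, so no erase is needed
        return WRITE_US
    # Steady state: a block must be erased before each new write
    return WRITE_US + ERASE_US

fob_iops = 1_000_000 / write_latency_us(0)
steady_iops = 1_000_000 / write_latency_us(ERASE_BLOCKS)
print(f"FOB:          {fob_iops:,.0f} writes/s")
print(f"Steady state: {steady_iops:,.0f} writes/s")
```

Even this crude sketch shows why benchmarking before steady state is misleading: the same device delivers throughput tens of times higher while it still has pre-erased blocks to burn.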
SSD endurance is the other main factor to consider when comparing SSDs. Unlike spinning disk media, NAND flash has a finite number of writes it will accept before reliability suffers. Called program/erase, or P/E, cycles, this statistic can give an accurate assessment of when an SSD will wear out. Manufacturers know the maximum number of P/E cycles their devices can sustain and use this to compute a total bytes written (TBW) spec. When comparing SSDs, TBW is the endurance figure to put side by side, provided the vendors derive it under similar workload assumptions.
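The arithmetic behind a TBW spec can be sketched in a few lines. The capacity, P/E rating, write amplification factor and daily workload below are assumed example values, not any vendor's published figures; real TBW specs also fold in wear-leveling efficiency and a standardized workload.

```python
# Back-of-the-envelope endurance estimate from P/E cycles, as described above.
# All inputs are assumed example values, not a specific product's spec.

capacity_tb = 1.0          # usable capacity, in terabytes
pe_cycles = 3000           # rated program/erase cycles per NAND cell
write_amplification = 2.0  # internal GC writes inflate each host write

# Total host bytes the device can absorb before the NAND wears out (in TB)
tbw = capacity_tb * pe_cycles / write_amplification

# How long that endurance lasts under a given host write workload
host_writes_tb_per_day = 0.5
lifetime_years = tbw / host_writes_tb_per_day / 365

print(f"TBW:      {tbw:,.0f} TB")
print(f"Lifetime: {lifetime_years:.1f} years")
```

The write amplification term is why two drives with identical NAND can carry very different TBW ratings: a controller that does less internal rewriting stretches the same P/E budget further.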
There are other factors to consider when comparing SSDs, such as the vendor's expertise and its ability to provide reliable information, which can itself help users make good product choices. But generally, the industry lacks standards, both in the terminology used for different specs and in the data reported. This means that published specs offer only a starting point, a way to narrow the list of candidates. In-house testing in an environment similar to the one used in production is usually warranted, something we'll address in another post.
Follow me on Twitter: EricSSwiss