Posted by: Eric Slack
Eric Slack, SSD, Storage Channel
This blog is part of a series of posts around the use of solid-state storage and how this category of products is a natural fit for storage VARs. For more information on solid-state storage, please see Storage Switzerland’s SSD Resources page.
Disk arrays have always used solid-state storage, called cache, to speed up overall system performance. Typically, caches are built into HDDs as well as the array controller itself, but for the sake of this discussion, we’ll treat them all as one cache.
As solid-state memory has become more economical, disk array manufacturers have increased the capacity of this cache to the point that it can be used for more “persistent storage,” meaning certain parts of data sets can be “pinned” into cache rather than constantly moved back and forth between cache and disk. These data are often database logs, indexes and other types of metadata that are constantly accessed in the course of running an application.
The problem with this approach is complexity and cost. For some applications, buying enough cache to improve performance just isn’t feasible: they either don’t have the kind of high-transaction data that makes sense to pin into cache, or there’s too much of it to fit. With other applications, these data may simply be too hard to identify. This is where SSD can help. At typically less than 10% of the cost of array cache memory, but with performance in the same ballpark, SSD can provide enough space to accommodate more and larger data sets on this faster medium.
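To make that cost argument concrete, here is a minimal sketch of the comparison. The per-GB prices are hypothetical placeholders chosen only to reflect the roughly 10% ratio mentioned above; they are not quoted figures from any vendor.

```python
# Illustrative cost comparison: array cache memory vs. SSD for holding
# the same working set. Prices are assumed, not real quotes.
CACHE_COST_PER_GB = 100.0  # assumed $/GB for array cache memory
SSD_COST_PER_GB = 10.0     # assumed $/GB for SSD (~10% of cache cost)

def media_cost(working_set_gb, cost_per_gb):
    """Cost of placing a working set entirely on a given medium."""
    return working_set_gb * cost_per_gb

working_set = 500  # GB of hot data (logs, indexes, metadata)
print(f"Cache: ${media_cost(working_set, CACHE_COST_PER_GB):,.0f}")
print(f"SSD:   ${media_cost(working_set, SSD_COST_PER_GB):,.0f}")
```

The same budget that pins one working set into cache can put roughly ten times as much data on SSD, which is why larger or harder-to-identify data sets become practical candidates.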
In addition to caching, SSD LUNs can be established as a performance Tier 0 and used by automated or manual tiering software supported by that array. This would be similar to the practice of pinning the most active data into cache but could include more data from more applications.
SSDs are packaged into the same drive form-factor units that disk arrays use, so they can replace individual drives as needed. Implementation involves some planning and a number of steps, but the physical connectivity is plug-and-play. For example, SSDs are usually implemented in full RAID sets, and a hot-spare SSD should be included. But there are some realities to simply putting SSDs into an existing disk array that can affect how much application performance improvement is seen.
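The planning step above can be sketched with some simple capacity arithmetic. The RAID level, drive size and spare count here are hypothetical examples; check what the customer’s array actually supports.

```python
# Sketch of sizing an SSD RAID set in an existing array, including the
# hot spare recommended above. Figures are illustrative assumptions.
def raid5_plan(drives_in_set, drive_gb, hot_spares=1):
    """Return (usable GB, total SSDs to buy) for a RAID 5 set.

    RAID 5 dedicates one drive's worth of capacity to parity, and hot
    spares add to the purchase count but contribute no usable capacity.
    """
    data_drives = drives_in_set - 1  # parity overhead
    usable_gb = data_drives * drive_gb
    total_ssds = drives_in_set + hot_spares
    return usable_gb, total_ssds

usable, total = raid5_plan(drives_in_set=5, drive_gb=400)
print(f"{total} SSDs purchased, {usable} GB usable")
```

The point for planning: a five-drive RAID 5 set plus a spare means buying six SSDs for four drives’ worth of usable capacity, which should be factored into the cost discussion with the customer.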
With SSD IOPS a couple of orders of magnitude higher than those of HDDs, controllers in traditional disk arrays can have trouble supporting an array full of SSDs. To be fair, they weren’t designed for the kinds of performance that SSDs can bring because they didn’t need to be. Storage controllers have historically been ahead of the performance that HDDs required, even with all kinds of overhead like deduplication, snapshotting, replication, etc. SSDs change that equation and can move the bottleneck to the controller, reducing overall performance substantially from what people expect when they add solid-state storage.
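A back-of-the-envelope calculation shows why the bottleneck moves. All the numbers below are rough, assumed orders of magnitude for illustration, not specs for any particular drive or array.

```python
# Why the controller becomes the bottleneck once SSDs go in.
HDD_IOPS = 200             # assumed per-drive IOPS for a fast HDD
SSD_IOPS = 50_000          # a couple of orders of magnitude higher
CONTROLLER_IOPS = 100_000  # assumed ceiling of a traditional controller

def delivered_iops(drive_count, per_drive_iops, controller_limit):
    """Delivered IOPS: the lesser of aggregate drive IOPS and the
    controller's ceiling."""
    return min(drive_count * per_drive_iops, controller_limit)

# 24 HDDs: drives are the limit (4,800 IOPS), controller has headroom.
print(delivered_iops(24, HDD_IOPS, CONTROLLER_IOPS))
# 24 SSDs: drives could deliver 1.2M IOPS, but the controller caps it.
print(delivered_iops(24, SSD_IOPS, CONTROLLER_IOPS))
```

With these assumed figures, a shelf of SSDs could deliver over a million IOPS but the array only sees the controller’s ceiling, which is the gap between expectation and reality the text describes.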
In addition to the hardware, some applications also can’t take advantage of the performance improvements SSDs offer. SSDs are at their best in random, read-intensive use cases with high IOPS requirements. These are typically high-transaction database and Web applications, as opposed to those that deal with larger files and more streaming I/O.
Obviously, as a VAR, it’s up to you to consider these and other details of this type of SSD implementation and to manage customer expectations. The complexity of integrating SSDs effectively is similar to just about any other effort to improve performance. Because of their plug-and-play format, SSDs have often been implemented without this kind of planning, resulting in some disappointed users. This is actually an opportunity for VARs, as it can reinforce your value to your customers.
Follow me on Twitter: EricSSwiss.