In the last post we talked about how NAND flash memory devices differed from magnetic disk drive storage and the importance of understanding flash endurance. In this post we’ll discuss flash implementation, specifically devices that are installed in the application server.
Flash is getting more affordable but is still several times the cost per gigabyte of hard disk drives (HDDs). Because of the cost disparity, it’s often used to augment HDD performance: The more performance-critical data sets are placed on flash, sometimes temporarily, to take advantage of its orders-of-magnitude better performance, especially IOPS.
Read caching involves placing a copy of the most frequently used data objects into SSD to speed access times for applications and users. Write caching is more complicated: when data is changing, the primary data set must be kept current, and writes must be protected against system failures until they're committed to nonvolatile storage. But there are algorithms that handle this too, running in the OS or the application, or on the PCIe card that houses the flash storage itself.
Caching can be the simplest to implement, since it leaves the primary copy of data intact on existing disk storage, often operating transparently to the application. It can also be implemented with a relatively small amount of SSD capacity, since data can be moved into and out of the cache area rapidly.
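To make the read/write caching distinction concrete, here's a minimal Python sketch of a write-through read cache. The class and the LRU eviction policy are illustrative, not taken from any particular product; the point is that writes commit to backing storage before the cache copy is updated, so the cache never holds the only copy of changed data.

```python
from collections import OrderedDict

class WriteThroughCache:
    """Minimal read cache with write-through semantics: reads are served
    from a small fast tier when possible; writes always hit backing
    storage first, so the primary copy stays intact."""

    def __init__(self, backing_store, capacity=2):
        self.backing = backing_store   # dict standing in for the HDD tier
        self.capacity = capacity       # cache slots (flash area is small)
        self.cache = OrderedDict()     # LRU order: oldest entry first

    def read(self, key):
        if key in self.cache:                 # cache hit: fast path
            self.cache.move_to_end(key)
            return self.cache[key]
        value = self.backing[key]             # cache miss: slow path
        self._insert(key, value)
        return value

    def write(self, key, value):
        self.backing[key] = value             # commit to primary storage first
        self._insert(key, value)              # then refresh the cached copy

    def _insert(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least-recently-used

disk = {"a": 1, "b": 2, "c": 3}
cache = WriteThroughCache(disk, capacity=2)
cache.read("a")
cache.read("b")      # cache now holds a, b
cache.read("c")      # exceeds capacity, evicts a
cache.write("b", 20) # write-through: disk updated before the cache
```

Because the primary copy on disk is always current, a failure of the cache device loses nothing, which is what makes read caching the simplest implementation.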
Tiering is similar to caching except that it moves an entire application or data set onto flash, then moves it back to primary storage when the period of high activity is over. For this reason, tiering typically requires more SSD capacity than caching and may involve configuration changes to the application.
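The key difference from caching is that a tiered data set lives in exactly one tier at a time, which is why tiering needs enough SSD capacity to hold the whole working set. A minimal sketch (class and method names are illustrative, not from any vendor's implementation):

```python
class TieredStore:
    """Illustrative two-tier store: data is moved (not copied) between
    a capacity tier (HDD) and a performance tier (SSD)."""

    def __init__(self):
        self.hdd = {}    # capacity tier: primary storage
        self.ssd = {}    # performance tier: flash

    def promote(self, keys):
        """Move a data set onto flash ahead of a high-activity period."""
        for k in keys:
            if k in self.hdd:
                self.ssd[k] = self.hdd.pop(k)   # move, not copy

    def demote(self, keys):
        """Move data back to primary storage when activity subsides."""
        for k in keys:
            if k in self.ssd:
                self.hdd[k] = self.ssd.pop(k)

    def read(self, key):
        # Each key lives in exactly one tier at any moment.
        return self.ssd[key] if key in self.ssd else self.hdd[key]

    def write(self, key, value):
        tier = self.ssd if key in self.ssd else self.hdd
        tier[key] = value

store = TieredStore()
store.hdd = {"x": 1, "y": 2}
store.promote(["x"])   # x now resides only on flash
store.write("x", 10)   # updates land on the flash tier
store.demote(["x"])    # x moves back to HDD with its changes
```

Note that during the busy period all updates land only on flash, so the demote step is what brings the primary tier current again; that single-copy model is also why a tier migration can require application-level configuration changes.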
Server-side flash implementations can be done with PCIe flash devices, which can have up to a terabyte or more of flash capacity and may include caching software as well. Flash can also be in SAS or SATA drive form-factor packages, which plug into 3.5-inch, 2.5-inch or 1.8-inch drive slots. There are also SSDs that plug into an empty DDR3 memory slot on the motherboard and connect via a SATA cable.
Server-side flash is dedicated to the server it's installed in, meaning less flash capacity is required than with array- or network-based flash devices, and implementation is simpler than in those shared storage scenarios. Although flash capacity is significantly more expensive per gigabyte than HDD capacity, SSD caching or tiering can actually provide a lower-cost alternative, with better performance.
For VARs, server-side SSD implementation can be an ideal way to break into a new account or capture new business in an existing account that’s currently going to an array vendor. Whether implemented as a cache or tier or just a high-performance storage area, server-side flash can provide an immediate solution for a slow application. For other use cases, an all-flash array or flash appliance may be a better alternative. We’ll look at those in another post.
Follow me on Twitter: EricSSwiss