In the ongoing effort to improve application performance, tiered storage, especially using SSDs, is a tool vendors often turn to. The concept is simple: move the files supporting these applications to faster storage, where they benefit from lower latency and higher IOPS. But this requires either enough space on the faster (and more expensive) tier to hold the entire data set, or a way to determine which files are used the most and move just those to the upper tier. Now there's another way: tiering file metadata via metadata management, which can deliver a meaningful NAS performance improvement.
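To make the "move just the hot files" approach concrete, here is a minimal sketch of how a tiering engine might rank promotion candidates. It is an illustration only, not any vendor's actual algorithm: real products track I/O counts over time, while this sketch uses the file access time (`st_atime`) as a rough stand-in, which only works if the filesystem records access times at all.

```python
import os

def hottest_files(root, top_n=10):
    """Rank files under `root` by most recent access time (st_atime).

    Hypothetical stand-in for a real tiering engine's heat map:
    assumes the filesystem updates atime (not mounted noatime).
    """
    candidates = []
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            candidates.append((st.st_atime, st.st_size, path))
    # Most recently accessed files first: these are the candidates
    # to promote to the faster (SSD) tier.
    candidates.sort(reverse=True)
    return candidates[:top_n]
```

The returned list is what a tiering policy would act on, promoting the top entries until the upper tier's capacity budget is spent.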
Metadata is the "data about the data": the information a file system uses to manage files, such as permissions, access history, file system structures and indexes. Every time a file is opened, searched, modified, saved, backed up or even deleted, metadata is updated. These activities are called metadata operations, and they outnumber operations on the "regular" file data, often many times over. They also consume CPU cycles and system resources, which adds latency. So it stands to reason that speeding up metadata operations is an effective way to improve performance.
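A quick sketch makes the distinction visible. Each call below is a metadata operation: no file data is read or written, yet the filesystem must consult or update its on-disk structures (inodes, directory entries) every time. This uses Python's standard library purely for illustration; the same operations exist in any POSIX environment.

```python
import os
import stat
import tempfile

# Create an empty scratch file; even creation is a metadata operation.
fd, path = tempfile.mkstemp()
os.close(fd)

info = os.stat(path)                   # read metadata: size, mode, timestamps
print("size:", info.st_size)
print("permissions:", stat.filemode(info.st_mode))
print("last modified:", info.st_mtime)

os.chmod(path, 0o600)                  # update permission metadata
os.rename(path, path + ".bak")        # update directory (name) metadata
os.remove(path + ".bak")              # delete: yet another metadata operation
```

None of these touched the file's contents, which is exactly why serving metadata from a faster tier pays off even when the bulk data stays on slower disks.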
Back to tiered storage. Since metadata makes up a small percentage of a file system's total capacity (single-digit percentages are typical), it is a natural candidate for faster storage tiers, especially SSDs. NAS vendors are now coming out with systems that can tier metadata, automatically storing it on the fastest storage available. The result is improved storage performance without significantly increasing spindle counts or upper-tier storage capacity.
For VARs, metadata management is another tool you can use to improve NAS performance without exploding the storage budget. With NAS becoming the storage platform of choice for more and more environments, this capability is a natural topic for getting a meeting with a new customer who's looking for a better solution.
Follow me on Twitter: EricSSwiss.