Storage Channel Pipeline

A SearchStorageChannel.com blog

Aug 1, 2011, 10:18 AM GMT

Solving the IOPS problem



Posted by: Eric Slack
Tags: Eric Slack, IOPS, Storage Channel, Storage performance

In a recent article, Jeff Boles broaches a subject that's probably at the forefront of many storage meetings VARs have with their clients: storage performance. Most users can tell when they're out of capacity, but solving a performance problem is not so clear-cut. That's due in part to the fact that how fast data gets into and out of a storage system is arguably as important as how much data it will hold. The ugly truth for many users is that they're still adding physical capacity to disk arrays that have long since run out of "I/O capacity."


Judging by the amount of traffic we're seeing for a recent article on I/O, "What is I/O, and why should you care?," storage performance is a topic users want more information on. Here's a synopsis of the article: Disk drives list performance in terms of sequential and random reads and writes, or simply "transfer rate" and I/Os per second (IOPS). These specs refer to how fast a drive can get a single data object (like a file) onto or off of the platters (transfer rate) and how many individual read/write operations the drive can complete in a second (IOPS). Except for very large reference file applications, in the vast majority of use cases, IOPS are the critical performance spec.
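To make that distinction concrete, here's a rough back-of-envelope sketch in Python. The drive figures are hypothetical ballpark numbers chosen for illustration, not any vendor's published specs; the point is simply that a big sequential transfer is governed by transfer rate while a small-block random workload is governed by IOPS.

# Rough comparison of the two specs discussed above: sequential transfer
# rate vs. random IOPS. Drive numbers are hypothetical ballpark figures.

TRANSFER_RATE_MBPS = 150   # assumed sustained sequential throughput
RANDOM_IOPS = 180          # assumed small-block random I/Os per second
IO_SIZE_KB = 8             # assumed typical database/VM request size

# Large sequential job: limited by transfer rate.
file_size_mb = 10_000      # a 10 GB file
seq_seconds = file_size_mb / TRANSFER_RATE_MBPS
print(f"10 GB sequential read: roughly {seq_seconds:.0f} seconds")

# Random workload: effective throughput is capped by IOPS, so the same
# drive moves far less data per second.
random_mbps = RANDOM_IOPS * IO_SIZE_KB / 1024
print(f"8 KB random workload: roughly {random_mbps:.1f} MB/s at {RANDOM_IOPS} IOPS")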


Most disk drives produce fewer than 200 IOPS, due largely to rotational latency, the time it takes for the platter to spin under the head to the desired portion of each track. It doesn't help that disk drive spindle speeds have been stuck at 15,000 rpm for more than 10 years. Unfortunately, the average storage array often needs to produce far more IOPS (at least at certain points during the day) than the aggregate of its disk drive inventory can deliver. Therein lies the performance problem most users have.
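For anyone who wants the arithmetic behind that "fewer than 200 IOPS" figure, here's a quick sketch using the common rule of thumb that per-drive IOPS is roughly the inverse of average seek time plus average rotational latency. The seek times and drive count below are assumptions for illustration.

# Back-of-envelope IOPS estimate for a single drive:
# IOPS is roughly 1 / (average seek time + average rotational latency).
# Seek times below are assumed, typical published figures.

def drive_iops(rpm: float, avg_seek_ms: float) -> float:
    # Average rotational latency is the time for half a revolution.
    rotational_latency_ms = 0.5 * (60_000 / rpm)
    return 1000 / (avg_seek_ms + rotational_latency_ms)

for rpm, seek_ms in [(7_200, 8.5), (10_000, 4.5), (15_000, 3.5)]:
    print(f"{rpm:>6} rpm drive: ~{drive_iops(rpm, seek_ms):.0f} IOPS")

# An array's raw random-I/O capability is roughly drives x per-drive IOPS,
# before cache effects or any RAID write penalty.
print(f"24 x 15K drives: ~{24 * drive_iops(15_000, 3.5):.0f} IOPS")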


With the rise of server virtualization and the overall increase in file storage in most IT shops, IOPS will continue to be the thing storage managers need to watch. The "I/O blender," as my colleague George Crump calls it, can bring almost any storage system to its knees. For VARs, the opportunity light should be going on here. Since the early days of open systems, users have been cobbling together infrastructure to meet their storage needs and, just as quickly, creating opportunities for VARs to untangle the mess. Storage performance bottlenecks are one of the latest examples of this behavior.
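A toy illustration of the I/O blender effect: each virtual machine may issue a perfectly sequential stream, but once several of them share one array, the interleaved requests the array actually sees jump all over the disk. The VM count and block addresses below are invented purely to show the interleaving.

# Toy illustration of the I/O blender: each VM issues a sequential stream
# of logical block addresses, but the shared array sees them interleaved,
# which looks like random I/O. VM count and addresses are invented.
import itertools

vm_streams = {f"vm{i}": range(i * 1_000_000, i * 1_000_000 + 6) for i in range(4)}

# A round-robin merge approximates how the requests arrive at the array.
arriving = [lba for group in itertools.zip_longest(*vm_streams.values())
            for lba in group if lba is not None]

print(arriving[:12])   # adjacent requests jump between distant regions of the disk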


This is one of those "devil's in the details" discussions that I used to love as an integrator. People want hard-and-fast answers to questions like which storage system to buy, but the right answer is usually "it depends." It depends on the answers to a lot of questions: what kinds of workloads the applications create, how many servers are hitting a shared-storage resource at one time, how much server virtualization the environment has, and so on. It also depends on knowing the IOPS performance those resources can produce. Fortunately, these are all questions VARs are well suited to answer.
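As a sketch of how those "it depends" questions might roll up into a sizing estimate, here is a hypothetical calculation that sums per-workload IOPS, applies a RAID write penalty, and divides by an assumed per-drive figure. Every number in it is illustrative, not a recommendation.

# Hypothetical sizing sketch: add up the IOPS each workload needs, apply
# a RAID write penalty, and see how many drives the back end implies.
# Every number here is an illustrative assumption, not a measurement.

RAID_WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}
PER_DRIVE_IOPS = 180   # assumed 15K rpm drive

workloads = [
    # each entry: (name, read IOPS, write IOPS), hypothetical figures
    ("mail server", 400, 200),
    ("VM farm", 1500, 900),
    ("file shares", 300, 100),
]

def drives_needed(raid_level: str) -> int:
    penalty = RAID_WRITE_PENALTY[raid_level]
    backend_iops = sum(reads + writes * penalty for _, reads, writes in workloads)
    return -(-backend_iops // PER_DRIVE_IOPS)   # ceiling division

for level in RAID_WRITE_PENALTY:
    print(f"{level}: about {drives_needed(level)} drives just for the IOPS")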


Follow me on Twitter: EricSSwiss
