Storage Channel Pipeline

A SearchStorageChannel.com blog

Sep 12 2011   11:50AM GMT

Making the hard drive cost question easier to answer



Posted by: Eric Slack
Tags:
cloud storage
Eric Slack
hard drive cost
hard drives
Storage Channel
Storage systems

Every VAR has heard the hard drive cost question: “Why does adding storage to my disk array cost so much when I can buy a 2 TB disk drive for less than $100?” The answer is some combination of “You’re not just buying the drive; you’re buying the storage system,” “You’re getting enterprise-class storage, not dime-store RAID,” “You’re not just buying a disk drive; you’re buying the company behind it” or the VAR’s favorite, “You’re also getting me with that storage capacity.” Customers know why capacity for even a lower-end RAID array costs more, but the question still comes up.

 

Now VARs may have a really different answer to give these customers: “You’re right; you shouldn’t have to pay minibar prices for generic storage capacity. Let me show you a way to put that cheap disk drive into your storage infrastructure and use it.” And they get to add that the cloud may actually be what makes it possible.

 

Symform has created a decentralized cloud infrastructure that taps into unused storage capacity in customers’ environments, leveraging a peer-to-peer architecture similar to those used by music-sharing networks. Customers install a piece of client software on a designated server in their environment, which must have a “business class” connection to the Internet (512 Kbps upload and 1 Mbps download).

 

Symform parses data from a designated volume into 64 MB chunks, each of which is encrypted and divided into 64 1 MB fragments. Thirty-two parity fragments are added to these 64, creating a total of 96 fragments (96 MB), which are uploaded to the cloud and distributed throughout the Symform network. With the object-based architectures available today, this is a relatively straightforward process that actually improves both security and availability, since the data is encrypted and spread across multiple physical locations.
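The fragment arithmetic above can be sketched in a few lines. This is purely illustrative, not Symform's code; the assumption that any 64 of the 96 fragments suffice to rebuild a chunk follows from a standard systematic erasure code (e.g., Reed-Solomon), which the post does not name explicitly.

```python
# Illustrative arithmetic for the fragment scheme described above
# (hypothetical names; not Symform's actual implementation).
CHUNK_MB = 64          # size of each source-data chunk
DATA_FRAGMENTS = 64    # 1 MB data fragments per chunk
PARITY_FRAGMENTS = 32  # parity fragments added per chunk

total_fragments = DATA_FRAGMENTS + PARITY_FRAGMENTS  # 96 fragments
uploaded_mb = total_fragments * 1                    # 96 MB sent per 64 MB chunk
overhead = uploaded_mb / CHUNK_MB                    # 1.5x -- matches the
                                                     # local-capacity ratio below

# With a systematic erasure code, any DATA_FRAGMENTS of the
# total_fragments are enough to reconstruct the chunk, so up to
# PARITY_FRAGMENTS fragments can be lost without losing data.
max_lost_fragments = PARITY_FRAGMENTS

print(f"upload per chunk: {uploaded_mb} MB ({overhead:.1f}x overhead)")
print(f"fragments that can be lost per chunk: {max_lost_fragments}")
```

The 1.5× upload overhead computed here is exactly the extra local capacity the service asks users to contribute in return.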

 

Users must provide local capacity, in the form of direct-attached or network-connected storage, equal to 1.5 times the amount of data they want to put into the cloud. The parity data described above is the reason for this extra capacity. When a failure occurs or data is corrupted, the system can regenerate lost fragments automatically. This is where the 1.5-to-1 ratio provides a high level of confidence, allowing the system to accommodate multiple endpoint failures without losing data. This level of redundancy also makes for faster rebuild times.
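To get a feel for how much confidence that redundancy buys, here is a back-of-the-envelope survival model: if each of a chunk's 96 fragments is independently available with some probability, the chunk is recoverable as long as at least 64 of them are. The independence assumption and the 64-of-96 recoverability are simplifications of mine, not figures from Symform.

```python
from math import comb

def chunk_survival(p_frag, data=64, total=96):
    """Probability a chunk is recoverable when each of `total` fragments
    is independently available with probability `p_frag` and any `data`
    of them suffice to rebuild it (binomial tail P(X >= data))."""
    return sum(comb(total, k) * p_frag**k * (1 - p_frag)**(total - k)
               for k in range(data, total + 1))

# Even with each endpoint only 90% available, chunk survival is
# effectively certain, because 32 fragment losses are tolerated.
for p in (0.90, 0.95, 0.99):
    print(f"fragment availability {p:.2f} -> "
          f"chunk survival {chunk_survival(p):.6f}")
```

The point of the sketch: needing only two-thirds of the fragments makes per-chunk durability far higher than the availability of any individual endpoint, which is what lets cheap commodity drives participate safely.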

 

Symform lets customers exploit the unused capacity they probably have lying around in servers and storage systems throughout the data center. And when they need to add more storage, they can do what they’ve been complaining about not being able to do for years: buy the cheapest disk drives available and plug them in. For VARs, Symform can provide a revenue stream and another cloud storage option. But best of all, it can give them a great answer to the hard drive cost question.

 

Follow me on Twitter: EricSSwiss

1 Comment on this Post

 
  • Cmackin1
    I've heard this question so many times. The simple answer for me is that the SATA drive in your desktop is not the same technology as your SAN or server drives, at least those dedicated to high-availability, high-speed access tasks like database reads and writes. I've always insisted on using RAID arrays of SCSI (or now SAS) drives that are manufactured, then tested and validated by the SAN provider. When customers have tried to deploy 7,200 rpm SATA drives in Tier 1 situations, we have replaced drives every couple of weeks in some cases. They just aren't manufactured to work reliably in enterprise-class situations unless lower tiers of storage are dedicated to them for archiving and low-level access. The additional testing and validation is worth the extra expense, especially when there's a cost involved in replacing failed drives in the array. I have replaced dozens of desktop IDE and SATA drives and only a relative few FC or SCSI drives in servers. The MTBF (mean time between failures) of these enterprise-class drives is phenomenal in my experience. And when we put them in a 16-drive array enclosure, the reliability goes way up as well.
