I had the pleasure of going to ExecEvent’s first Next-Generation Object Storage Summit last week in Miami. In attendance were a number of the major players in the object storage space, some well-known companies and others that may be less familiar. They were Data Direct Networks, Cleversafe, Quantum, Nexsan and Scality. For a write-up on the companies that attended the Object Storage Summit and their products, see the link here.
As discussed in the last post, object-based storage architectures are clearly a hot topic and one that VARs should be familiar with, and be selling. Object storage provides a scalable file storage solution that meets the performance requirements of “big data” use cases. But it also gives customers a compliant lower-tier storage solution and long-term archive that can replace the data protection process for much of their existing data as well.
So many files, so little time
We should also say “so little space,” or performance, or, well, you get the idea. In the file/block controversy, which started as the SAN vs. NAS war, it’s clear that files won. In addition to HD media and other digital content that’s being delivered to mobile devices, the flood of pictures generated by cell phone cameras is another example of just how much data we’re creating.
As an interesting aside, I spoke with a gentleman at the airport parking lot who, instead of writing down his row number or dictating a voice memo, simply snapped a picture, consuming roughly 2 MB on an iPhone. But there’s also a big data side to this discussion.
Machine-generated data, the (relatively) small pieces of information captured by all manner of sensors that pervade our lives, are frequently files too. These include time, temperature, runtime status, error codes, GPS positioning data, RFID tag information, etc. What makes these files big data is the need to process them quickly and be able to retrieve them at any time. But they also need to be archived, often forever, another capability that file servers or NAS systems don’t have.
In addition to addressing the performance and size limitations of file systems that we discussed in the last post, object storage systems provide a superior archiving solution. Object storage architectures use erasure coding, a data integrity mechanism that’s much more space- and processor-efficient than the traditional RAID methods it replaces. It saves storage space (typically 25% overhead instead of 100% for RAID 10) and doesn’t force the system to endure a rebuild lasting hours or days when a drive fails, as RAID does.
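To make the space math concrete, here’s a minimal Python sketch comparing the overhead of a hypothetical 16-data/4-parity erasure-coding layout with RAID 10 mirroring. The 16+4 shard counts are illustrative, not any particular vendor’s configuration, but they show where a figure like 25% overhead comes from.

```python
def overhead(data_shards: int, parity_shards: int) -> float:
    """Extra storage consumed, as a fraction of usable data capacity."""
    return parity_shards / data_shards

# Illustrative 16+4 erasure coding: any 4 lost shards can be rebuilt,
# yet only 4 extra shards are stored per 16 shards of data.
print(f"erasure coding 16+4: {overhead(16, 4):.0%} overhead")

# RAID 10 mirrors every byte: one parity copy per data copy.
print(f"RAID 10 mirroring:   {overhead(1, 1):.0%} overhead")
```

The same function makes it easy to see the trade-off in other layouts: a 10+2 scheme costs 20% overhead but tolerates only two simultaneous shard losses.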
Object storage systems can also maintain long-term data integrity at the subdrive level. By running special algorithms regularly, they can detect and repair corrupted data segments in the background, without impacting performance. This capability is critical for meeting regulatory compliance, and it’s just good sense given the value of these data sets.
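The detect-and-repair cycle described above can be sketched roughly as a checksum scrub. This is a simplified illustration, assuming per-segment SHA-256 checksums and a caller-supplied repair function; real systems scrub at the shard level and rebuild corrupted data from erasure-coded peers, and all the names here are invented for the example.

```python
import hashlib

def scrub(segments, checksums, repair):
    """Hash each segment, compare against its stored checksum,
    and repair any segment that no longer matches. Returns the
    indices of repaired segments."""
    repaired = []
    for i, (seg, expected) in enumerate(zip(segments, checksums)):
        if hashlib.sha256(seg).hexdigest() != expected:
            segments[i] = repair(i)  # e.g. rebuild from erasure-coded shards
            repaired.append(i)
    return repaired

# Toy demo: keep pristine copies to "repair" from.
originals = [b"sensor-log-0", b"sensor-log-1", b"sensor-log-2"]
checksums = [hashlib.sha256(s).hexdigest() for s in originals]

live = list(originals)
live[1] = b"bit-rotted!!"  # simulate silent corruption on disk
fixed = scrub(live, checksums, lambda i: originals[i])
print(fixed, live[1] == originals[1])  # [1] True
```

Because the scrub only hashes and compares, it can run continuously at low priority in the background, which is why it need not impact foreground performance.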
Follow me on Twitter: EricSSwiss