At Storage Switzerland we get regular briefings from storage vendors releasing new products and updates to existing technologies. Last week we spoke with the LTO Consortium, which has just released generation 6 of the venerable Linear Tape-Open standard.
From its beginnings as an alternative to the proprietary DLT format, I must say LTO has been quite a success story. Its history of continuous innovation has been impressive: capacity and performance have increased with every generation, and features have been added along the way, such as WORM, encryption and, with LTO-5, the Linear Tape File System (LTFS). Now LTO-6 has expanded the “history buffer” in the compression engine, giving it a 2.5:1 compression ratio and, with 2.5 TB of native capacity, 6.25 TB per cartridge.
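The capacity math behind that headline number is simple: quoted cartridge capacity is native capacity multiplied by the assumed average compression ratio. A quick sketch (the per-generation figures are the published LTO specs; actual compression depends entirely on how compressible the data is):

```python
# Quoted (compressed) cartridge capacity = native capacity x compression ratio.

def effective_capacity_tb(native_tb: float, compression_ratio: float) -> float:
    """Compressed capacity assuming the stated average compression ratio."""
    return native_tb * compression_ratio

# LTO-5 assumed a 2:1 ratio; LTO-6's larger history buffer raises it to 2.5:1.
lto5 = effective_capacity_tb(1.5, 2.0)   # 3.0 TB
lto6 = effective_capacity_tb(2.5, 2.5)   # 6.25 TB
print(lto5, lto6)
```

Note that already-compressed or encrypted data won't shrink further, so real-world cartridge yield can be much closer to the native figure.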
While the roadmap on the LTO Ultrium website has been laid out to Generation 8 with plans for 32 TB of capacity per cartridge, we were told that the next generation beyond that would hold 50 TB.
Disaster recovery protection is about much more than simply putting a second backup appliance offsite and replicating data to it. Real DR requires a comprehensive DR plan, which always features testing, early and often. DR is insurance, and part of the value is checking to see that it’s still working.
When I was a VAR, DR seemed like an ideal solution to sell, since it involved multiple products and was complex enough to pull in a decent amount of professional services (PS) work. But it was always a tough sell. Customers were usually clueless about what a good disaster recovery solution entailed, and getting them to pay anything to resolve the problem was very difficult. You could say they “didn’t know what they didn’t know,” namely, that they had a problem, and therefore were less apt to spend anything on it.
One way to sell DR is to get users to focus on the negatives, the risks they’re running, the cost of downtime, etc. This is the approach everyone takes after a hurricane, like Sandy or Katrina, and was particularly popular after the Sept 11 terrorist attacks. But people have strong denial skills and just aren’t compelled by potential pain to themselves when it’s exemplified by the misfortune of others. Part of the reason may be that the solution has more than a little pain itself.
This is another example of the pain-of-change equation. If it’s more disruptive, expensive, etc., to do nothing than it is to fix a problem, people do nothing. Testing a traditional DR system can be disruptive and expensive as it often requires after-hours work by a number of people at the primary and remote locations and maybe some application downtime as well. It would follow, then, that making DR testing quick and easy is a good way to lower that pain-of-change delta and get people interested in a disaster recovery solution.
Hybrid cloud DR systems allow customers to back up their application servers to the cloud as VM images and then restart those virtual machines on host servers in the cloud. These systems have the added benefit of making DR testing almost trivial. Users can start these virtual servers in the cloud with a couple of mouse clicks. This can significantly lower the pain of running a DR solution and potentially make it an attractive topic to bring up with customers.
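Those “couple of mouse clicks” amount to little more than an API call per protected server plus a health check, which is why the test can be scripted end to end. Here's a toy simulation of what such a scripted DR test might look like; the `CloudDR` class is a hypothetical stand-in, not any real provider's API:

```python
class CloudDR:
    """Toy stand-in for a hybrid-cloud DR provider's API."""
    def __init__(self, replicated_images):
        self._images = set(replicated_images)  # VM images that made it offsite

    def start_vm(self, image: str) -> bool:
        """Boot a VM in the cloud from a replicated backup image."""
        return image in self._images

def run_dr_test(cloud: CloudDR, protected_servers: list) -> list:
    """Try to start every protected server in the cloud; return the failures."""
    results = {img: cloud.start_vm(img) for img in protected_servers}
    return [img for img, ok in results.items() if not ok]

cloud = CloudDR(["mail-server.img", "db-server.img"])
failed = run_dr_test(cloud, ["mail-server.img", "db-server.img", "web-server.img"])
print(failed)  # ['web-server.img'] -- this image never made it offsite
```

The point of the sketch is the failure report: a test like this, run regularly, is exactly the “checking that the insurance still works” that traditional DR setups make so painful.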
Follow me on Twitter: EricSSwiss
I had the pleasure of going to ExecEvent’s first Next-Generation Object Storage Summit last week in Miami. In attendance were a number of the major players in the object storage space — some well-known companies and some that may not be. They were Data Direct Networks, Cleversafe, Quantum, Nexsan and Scality. For a write-up on the companies that attended the Object Storage Summit and their products, see the link here.
As discussed in the last post, object-based storage architectures are clearly a hot topic and one that VARs should be familiar with — and be selling. Object storage provides a scalable file storage solution that meets the performance requirements of “big data” use cases. But it also gives customers a compliant lower-tier storage solution and long-term archive that can replace the data protection process for much of their current data assets.
Object-based storage devices, or object storage for short, have been around for a number of years. But it’s become a hot topic with the near-endless capacity needs of cloud storage and the “big data” requirement for increasingly large, shared storage infrastructures that can be accessed and searched like a single system.
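To make the contrast with a traditional file system concrete, here's a minimal sketch of object-store semantics: a flat namespace of keyed objects that carry their own metadata, which is what makes the “searched like a single system” part possible. The class and method names are hypothetical, not any vendor's API:

```python
import hashlib

class ObjectStore:
    """Toy flat-namespace object store: key -> (data, metadata)."""
    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes, **metadata) -> str:
        """Store an object with its metadata; return a content hash (an ETag)."""
        etag = hashlib.md5(data).hexdigest()
        self._objects[key] = (data, {"etag": etag, **metadata})
        return etag

    def get(self, key: str) -> bytes:
        return self._objects[key][0]

    def search(self, **criteria) -> list:
        """Find keys whose metadata matches all the given criteria."""
        return [k for k, (_, md) in self._objects.items()
                if all(md.get(f) == v for f, v in criteria.items())]

store = ObjectStore()
store.put("backups/db-2012-11.img", b"...", retention="7y", tier="archive")
store.put("media/clip42.mov", b"...", tier="performance")
print(store.search(tier="archive"))  # ['backups/db-2012-11.img']
```

Because every object is addressed by a key and described by its metadata rather than by its position in a directory tree, the namespace can be spread across as many nodes as capacity demands while still presenting one searchable pool.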
Here we are again after a natural disaster talking about DR. It seems like we do this every few years, starting with 9/11, then Hurricane Katrina and now Hurricane Sandy.
Humans are reactive, not proactive, except perhaps about a repeat of the most recent event. After each of these disasters there was certainly heightened awareness and some action taken by companies, but it’s safe to say that few companies took the lessons of disaster preparedness to heart and actually implemented credible DR plans.
Part of the reason is that credible disaster recovery planning has historically been expensive and complex. Starting with offsite vaulting of backup tapes and evolving through disk backup, deduplication and WAN-optimized replication, up until recently DR solutions have remained beyond the means of most SMBs. Now, however, technology may have come to the rescue. The cloud and widespread server virtualization have created a real DR solution that most companies can afford.
Remember Overland Data? Way back in 2000, it was a tape company selling a modular tape library (using AIT and DLT drives back then). It was a great story: stackable libraries that grew with capacity demands, even though very few customers ever bought the additional modules. But the stackable concept was appealing, and the company sold a lot of libraries. More importantly, it drove tape library design for the next decade, with every manufacturer coming out with some kind of scalability story.
In the mid-2000s, Overland Storage (the name it took in 2002) lost its way a bit with a failed attempt at a storage resource management (SRM) software product. If you remember back then, customers weren’t buying SRM tools even from the big companies. But you had to admit it took guts to try: Overland had a new management team working to move the company away from a reliance on tape, and it was an innovative move.
Since then, Overland has continued to sell tape libraries with the Neo series and has more recently focused on the small-to-midrange disk business. It bought the SnapServer line popularized by Quantum and built it into a solid business. Earlier this year, it came out with SnapSAN, a clustered block storage device reminiscent of the original LeftHand Networks or EqualLogic systems, but with a more flexible design, more features and a lower price tag.
Now Overland has come out with SnapScale, a clustered NAS system that grows into the multiple-petabyte range with a global file system and a host of storage features. This “loosely coupled” cluster architecture stores data at the file level instead of chunking files into smaller blocks and spreading them out across the available storage nodes. The result is a much simpler system from a metadata perspective, and one that’s probably more appropriate for Overland’s target market.
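The metadata simplification is easy to illustrate: a block-chunking cluster must track a placement record for every chunk of every file, while a file-level cluster like the one described tracks one record per file. A rough sketch of the two placement strategies (the hash-based node assignment and chunk size are assumptions, just to make the comparison concrete):

```python
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]
CHUNK = 64 * 1024  # assumed 64 KB chunk size for the block-level case

def pick_node(key: str) -> str:
    """Deterministically map a key to a storage node via a hash."""
    return NODES[int(hashlib.md5(key.encode()).hexdigest(), 16) % len(NODES)]

def block_level_placement(path: str, size: int) -> dict:
    """One metadata entry per chunk, with chunks spread across the nodes."""
    n_chunks = -(-size // CHUNK)  # ceiling division
    return {f"{path}#{i}": pick_node(f"{path}#{i}") for i in range(n_chunks)}

def file_level_placement(path: str, size: int) -> dict:
    """A single metadata entry: the whole file lives on one node."""
    return {path: pick_node(path)}

# A 10 MB file: 160 chunk-placement records vs. one file-placement record.
size = 10 * 1024 * 1024
print(len(block_level_placement("/vol/report.pdf", size)))  # 160
print(len(file_level_placement("/vol/report.pdf", size)))   # 1
```

The trade-off is that block chunking can stripe a single large file's I/O across every node, while file-level placement keeps the metadata service small and simple, which fits the midmarket buyer Overland is aiming at.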
These days the idea of scale-out, commodity storage seems to make sense, but it may be a bit too much of a “roll your own” approach for midmarket companies. Many of these products are software solutions that require a skilled IT group to make them fit — something smaller companies typically don’t have. Overland’s scalable SAN and NAS systems leverage that scale-out architecture and the economics of commodity hardware to keep costs down, but they still bring big-company storage functionality to smaller companies.
Overland has been an innovator and always had a strong VAR following. When I was a VAR it was a first choice for its technology as well as its field people and channel program. I think this may be the right product for the times, from a vendor that understands VARs.
Follow me on Twitter: EricSSwiss
Readers of this blog know that I’ve got a special interest in new products, especially those that can open doors for a storage VAR or MSP. Working for an independent reseller myself for a dozen years, I was always on the lookout for products with a high sexiness factor because they got meetings for the sales team. Once inside, they were free to follow any potential opportunity, even if it wasn’t for the product that landed the appointment. In this and the next few entries I’m going to present some products that can do that.
Managing a growing virtualization environment is a challenge for more and more companies, but one that’s easy to put off doing anything about — or spending any money on. VARs know only too well how hard selling IT management software is to the midmarket companies that make up a large part of their calling bases.
I went to an analyst event a couple of weeks ago sponsored by a storage company that many in IT may not be aware of, despite the fact that it has been in business for more than 10 years. It is private, with yearly sales of more than $250 million and more than a thousand active customers. DataDirect Networks (DDN) makes some of the fastest storage systems in the world. Originally focused on the high-performance computing (HPC) and media and entertainment spaces, it’s now expanding into the more mainstream IT infrastructure market. For VARs looking for a way to unseat a “three-letter” incumbent at an account, this should be of interest.
I know that “big data” has become one of the darlings of the storage industry, as evidenced by the number of times this term is used in online technical media articles. Storage Switzerland, the firm I work for, has added its voice to the chorus but hopefully has provided some clarification. A piece we did called “What is Big Data?” is the first in a series of articles on the topic that attempts to define this overused term and go into why it was created in the first place. What I’d like to do in this blog is synopsize that information and discuss what big data means to VARs.
We’ve coined the term big data and talk about it because it represents a problem.
I read an article by Gartner recently that talked about the “devaluation of IT.” It discussed how, over the past 10 years, budgets have remained flat while management’s expectations of IT, and the requirements to understand and implement solutions involving the cloud, virtualization, mobile devices, etc., have kept increasing. To cover these “unfunded mandates,” IT has done more than just cut fat; it has killed investment in things like ongoing staff training and upgrades to the existing infrastructure. What does this mean for the channel? Are there things VARs and MSPs should be doing to avoid these problems, or could this situation actually present an opportunity?