Storage Channel Pipeline


November 8, 2010  12:59 PM

Options for adding file capabilities to a SAN environment

Eric Slack

When considering how to add file capabilities to an existing SAN, traditional file servers, especially when run as virtual servers, are perhaps the easiest option. They’re relatively cheap to deploy and are the least disruptive, leveraging a familiar operating system and offering familiar services. But these general-purpose OS file servers don’t offer all the power and functionality of a dedicated NAS.

Dedicated NAS appliances provide a range of services that have made them an appealing alternative to traditional file servers for years. But putting in a standalone system means either running two storage infrastructures (a SAN and a NAS) or consolidating block storage onto the NAS. Running two infrastructures precludes any sort of storage consolidation, something that’s hard to swallow for companies that are committed to a SAN.

A NAS gateway, essentially the controller portion of a dedicated NAS appliance, is available from most storage vendors as a way to get high-quality file capabilities integrated into a block storage SAN. These solutions often bypass the storage services already included in the base SAN system, however, and they can still represent another point of management.

A virtual NAS appliance is a solution that combines many of the benefits of the other alternatives with a new one — the cloud. Continued »

November 1, 2010  9:30 AM

Hybrid cloud storage spawns software-based approach

Eric Slack

Last year I wrote a blog entry on selling services as a strategy for VARs to beat the recession. The rationale was that a subscription-based product could be more appealing to prospects that didn’t have the budget to implement a traditional infrastructure project. Selling data protection as a service can also be a revenue stream for VARs, a different direction for you if you’re accustomed to putting in hardware and software. But there are new developments in the space — in the form of a software-based hybrid cloud storage solution — that may be interesting to VARs, especially those that need more flexibility in the solution and more advanced features than have been available in the past.

Cloud is the technology that’s behind many of the subscription-based offerings that resellers are looking at. Continued »


October 25, 2010  10:21 AM

LTO-5 enables simple backup, DR and the cloud

Eric Slack

LTO-5 came out earlier this year with the doubling of capacity we’ve come to expect, bringing the amount of data that can be stored on a linear tape to 1.5 TB (native). The LTO consortium (run by HP, IBM and Quantum) has also continued to add interesting features to its format. Each generation carries forward the features of the last, beginning with WORM on LTO-3, drive-based encryption on LTO-4 and now a file system on LTO-5. More accurately, the group has partitioned the tape into two sections and made an IBM-written file system available as freeware that uses the LTO-5 partitions to store data and metadata separately. The Linear Tape File System (LTFS) has enabled tape to move beyond backup and even archiving. Continued »
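
To make the LTFS idea concrete, here’s a minimal sketch (my illustration, with a hypothetical mount point, not something from the post): once an LTFS volume is mounted, applications read and write the tape like any other file system.

    # Minimal sketch, assuming an LTFS volume is already mounted at the
    # hypothetical mount point below. LTFS keeps file metadata in the
    # tape's index partition and file contents in the data partition,
    # so ordinary file operations just work.
    import os
    import shutil

    LTFS_MOUNT = "/mnt/ltfs"  # assumed mount point (hypothetical)

    # Writing to tape is an ordinary file copy -- no backup app needed.
    shutil.copy("project_archive.tar",
                os.path.join(LTFS_MOUNT, "project_archive.tar"))

    # Browsing the tape is an ordinary directory listing.
    for name in sorted(os.listdir(LTFS_MOUNT)):
        print(name)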


October 18, 2010  9:30 AM

Beyond backup/restore: Business continuity systems

Eric Slack
In case there hasn’t been enough political messaging in your area, I’ll bring back an oldie: “It’s the economy, silly.” OK, so I’m paraphrasing, but living in Colorado these past few months has left me a little tired of the abrasive language from negative campaign ads. Apparently, we’re No. 1 — in outside political ad spending, that is. Where’s this going? I’d like to bring back another familiar theme: “It’s the recovery” or, more accurately, “It’s the business continuity system.” Continued »


October 11, 2010  11:45 AM

Trial-and-error storage monitoring and management results in cost explosion

Eric Slack

My father was in artillery in the Marine Corps, and as a kid I was fascinated by how they fired howitzers. Essentially, it’s a huge trial-and-error exercise — at least it was during the 1950s and ’60s (now it’s probably GPS-controlled, but stay with me). The first shot established the aiming point, and successive shots incorporated corrections in elevation, aim and range until the target was hit. Of course, these shells could be effective even if they didn’t actually hit the target — think of the “horseshoes and hand grenades” expression on a grand scale. Trial and error is fine for some things, but it doesn’t work as well for others, like data storage management and storage monitoring.

By trial and error, I’m referring to a process of asynchronous control, where a change or adjustment is made, then a delay, or latency, occurs before you perceive the effect of that adjustment. Can you imagine using trial and error to control many of the things that you do? How about optimizing application performance in an IT infrastructure? Continued »
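
Here’s a toy simulation of the problem (my illustration, not from the post): a controller keeps correcting toward a target, but every reading it acts on is several steps stale. The delayed feedback produces overshoot and oscillation, the storage-management equivalent of walking shells onto a target.

    # Toy illustration (not from the post): correcting toward a target
    # when every measurement lags the true state. The stale feedback
    # causes overshoot and oscillation before the system finally settles.
    TARGET = 100.0
    LATENCY = 3   # readings lag reality by 3 steps
    GAIN = 0.3    # how aggressively each correction is applied

    state = 0.0
    readings = [state] * LATENCY  # queue of delayed measurements

    for step in range(12):
        observed = readings.pop(0)            # what we can see (stale)
        state += GAIN * (TARGET - observed)   # correct against stale data
        readings.append(state)                # visible LATENCY steps later
        print(f"step {step:2d}: observed={observed:8.2f}  actual={state:8.2f}")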


October 4, 2010  10:33 AM

Primary deduplication’s effect on data integrity, performance

Eric Slack

Dedupe has been with us for the better part of 10 years. Because of the high percentage of duplicated data in the backup space, it was deployed there first. But risk also played a part in its appearance first in backup. Let’s face it: If your dedupe box craters, it’s still just a backup that’s lost. As a technology matures, it gets more stable, and users start looking for new places to apply it. Generally, their expectations of how much impact it will have (in this case, how much space it will save) also decrease. It’s kind of a risk-reward scenario. That explains why primary deduplication is getting attention these days.
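
For readers who haven’t looked under the hood, here’s a back-of-the-napkin sketch of how block-level dedupe saves space (my illustration, not any vendor’s implementation): chunk the data, hash each chunk, and store each unique chunk only once.

    # Toy fixed-block deduplication (illustrative sketch only): hash each
    # chunk and keep one copy per unique hash. A "recipe" of hashes is
    # all that's needed to rebuild the original stream.
    import hashlib

    def dedupe(data: bytes, chunk_size: int = 4096):
        store = {}    # hash -> unique chunk contents
        recipe = []   # ordered hashes to reconstruct the data
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)
            recipe.append(digest)
        return store, recipe

    # Backup streams are full of repeats, so the store stays small.
    data = b"A" * 4096 * 9 + b"B" * 4096
    store, recipe = dedupe(data)
    print(f"logical chunks: {len(recipe)}, unique chunks: {len(store)}")  # 10 vs. 2

    # The flip side is the integrity risk noted above: lose one shared
    # chunk in the store and you lose every file whose recipe uses it.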

The thought of primary deduplication certainly came up early on in the adoption cycle of the technology, but there were plenty of “high-value targets” for the dedupe vendors to go after in backup. When it was first introduced to backup customers, they were promised effective data reduction in the double digits — in the high double digits for some data sets — and by and large they got it. While dedupe has certainly not been adopted by everyone (current estimates hover around one-third for market penetration), dedupe vendors seem to be ready to move on. Continued »


September 27, 2010  1:03 PM

Optical storage solutions alternative: Software-only WORM

Eric Slack

For years, companies have used optical storage solutions — disk drives and libraries — for storing and archiving document images and other data subject to regulatory compliance. At one point, optical was the only technology that met these requirements for longevity and immutability (write once, read many, or WORM). When the ~50 GB capacity of optical disks was sufficient, this was a workable solution. But as a technology, the optical industry couldn’t increase data density like hard drives did and instead had to develop new formats to keep up with storage demands. This meant users had to absorb expensive hardware refresh cycles and endure data migration from the old format to the new.

But times have changed. Continued »
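
The software-only WORM idea in the title comes down to enforcing immutability in code rather than in the media. A crude sketch of the concept (my illustration, nowhere near a compliance-grade product, which adds retention clocks, tamper-evident audit logs and so on):

    # Crude software-WORM sketch (illustration only): write a file once,
    # drop its write permissions, and verify a stored fingerprint on read.
    import hashlib
    import os
    import stat

    def commit(path: str, payload: bytes) -> str:
        """Write once, then make the file read-only; return a fingerprint."""
        with open(path, "wb") as f:
            f.write(payload)
        os.chmod(path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)
        return hashlib.sha256(payload).hexdigest()  # store this elsewhere

    def read_verified(path: str, expected: str) -> bytes:
        """Read many: refuse to return data whose fingerprint has changed."""
        with open(path, "rb") as f:
            data = f.read()
        if hashlib.sha256(data).hexdigest() != expected:
            raise ValueError(f"{path} fails its WORM integrity check")
        return data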


September 20, 2010  10:17 AM

Transparent Interconnection of Lots of Links (TRILL): IP networking’s future

Eric Slack

Spanning Tree Protocol (STP) was invented years ago as a networking technology to prevent bridge loops (a destructive feedback condition) by allowing only one path between network switches or ports. A Layer 2 network protocol, STP computes a plan for routing traffic between every connected device through a “root bridge” such that only one path is used. This path, which is based upon rules configured by the user of the protocol, may not always be the most direct. The plan, or “spanning tree,” describes this set of nonredundant paths and disables all others. When a network segment goes down, an alternate path is chosen, but this process can take a few seconds, something that may be OK for communications but can be unacceptable in a storage network. Another standard, Transparent Interconnection of Lots of Links (TRILL), is designed to address this problem. Continued »
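
The spanning-tree plan is easy to see in miniature. Here is a concept sketch (my illustration with a made-up four-switch topology, not the actual protocol): starting from the root bridge, keep one path to every switch and block the rest.

    # Miniature spanning-tree computation (concept sketch, not real STP):
    # a breadth-first walk from the root bridge keeps one path to each
    # switch; every other link is blocked, trading redundancy for a
    # loop-free topology.
    from collections import deque

    links = {  # hypothetical switch topology containing a loop
        "root": ["sw1", "sw2"],
        "sw1":  ["root", "sw2", "sw3"],
        "sw2":  ["root", "sw1", "sw3"],
        "sw3":  ["sw1", "sw2"],
    }

    active, blocked, seen = set(), set(), {"root"}
    queue = deque(["root"])
    while queue:
        node = queue.popleft()
        for peer in links[node]:
            edge = frozenset((node, peer))
            if peer not in seen:
                seen.add(peer)
                active.add(edge)    # the one path STP leaves open
                queue.append(peer)
            elif edge not in active:
                blocked.add(edge)   # redundant link, disabled until a failure

    print("forwarding:", sorted(tuple(sorted(e)) for e in active))
    print("blocked:   ", sorted(tuple(sorted(e)) for e in blocked))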


September 13, 2010  10:30 AM

Backup alternative: Arkeia Network Backup

Eric Slack

In the last post, I talked about a couple of interesting storage alternatives for VARs that maybe weren’t thrilled with their existing disk vendors. As a VAR, having alternatives to show is your stock in trade. Customers rely on VARs, especially those they already do business with, to keep them up to speed on what’s out there. When they first entertain the thought of switching from an existing supplier, they ask their VARs in that space for options. In this post, I’ll talk about an alternative in the backup space: Arkeia. Continued »


September 7, 2010  11:24 AM

Alternatives for VARs that feel unloved by their current storage vendors

Eric Slack

I was talking with a storage VAR at VMworld last week and realized how little has changed even in the midst of so much change. First the change: Technology marches on, and that march is approaching a double-time pace. There are more and more product opportunities for storage VARs to sell, all from vendors that have compelling technologies and are eager for quality representation in the marketplace. Now for the part that’s not changed: Many, if not most, existing storage players, especially in the disk space, seem to care less and less about the welfare of their smart, independent storage VARs. I don’t know what I expected, especially in a down economy and in an industry where storage technology is becoming more generic every day. It’s dog-eat-dog, and a lot of VARs must feel like dog food. Maybe there’s an opportunity here someplace? Continued »

