We think in terms of solutions; we categorize information that way. Specific answers are easier to remember than vague capabilities. For years, when IT had a performance problem, the first answer was faster drives, and then more spindles. Now the solution is solid-state storage. But it’s rarely that simple, and customers’ tendency to jump to a solution before they contact their VAR can make for a frustrating experience.
They ask VARs to show them a solution that they’ve done some cursory research on, or perhaps just heard about. They don’t say, “I think my storage is causing application performance issues; what types of things can you show me to solve that?” Instead we get, “Can you show me some SSDs?” Continued »
Investment or commitment is what drives a successful channel partnership. Real “value-added” resellers don’t carry a huge line card; they pick and choose their vendors. And they usually choose vendors that pick and choose their VARs. For vendors, this investment means time spent on training VAR technical teams, useful sales training and opportunity generation. Notice, I didn’t say “leads.” This is a partnership, and both parties must bring deals to the table and typically take turns. Opportunity generation comes from resellers sharing their calling base and vendors sharing introductions to companies they’ve accumulated through corporate marketing activities. It can also include joint lead generation activities like seminars, sporting events and “lunch and learn” sessions.
Good vendors and good VARs alike should be looking for partners who understand the need for commitment and investment. Part of that investment is taking the time to understand the organization you’re considering. Continued »
Amazon was in the news last week for a partial outage of its EC2 cloud computing platform, which caused a disruption for some popular websites. Several weeks ago it was Google’s turn in the fishbowl, although its problem was different and the impact was on email rather than hosted Web services. While some storage vendors may use these cloud outages to push one product or another, I think the message for the user community and the VARs that service it is the same — be prepared.
If (when?) a cloud storage provider has a systems issue that affects performance, suffers a partial outage or goes down altogether, companies need to be ready. I think perhaps an even scarier scenario is a smaller cloud provider simply going out of business. This would most likely be very sudden, since most companies in financial trouble take great pains to hide that fact until it’s too late. For backup customers, this would mean a little nervousness until that first full backup is taken (and sent to the new cloud provider). For those using the cloud for primary storage or even reference data, it’s another story. Continued »
Change-based replication products have been around for a number of years, providing a simple solution for high availability (HA) and disaster recovery (DR). The technology creates a second copy of a given data set, folder, directory, drive letter, etc., on another server set up to run the same application. It keeps the “target” in sync with the “source” by capturing disk writes at the source computer’s file system layer and replicating these byte-level changes to the target. This continuous process creates a near-real-time copy of the data required to run the application. Some products also include a mechanism that can sense when the source server’s process encounters a problem and can fail over operations to the target server. This has been an industry-standard alternative to the more complicated (and expensive) clustering software that is typically required for each application. Moving the target server to a remote location creates an effective DR solution as well. With the rise in server virtualization, another application for this technology has come up: eliminating “virtualization stall.” Continued »
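To make the mechanism above concrete, here is a minimal Python sketch of block-level change replication. It is a deliberate simplification: real products hook the file-system layer to capture writes as they happen, whereas this sketch detects changed blocks after the fact by comparing hashes against the last sync. All function names are illustrative, not from any vendor’s product.

```python
import hashlib

BLOCK_SIZE = 4096  # bytes per block; real products intercept writes in the I/O path


def changed_blocks(data: bytes, prev_hashes: dict) -> dict:
    """Return {block_index: block_bytes} for blocks that differ from the last sync.

    prev_hashes maps block index -> last-seen SHA-256 digest and is updated
    in place, so only the delta needs to travel to the target.
    """
    changes = {}
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        idx = offset // BLOCK_SIZE
        if prev_hashes.get(idx) != digest:
            changes[idx] = block
            prev_hashes[idx] = digest
    return changes


def apply_changes(target: bytearray, changes: dict) -> None:
    """Write only the changed blocks into the target copy, growing it if needed."""
    for idx, block in changes.items():
        start = idx * BLOCK_SIZE
        end = start + len(block)
        if end > len(target):
            target.extend(b"\x00" * (end - len(target)))
        target[start:end] = block
```

After an initial full sync (every block is “changed” the first time), each pass ships only the blocks that were modified, which is why this class of product can keep a remote target in near-real-time sync over a modest WAN link.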
Everybody “gets” the cloud and its potential benefits. A cloud implementation can offer nearly limitless scale, simple sharing, off-site protection, utility-like pricing, etc. But, as is often the case with newer technologies, the implementation details can be as important as the core technology itself. An example of this concept is the popularity of the appliance format, which can enable new products to be implemented more easily, reducing the difficulty some users have getting up and running. Continued »
Over the past several years, alternatives to traditional backup software products have been developed for backing up virtual machines. These image backup solutions leverage the fact that VMs encapsulate the entire application, OS and server configuration state in a single file — like a VMDK for VMware. This reduces the entire server backup process to a single (large) file backup and removes the complexity involved with traditional backups, which must understand file structures, application data objects, etc.
But, even with image-based backups, returning applications to operational status requires several steps. Continued »
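As a simplified illustration of the single-file backup idea above (not any vendor’s actual tooling), the sketch below copies a VM image file as one unit and records a checksum so the copy can be verified before anyone attempts a restore. Real products also quiesce the guest or take a snapshot before copying; the helper names here are hypothetical.

```python
import hashlib
import shutil
from pathlib import Path


def backup_vm_image(image_path: str, backup_dir: str) -> str:
    """Copy a VM image (e.g., a VMDK) to the backup directory as a single file
    and store its SHA-256 alongside it. Returns the digest."""
    src = Path(image_path)
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / src.name
    shutil.copy2(src, dest)  # the whole server state travels as one file
    digest = hashlib.sha256(dest.read_bytes()).hexdigest()
    Path(str(dest) + ".sha256").write_text(digest)
    return digest


def verify_backup(backup_path: str) -> bool:
    """Re-hash the backed-up image and compare to the stored checksum."""
    p = Path(backup_path)
    stored = Path(str(p) + ".sha256").read_text()
    return hashlib.sha256(p.read_bytes()).hexdigest() == stored
```

Note what the sketch does not do: register the image with a hypervisor, reconfigure networking or restart the application. Those are exactly the “several steps” that still stand between a good image backup and an operational application.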
While a lot of attention is paid to natural disasters, the truth is that natural disasters don’t hold a candle to the man-made variety, in terms of the impact on IT and data protection. Events like the recent Gmail fiasco can be a watershed for smart VARs. Continued »
I wrote recently about how VARs are in need of a good block disk array solution for their midmarket customers. I explained that arrays from the traditional Tier 1 vendors lack the differentiation they used to enjoy, and “by the pound” storage from the commodity vendors may not have the reliability that even a smaller business needs. But there’s more.
In the increasingly virtual world of IT storage, the focus continues to shift to software features as a way to differentiate vendors’ products. It seems like every week a startup comes out with a new storage solution that installs as software on “commodity hardware.” But what about that data storage hardware? How well is it designed, how well is it supported, and how long has the company that makes it been in business? In a rush to talk about software features, have we forgotten the hardware? Continued »
A sales trainer once said, “Never ask a question you don’t know the answer to.” You don’t want to create an objection that you can’t resolve. Said another way, you shouldn’t bring up a pain point that you can’t address (hopefully with a PO). On the customer side, there’s a similar sentiment; IT admins and managers don’t usually go looking for problems that they don’t have solutions for. Call it denial, pragmatism or just self-preservation, but why go looking for trouble? Also, who needs a few more projects on the whiteboard?
E-discovery, the process of searching for and obtaining electronic documents and information in response to a pending legal action, has historically been in the realm of problems that people don’t want to look for. When it comes to the question of how to get their arms around all the data that could be a liability, IT hasn’t had an answer — so it hasn’t asked. The prospect of crawling individual file systems, network storage devices and email servers to create an index of data, with the CIO and legal department breathing down IT managers’ necks, was bad enough. But the real elephant in the room has been legacy backup tapes.
Many organizations have elaborate schedules for taking backups and supporting restores of short-term data, but most have a subset of backups that end up off-site in perpetuity. And for a large percentage of companies, these are still on tapes.
The prospect of having to recall, restore and search these archived backup tapes in response to pending litigation is truly a nightmare scenario for most IT managers.
Fortunately, there is a solution available that can address this problem. Index Engines has recently rolled out a cloud e-discovery service that will search, index and recover files from legacy backup tapes for less than it would probably cost a company to do the job itself — assuming the company has the time, personnel and infrastructure available.
This service, called Look and Learn, starts with clients shipping tapes to the Index Engines facility (or having them sent by the archiving company), where they’re scanned and indexed without being restored. This index is then put online for the client to review and decide which (if any) files are needed to support their pending litigation. The people who review this data are often from different organizations (such as outside law firms) and geographically dispersed, making the cloud the perfect medium for this service. When the required data is identified, copies can be ordered from the appropriate tapes and sent to the client. And after the project is complete, the index can be retained online for as long as needed to support future discovery requirements, and the tapes are sent back to the client’s archive.
The technology for this process was developed by Index Engines and has been sold as software both directly to companies and to e-discovery service providers. This new cloud e-discovery implementation brings the solution to a larger group of potential customers, many of which haven’t been involved in legal discovery of electronically stored information in the past. For IT managers in these organizations, and the VARs that support them, the Look and Learn service can be the answer to a question that’s not getting asked.
Follow me on Twitter: EricSSwiss
In a previous post I discussed the process of comparing SSDs and how it’s a little different from evaluating traditional hard disk drive (HDD) storage systems. First off, solid-state flash storage has several characteristics that make it very different from mechanical HDDs, and these must be understood. Also, there are fewer standard specs in the SSD space, so meaningful comparison between vendors is more difficult than it is with HDDs. The recommendation was to make comparison a two-step process in which you use published specs to create a short list and then undertake in-house testing to determine the best product or vendor. In that first post I went into the details of comparing published data, and in this one I’ll talk about how to perform an SSD test, including details to consider when testing SSDs. Continued »
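As a rough illustration of the kind of in-house testing discussed here, the Python sketch below times random 4 KB reads against a file and reports simple latency statistics. It is a toy, not a test plan: serious SSD evaluation typically uses purpose-built tools such as fio, bypasses the OS page cache (e.g., O_DIRECT), preconditions the drive and runs long enough to see steady-state behavior. The function name and output fields are my own.

```python
import os
import random
import statistics
import time


def random_read_latency(path: str, io_size: int = 4096,
                        iterations: int = 1000) -> dict:
    """Issue random reads of io_size bytes across a file and report
    average and 99th-percentile latency in milliseconds.

    Caveat: reads may be served from the OS page cache, so treat the
    numbers as illustrative only.
    """
    file_size = os.path.getsize(path)
    latencies = []
    fd = os.open(path, os.O_RDONLY)
    try:
        for _ in range(iterations):
            offset = random.randrange(0, max(1, file_size - io_size))
            start = time.perf_counter()
            os.pread(fd, io_size, offset)  # positioned read, no seek call needed
            latencies.append(time.perf_counter() - start)
    finally:
        os.close(fd)
    latencies.sort()
    return {
        "avg_ms": statistics.mean(latencies) * 1000,
        "p99_ms": latencies[int(len(latencies) * 0.99)] * 1000,
    }
```

Even a toy like this makes one testing lesson visible: report a tail percentile alongside the average, because SSD latency outliers (garbage collection pauses, for instance) are exactly what averages hide.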