Dedupe’s been around for quite a while and has found its way into most backup products — even showing up in non-backup storage products as well. When looking at backup products, most customers fixate on the backup window, and more specifically, on when the backup session completes relative to that window. This is understandable, since backups that extend beyond this allocated time can be pretty disruptive to networks and other applications. However, the completion of the backup session also marks the point at which data is safe, which is the real objective of the entire backup process. For organizations that send these backups to an offsite location, the overall backup session time, or “wall clock” time, must include the data replication process as well. Continued »
In a recent article on SearchStorage.com, the question was raised as to whether file virtualization is a technology in decline or just transitioning. As a review, file virtualization abstracts the physical location of a file from the user or application, bringing flexibility to storage managers and shielding the user from the disruption of storage administration and data protection.
The use cases presented for file virtualization are: a way to scale existing NAS systems (to alleviate the “silo effect” that NAS has historically had), a migration engine for archiving, an alternative to Microsoft’s Windows Distributed File System (DFS) and, most recently, an automated tiered storage engine and a bridge to the cloud. The point is raised that some customers might be hesitant to put a third-party appliance on top of their collection of high-end NAS systems, partly because a lot of NAS systems have addressed the silo effect. This may just mean that clustering high-end NAS boxes isn’t the best use case. What about the others? Continued »
The drivers for virtualized server projects include simplifying the creation and management of server instances, consolidating the virtual machines on a few physical servers and providing overall flexibility. However, a shared storage infrastructure is needed to support basic functionality for the virtualized server environment: operations like VM and storage migration, as well as high availability and load balancing. Shared storage also enables off-host backup and a DR strategy. This shared storage can be implemented with a NAS platform or with block-based iSCSI or Fibre Channel. Continued »
In a recent post, we started talking about automated tiered storage and the technologies it involves. With the advent of solid-state storage, a “Tier 0” has been added above the current Tier 1, which has traditionally been fast disk (Fibre Channel or SAS). Given the price tag of SSD, leaving this new tier underutilized is prohibitively expensive. Automated tiering offers a mechanism to move data into and out of the SSD tier, but also a way to make better use of storage tiering in general compared with manually moving data between tiers.
Automated tiered storage puts the data movement decision closer to the storage, rather than on the application server, for example. This “data placement” decision is typically based on the activity level of the data. There are a few different ways automated storage tiering is being implemented, the first of which is within the disk array itself. Many of the major disk array manufacturers, as well as a number of smaller vendors, now offer some kind of data movement functionality in the storage controller. For most of these solutions, automated tiering software essentially tracks the access patterns of data blocks, LUNs or files and moves them to the most appropriate tiers of storage, including SSDs.
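To make the controller-side logic concrete, here’s a toy sketch of access-pattern-based tiering. The tier names and thresholds are hypothetical, not any vendor’s actual implementation — real arrays do this on sub-LUN extents with far more sophisticated heat statistics:

```python
from collections import defaultdict

class TieringPolicy:
    """Toy model of array-based automated tiering: count accesses per
    block over a monitoring window, then promote hot blocks to SSD and
    demote cold blocks to SATA."""

    def __init__(self, promote_threshold=100, demote_threshold=10):
        self.access_counts = defaultdict(int)   # block_id -> accesses this window
        self.tier = {}                          # block_id -> current tier name
        self.promote_threshold = promote_threshold
        self.demote_threshold = demote_threshold

    def record_access(self, block_id):
        self.access_counts[block_id] += 1

    def rebalance(self):
        """Run at the end of each monitoring window; returns planned moves
        as (block_id, from_tier, to_tier) tuples."""
        moves = []
        for block_id, count in self.access_counts.items():
            current = self.tier.get(block_id, "tier1_fc")
            if count >= self.promote_threshold and current != "tier0_ssd":
                moves.append((block_id, current, "tier0_ssd"))
                self.tier[block_id] = "tier0_ssd"
            elif count <= self.demote_threshold and current != "tier2_sata":
                moves.append((block_id, current, "tier2_sata"))
                self.tier[block_id] = "tier2_sata"
        self.access_counts.clear()              # start a fresh window
        return moves
```

The key design point this illustrates: the placement decision lives with the storage, driven purely by observed activity, with no involvement from the application server.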
Intelligent caching appliances are another area of automated tiered storage that’s being driven by the need to effectively implement solid-state storage. Like the array-based solutions, these appliances dynamically move data up to faster tiers of storage, usually SSD or DRAM. These appliances can be connected to different vendors’ storage systems to provide a consolidated solution.
A third implementation of automated tiering is file virtualization. This technology usually resides in an appliance that sits on the storage network and manages access to files. The appliance can transparently move files among different storage subsystems that it’s connected to, usually based on access patterns. Like the caching appliance, this implementation can be used to create a multivendor, integrated solution.
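A minimal sketch of the abstraction a file virtualization appliance provides (purely illustrative; the names are made up and real appliances sit in the data path): clients see a stable logical path while the appliance remaps it to whichever backend currently holds the file:

```python
class FileVirtualizationNamespace:
    """Toy global namespace: logical paths stay constant while files
    migrate between physical storage backends."""

    def __init__(self):
        self.location = {}   # logical path -> (backend, physical path)

    def create(self, logical_path, backend):
        self.location[logical_path] = (backend, logical_path)

    def resolve(self, logical_path):
        """Where a client request for this path is actually directed."""
        return self.location[logical_path]

    def migrate(self, logical_path, new_backend):
        """Move a file (say, cold data to a SATA filer or the cloud).
        The client-visible logical path does not change."""
        _, physical = self.location[logical_path]
        self.location[logical_path] = (new_backend, physical)
```

This is why the migration is transparent: the appliance updates only its own mapping, so users and applications keep the same path before and after the move.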
The takeaway for VARs is that automated tiering is an up-and-coming technology, one that can generate a lot of interest in its own right. But it can also enable other solutions — like SSD, intelligent archiving and even cloud storage. Find out what the automated tiering capabilities are for the solutions you currently represent and understand where you may need to add another vendor.
Follow me on Twitter: EricSSwiss.
I’ve always liked infrastructure management solutions, as products for VARs to sell. They have broad appeal, they promote higher-level discussions (rather than fulfillment) and they usually lead to other projects. Here are two we’ve seen recently. Continued »
How do you pick a good storage vendor partner? Maybe you’ve seen or heard about their technology or had customers ask about it. But aside from their technology or solution, how do you know upfront if they’ll be a good vendor for you? Here are three things to think about, outside of product details, that can give you some insights into how a company that makes a great product will be as a storage vendor partner. Continued »
Automated tiered storage isn’t a new concept — it’s been in archiving systems (HSM) as well as various iterations of information (or data) lifecycle management for years. It refers to the process of moving data between different classes, or tiers, of storage without human intervention. Storage tiering has mostly been a cost-saving strategy, typically implemented with Fibre Channel or SAS drives on one tier, SATA drives on a second tier, and tape (if present) on a third tier. Archive systems moved data off high-speed disk to slow disk or tape when it became inactive and brought it back when it was needed.
Recently, solid-state disk (SSD) created another storage tier and brought a new application to automated tiered storage. Instead of moving less active data to slower, cheaper storage, systems now move more active data to faster, more expensive storage. This new wrinkle kind of ups the ante for storage tiering as a technology. Continued »
Symantec last month published the latest edition of its State of the Data Center report. It’s got some interesting results for VARs, especially in the area of data protection solutions and services. Here are the top five findings of the survey: Continued »
I mentioned in a previous post how IT seems to focus on point solutions and short-term tactical thinking, as opposed to the more long-term strategic kind, and how this is due in part to tighter budgets and the risk avoidance that comes with human nature. People are more comfortable making incremental changes than big, sweeping ones. They’re also more apt to get them funded. But when you’re focused on taking these smaller steps, how do you know if you’re going in the right direction? IT organizations have a compass that helps them understand which products will add value to their infrastructure.
The points on the compass are things like cost reduction, power consumption reduction, management simplification, utilization improvement, performance increase, etc. Continued »
In a post last month, I wrote an open letter to vendors in an attempt to improve on a long-standing issue that VARs have with their vendors: lousy product training sessions. In it, I offered some suggestions to vendors about how to make these meetings more effective. I brought the topic of IT reseller training up to some others who work with the VAR community and got some additional input, which I’d like to share in this and future posts. Continued »