In a recent post, we looked at image backups as an alternative to traditional data backup solutions. When you consider the cost and complexity of the traditional approach (backup servers, client agents, application modules, scheduling, plus the work of managing a system that moves as much data as these do), it's no wonder people are interested in another option. We discussed how backing up server images is much less complex and makes more sense. But beyond simply being a better way to back up data, server images also offer advantages that can create a better backup: storing less data, handling less data, completing the entire process faster and being easier to administer. Continued »
Deduplication was effectively introduced by Data Domain about 10 years ago in the form of a storage appliance, which was presented to the backup software as either a disk LUN or, more frequently, a NAS mount point. This dedupe technology's early success was due largely to the complexity of the disk-to-disk backup alternative of the day, virtual tape libraries (VTLs).
Another dedupe tool was available at about the same time, but it was installed on the client server. The storage appliance implementation was the overwhelming favorite as a technology, since it didn't require replacing the backup software or taxing client servers with the dedupe processing overhead.
Deduplication algorithms differ somewhat, but all use some method of examining each block of data, assigning it a unique identifier (a “hash key”) and comparing it with an index (a “hash table”) of all previous blocks. Continued »
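The hash-and-compare loop described above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's implementation; the function names and the choice of SHA-256 as the hashing algorithm are assumptions for the example.

```python
import hashlib

def dedupe(blocks):
    """Keep only blocks whose hash key hasn't been seen before.

    Returns (store, recipe): `store` is the "hash table" mapping each
    hash key to its unique block; `recipe` is the ordered list of keys
    needed to rebuild the original stream.
    """
    store = {}
    recipe = []
    for block in blocks:
        key = hashlib.sha256(block).hexdigest()  # the block's "hash key"
        if key not in store:
            store[key] = block   # new block: store the data once
        recipe.append(key)       # duplicates cost only a reference
    return store, recipe

def rebuild(store, recipe):
    """Reassemble the original data stream from the dedupe store."""
    return b"".join(store[key] for key in recipe)
```

The payoff is visible even in this toy: feed in three blocks where two are identical and the store holds only two blocks of data, while the recipe preserves enough information to reconstruct all three.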
IT analyst Tony Asaro wrote an interesting article for this month's issue of Storage magazine. "Content chaos" is a great way to describe the junk-drawer approach to file storage management (can we really call it "management"?) that many organizations take. I've used this analogy before: Unbridled data growth is akin to a garage that's full of stuff. Sometimes it may be cheaper to build a bigger garage (buy more capacity) than to organize and throw stuff out (delete data). The rising cost of admin time and IT's inability to get data owners to stay on top of their stored data may be part of the problem. Whatever the reason, the result is often a growing pile of unstructured data that most certainly contains duplication and useless files that nobody has time to go through.
The answer in a lot of cases may be to not go through it. Continued »
Cloud storage seems to be a solution looking for a problem. The industry hasn’t been able to agree on exactly what “cloud storage” really means. But there is no shortage of vendors ready to tell your customers what they need and no shortage of customers ready to agree with them — and start planning what they want to buy. It seems to be human nature — at least in the tech industry — that we spend more time thinking about a solution than we do on really defining the problem and how it should be solved, or if it should be solved. Therein lies the opportunity for a VAR. Continued »
I was listening to a podcast on the latest data classification trends and got to thinking about how good a job current methodologies are doing. When it comes to storage tiering, I'd have to say there's lots of room for improvement. Tools that assess application latency can help improve the situation.
In the storage space, data classification deals with ways to identify data objects (files, usually) so that they can be stored “appropriately.” What’s appropriate depends on the motivation for classifying the data in the first place. After all, it’s certainly easier to throw it all into a junk drawer than to worry about how your data’s organized. Continued »
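To make "appropriately" concrete, here is one very simple classification policy sketched in Python: assign a file to a tier based on how recently it was accessed. The tier names and age thresholds are assumptions for illustration; real classification tools weigh many more attributes (owner, type, compliance rules, access patterns).

```python
import os
import time

def classify(path, now=None):
    """Assign a storage tier from last-access age.

    A deliberately simple policy: hot data stays on fast primary
    storage, aging data moves to capacity-optimized storage, and
    stale data becomes an archive (or deletion) candidate.
    """
    now = now if now is not None else time.time()
    age_days = (now - os.stat(path).st_atime) / 86400

    if age_days < 30:
        return "tier-1"   # active data: fast primary storage
    if age_days < 365:
        return "tier-2"   # aging data: capacity-optimized storage
    return "archive"      # stale data: archive or cleanup candidate
```

Even a crude rule like this beats the junk drawer, because it gives a migration tool something actionable to act on, which is the whole point of classifying the data in the first place.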
From the VAR perspective, backup has an interesting history. About 10 years ago, selling and integrating backup systems was a very successful business model for many VARs in the storage space. Project costs typically included as much in software as hardware, and professional services to install, integrate, configure and train customers could reach 25% of the total. With the advent of affordable disk backup and especially dedupe, backup started to decline as the product of choice for storage VARs. However, the use of server images, like those created for virtual machines (VMDK files in VMware), has spawned a new approach to backup — image backup — that may offer a new opportunity for VARs. Continued »
A recent brief published by global IT consulting firm Bain & Company described five questions that IT needs to answer as businesses emerge from the recession. A couple of these questions are pertinent for VARs interested in being a part of that economic recovery in 2010. Continued »
“Entropy” is a term you may remember from science classes. It refers to energy levels and the organization of complex systems. It takes energy to maintain order in a complex system, and since everything in the universe is constantly being pulled toward a state of lower energy, systems become disorganized. Stated another way: Everything eventually falls apart unless it’s maintained. There’s a show on the History Channel called “Life After People” that could be renamed “Entropy in Action.” It shows computer-generated images of cities and structures as they’re slowly overrun by vegetation, surviving animals and the weather. So what’s this got to do with storage optimization?
Computer systems suffer from entropy as well. It’s the quiet deterioration in performance or available storage capacity that seems to occur over time, as a result of the constant change a system sees — changes in workloads, resources, even administrators. Continued »
SearchDataCenter.com contributor Mark Holt recently put together the "Top 20 universal truths in the data center." This is an extremely insightful list, and I'm going to cherry-pick a few of these and discuss how I think they relate to the data storage reseller business.
Buying new hardware doesn’t solve business problems — unless the business problem is the hardware. Continued »
Fibre Channel over Ethernet, as the name implies, allows Fibre Channel traffic to be run over Ethernet networks. Technically, the FCoE protocol encapsulates FC frames inside jumbo Ethernet frames, replacing the FC-0 and FC-1 layers in the stack with Ethernet. At the server, it replaces the Ethernet NICs and FC HBAs with a single converged network adapter (CNA), which provides all the network connectivity for that host via a single 10 Gigabit Ethernet (10 GbE) connection. This cable connects to an FCoE switch, which consolidates FCoE cables from multiple servers, as it connects to the existing data center FC SAN and Ethernet LAN. Continued »
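The encapsulation idea is easy to see in code. Below is a simplified Python sketch of wrapping an FC frame in an Ethernet frame using the FCoE EtherType (0x8906). The FCoE header and trailer layouts are reduced here to reserved padding plus the SOF/EOF delimiter bytes, and the specific SOF/EOF code values are illustrative assumptions; the full frame format (version field, exact reserved-bit layout, Ethernet FCS) is defined in the FC-BB-5 standard.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE traffic

def encapsulate(fc_frame, src_mac, dst_mac, sof=0x2E, eof=0x41):
    """Wrap a Fibre Channel frame in an Ethernet frame (simplified).

    Layout sketched here: Ethernet header, FCoE header ending in the
    start-of-frame (SOF) delimiter, the encapsulated FC frame, then a
    trailer carrying the end-of-frame (EOF) delimiter. The Ethernet
    FCS that hardware appends on the wire is omitted.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(13) + bytes([sof])   # reserved bytes + SOF
    fcoe_trailer = bytes([eof]) + bytes(3)   # EOF + reserved bytes
    return eth_header + fcoe_header + fc_frame + fcoe_trailer
```

Because FC frames can approach 2,180 bytes once encapsulated, the resulting Ethernet frame exceeds the standard 1,500-byte payload, which is why FCoE requires the jumbo frames mentioned above.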