In a recent article, Jeff Boles broaches a subject that’s probably at the forefront of many storage meetings that VARs have with their clients: storage performance. Most users can tell when they’re out of capacity, but solving a performance problem is not so clear-cut. This is due in part to the fact that getting data into and out of a storage system is arguably as important as how much data it will hold. The ugly truth for many users is that they’re still adding physical capacity to disk arrays that have long since run out of “I/O capacity.”
Judging by the amount of traffic we’re seeing for a recent article on I/O, “What is I/O, and why should you care?,” storage performance is a topic users want more information on. Here’s a synopsis of the article: Disk drives list performance in terms of sequential and random reads and writes, or simply “transfer rate” and I/O operations per second (IOPS). These specs refer to how fast a drive can get a single data object (like a file) onto or off of the platters (transfer rate) and how many individual read/write operations the drive can accomplish in a second (IOPS). Except for very large reference file applications, in the vast majority of use cases IOPS is the critical performance spec.
Most disk drives produce less than 200 IOPS, largely because of rotational latency, the time it takes for the desired portion of each track to spin under the head. It doesn’t help that disk drive speeds have been stuck at 15,000 rpm for more than 10 years. Unfortunately, the average storage array often needs to produce far more IOPS (at least at certain points during the day) than the aggregate of its disk drive inventory can deliver. Therein lies the performance problem most users have. Continued »
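The arithmetic behind that “less than 200 IOPS” figure is simple enough to sketch. The snippet below is a rough back-of-envelope model, not a vendor spec: the seek-time figure is an illustrative assumption, and real drives vary.

```python
import math

def drive_iops(rpm, avg_seek_ms):
    """Estimate a drive's random IOPS from rotational latency plus average seek.

    Average rotational latency is the time for half a platter revolution.
    """
    rotational_latency_ms = (60_000 / rpm) / 2  # 60,000 ms per minute
    service_time_ms = rotational_latency_ms + avg_seek_ms
    return 1000 / service_time_ms  # operations completed per second

def drives_needed(target_iops, rpm, avg_seek_ms):
    """How many drives an array must aggregate to meet a workload's IOPS demand."""
    return math.ceil(target_iops / drive_iops(rpm, avg_seek_ms))
```

With a 15,000 rpm drive (about 2 ms of rotational latency) and an assumed ~3.5 ms average seek, this works out to roughly 180 IOPS per drive, consistent with the figure above, and it shows why a workload demanding tens of thousands of IOPS forces arrays to add spindles long before they run out of capacity.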
What, are you NEW here?
For a lot of vendors trying to leverage the channel, the answer would seem to be “yes.” As analysts, we take a lot of briefings from companies with new products and new companies trying to sell their first product. I’m always amazed when I hear a vendor say they’re “committed to the channel” and then explain a channel program that was obviously put together without regard to the needs of their channel “partners.” Usually, this is due to a lack of understanding about how VARs and integrators operate and what their overall value proposition is. Oftentimes, these folks have never worked in the channel, and some, it would seem, have never even sold through one. But there they are, putting together a channel partner program.
When I was a regional manager at a large storage integrator, two examples of this lack of awareness on the part of vendors would come up time and again. Continued »
As the core of many MSPs’ client bases turns to cloud providers to reduce IT costs, MSPs are facing some unpleasant alternatives. They can become a reseller of cloud services or take on the task (and cost) of setting up their own cloud infrastructure. As we detailed in the last blog, this “MSP challenge” has meant accepting the lower margins of a cloud reseller and largely abandoning their existing business, or accepting the risk and financial burden of setting up and running a cloud computing infrastructure.
Storage Switzerland spoke with the founder and CTO of a Boston-area MSP that’s found a solution to the MSP challenge. For the past two years, it has been running the VM6 Managed Cloud Platform. This software solution runs on Windows 2008/Hyper-V-compatible servers and enables MSPs to create an affordable, all-in-one, virtual cloud infrastructure without a complex networking or storage environment.
Private shared cloud
While attractive, typical public cloud offerings don’t always sit well with businesses that have relied on an MSP to handle their IT infrastructure in the past. According to the CTO we spoke with, “They’re still leery of becoming a (small) customer in a (very large) public cloud environment. Although the VM6 cloud means they’re sharing infrastructure with our other customers, this solution allows our clients to have the cloud experience while keeping their trusted MSP engaged.”
A concern for MSPs running a cloud computing infrastructure is finding a way to scale their infrastructure efficiently to maintain economies of scale and remain cost-competitive. By consolidating customer data into one virtual infrastructure they can reduce costs and gain the flexibility to expand as needed to support growth. This virtual infrastructure allows MSPs to leverage existing technical staff to support more customers, increasing revenue. But the VM6 solution has given this MSP some other benefits as well.
Reduction of downtime
“The redundancy of this virtual environment allows us to set up VMs for critical application failover easily, and we can migrate VMs as needed to support upgrades and other maintenance events, transparently. This also reduces downtime,” said the CTO. This results in an upgraded level of service for customers that didn’t have true high availability previously. “The ability to move applications off of troubled hardware when problems occur lets our support staff conduct break/fix activities in the background, during regular business hours, instead of in real time when the pressure’s on.”
‘Asynchronous’ support activities
The MSP can also conduct regular maintenance without scheduling off-hours maintenance windows and requiring employees to work nights and weekends. According to the MSP, “This gives us the flexibility to maximize what’s probably our most scarce resource, specialized technical staff. And, fewer off-hours deployments can greatly improve the satisfaction level for these employees.”
Perhaps the biggest benefit, according to this MSP, is the knowledge that it has a reliable infrastructure that can be scaled easily when needed and maintained efficiently. It also has a working environment that’s more appealing to its most critical employees, with less after-hours work and fewer fire drills. This confidence allows the company to bring on more clients and run at leaner staffing levels. And, the CTO can sleep better at night knowing the technical staff is also sleeping, instead of working after-hours on customer problems and scanning the job boards.
Follow me on Twitter: EricSSwiss
MSPs are in kind of a tough situation. Their customers are increasingly looking at outsourced IT services from cloud providers, potentially taking away a big chunk of what has traditionally been their bread-and-butter client base. To keep these customers, an MSP is faced with some unpleasant options. It can become a cloud services reseller for a cloud provider, but this severely undermines its primary value proposition of being its clients’ trusted IT services provider, not to mention its margins. Or, it can attempt to set up a cloud services business of its own, something that can require expertise and money it just doesn’t have. VM6 has a solution that’s giving MSPs another option to meet this challenge. Continued »
Randy Kerns made a good point in a blog recently. He said that IT customers are looking for “best of need” solutions, not necessarily “best of breed.” The distinction he drew between the two was that best-of-breed solutions probably contain more features than the user needs at the time or would probably need in the future, and they cost more as well. Time frame is a key consideration, since the refresh cycle of IT products is often only a few years.
In a perfect world, users would buy products designed exactly for them (best of need). But in an effort to get their most important requirements met, they often settle for products that have more than they really need (best of breed), like more performance and more features — at more cost. Sometimes this is a “nice to have” vs. “need to have” evaluation, but not always. Sometimes what a customer needs isn’t available, at least not in a single product, a situation that represents an opportunity for VARs and integrators. Continued »
In the last post we talked about the propensity most of us have toward saving data, or at least not deleting it (I think there’s a difference), because we might need it someday. There are some hidden costs to saving too much data, outside of simple acquisition, power, cooling and floor space. These are “opportunity costs” related to how excess data can make finding the information you need take longer, reduce productivity and increase frustration. The idea is that we have a fixed number of hours in a day, and when we’re doing one activity, we’re not able to do another (the opportunity).
From a VAR’s perspective, these are the kinds of “pain” situations to look for. Continued »
We save too much data. It’s easy to do, and the alternative (getting rid of it) is difficult. As Andy Rooney, a “60 Minutes” commentator for many years, once said, “We never throw anything away until we make a copy of it.” So true. It’s an example of the principle of taking the path of least resistance. As storage becomes cheaper (at least to buy), the incremental cost of storing each document or data object is less significant — at least it feels that way. When compared with what seems like this near-zero cost of storing a document, the risk of possibly needing it in the future seems greater, no matter how small that possibility is.
Aside from the obvious operating expense costs, like power, cooling, data center space and admin time consumed for each terabyte of storage, there are some other hidden costs to storing data that can change the arithmetic of the earlier equation. Continued »
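That “arithmetic” can be made concrete with a simple cost model. Every figure below is a hypothetical assumption for illustration, not a benchmark; the point is the shape of the math, not the dollar amounts.

```python
# Illustrative back-of-envelope model: the sticker price of a terabyte vs. the
# operating and opportunity costs that ride along with it over its lifetime.
# All dollar figures are assumptions for illustration only.

ACQUISITION_PER_TB = 100.0       # hypothetical purchase price, $/TB
POWER_COOLING_PER_TB_YR = 30.0   # hypothetical power + cooling, $/TB/year
ADMIN_PER_TB_YR = 50.0           # hypothetical admin time, $/TB/year
YEARS_RETAINED = 5

def total_cost_per_tb(search_overhead_per_tb_yr=0.0):
    """Total cost of keeping one terabyte for YEARS_RETAINED years.

    search_overhead_per_tb_yr models the hidden opportunity cost: extra staff
    time spent sifting through excess data to find what's actually needed.
    """
    recurring = (POWER_COOLING_PER_TB_YR + ADMIN_PER_TB_YR
                 + search_overhead_per_tb_yr)
    return ACQUISITION_PER_TB + recurring * YEARS_RETAINED
```

Under these assumptions, the recurring costs alone are several times the purchase price over five years, and even a modest productivity drag added to the model widens the gap further, which is why the “near-zero cost of storing a document” feeling can be misleading.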
There’s been a fair amount of discussion around what impact the cloud will have on existing technologies and the companies (and VARs) that have built their businesses on them. Cloud backup is a good example, as online services from the consumer level on up are pretty well established and are eating away at the installed bases of more than a few backup product vendors. For most VARs, backup-as-a-service has probably gotten in between more than a few deals for backup hardware and software in the recent past. But people don’t like change, especially in areas like IT, where the wrong change can profoundly affect a company’s survival.
Believe it or not, the question of whether to back up to the cloud is as disconcerting for the customer replacing its familiar, on-site infrastructure as it is to the VAR that’s staring at a potentially lost deal. Continued »
For VARs, backup and recovery solutions are the gifts that keep on giving. It seems like every company has a data protection issue of some sort that it would like to fix, a fact that bodes well for integrators who make their living solving problems. Echoing this sentiment, a recent CompTIA study found that almost four in 10 respondents said new backup and recovery solutions will be a priority over the next 12 months. In addition, almost half stated that they needed to modernize aging systems, especially those that could be vulnerable to security threats. Of course, public-sector customers will be expecting to get more “bang for their buck” (no surprise here) as tight budgets continue to rule the day. Continued »
For VARs unfamiliar with RDX storage, it’s an innovative technology that may deserve a place on your line card. It’s essentially a removable hard disk drive that’s close to the size of an LTO cartridge. The drives are ruggedized, not in a Mil-Spec sense, but they’re made to be handled like a tape cartridge and can facilitate off-site data movement for backup and DR. Being true random-access devices, they also offer a way to extend an archive by storing cartridges on the shelf, while supporting much faster searching and file recovery than linear tape can. The other advantage they have in an archive use case is longevity. The dock essentially provides only power and connectivity — the disk drive is in the cartridge — so old RDX cartridges don’t require users to keep older-generation docks around to read them. Continued »