People want to bring their mobile devices to work, and they want to (or feel they have to) bring their work home. They also insist on being connected in transit — as evidenced by the number of (dumb) people looking at their smartphones behind the wheel — but that’s another issue.
Fifteen years ago, workers started replacing desktops with laptops and used VPNs to log in to the corporate network when away from the office. The concept of the remote worker was born, and corporate IT had to adapt.
People like to learn things, especially curious technical people, the sort of folks who make up a VAR’s calling base. Aside from curiosity, IT people need to learn things in order to do their jobs, usually more than they have time for. That’s why they rely on their VARs to provide that education. Spending some time in this pursuit is a good way to help establish the trusted-advisor status that most VARs and resellers strive for, particularly if the topic doesn’t revolve around a product presentation. “Big data” is an interesting topic, one that provides an opportunity to impart some education and build that value with your customers.
Big data initiatives are complex, expensive and often misdirected.
Scale-out storage solutions that use industry-standard hardware have some interesting capabilities since they include significant processing power in each storage node. For these grid-based architectures, having relatively abundant CPU physically distributed with the storage gives them the ability to maintain performance as they scale and to support additional storage services (snapshots, replication, deduplication, etc.). They can also load a compute engine, such as Hadoop, into these storage nodes and perform distributed processing tasks.
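The idea of loading a compute engine into the storage nodes comes down to moving the processing to where the data already lives. A minimal sketch of that map/reduce pattern, with purely hypothetical names (this is the concept, not any vendor’s or Hadoop’s actual API):

```python
# Sketch of "move compute to the data": each storage node runs the map
# step against only the blocks it physically holds, so full data sets
# never cross the network -- only small partial results do.

from collections import Counter

class StorageNode:
    """A grid storage node with local data and spare CPU for compute."""
    def __init__(self, blocks):
        self.blocks = blocks  # data this node physically holds

    def map_task(self, fn):
        # Process local blocks in place; return only the partial results.
        return [fn(block) for block in self.blocks]

def word_count(block):
    return Counter(block.split())

# A three-node grid, each node holding part of one data set.
nodes = [
    StorageNode(["the quick brown fox", "the lazy dog"]),
    StorageNode(["the quick dog"]),
    StorageNode(["brown fox"]),
]

# Map runs locally on every node; reduce merges the partials centrally.
partials = [c for node in nodes for c in node.map_task(word_count)]
totals = sum(partials, Counter())
print(totals["the"])   # aggregated across all nodes -> 3
```

The payoff is the same one the grid architectures get: the heavy lifting scales out with the nodes, and the network carries only the merged results.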
Cleversafe has established itself in the scale-out storage market with its Distributed Storage Network (dsNet), an object-based storage system that provides secure, geographic data dispersion using proprietary erasure coding and data encryption. It has built a user base that includes some very large cloud companies, including the photo-sharing site Shutterfly.
I’m hearing a lot these days about VARs who are afraid of the cloud, expecting it to kill their businesses. For those who are simply looking for new products to run through their sales process, the changes cloud brings are a little disconcerting. But for those VARs who are paying attention to these changes and are willing to invest a little bit, nothing could be further from the truth. The cloud is a good thing for VARs (I never would have thought I’d be quoting Martha Stewart in this blog). Sit down, upgrade your skill set, hire some new people if you have to and figure out how to make money with it.
In another life I worked as an industrial distributor of mostly 3M products used in manufacturing. Although not in the high-tech industry, 3M certainly was an innovative company. I remember hearing that it encouraged its scientists and engineers to spend one day a week working on projects outside of their normal work. This was a brilliant move because these kinds of people were doing that anyway, and 3M would own anything they developed at work (rather than on the weekends).
3M also had an understanding of how long its innovation would last before the competition came out with a product the market thought was comparable (we’re talking about customer perception here, not necessarily functional equivalence). Again, 3M used the 20% figure, saying it had to generate 20% of its yearly sales volume from products that weren’t available the previous year. The company calculated that every year, one-fifth of its product sales volume would evaporate due to the technology becoming somewhat generic.
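The arithmetic behind that 20% rule is worth making explicit. A quick worked example (the $100M revenue figure is illustrative, not a 3M number):

```python
# Worked example of the 20% rule described above: one-fifth of product
# revenue erodes each year to commoditization, so new products must
# backfill that fifth just to keep total revenue flat.

revenue = 100.0      # this year's sales, in $M (hypothetical)
erosion = 0.20       # share lost yearly as products go generic

carryover = revenue * (1 - erosion)     # what last year's line still earns
needed_from_new = revenue - carryover   # gap new products must fill
print(carryover, needed_from_new)       # 80.0 20.0
```

In other words, a fifth of next year’s number has to come from products that didn’t exist this year, before any growth at all.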
VARs see the same dynamic when solutions that once commanded good margins no longer do.
I had lunch with a storage integrator recently. I asked him what made his company look for new products to sell and what drove it to add vendors to its line card. He said the company is pretty conservative when it comes to selling new products, like those from startup companies or new technologies from existing companies that have less of a track record.
But he said his company would start looking for new solutions when customers had clearly unmet needs that it couldn’t satisfy with its existing products. Said another way, it was time to find a new vendor when defending the technology of its current vendors’ products no longer made sense. All vendors’ product lines cover a range of use cases or certain segments of the overall market. They also have “sweet spot” use cases that they handle really well, and the opposite, which I like to call “flat spots” (think of the bottom of a melon). These are the use cases their technology doesn’t cover well.
All VARs (and most customers) are aware that most vendors will exaggerate their range to some extent and tend to ignore their flat spots, which can become more pronounced as a technology gets older.
The Dodd-Frank Wall Street Reform and Consumer Protection Act is legislation that was passed in the aftermath of the financial meltdown of 2008. It seeks to create more oversight of the financial industry but will have much broader implications for companies in other industries as well. This should drive a new set of compliance requirements that VARs can use to create discussions with their customers and prospects around infrastructure.
What the Dodd-Frank legislation means for companies depends on a lot of factors — such as the industries they’re in — and the ramifications for data storage and IT are still unclear. It establishes some new agencies, such as the Office of Financial Research and the Financial Stability Oversight Council, which will develop the policy that will drive requirements that companies must comply with. This compliance will eventually mean specific IT initiatives.
When users need more performance for critical applications, most vendors offer SSD board upgrades to their disk systems to use as a cache or they support replacing disk drives with drive-form-factor SSDs to create a high-performance tier within the array. The results from this storage-side SSD implementation aren’t always the best since legacy disk array systems often aren’t architected to support the IOPS that SSDs can provide. Also, array-based solutions apply only to that system, limiting ROI, and installing and running an SSD tier can be complex and disruptive. Network-based caching, whether for file- or block-based storage, is an alternative to adding SSDs to existing storage systems that can be a better technical fit and a better solution for VARs to sell.
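Conceptually, a network-based cache just sits in the data path, serving hot blocks from flash and falling through to whatever array is behind it, which is why it isn’t tied to one system the way an array-based SSD tier is. A minimal sketch with hypothetical names, not any specific product’s design:

```python
# Sketch of a network-side flash read cache: an LRU cache in front of
# an arbitrary backend array. On a hit, data comes from flash; on a
# miss, it comes from the array and the coldest block is evicted.

from collections import OrderedDict

class NetworkCache:
    def __init__(self, backend, capacity_blocks):
        self.backend = backend             # any existing array works
        self.capacity = capacity_blocks
        self.flash = OrderedDict()         # block_id -> data, LRU order

    def read(self, block_id):
        if block_id in self.flash:         # cache hit: served from flash
            self.flash.move_to_end(block_id)
            return self.flash[block_id]
        data = self.backend[block_id]      # miss: fetch from the array
        self.flash[block_id] = data
        if len(self.flash) > self.capacity:
            self.flash.popitem(last=False) # evict the coldest block
        return data

array = {1: b"cold", 2: b"hot"}            # stand-in for a disk array
cache = NetworkCache(array, capacity_blocks=1)
cache.read(2)
cache.read(2)                              # second read is a flash hit
```

Because the backend is just “whatever answers the read,” the same cache accelerates every array behind it, which is the ROI argument against array-by-array SSD upgrades.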
In this post we’ll continue our discussion of flash implementation options by looking at all-flash storage array systems.
Server-side caching or tiering systems essentially augment hard disk drive (HDD) performance; they don’t replace disk drive arrays. This means they need to run a process to analyze and determine which data sets should be on flash and then move those data at the appropriate time. The net effect of these processes can be increased system complexity and reduced overall performance.
For tiering systems, this analysis is typically applied only after data’s been written to the HDD tier and can involve a “warming” period where multiple accesses are analyzed. Moving those data at the appropriate time also creates storage controller overhead, especially on the hard disk array that it’s supporting.
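That warming analysis can be sketched in a few lines. The promotion threshold and names here are illustrative, not any vendor’s implementation:

```python
# Sketch of a tiering engine's "warming" analysis: data lands on the
# HDD tier first, and only blocks accessed at least `threshold` times
# get scheduled for promotion to flash.

from collections import Counter

class TieringMonitor:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.hits = Counter()      # access counts during the warming period
        self.flash_tier = set()    # blocks scheduled for promotion

    def record_access(self, block_id):
        self.hits[block_id] += 1
        # Promotion happens only after repeated accesses, which is why
        # tiering reacts more slowly than a cache that copies on first read.
        if self.hits[block_id] >= self.threshold:
            self.flash_tier.add(block_id)

mon = TieringMonitor(threshold=3)
for block in ["a", "b", "a", "a", "b"]:
    mon.record_access(block)
# "a" crossed the threshold; "b" (2 hits) is still warming on the HDD tier
```

The actual data movement that follows is what generates the controller overhead mentioned above; the analysis itself is the cheap part.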
In the last post we talked about how NAND flash memory devices differed from magnetic disk drive storage and the importance of understanding flash endurance. In this post we’ll discuss flash implementation, specifically devices that are installed in the application server.
Flash is getting more affordable but is still several times the cost per gigabyte of hard disk drives (HDDs). Because of the cost disparity, it’s often used to augment HDD performance: The more performance-critical data sets are placed on flash, sometimes temporarily, to take advantage of its orders-of-magnitude better performance, especially IOPS.
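Some back-of-envelope math shows why the hybrid approach wins. The prices and capacity below are illustrative placeholders, not market quotes:

```python
# Economics of augmenting rather than replacing: put only the hot
# fraction of the data on flash and leave the rest on disk.

total_gb   = 10_000
hot_share  = 0.10      # performance-critical slice of the data (assumed)
flash_cost = 5.00      # $/GB, illustrative
hdd_cost   = 0.50      # $/GB, illustrative

all_hdd   = total_gb * hdd_cost
all_flash = total_gb * flash_cost
hybrid    = (total_gb * hot_share * flash_cost
             + total_gb * (1 - hot_share) * hdd_cost)
print(all_hdd, all_flash, hybrid)   # 5000.0 50000.0 9500.0
```

For roughly double the all-disk cost, the hot 10% of the data gets flash-class IOPS, versus ten times the cost to put everything on flash. That gap is the whole rationale for caching and tiering.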