Storage Considerations for Virtual Desktops - Part 1: Understanding VDI and its impact on storage (Sponsored Post)
This is a sponsored guest post by Vikram Belapurkar, a solutions marketing manager at Dell focused on storage virtualization and consolidation solutions.
Welcome folks! This blog post is the first in a series of three that will discuss challenges related to virtual desktop deployments and explore ways to overcome them. In this first post we will focus on the impact a VDI workload can have on the supporting storage system, and how that affects the success of your VDI project.
With the proliferation of client devices that vary in form factor, desktop virtualization may be the most relevant technology for streamlining client device management and providing a consistent user experience. By separating the operating system and applications from physical client devices, desktop virtualization helps streamline management, lower operational expenses and facilitate adherence to compliance and security requirements.
However, if not correctly designed, desktop virtualization can spell disaster. There are three primary factors that determine the success of your VDI project:
- A user experience that is consistent and acceptable to end users
- A positive ROI that justifies this shift in client computing model
- Simple and streamlined desktop VM provisioning
And storage is the critical element in addressing all of these factors, playing a vital role in enabling a successful VDI roll-out. Let us take a quick look at what it means for storage to host and service your virtual desktop workloads.
Why storage matters:
First of all, VDI is not just another enterprise workload. A VDI workload is highly variable in the I/O demand it places on the supporting storage infrastructure. The storage that supports a VDI environment essentially holds the virtual machines that power the client devices, so every time a client device accesses the OS, an application or user data, it generates I/O requests against that storage. As you can imagine, this access pattern is not evenly spread out over the course of the day. There are periods when many client devices are pulling large amounts of data from the VDI infrastructure and its supporting storage arrays at once. For example, in the morning, when a large number of client devices boot simultaneously, they generate a massive number of I/O requests; these are called boot storms. If the storage cannot service these requests within acceptable latency, the client devices experience delays and the user experience is compromised.
Overprovisioning storage can help solve this issue to some extent. By overprovisioning, you allocate more storage capacity, and thereby typically more disk drives, to your VDI than you really need. The number of input/output operations per second (IOPS) a storage system can handle is a function of the number of disk drives it contains, so a larger number of drives can support a higher IOPS demand. However, this is cost-inefficient, which leads us to our second challenge: the cost of your storage system.
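To see why sizing for the peak gets expensive, a quick back-of-the-envelope calculation helps. The per-desktop and per-drive IOPS figures below are illustrative assumptions, not numbers from this post:

```python
# Rough sizing: how many drives does a boot storm demand vs. steady state?
# All figures are illustrative assumptions for the sake of the arithmetic.

DESKTOPS = 500                 # virtual desktops booting at once
IOPS_PER_DESKTOP_BOOT = 50     # assumed per-VM I/O during a boot storm
IOPS_PER_DESKTOP_STEADY = 10   # assumed per-VM I/O during normal use
IOPS_PER_DRIVE = 180           # rough figure for one 15K RPM HDD

boot_iops = DESKTOPS * IOPS_PER_DESKTOP_BOOT
steady_iops = DESKTOPS * IOPS_PER_DESKTOP_STEADY

# Drives needed if you size for the peak vs. for the average
drives_for_boot = -(-boot_iops // IOPS_PER_DRIVE)      # ceiling division
drives_for_steady = -(-steady_iops // IOPS_PER_DRIVE)

print(f"Boot storm: {boot_iops} IOPS -> {drives_for_boot} drives")
print(f"Steady state: {steady_iops} IOPS -> {drives_for_steady} drives")
```

With these assumed numbers, sizing for the boot storm requires roughly five times as many spindles as steady-state demand would, and all of that extra hardware sits underused for most of the day.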
It is important to realize that when you implement VDI, you are actually moving all of the client-attached storage, meaning the local hard disk drives (HDDs), over to your enterprise datacenter. The storage infrastructure that supports VDI deployments is more costly than client-attached HDDs. When you combine the need for lots of enterprise storage with the possibility of overprovisioning capacity to meet performance goals, costs can become an issue. Carefully managing your storage footprint is critical to achieving a positive ROI.
Lastly, your VDI administrators are responsible for provisioning and managing a very large pool of desktop VMs. In order to achieve operational efficiency and eliminate sources of errors, they need ways to efficiently and rapidly provision multiple desktop VMs from predefined templates. Tightly integrated storage and hypervisor management plays a key role in enabling rapid VM deployments and simplifying VDI management.
Throughout this blog series, I will discuss ways you can manage and mitigate these storage challenges.
A successful VDI implementation can help streamline desktop management, lower operational expenses, and facilitate security and compliance adherence. However, inadequate design considerations can lead to failure. While designing your VDI environment, it is critical that you ensure acceptable user experience, positive project ROI, and a rapid and streamlined VM provisioning model. Proper storage selection can help ensure optimal performance, reduce storage footprint and speed VDI provisioning. Storage is the key enabler in ensuring a successful VDI roll-out.
In the next blog post, I will go over a storage sizing illustration for delivering optimal user experience. Following that, in my third and final post in this series, I will explore key storage features that enable successful VDI implementations. Stay tuned.
Storage virtualization is still a pretty new technology, considering that virtualization in general is still being worked out in enterprise IT departments. Whether you want to reduce your hardware footprint or get the most out of server or desktop virtualization, storage virtualization may be the way to go. Check out these great resources that we’ve discovered (some through member recommendations!).
John Chambers is a fascinating character in the history of Silicon Valley. He’s one of the longest-serving tech giant CEOs (since 1995), helping grow Cisco from a billion in revenues to $40 billion, and at one point making it the most valuable company in the world.
But today, he very publicly addressed what he saw as one of his recent missteps: Betting big on the consumer market. In a statement sent to employees yesterday, and released publicly today, he wrote:
IT Knowledge Exchange hit the ground running for Storage Virtualization month, but it occurred to me: What about those who aren’t quite familiar with storage virtualization? So today I’m backing up a little and compiling some of the basics concerning the subject. For all of those storage virtualization pros out there, I welcome any corrections, addendums, and clarifications! Leave them in the comments section or send me an email at Melanie@ITKnowledgeExchange.com.
There’s probably no more exciting, imagination-capturing branch of military research than DARPA (Defense Advanced Research Projects Agency). They’ve brought us robo-hummingbird spies, self-driving hummers and, last but not least, the Internet.
So they can be excused for wanting just a little something in return: To be able to use their iPhones and Androids securely. From a recent Request for Information:
The primary purpose of this RFI is to discover new technologies and methods to support full disk and system encryption of the CMDs (specifically Apple and Android platforms) to include a pre-boot environment to load the operating system. The solution must use an AES-256 bit encryption algorithm compliant with FIPS 140-2 as published by the National Institute of Standards and Technology (NIST). In order to meet this objective, DARPA extends an invitation to industry and universities to submit a whitepaper with ideas/concepts that describe an innovative existing technology approach that can be deployed in less than 90 days.
Currently only BlackBerrys and high-end secured phones are allowed in many DoD environments, meaning Angry Birds is out. It sounds like DARPA is looking for a full-disk encryption bootloader to pick up where the consumer-friendly Droids and iPhones have left off. To be fair, Apple has beefed up its security offerings in recent iterations (with a few naysayers), but the business need isn’t new. In my time reporting on mobile devices, I’ve heard any number of schemes to get around security concerns: everything run as SaaS, with no sensitive local data stored; a specialized card that held, encrypted and decrypted the data; and a number of virtualized environments that sat (supposedly) securely inside the anything-goes consumer devices.
It will be interesting to see what DARPA picks: Freedom of Information request, anyone?
Open-source software isn’t anything to write home about anymore, but hardware design is less transparent. Facebook is changing the game today with its Open Compute Project, designed to share the specs and design of the custom servers built for Facebook’s data center in Prineville, OR. As Larry Dignan pointed out, this is a symbiotic move on the social networking company’s part:
In many respects, Facebook is open sourcing its data center and server designs. Jonathan Heiliger, vice president of technical operations, said the Open Compute Project is a way of giving back. It’s also a way to get vendors with more scale to incorporate Facebook’s designs to meet its needs with cheaper systems.
Despite what it means to vendors who may be scrambling to replicate Facebook’s designs and solutions, the Prineville data center’s PUE rating speaks volumes: 1.07 PUE versus the 1.5 PUE average, and the 1.4 – 1.6 PUE of Facebook’s leased data center.
- Vanity-free packaging with few to no screws, which resulted in 22% less material and 6 lbs less weight.
- No duct work and no air cooling system in the data center.
- Dual AMD Opteron® 6100 Series socket motherboard with 24 DIMM slots.
- Intel Xeon® 5500 or Intel Xeon® 5600 socket motherboard with 18 DIMM slots.
- Localized uninterruptable power supplies serving six server racks.
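A quick note on the PUE figures cited above: power usage effectiveness is simply total facility power divided by the power that actually reaches the IT equipment, so a score of 1.0 would mean zero overhead. A minimal sketch, with illustrative kW figures rather than Facebook’s actual measurements:

```python
# PUE (power usage effectiveness) = total facility power / IT equipment power.
# A perfect facility would score 1.0: every watt goes to the IT gear.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# Illustrative numbers: a 1.07 facility spends 70 kW of overhead
# per 1,000 kW of IT load, where a 1.5 facility spends 500 kW.
print(pue(1070.0, 1000.0))  # -> 1.07
print(pue(1500.0, 1000.0))  # -> 1.5
```

Put that way, the gap between 1.07 and the 1.5 industry average is dramatic: roughly seven times less overhead power per watt of useful compute.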
How do you see this changing industry standards or vendor offerings, if at all? Share your feedback in the comments section or send me an email at Melanie@ITKnowledgeExchange.com.
You know what Dell thinks is cool? A billion dollars, which is just about what it spent in its recently closed acquisition of storage virtualization trailblazer Compellent Technologies, which had been receiving industry nods for its top-of-the-line hardware/software one-two punch.
Dell has been making strong inroads in the storage market, but some high-profile defeats have stymied its progress, most notably an embarrassing loss (after early bold announcements) of 3PAR in a very public bidding war with HP. A Gartner Magic Quadrant report on the storage market noted “the failed bid to acquire 3PAR may add additional shadow on the existing relationship with EMC.”
But Dell isn’t licking its wounds. The $2.4 billion it didn’t spend was burning a hole in its pocket, so Dell launched a $2 billion cloud spending spree. All that money might help position Dell nicely as businesses look to bridge the public/private storage cloud divide, particularly since it’s playing nice with VMware and Azure in addition to its various in-house options.
The big question is whether all the pieces will come together neatly enough to compete with the established players in the area.
DataCore Software, known for its storage virtualization software, has released “The State of Virtualization,” a survey of over 450 IT organizations across North America and Europe. The findings can be a little disturbing, especially to a company whose product addresses the very piece many medium and large enterprise IT orgs are leaving out of their virtualization plans: storage. The study found that 43 percent had misjudged the impact storage would have on server and desktop virtualization, or had shied away from a virtualization project because storage-related costs were too high.
Delayed virtualization projects weren’t the only downside to this apparent misunderstanding of storage virtualization costs; even among those who had already deployed server virtualization, 66 percent viewed the increase in storage costs as their biggest problem. Higher costs don’t mean higher quality, either. Almost 40 percent reported unhappiness with their storage infrastructure due to slow performance or limited availability for applications. Compounding the performance problems, 22 percent of IT admins feel locked in to their storage hardware provider, with about 40 percent of respondents using two different storage systems from the same vendor.
In order to achieve the agile, cost-effective, and enduring IT infrastructures you seek, those old ties to physical storage devices must be broken, just as you’ve done with servers. Doing so requires tackling the next “big problem” plaguing data centers today: dissolving the expensive and restrictive dependency on disk hardware.
Are storage cost forecasts keeping your company from virtualizing, or do you wish now that it had? Share your stories in the comments or send me an email directly at Melanie@ITKnowledgeExchange.com.
Despite growing knowledge and interest in server virtualization, storage virtualization seems to be lagging behind. The truth is that if you’re interested in or are currently deploying server virtualization, storage dilemmas – and budgets – can hold you back.
Kieran Harty, former leader of research and development at VMware, is aiming to take virtualization to the next level with his new company, Tintri. When Harty realized that many companies hesitate to include mission-critical operations in their virtualization deployments because of storage performance issues, the seed for VMstore was planted. Harty’s team members at Tintri have virtualization backgrounds at VMware and Citrix and storage backgrounds at Data Domain and NetApp.
The goal of VMstore is to build, from the ground up, a storage solution specifically for virtualized environments, reducing complexity. While traditional storage accounts for 20 percent of a typical enterprise’s IT budget, virtualization projects dedicate closer to 60 percent of their budgets to storage. Harty told Computer Technology Review that “the key bottleneck slowing virtualization adoption is the legacy storage systems that were architected before virtualization was even a consideration. Our products are designed to help enterprises virtualize 80 percent or more of their IT infrastructure.”
From the delightful DrunkenData, a post (well, a translation really) by Jon Toigo on the inevitable Rites of Spring, including the coming of fresh flowers, melting ice and, what else, storage marketing hype.
It’s worth a read simply for the echoes of T.S. Eliot applied to the economics of single-sourced hardware, but the thousand-plus-word piece also dives into another topic: Why is storage, one of the most commoditized of the IT Dark Arts, still so expensive? For years, we’ve been regaled with tales of storage deduplication, cross-platform standardization, and cloud/shared/virtualized storage solutions that all promise to slash costs, but budgets keep going up. What gives?
There’s the simple, obvious explanation: That more stuff is being stored, dummy, so even as price-per-gigabyte goes down, total costs go up. But Toigo writes that this doesn’t paint the full picture:
In 2011, roughly 40 years after the distributed computing revolution, the costs of distributed computing have only increased – especially in storage. Despite the onset of commoditization in drives and enclosures (all disks come from four OEMs these days, and all chassis from a half dozen or so enclosure makers), array prices have accelerated at a rate of about 120 percent per year.
Part of the explanation is value-add software that vendors insist on joining to proprietary array controllers. Another part is the failure of the industry to define truly standardized interconnects so that two switch providers can build products to a common standard with absolute certainty that they will work together in the same fabric.
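The “more stuff is being stored” half of the argument is easy to check with back-of-the-envelope arithmetic. The decline and growth rates below are illustrative assumptions, not figures from the post:

```python
# If stored capacity grows faster than price-per-GB falls,
# total spend still rises. All figures are illustrative assumptions.

price_per_gb = 1.00        # year 1 price, dollars per GB
capacity_gb = 100_000.0    # year 1 stored capacity

spends = []
for year in range(3):
    spends.append(price_per_gb * capacity_gb)
    price_per_gb *= 0.70   # price per GB falls 30% a year (assumed)
    capacity_gb *= 1.60    # stored data grows 60% a year (assumed)

for year, spend in enumerate(spends, start=1):
    print(f"Year {year}: ${spend:,.0f}")
```

Under these assumed rates, spend climbs from $100,000 to roughly $125,000 in three years even while the unit price is cut nearly in half, which is exactly the pattern the “simple, obvious explanation” describes. Toigo’s point is that vendor lock-in and bundled software push the curve up even further.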
All these sexy management frills added to an un-sexy commodity service are keeping prices high and preventing real savings in the storage department. In the end, that helps make the storage market hot and steamy for acquisitions, but helping enterprises actually cut costs might make for a beautiful love story that lasts past one passionate night.
Michael Morisy is the editorial director for ITKnowledgeExchange. He can be followed on Twitter or you can reach him at Michael@ITKnowledgeExchange.com. Image from Flickr user zigazou76 and licensed with Creative Commons.