We’ve written about the dangers of the consumerization of IT before (actually, again and again), but progress has marched on despite our earnest protestations, linked arm-in-arm with that golden child, cloud computing. At least, until last week, when both ate some serious crow in the form of a high-profile outage and fresh concerns over data privacy.
Highest profile, of course, was Amazon’s EC2 outage, which took out sites like Reddit and Foursquare and, according to one forum poster, cardiac monitoring tools. Lives, then, might literally have been at stake.
In another cloud/consumer blow, Dropbox updated its terms of service, making explicit its willingness to turn your data, hosted on its servers, over to the authorities. Not surprising, but another chip of control taken away from the data owner.
Storage Considerations for Virtual Desktops - Part 2: Sizing your VDI storage to ensure optimal performance
This is a sponsored guest post by Vikram Belapurkar, a solutions marketing manager at Dell focused on storage virtualization and consolidation solutions.
Welcome back, folks. This is the second post in a series of three discussing storage considerations for VDI deployments. In my last post I highlighted the critical success factors for VDI projects and the central role your storage infrastructure plays in a successful VDI roll-out. I also mentioned that one way to meet your VDI performance targets is to increase the number of disks in your storage environment. While this can lead to overprovisioning of storage, it is important to understand how adding disks helps you meet those performance goals. In this post I will walk through a simple example of sizing storage for a VDI environment to ensure optimal performance and user experience.
Sizing storage for VDI can be especially tricky. On one hand, you need to ensure acceptable performance; on the other, you are under pressure to minimize your storage footprint in order to lower storage-related costs. When designing a VDI environment, it is critical that your storage does not become a bottleneck. VDI is a highly transactional workload whose performance is bound by disk I/O. One way storage administrators can meet the I/O demand is by increasing the number of disks in the environment. The key is to look at the peak IOPS demand that will be placed on the underlying storage and design for those peaks. There is no simple formula for assessing IOPS needs; they vary for every organization depending on desktop user types and boot and login patterns. Below I will take a simple example and work through identifying the peak IOPS requirement and, from that, the size of the storage needed to support it.
VDI Storage Sizing Exercise
Let us say we are designing a VDI environment for 100 desktops. We estimate that about 80% of those desktops will boot simultaneously in the morning and then stay online for the rest of the day. The remaining 20% will boot and perform their operations during the night.
Let us assume each desktop produces a peak of 25 IOPS while booting, then averages about 5 IOPS during steady-state operation until logoff. The logoff operation creates a peak of 15 IOPS per desktop.
We also estimate that we will grow our VDI environment by up to 50% in the next 5 years and we are designing the VDI environment to support this growth.
As is evident, the boot operation is the most I/O-intensive, so we should size our storage to support the performance requirements during boot periods.
So the 80 desktops (80% of 100) will produce 80 x 25 = 2000 IOPS when they boot simultaneously in the morning.
To account for anticipated growth, we will need to support up to 50% more IOPS over the next 5 years as the VDI environment expands. So we design our storage for 2000 x 1.5 = 3000 IOPS to effectively address the peak IOPS requirements for that period.
Let us assume that we will be using 15K RPM SAS drives to support this VDI environment, and that each 15K SAS drive can support 150 IOPS.
Hence we will need 3000 / 150 = 20 of the 15K SAS drives in our storage to handle the peak IOPS.
This example assumes RAID 0 policy and no spare disks.
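The arithmetic above can be sketched as a short script. The figures (100 desktops, 80% simultaneous boot, 25 IOPS per booting desktop, 50% growth headroom, 150 IOPS per 15K SAS drive, RAID 0 with no spares) are the assumptions of this particular example, not universal constants; substitute your own measurements:

```python
import math

def drives_needed(desktops, boot_fraction, boot_iops_per_desktop,
                  growth_factor, iops_per_drive):
    """Estimate the spindle count needed to absorb a boot storm."""
    booting = desktops * boot_fraction           # desktops booting at once
    peak_iops = booting * boot_iops_per_desktop  # peak demand today
    future_iops = peak_iops * growth_factor      # headroom for growth
    return math.ceil(future_iops / iops_per_drive)

# Figures from the example: 100 desktops, 80% booting together at
# 25 IOPS each, 50% growth over 5 years, 150 IOPS per 15K SAS drive.
print(drives_needed(100, 0.80, 25, 1.5, 150))  # -> 20
```

Changing any one input (say, a 95% simultaneous boot, or slower 10K drives) shifts the drive count immediately, which is why measuring your own boot and login patterns matters more than the formula itself.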
This is a simple example, but it should help illustrate the decision process. The key is to estimate, as accurately as possible, the IOPS patterns for your specific environment. It is important to note that IOPS patterns vary drastically between organizations. Dell offers consulting services that can help you analyze your VDI needs and meet the required performance goals.
Inadequate storage performance in your VDI can lead to a less than acceptable user experience. When sizing storage for VDI, it is important to design it to deliver adequate performance. VDI storage needs to be sized to handle the peak IOPS demand placed on it by VDI workloads. I/O patterns vary drastically from organization to organization, and a critical analysis of your end-user profiles is essential to accurately estimate the peak demands placed on your storage infrastructure.
It is important to recognize that overprovisioning your storage to meet performance goals is cost-inefficient. In my next and final post in this series, I will discuss critical features and considerations for VDI storage that can help mitigate performance challenges while optimizing your storage footprint and enabling a simple, streamlined desktop VM provisioning model. Stay tuned.
The federal Department of Energy cites data centers as consuming three percent of U.S. electricity, “amounting to 120 billion kilowatt hours per year, at a cost of $7.4 billion.” As data creation grows year over year, and with it the need for storage capacity and support, greater efficiency is the best path forward for IT departments. From paper usage to data center energy reduction, there is always a way IT can lower costs and consumption.
One recent green IT project of note is Facebook’s Open Compute Project, through which the company revealed the specs and design of its custom-built data center in Prineville, OR. The Prineville facility posted an impressive power usage effectiveness (PUE) of 1.07, versus 1.4 – 1.6 for Facebook’s leased data centers and roughly 1.5 for the national average.
For a company capable of building its own data center, the open-sourced project is a gold mine for green initiatives. For companies less capable, there is the hope that vendors will adopt some of the social network’s resource and energy-saving designs.
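To get a feel for what those PUE figures mean in dollars, here is a rough sketch. PUE is total facility energy divided by IT equipment energy, so facility energy is IT load times PUE. The IT load and electricity rate below are hypothetical, chosen only to make the comparison concrete:

```python
def annual_facility_kwh(it_load_kw, pue, hours=8760):
    """Total annual facility energy for a given IT load and PUE."""
    return it_load_kw * pue * hours

# Hypothetical 1 MW IT load at $0.06/kWh (illustrative figures only).
it_kw, rate = 1000, 0.06
leased = annual_facility_kwh(it_kw, 1.5)       # typical leased facility
prineville = annual_facility_kwh(it_kw, 1.07)  # Facebook's reported PUE
savings = (leased - prineville) * rate
print(f"${savings:,.0f} saved per year")
```

Even at this modest scale, dropping PUE from 1.5 to 1.07 cuts overhead energy (cooling, power distribution) by hundreds of thousands of dollars a year, which is why the open-sourced designs matter beyond Facebook.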
Other Green Options
Take advantage of technologies such as virtualization to reduce overall data center costs. Power consumption, server count and CPU count can be reduced by up to 50% in some cases.
Reuse old hardware, or wipe it clean and either send it to a recycling center for parts or donate it to a nonprofit organization. Upgrading to the latest version or overhauling your data center doesn’t mean you have to trash your old gear. Research recycling options that can also serve as a tax write-off.
For further information on how to use virtualization to green your IT department, check out this guide from SearchServerVirtualization.
It’s another victory for virtualization and cloud computing’s diehard believers, and the technology industry in general. Beating Wall Street projections, storage behemoth EMC today reported revenue up 18 percent over last year, ringing in at $4.6 billion. EMC’s majority ownership of VMware helped boost those numbers, with a 33 percent increase in the subsidiary’s revenue announced Tuesday.
FBR Capital Markets analyst Daniel Ives told Reuters, “These guys are just in the right place at the right time. They are in the middle of what continues to be just a monster product cycle.”
EMC chairman and chief executive Joe Tucci said in a statement that EMC’s early and strong start positions the company well for “significant opportunity for long-term growth potential ahead.” The company points to its “market-leading virtualization and information infrastructure products and services,” such as its Symmetrix and mid-tier storage products, as reasons for its tremendous growth in a down economy. Tucci also noted EMC’s unique position “squarely at the intersection of two of the most sweeping trends in IT – cloud computing and Big Data.”
Is your company one of the reasons EMC and VMware are on the ups? Share your outlook or attitude toward virtualization in the comments section or send me an email directly at Melanie@ITKnowledgeExchange.com.
Since storage virtualization is a niche within a niche of IT, there aren’t too many professionals out there who focus solely on the technology. Thus, our storage virtualization Twitter pros list consists of storage professionals and virtualization professionals, all with more than enough knowledge and experience to help answer your questions and foster further discussion and understanding.
When news spread that the Microsoft Office 365 public beta was out, I skipped the endless media slideshows, commentaries and punditizing to try it out for myself. So I went to the Office 365 site, clicked “Join the Beta” and … joined a waiting list.
Wading through the meandering press release that leads with some freelance writer and ends with not much, I realized what I should have known all along: Microsoft’s at its worst when it’s in reactive mode, and Office 365 is nothing more than weak return fire after years of heckling from Google Apps. The absolute loss of market share is minor, at least as far as I can tell from available data, but Microsoft is absolutely losing the edge among early adopters and young adults, who are getting hooked on the web way of doing things and aren’t too eager to go back, even if their corporate policies dictate they do.
Despite this age of 140-character conversations, IT books are still one of the best ways to get the information you need in one place. We’ve compiled some of the top recent titles on storage virtualization as recommended by the community and professionals. Have a suggestion for a title that should be on our list? Send me an email at Melanie@ITKnowledgeExchange.com or leave it in the comments section!
Storage Considerations for Virtual Desktops - Part 1: Understanding VDI and its impact on storage (Sponsored Post)
This is a sponsored guest post by Vikram Belapurkar, a solutions marketing manager at Dell focused on storage virtualization and consolidation solutions.
Welcome, folks! This blog post is the first in a series of three that will discuss challenges related to virtual desktop deployments and explore ways to overcome them. In this first post we will primarily discuss the impact a VDI workload can have on the supporting storage system, and how that affects your VDI project’s success.
With the proliferation of client devices that vary in form factor, desktop virtualization may be the most relevant technology for streamlining client device management and providing a consistent user experience. By separating the operating system and applications from physical client devices, desktop virtualization helps streamline management, lower operational expenses and facilitate adherence to compliance and security requirements.
However, if not correctly designed, desktop virtualization can spell disaster. There are three primary factors that determine the success of your VDI project:
- A user experience that is consistent and acceptable to end users
- A positive ROI that justifies this shift in client computing model
- Simple and streamlined desktop VM provisioning
Storage is the critical element in addressing all three of these factors, and it plays a vital role in enabling a successful VDI roll-out. Let us take a quick look at what it means for storage to host and service your virtual desktop workloads.
Why storage matters:
First of all, VDI is not just another enterprise workload. A VDI workload is highly variable in the I/O demand it places on the supporting storage infrastructure. The storage that supports a VDI environment essentially holds the virtual machines that power the client devices. Every time a client device accesses the OS, an application or user data, it generates I/O requests against the storage infrastructure. As you can imagine, this access pattern is not evenly spread out over the course of the day. There are periods when many client devices access large amounts of data from the VDI infrastructure and its supporting storage arrays at once. For example, in the morning, when a large number of client devices boot simultaneously, they generate a massive number of I/O requests; these events are called boot storms. If the storage cannot service these requests within acceptable latency, the client devices experience delays and the user experience is compromised.
Overprovisioning storage can help solve this issue to some extent. By overprovisioning, you allocate more storage capacity, and thereby typically more disk drives, to your VDI than you really need. The number of input/output operations per second (IOPS) a storage system can handle is a function of the number of disk drives it contains: more drives can support a higher IOPS demand. However, this is cost-inefficient, which leads us to our second challenge: the cost of your storage system.
It is important to realize that when you implement VDI, you are actually moving all of the client-attached storage (the local hard disk drives, or HDDs) into your enterprise datacenter. The storage infrastructure that supports VDI deployments is more costly than client-attached HDDs. When you combine the need for lots of enterprise storage with the possibility of overprovisioning capacity to meet performance goals, costs can become an issue. Carefully managing your storage footprint is critical to achieving your ROI.
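A back-of-the-envelope comparison makes this cost tension concrete: the drive count that raw capacity alone would require can be far smaller than the count a boot-storm peak forces you to buy. All figures below (image capacity, drive size, peak IOPS, per-drive IOPS) are illustrative assumptions, not vendor specifications:

```python
import math

def drives_for(capacity_gb, drive_gb, peak_iops, iops_per_drive):
    """Drive counts dictated by capacity alone vs. by peak IOPS."""
    by_capacity = math.ceil(capacity_gb / drive_gb)
    by_iops = math.ceil(peak_iops / iops_per_drive)
    return by_capacity, by_iops

# Illustrative: 4 TB of desktop images on 600 GB drives, facing a
# 3000 IOPS boot storm with 150 IOPS per 15K SAS spindle.
cap, perf = drives_for(4000, 600, 3000, 150)
print(cap, perf)  # capacity alone needs 7 drives; peak IOPS forces 20
```

The gap between the two numbers is the overprovisioning: drives bought purely to absorb the I/O peak, whose capacity sits mostly idle the rest of the day.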
Lastly, your VDI administrators are responsible for provisioning and managing a very large pool of desktop VMs. In order to achieve operational efficiency and eliminate sources of errors, they need ways to efficiently and rapidly provision multiple desktop VMs from predefined templates. Tightly integrated storage and hypervisor management plays a key role in enabling rapid VM deployments and simplifying VDI management.
Throughout this blog series, I will discuss possible ways you can manage and mitigate these storage challenges.
A successful VDI implementation can help streamline desktop management, lower operational expenses, and facilitate security and compliance adherence. However, inadequate design considerations can lead to failure. While designing your VDI environment, it is critical that you ensure acceptable user experience, positive project ROI, and a rapid and streamlined VM provisioning model. Proper storage selection can help ensure optimal performance, reduce storage footprint and speed VDI provisioning. Storage is the key enabler in ensuring a successful VDI roll-out.
In the next blog post, I will go over a storage sizing illustration for delivering optimal user experience. Following that, in my third and final post in this series, I will explore key storage features that enable successful VDI implementations. Stay tuned.
Storage virtualization is still a fairly new technology, considering that virtualization itself is still being worked out in enterprise IT departments. Whether you want to reduce the amount of hardware you manage or get the most out of server or desktop virtualization, storage virtualization may be the way to go. Check out these great resources we’ve discovered (some through member recommendations!).
John Chambers is a fascinating character in the history of Silicon Valley. He’s one of the longest-serving tech giant CEOs (since 1995), helping grow Cisco from a billion in revenues to $40 billion, and at one point making it the most valuable company in the world.
But today, he very publicly addressed what he saw as one of his recent missteps: Betting big on the consumer market. In a statement sent to employees yesterday, and released publicly today, he wrote: