So let’s talk about some of that messaging and some of the ideas and what they mean for CA’s customers and prospects – as well as the industry as a whole because, clearly, the concepts that were discussed here this week have broad implications for all businesses and technology professionals around the globe.
IT’s About the Cloud
If you’d spent four minutes here rather than four days, you’d still walk away with a clear, unmistakable sense of the commitment CA Technologies is making to cloud computing. In all of the keynotes, starting with the opening comments from CEO Bill McCracken, the company talked about the cloud in terms of a “transformative technology” and referred to cloud computing time and again as a new computing generation. The path they described is shorter than the one I posted the other day: They talk about computing transformations as Mainframe to Distributed to Cloud.
Regardless of the language, the message is clear: If you are a technology professional charged with moving your business forward, you must begin embracing the cloud, whether private, public or hybrid. The reason was articulated in the theme of the event: “IT at the Speed of Business.” Businesses, now and forevermore, must be agile and quickly responsive to the needs of customers, prospects, employees and partners. As McCracken noted more than once, IT’s about transforming the entire supply chain.
IT’s About the Consumerization of IT
Another major theme was consumer-driven IT. The Techster had an enlightening conversation on this topic with George Watt, who is VP of Strategy, Enterprise & Cloud Solutions at CA Technologies. In his role at the company, Watt led the development of CA’s own private cloud initiative. One of the things Watt sees is a new paradigm in the skills, knowledge and values of the people driving decisions. In past technology transformations, the drivers were often technophiles pushing technology for technology’s sake. Now, however, we have a generation of people who are comfortable using technology and also capable of understanding the value of what technology can do – for them and for their businesses.
While the cloud is an enabling technology for consumer-driven IT, the trend is also being driven by social media, new handheld devices and new expectations about the value and agility businesses must be able to deliver. We talked a little about IT overcoming some of the cultural barriers that can impact deployments, and he offered three key points IT professionals should keep in mind:
1. It’s about the business model. You have to understand the business and what value the technology can bring to the business.
2. It’s here whether you like it or not.
3. Understand that it can be done.
Watt pointed us to a very nice and informative Web site set up by CA Technologies on Consumer Driven IT. It’s worth checking out and has everything you’d want on the topic, including infographics, IDC research, blogs and interactive polling.
IT’s About Speed and Agility
As noted, in case you missed the message, it was posted all over the place and articulated by just about every CA Technologies executive and employee: IT at the Speed of Business. David Dobson, Executive VP and Group Executive of the Customer Solutions Group at CA Technologies, talked about a couple of customers that have used CA Technologies solutions to achieve dramatic results. When Sprint prepared to take on the Apple iPhone 4S, it knew the launch would drive activation levels at least three times higher than any it had ever experienced. The business teams needed to improve the customer experience and minimize churn – while at the same time cutting costs. The company was able to virtualize thousands of applications, save $20 million a year in infrastructure costs and double utilization rates. At the same time, the customer-facing experience improved.
Over the course of the few days, we heard many more similar stories. There is a CA Technologies partner called Skygone Inc. that provides cloud-based services for the geospatial, location-based services industry, commonly referred to as GIS. By taking a cloud approach to GIS using CA’s AppLogic platform, Skygone was able to dramatically cut the time required to deploy solutions for disaster recovery efforts. In a recent emergency involving flooding in North Dakota, it was able to set up a system in three to six hours using the cloud – rather than three to six weeks using premises-based technology. It is no exaggeration to say that the speed and agility engendered by the cloud can save lives and, in fact, I think we will hear of many instances and circumstances where cloud technology has this type of dramatic impact. For its groundbreaking work, Skygone received a Partner Agility Award for Innovative Solutions from CA Technologies.
IT’s About An Integrated Strategy and Approach
OK, so it’s one thing to talk about transformative technology: It’s another to deliver it. I was definitely impressed with the way CA Technologies has created a framework for its solutions that actually makes sense. There was a time when it seemed the company was making a lot of acquisitions without a clear plan or strategy. Now, however, it is able to articulate an overall framework that puts each of its acquisitions in perspective and describes a much more holistic approach to helping customers make the transition to the cloud.
As described by Dobson during Monday’s keynote, the idea is not to eliminate complexity, but to simplify management of IT by taking it to a higher level. Customers are not going to want to get rid of all of their legacy apps, and they are not going to move all of a sudden to a homogeneous environment. The world just doesn’t work that way today. So CA Technologies has created what it describes as a “Business Service Innovation Value Roadmap” that consists of these primary functions:
5. Secure and Manage
When you think about approaching next-generation, services-centric computing, this type of model makes a lot of sense, and it provides a strong rationale for the way in which CA Technologies has constructed its portfolio. One of the recent acquisitions that got a lot of buzz on the show floor was a company called ITKO. ITKO focuses on application lifecycle optimization and provides a solution for the “model” part of the CA Technologies framework. ITKO’s solution, now called the CA LISA solution, creates a virtualization capability that allows developers to simulate all of the dependencies they require during development. Application developers can dramatically speed time to market, reduce costs and improve performance. This is a technology to watch. Likewise, there were also a lot of very satisfied customers of AppLogic, which fits squarely into the “assemble” aspect of the framework.
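This post doesn’t go into how CA LISA works under the hood, but the core idea of service virtualization – testing against a simulated dependency instead of the real downstream system – is easy to sketch. Here is a minimal, hypothetical illustration in Python; the `fetch_credit_score` function and the stubbed service are invented for this example and are not part of any CA product:

```python
# A minimal sketch of the service-virtualization idea: run code against a
# simulated dependency instead of the real downstream system. This is a
# generic illustration, not how CA LISA itself is implemented.
from unittest import mock

def fetch_credit_score(client, customer_id):
    """Business code that normally calls a remote credit bureau service."""
    response = client.get(f"/scores/{customer_id}")
    return response["score"]

# Simulate the dependency: no network, deterministic responses.
virtual_service = mock.Mock()
virtual_service.get.return_value = {"score": 720}

assert fetch_credit_score(virtual_service, "cust-42") == 720
print("Test passed against the simulated service.")
```

The payoff is the same one the vendors promise: developers can exercise their code long before the real dependency is available, which is where the speed-to-market claims come from.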
Beyond the technology solutions and the framework, there are important considerations for companies moving in the direction of the cloud – which should be all companies, whether through private clouds, public clouds or, more frequently, hybrid environments. Standardization is critical. So is virtualization. And so is finding solutions that support multi-vendor environments while delivering real integration and real connective tissue among apps, infrastructure and business initiatives.
The Techster has been there before, through major paradigm shifts in technology. They can be disruptive, they can be scary, and they can be accompanied by the usual predictions of gloom, doom and mass confusion. In the end, the right solutions always get their due and always win out over the prior way of doing things. If not, I’d be sitting in my hotel room writing this blog post on an Osborne portable computer or a TRS-80 or something of that ilk. Or I’d not be writing a blog post at all.
Anyway, it’s been a fun, exciting and highly informative four days, and I’m glad I’ve been able to share my experiences with you. As always, if you have any comments, suggestions or questions, please, please, please feel free to post them here. See you soon.
P.S. – If you’d like to catch up on any aspect of CA World 2011, you can do so virtually. The company just announced that much of the great content is available to experience from its virtual CA World 2011 event center, powered by ON24. If you stop by now, you’ll see areas you can visit such as sessions, an exhibition center and a resource library. You may also want to visit the Broadcast Center here at the show in the Exhibition Center, where they are conducting live interviews with CA Technologies leaders, partners and customers. Who knows, you may even catch a glimpse of The Techster himself.
Follow me at @The_Techster and follow @CAWorld2011 on Twitter for all the latest #CAWorld buzz.
On gaining buy-in when speaking to a non-technical CEO:
The first thing, according to Capellas, is to establish the agenda of the company and find those two or three things that will drive its basic mission: defining the sweet spot of what the business is trying to do. “Prioritize what’s important and then do it with incredible speed.” Also, innovate and try to be at the leading edge. “No one ever said, ‘Gee, I’m glad you brought me yesterday’s technology.’”
On security, policy and the cloud:
One of the big challenges, Kundra noted, is the difficulty of managing a cloud that spans a global grid, particularly for the U.S. government. Different nations have different laws. Complicating that is the reality that there are countries and organizations out there using the technology for malevolent purposes. McCracken noted that security is a precursor to successful cloud deployments and pointed to the use of cryptographic intelligence.
On private clouds, public clouds or hybrid:
McCracken and Kundra noted that the cloud – private or public – has the potential to be even more secure than traditional enterprise applications. How it is used will largely depend on what applications and workloads your organization wants to run. Mission-critical apps, Capellas said, will likely be private, but many organizations will use public clouds to balance workloads and for backup. “Security policy will define what’s inside or outside the firewall,” Capellas said.
On the role resellers will play in the cloud:
Partners and resellers already have a large majority of the skills necessary to deploy new applications, McCracken said, and without the partner community the cloud market wouldn’t be anywhere near where it is today. Channel partners have the opportunity to take applications from end to end and can manage multiple platforms, which are critical skills in today’s environment, according to McCracken.
On the role of professional services providers:
The cloud will significantly disrupt the professional services market, Kundra said, and will force professional services companies to raise their game to a higher level. They must be thinking about the user experience, seamless applications and fundamentally re-engineering and re-architecting systems. They will also have to work at the speed of business: “The days of waiting five years to get any value from a professional services contract are over,” Kundra said. “This will be a painful transition for the professional services industry.”
On what keeps you up at night:
“I’m very concerned about the role of global terrorism as it relates to technology,” Capellas said. He said there is no doubt that, at some point, the Internet will be shut down, which could create a “catastrophic disruption.” Kundra said he is concerned with three things: (1) cyber warfare; (2) the potential use of technology for oppression; and (3) the growing gap in the use of technology between developed and underdeveloped nations. McCracken said the biggest challenge is keeping up with the growing demand in our businesses and marketplaces.
There you have it, live once more from CA World. What are some of the big issues on your mind? Post them here and we’ll see if we can answer them.
Follow me at @The_Techster and follow @CAWorld2011 on Twitter for all the latest #CAWorld buzz.
That was one of the clear messages from the opening round of keynotes and panel discussions at this year’s CA World 2011 in Las Vegas last night. One of the great benefits of getting out in the world and attending events such as these is the opportunity to step back from the day-to-day routine and think about the bigger picture. Not just think about the bigger picture, actually, but to listen to and talk to very smart people about the major trends affecting us all.
In his opening keynote presentation last night, CA Technologies CEO Bill McCracken talked in terms of another major shift in the computer industry. He didn’t specifically define the previous seismic shifts, so I’ll fill that in: From mainframe to mini to PC to network to Internet. And now to cloud. Having participated in the previous transformations, McCracken noted that there are always three important conditions that must be in place for the industry to change. They are:
1. Technology
2. The Economy
3. A Strong Business Need
Given the state of all three of these conditions today, McCracken said we’re “standing on the edge of a perfect storm.” I’ll save the technology piece for last. As for the economic conditions, we all know what’s going on and how things are tightening up. The underlying pressure point driving the shift to the cloud is the constant need to do more with less. On the business side, the demand to change the way businesses are run is even stronger. In many cases, McCracken noted, CEOs and company executives have wanted to change – indeed, have demanded change – but have been limited because their IT infrastructure and organization couldn’t change fast enough. That’s why the overarching theme of this year’s CA World 2011 is “IT at the Speed of Business.”
Want examples? McCracken cited a few that really bring the point home: Zipcar creating a new model and forcing market leader Hertz to make an acquisition to try to keep up; the whole category of e-books helping to drive Borders out of business; and, closest to home, he recalled the time when he was at IBM in the 1980s and Michael Dell created a whole new model for manufacturing and delivering PCs. “If we were not IBM, we probably would have gone bankrupt,” McCracken said.
So what are the technology drivers that are disrupting the status quo and creating this next great shift in the computing environment? McCracken cited five of them:
1. The growth in networks and bandwidth, specifically 4G broadband wireless networks
2. Thousands of downloadable apps
3. The proliferation of inexpensive handheld devices
4. Social media
5. GPS devices
The confluence of these technologies, as McCracken noted, is “changing the world.” It’s changing the way we talk to one another, the way businesses talk to customers and the way businesses will deliver goods and services. The power is so great, in fact, that we’ve already seen these technologies being used as a mechanism to change governments.
What do you think? Are you ready for the next transition? What will it mean for you? What will it mean for your company? Stay tuned. We’ll have plenty more about that this week as we continue to blog live from CA World 2011.
Follow me at @The_Techster and follow @CAWorld2011 on Twitter for all the latest #CAWorld buzz.
In addition to the overarching theme of IT at the Speed of Business, there are 10 supporting topics that will be the focus of many of the sessions, keynotes, customer and partner interviews and, in all likelihood, conversations in the breakouts as well as on the exhibit floor. They will also provide some of the fodder for our blog reports.
The main event (no, not Pacquiao-Marquez – that was last night) gets underway at 5 p.m. Vegas time with a keynote panel featuring CA Technologies CEO William McCracken, along with Vivek Kundra, first Chief Information Officer of the U.S. Government; Michael Capellas, Chairman of VCE, the Virtual Computing Environment Company; and moderator Randi Zuckerberg, founder of R to Z Media. If you can make it there, we hope to see you. If not, don’t worry: The Techster is here to keep you fully informed of all the highlights of CA World this week. Stay tuned.
Video: http://www.youtube.com/v/Re6scgGr1uk
Trend Micro gives a demonstration at VMworld 2011 on how it’s helping businesses tap into the advantages of the cloud while maintaining security policies.
I mentioned that storage can be a bottleneck in your VDI performance if not carefully designed. Allocating a large number of disks in your VDI storage can help mitigate this risk and meet the high I/O demand placed on the underlying storage by the VDI workload. However, this can lead to overprovisioning of storage, introducing cost-inefficiencies into your VDI environment. Additionally, moving all of the client-attached storage to the datacenter is cost-prohibitive. The right storage solution for your VDI should help mitigate the performance challenges while minimizing storage footprint, thus lowering costs. In this post, we will explore a few key storage features that are critical to ensuring your VDI project’s success.
Gaining efficiencies with Clones:
To combat the spiraling storage footprint, clones can be the key feature in VDI storage. The approach works by creating a few base or “gold” images and linking clones to them; VMware Linked Clones use this approach. It is important to design and configure the base images carefully. The base images contain virtual machines comprising the OS and the applications that the desktop clients need. Depending on your user profiles, you may want to create multiple base images. Clones linked to these base images are typically very small. All virtual desktops read from the base images, and any writes are captured in the clones, which contain only the delta data. In order to leverage this VMware feature, the underlying storage must support clones. Dell EqualLogic supports clones and can effectively leverage Linked Clones to help reduce your storage needs.
EqualLogic also offers space-efficient Thin Clones, which are linked to EqualLogic Template Volumes. The Template Volumes typically contain multiple VM images that can be configured optimally and space-efficiently to serve user needs while minimizing storage requirements. You can learn more about EqualLogic Thin Clones in this whiteboarding session with Will Urban, a technical marketing engineer with Dell.
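To see why linked clones matter so much for footprint, a back-of-the-envelope comparison helps. The figures below (desktop count, image and delta sizes) are hypothetical placeholders; actual deltas depend on the OS, applications and usage patterns:

```python
# Back-of-the-envelope footprint comparison: full clones vs. linked clones.
# All sizes are hypothetical; real deltas vary with OS, apps and usage.
desktops = 1000
full_image_gb = 30      # full VM image per desktop
base_image_gb = 30      # shared gold image
delta_gb = 2            # per-desktop linked-clone delta

full_clones = desktops * full_image_gb          # every desktop gets a full copy
linked_clones = base_image_gb + desktops * delta_gb  # one base + small deltas

print(f"Full clones:   {full_clones / 1024:.1f} TB")    # ~29.3 TB
print(f"Linked clones: {linked_clones / 1024:.1f} TB")  # ~2.0 TB
```

Even with generous per-desktop deltas, the shared-base model shrinks the footprint by an order of magnitude, which is the whole argument for clone support in VDI storage.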
Tiered storage for right-sizing your VDI environment:
In the last post we saw that if you design your storage to satisfy your performance needs, you typically end up with a much bigger storage footprint than your end users actually need. This, of course, is an issue: the storage becomes too costly and hurts your VDI project ROI. To overcome this challenge, tiered storage may be the answer.
Tiered storage typically consists of two or more drive types in a single storage pool. Different drive types can handle different numbers of IOPS. Solid State Disk (SSD) drives, for example, can handle upwards of 5,000 IOPS, far higher than any spinning media can. But that performance comes at a price: SSD drives are considerably more expensive than SAS or SATA drives. To optimize the cost-performance ratio, it makes sense to auto-tier between SSD drives and SAS or SATA drives. By tiering your VDI workload, you can be sure that during I/O storms, frequently accessed (hot) data pages will automatically be migrated to the higher-performance drives, allowing the storage to withstand the high I/O demand placed on it.
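As a rough sketch of that cost-performance tradeoff, consider meeting a 3,000-IOPS peak with SAS drives alone versus a hybrid pool where SSDs absorb most of the hot I/O. All per-drive IOPS and price figures below are hypothetical placeholders, and capacity requirements are ignored entirely, so treat this as an illustration of the reasoning, not a sizing tool:

```python
import math

# Hypothetical per-drive performance and cost figures for illustration only;
# real numbers vary by model, RAID level and workload. Capacity is ignored.
SSD_IOPS, SSD_COST = 5000, 1500   # per SSD
SAS_IOPS, SAS_COST = 150, 300     # per 15K SAS drive

peak_iops = 3000          # total peak demand to satisfy
hot_fraction = 0.8        # share of I/O hitting "hot" pages after auto-tiering

# Single-tier option: all SAS
sas_only = math.ceil(peak_iops / SAS_IOPS)
print(f"SAS only: {sas_only} drives, ${sas_only * SAS_COST}")

# Hybrid option: SSDs absorb the hot I/O, SAS drives serve the remainder
ssd = math.ceil(peak_iops * hot_fraction / SSD_IOPS)
sas = math.ceil(peak_iops * (1 - hot_fraction) / SAS_IOPS)
print(f"Hybrid: {ssd} SSD + {sas} SAS, ${ssd * SSD_COST + sas * SAS_COST}")
```

Under these made-up numbers, one SSD plus a handful of SAS drives covers the same peak for a fraction of the all-SAS cost; in practice, capacity needs and the real hot-data ratio decide how far the savings go.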
Dell EqualLogic PS 6000XVS and 6010XVS are multi-tiered hybrid arrays featuring SSD and SAS drives in a single enclosure with auto-tiering capabilities. You may have heard that InfoWorld awarded its 2011 Technology of the Year award for “Best Storage System” to the Dell EqualLogic PS6010XVS. In November 2010, Dell performed some tests using these hybrid arrays. The testing involved setting up a VDI environment using VMware vSphere and VMware View with Dell PowerEdge servers and one Dell EqualLogic PS 6000XVS hybrid array. The tests used VMware Linked Clones. The results were quite impressive: about 1,000 virtual desktops successfully hosted on a single XVS array with latency well below 20 ms. Here is a quick summary of the test results. You can also take a look at the sizing and best practices guide covering these tests.
Optimizing performance through storage-hypervisor integration:
Hypervisor technologies have matured quite a bit over the past few years. Virtualization platforms now offer integration points for storage that help enhance performance. Some storage arrays are better integrated with the hypervisor, or virtualization layer, than others, and this integration makes a tremendous difference in the performance of your VDI environment. Through this integration, the hypervisor can offload certain storage-related tasks to the storage arrays, substantially reducing network traffic and host server overhead.
VMware introduced its vStorage APIs for Array Integration (VAAI) with vSphere 4.1. This integration allows tasks like hardware-assisted locking, full-copy and block-zeroing to be offloaded to storage arrays. Tasks such as the creation of template volumes and VM deployments from templates benefit substantially from offloading these functions to storage. Additionally, in a typical VDI environment, many VMs share one volume and simultaneously access information from it. By utilizing hardware-assisted locking, storage administrators can ensure that the entire volume is not locked by a single VM or a single VMware ESX host at a time, and that the performance of shared volumes does not deteriorate. Dell EqualLogic was one of the first storage solutions to implement VAAI, and Dell lab tests have shown that the VAAI integration for Dell EqualLogic storage arrays reduced SAN traffic by up to 95% and host CPU overhead by up to 75%.
Storage arrays that support multipath I/O can also help improve performance and availability of your VDI deployments. Dell EqualLogic Multipath Extension Module (MEM) for VMware integrates with VMware vStorage APIs for Multipathing. It provides fault-tolerant load balancing and helps improve storage performance and scalability while automating multipath configuration.
Storage that helps simplify VDI deployments:
I mentioned in my first blog post in this series that your VDI administrators are responsible for provisioning and managing a very large pool of desktop VMs. This is quite unlike virtualizing enterprise applications. Your VDI administrators need ways to simplify and streamline desktop VM provisioning in order to achieve operational efficiencies and eliminate sources of error. Some storage arrays offer ways to rapidly provision multiple desktop VMs. Dell EqualLogic, for example, offers a Virtual Desktop Deployment Tool. This tool utilizes EqualLogic Template Volumes and Thin Clones to lower storage needs, and provides a process flow that substantially reduces the complexity of rapid VM provisioning. The tool is integrated with VMware View and is available at no additional cost, just like all EqualLogic software features. You can see a quick demonstration of the EqualLogic Virtual Desktop Deployment Tool and learn how simple it is to rapidly provision multiple desktop VMs using this software feature.
In general, to simplify VDI deployments, it is important to choose a storage solution that automates administrative tasks and simplifies storage management. Not all storage solutions do that. Check out these whiteboarding sessions to see how Dell EqualLogic offers tools that take the complexity out of storage management for your VDI.
Choosing the right storage connectivity and architecture:
When it comes to choosing the connectivity for your storage infrastructure, you can choose from Fibre Channel (FC) SAN, Internet SCSI (iSCSI) SAN, or Network Attached Storage (NAS). In the past, FC offered a speed advantage, accommodating up to 4 or 8 Gbps. FC, however, requires specialized networking devices and fabric that are expensive compared to Ethernet alternatives, and it requires your SAN administrators to be trained on specialized networking technologies. Today, with 10 Gb Ethernet connectivity, iSCSI SANs can offer network speeds comparable to or better than FC SANs. Through the use of standards-based Ethernet networking technology, iSCSI can help lower the cost of your SAN and simplify networking.
Typically, most VDI deployments are implemented in phases. To accommodate this phased roll-out approach, you need a storage architecture that scales easily and non-disruptively. Most traditional SAN architectures are rigid, making it difficult to grow the storage as needs grow.
Dell EqualLogic arrays utilize a virtualized, scale-out iSCSI SAN architecture that easily scales as your needs grow, without service disruption. This can be invaluable, since it simplifies planning and virtually eliminates the need for forklift upgrades, helping protect your investments.
In this series I highlighted the critical success factors for your VDI project and explained how storage plays a key role in enabling that success. I also illustrated one way to size storage for your VDI so it does not become a bottleneck and provides an optimal user experience. In this third post I reviewed key storage features that help optimize performance, lower costs and simplify VDI provisioning. The Dell EqualLogic family of virtualized iSCSI SANs offers the features essential to ensuring the success of your VDI projects. You can find several resources to help you through your VDI planning at www.dellenterprise.com/getmorevirtual/virtualdesktops
Good luck with your VDI projects!
Welcome back folks. This is the second post in a series of three blog posts discussing storage considerations for VDI deployments. In my last post I highlighted the critical success factors for your VDI projects, and how your storage infrastructure plays a central role in ensuring successful VDI roll-out. I also mentioned that one way to meet the desired VDI performance is to increase the number of disks in your storage environment. While this can lead to overprovisioning of storage, it is important to understand how increasing the number of disks can help you meet your performance goals. In this post I will provide a simple example that illustrates how to size the storage for your VDI environment in order to ensure optimal performance and user experience.
Sizing storage for VDI can be especially tricky. On one hand, you need to ensure acceptable performance. On the other hand, you are under a lot of pressure to minimize storage footprint in order to lower storage-related costs. When designing a VDI environment, it is critical to ensure that your storage does not become a bottleneck. VDI is a highly transactional workload, and its performance is bound by disk I/O. One way storage administrators can meet the I/O demand is by increasing the number of storage disks in their VDI environment. It is important to look at the peak IOPS demand that will be placed on the underlying storage, and design your storage to handle those peak IOPS. There is no simple formula for assessing your IOPS needs; they will vary for every organization depending on desktop user types and boot and login patterns. I will take a simple example here and walk through identifying the peak IOPS needs and, from there, the size of the storage needed to support those IOPS.
VDI Storage Sizing Exercise
Let us say that we are designing a VDI environment for 100 desktops. We estimate that about 80% of those desktops will boot simultaneously in the morning and then stay online for the rest of the day. The remaining 20% of the desktops will boot and operate during the night.
Let us assume that we estimate each desktop to produce a peak of 25 IOPS while booting up, and then an average of about 5 IOPS throughout the steady state operations before logging off. The logoff operation creates a peak of 15 IOPS per desktop.
We also estimate that we will grow our VDI environment by up to 50% in the next 5 years and we are designing the VDI environment to support this growth.
As is evident, the boot operation is the most I/O intensive, and thus we should size our storage to support the performance requirements during boot periods.
So the 80 desktops (80% of 100 desktops) will produce 80 X 25 = 2000 IOPS when they are booting up in the morning simultaneously.
To account for the anticipated growth, we will need to support up to 50% more IOPS in the next 5 years as we grow our VDI environment. So we need to design our storage to support 2000 X 1.5 = 3000 IOPS in order to effectively address the peak IOPS requirements for the next 5 years.
Let us assume that we will be using 15K RPM SAS drives to support this VDI environment, and that each 15K SAS drive can support 150 IOPS.
Hence we will need 3000 / 150 = 20 of the 15K SAS drives in our storage to handle the peak IOPS.
This example assumes RAID 0 policy and no spare disks.
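For readers who want to play with the numbers, here is the same exercise as a small Python sketch. It encodes exactly the assumptions stated above (80% simultaneous boot, 25 IOPS per booting desktop, 50% growth, 150 IOPS per 15K SAS drive, RAID 0, no spares); substitute your own figures to match your environment:

```python
import math

# Mirrors the sizing exercise above; every figure is an assumption from the text.
desktops = 100
boot_share = 0.80        # fraction of desktops booting simultaneously
boot_iops = 25           # peak IOPS per desktop during boot
growth = 1.5             # 50% growth over 5 years
per_drive_iops = 150     # one 15K RPM SAS drive

peak = desktops * boot_share * boot_iops          # 2,000 IOPS at boot
peak_with_growth = peak * growth                  # 3,000 IOPS with growth
drives = math.ceil(peak_with_growth / per_drive_iops)

print(f"Peak boot IOPS: {peak:.0f}")
print(f"With growth:    {peak_with_growth:.0f}")
print(f"15K SAS drives: {drives}")                # 20 (RAID 0, no spares)
```

In a real design, the RAID policy matters: parity or mirrored RAID levels add a write penalty that raises the drive count, so the RAID 0, no-spares assumption here is the floor, not a recommendation.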
This is a simple example, but should help illustrate the decision process. The key is to estimate, as accurately as possible, the IOPS patterns for your specific environment. It is important to note that the IOPS patterns vary drastically between organizations. Dell offers consulting services that can help you analyze your VDI needs to meet the required performance goals.
Inadequate storage performance in your VDI can lead to a less-than-acceptable user experience. While sizing storage for your VDI, it is important to design it to deliver adequate performance and a good user experience. VDI storage needs to be sized to handle the peak IOPS demand placed by the VDI workloads. I/O patterns vary drastically from organization to organization, and a critical analysis of your end-user profiles is essential to accurately estimate the peak demands placed on your storage infrastructure.
It is important to recognize that overprovisioning your storage in order to meet performance goals is cost-inefficient. In my next and last post in this series, I will discuss some critical features and considerations for your VDI storage that can help mitigate performance challenges while optimizing storage footprint and enabling a simple, streamlined desktop VM provisioning model. Stay tuned.
Welcome folks! This blog post is the first in a series of three posts that will discuss challenges related to virtual desktop deployments and explore ways to overcome them. In this first post we will primarily discuss the impact a VDI workload can have on the supporting storage system, and how that affects your VDI project’s success.
With the proliferation of client devices of varying form factors, desktop virtualization may be the most relevant technology for streamlining client device management and providing a consistent user experience. By separating the operating system and applications from the physical client device, desktop virtualization helps streamline management, lower operational expenses and facilitate adherence to compliance and security requirements.
However, if not correctly designed, desktop virtualization can spell disaster. There are three primary factors that determine the success of your VDI project:
1. Acceptable user experience
2. Positive project ROI
3. A rapid and streamlined VM provisioning model
And storage is the critical element in addressing all of these factors. Storage plays a vital role in enabling successful VDI roll-out. Let us take a quick look at what it means for storage to host and service your virtual desktop workloads.
Why storage matters:
First of all, VDI is not just another enterprise workload. The VDI workload is highly variable in terms of the I/O demand it places on the supporting storage infrastructure. Storage that supports a VDI environment essentially stores the virtual machines that power the client devices. Every time a client device accesses the OS, an application or user data, it generates I/O requests to the storage infrastructure. As you can imagine, this access pattern is not evenly spread out over the course of the day. There are periods when many client devices access large amounts of data from the VDI infrastructure and the supporting storage arrays. For example, in the morning, when a large number of client devices boot simultaneously, they generate massive numbers of I/O requests to the storage devices. These are called boot storms. If the storage is not able to service these requests within acceptable latency, the client devices experience delays and the user experience is compromised.
Overprovisioning of storage can help solve this issue to some extent. By overprovisioning, you allocate more storage capacity – and thereby typically more disk drives – to your VDI than you really need. The ability of storage to handle a certain number of Input/Output Operations Per Second (IOPS) is a function of the number of disk drives it contains: a larger number of disk drives can support a higher IOPS demand. However, this is cost-inefficient. That leads us to our second challenge: the cost of your storage system.
It is important to realize that when you implement VDI, you are actually moving the entire client-attached storage – all the local hard disk drives (HDDs) – over to your enterprise datacenter. The storage infrastructure that supports VDI deployments is more costly than client-attached HDDs. When you combine the need for lots of enterprise storage with the possibility of overprovisioning capacity to meet performance goals, costs can become an issue. Carefully managing your storage footprint is critical to achieving a positive ROI.
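A quick, hypothetical calculation shows how sizing by IOPS alone can inflate capacity. Every number below is a placeholder (drive count, drive size, per-desktop data); the point is the shape of the mismatch, not the specific figures:

```python
# Illustration of IOPS-driven overprovisioning: hypothetical numbers only.
# Sizing by performance can mean buying far more capacity than users need.
drives_for_iops = 20        # drives required to meet peak IOPS
drive_capacity_gb = 600     # a common 15K SAS drive size
desktops = 100
data_per_desktop_gb = 20    # actual OS + application + user data per desktop

capacity_bought = drives_for_iops * drive_capacity_gb   # 12,000 GB
capacity_needed = desktops * data_per_desktop_gb        #  2,000 GB

print(f"Capacity bought: {capacity_bought} GB")
print(f"Capacity needed: {capacity_needed} GB")
print(f"Overprovisioned: {capacity_bought / capacity_needed:.0f}x")
```

Under these made-up figures, meeting the performance target leaves six times more capacity than the desktops actually consume, and you pay for all of it. That is the cost problem the later posts in this series address with clones and tiering.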
Lastly, your VDI administrators are responsible for provisioning and managing a very large pool of desktop VMs. In order to achieve operational efficiency and eliminate sources of errors, they need ways to efficiently and rapidly provision multiple desktop VMs from predefined templates. Tightly integrated storage and hypervisor management plays a key role in enabling rapid VM deployments and simplifying VDI management.
Throughout this blog series, I will discuss ways you can manage and mitigate these storage challenges.
A successful VDI implementation can help streamline desktop management, lower operational expenses, and facilitate security and compliance adherence. However, inadequate design considerations can lead to failure. While designing your VDI environment, it is critical that you ensure acceptable user experience, positive project ROI, and a rapid and streamlined VM provisioning model. Proper storage selection can help ensure optimal performance, reduce storage footprint and speed VDI provisioning. Storage is the key enabler in ensuring a successful VDI roll-out.
In the next blog post, I will go over a storage sizing illustration for delivering an optimal user experience. Following that, in my third and final post in this series, I will explore key storage features that enable successful VDI implementations. Stay tuned.