Video: http://www.youtube.com/v/JBEtPQDQNcI (Knight-Ridder's 1994 tablet newspaper concept)
Videos such as this one spark a certain amount of nostalgia, not only for the clunky monster of a PC on Roger Fidler’s desk but for the days before the Internet made everything so much easier and, well, so much harder. While Knight-Ridder had a little too much faith in man’s continuing loyalty to the newspaper, they pinpointed the crossroads where many vendors find themselves: “We may still use computers to create information but we’ll use the tablet to interact with information.”
This might account for the changing numbers as outlined by Tom Nolle at the Uncommon Wisdom blog:
PCs are not seeing the growth they once did…Some of the slowing is due to tablet encroachment, but most is likely due to people just not upgrading as often.
But the fruition of visions such as Knight-Ridder’s 1994 prediction about tablets marks another change, not only within the tech industry, but in the way the rest of the world interacts with the IT department. Interactions aren’t lassoed solely within a company’s in-house messaging or email system. Despite social networking services aimed at the enterprise such as Salesforce.com’s Chatter, users are still all over sites such as Facebook and Twitter, and not always for strictly business purposes. Sure, you could throw some policies at it and even make examples out of a few repeat offenders, but what happens when you’re the President or, more realistically, you work in a high-profile government office where quips on Facebook can have serious and reverberating effects?
Last week’s number one IT blog post covered whether it’s better to advance in IT via promotion or job change. Careers – and job security – are always on the brain, especially in an ever-evolving industry like IT. Add a high-profile case like the Obama administration’s cloud computing initiative and fears are exacerbated.
Last week’s Amazon Elastic Compute Cloud (EC2) outage served as quite the scare and reality check for IT departments and CIOs across the country. The main lesson seems to be: don’t put all your eggs in one basket.
To add insult to injury, The Register reports on the confusion many customers are experiencing regarding their service level agreements post-downtime. Though the EC2 SLA states that users can receive credits if the service’s annual uptime percentage falls below 99.95%, many are finding that they fall through loopholes in the fine print. As The Register reports, “[T]his only applies to users who have spread their applications across multiple ‘availability zones’ – subsections of Amazon’s regional services designed not to fail at the same time.” In other words, if your data isn’t spread across the EC2 service, and thus more downtime-proof, you most likely won’t be receiving a credit anytime soon. Those companies that did read the fine print and planned for disaster from day one suffered significantly less damage.
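For a sense of scale, that 99.95% threshold is tighter than it sounds. Here is a quick back-of-the-envelope calculation of the annual downtime it permits; this is a sketch of the uptime arithmetic only, not Amazon's official credit accounting:

```python
# Back-of-the-envelope: hours of downtime a service can accrue in a year
# while still meeting a given uptime percentage (non-leap year assumed).
HOURS_PER_YEAR = 365 * 24  # 8760

def max_downtime_hours(uptime_pct: float) -> float:
    """Hours of annual downtime permitted at the given uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

print(f"{max_downtime_hours(99.95):.2f}")  # prints 4.38
```

That works out to roughly 4.4 hours of allowable downtime per year, so a multi-day outage blows well past the threshold, which is why the credit fine print matters so much.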
Whether you’re a cloud supporter or an anti-cloudie, you probably have an opinion on Amazon’s EC2 fiasco this past week. Bloggers around IT Knowledge Exchange took this opportunity to calm frightened users and learn valuable lessons. Take a look:
- TechStop’s Joshua Wood gave a quick rundown of what the outage meant for users and why Amazon’s Cloud Outage isn’t that big of a deal.
- Tom Nolle of Uncommon Wisdom thinks that cloud views need to get realistic.
- Storage Channel Pipeline’s Eric Slack teaches a valuable lesson to those concerned about preparing for cloud outages: Add a second provider.
- Always the voice of reason and optimism amidst cloud doubt, Ron Miller of View From Above chides cloud haters who jumped at the Amazon EC2 fiasco.
- Editor Michael Morisy wondered, in the wake of Amazon, iPhone and Dropbox, is there a new normal?
Also see IT forum questions on Cloud Computing.
It’s all fun and games until someone gets hacked. Literally.
I meant to link this in my earlier piece, but the timing of this Industry Standard article was just priceless: IT’s cloud resistance is starting to annoy businesses, published April 21, the same day as Amazon’s EC2 outage.
David Linthicum’s points are all valid and horribly timed:
The core issue is one of control and fear of the unknown. Although you’d think that many in IT would be innovative and fast-moving, I’ve found that most are in fact very conservative, risk-averse tactical thinkers. Cloud computing means loss of control, potential for risk, and an aggressive strategic shift.
There does need to be a balance between leveraging new technology willy-nilly without thinking about issues like security and lock-in versus digging in your heels. I hope that both business and IT find the balance. Otherwise, at least a few enterprises will find that the benefits of cloud computing have passed them by.
In the end, it’s a matter of risk management, not risk aversion and certainly not cloud religion. For more smart perspectives, take a look at the last post’s comments which pretty much nail it from both sides (including one from ITKnowledgeExchange’s very own Eric Hansen).
Storage Considerations for Virtual Desktops, Part 3: Key storage features to ensure VDI success (Sponsored Post)
This is a sponsored guest post by Vikram Belapurkar, a solutions marketing manager at Dell focused on storage virtualization and consolidation solutions.
I mentioned that storage can be a bottleneck for VDI performance if not carefully designed. Allocating a large number of disks to your VDI storage can help mitigate this risk and meet the high I/O demand that the VDI workload places on the underlying storage. However, this can lead to overprovisioning, introducing cost inefficiencies into your VDI environment. Moving all client-attached storage to the datacenter is also cost-prohibitive. The right storage solution for your VDI should mitigate the performance challenges while minimizing storage footprint, thus lowering costs. In this post, we will explore a few key storage features that are critical to VDI project success.
Gaining efficiencies with Clones:
To combat a spiraling storage footprint, clones can be the key feature of VDI storage. The approach works by creating a few base or “gold” images, with clones linked to these base images; VMware Linked Clones work this way. It is important to design and configure the base images carefully. Each base image contains a virtual machine comprising the OS and the applications that the desktop clients need. Depending on your needs, you may want to create multiple base images to satisfy different user groups. Clones linked to these base images are typically very small: all virtual desktops read from the base images, and any writes are captured in the clones, which contain only the delta data. To leverage this VMware feature, the underlying storage must support clones. Dell EqualLogic supports clones and can effectively leverage Linked Clones to help reduce your storage needs.
EqualLogic also offers space-efficient Thin Clones, which are linked to EqualLogic Template Volumes. Template Volumes typically contain multiple VM images that can be configured optimally and space-efficiently to serve user needs while minimizing storage requirements. You can learn more about EqualLogic Thin Clones in this whiteboarding session with Will Urban, a technical marketing engineer with Dell.
Tiered storage for right-sizing your VDI environment:
In the last post we saw that if you design your storage to satisfy your performance needs, you typically end up with a much larger storage footprint than your end users actually need. This is an issue because the storage becomes too costly and hurts your VDI project ROI. Tiered storage may be the answer to this challenge.
Tiered storage typically consists of two or more drive types in a single storage pool, each handling a different number of IOPS. Solid State Disk (SSD) drives, for example, can handle upwards of 5,000 IOPS, far more than any spinning media can. But that performance comes at a price: SSD drives are considerably more expensive than SAS or SATA drives. To optimize the cost-performance ratio, it makes sense to auto-tier between SSD drives and SAS or SATA drives. By tiering your VDI workload, you can be sure that during I/O storms, frequently accessed (hot) data pages are automatically migrated to the higher-performance drives, allowing the storage to withstand the high I/O demand placed on it.
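As a rough illustration of why a small SSD tier matters, consider the aggregate IOPS of a hypothetical hybrid pool. The per-drive figures below are ballpark assumptions consistent with this post, not measured vendor specs:

```python
# Illustrative aggregate IOPS of a hypothetical hybrid pool.
# Per-drive figures are rough assumptions, not measured values.
SSD_COUNT, SSD_IOPS = 4, 5000   # SSDs: ~5,000 IOPS each (assumed)
SAS_COUNT, SAS_IOPS = 12, 150   # 15K SAS: ~150 IOPS each (assumed)

ssd_total = SSD_COUNT * SSD_IOPS   # 20000 IOPS from just four SSDs
sas_total = SAS_COUNT * SAS_IOPS   # 1800 IOPS from twelve SAS drives
print(ssd_total, sas_total)
```

In this sketch, four SSDs supply over 90% of the pool's I/O capability while the dozen SAS drives supply the bulk of the capacity, which is exactly the trade-off auto-tiering exploits: keep the hot pages on SSD and let the cheaper spinning drives hold everything else.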
Dell EqualLogic PS6000XVS and PS6010XVS are multi-tiered hybrid arrays featuring SSD and SAS drives in a single enclosure with auto-tiering capabilities. You may have heard that InfoWorld awarded its 2011 Technology of the Year award for “Best Storage System” to the Dell EqualLogic PS6010XVS. In November 2010, Dell performed tests using these hybrid arrays, setting up a VDI environment with VMware vSphere and VMware View on Dell PowerEdge servers and a single Dell EqualLogic PS6000XVS hybrid array, using VMware Linked Clones. The results were quite impressive: about 1,000 virtual desktops hosted on a single XVS array with latency well below 20 ms. Here is a quick summary of the test results. You can also take a look at the sizing and best practices guide based on these tests.
Optimizing performance through storage-hypervisor integration:
Hypervisor technologies have matured quite a bit over the past few years, and virtualization platforms now offer integration points for storage that help enhance performance. Some storage arrays are better integrated with the hypervisor, or virtualization layer, than others, and this integration makes a tremendous difference in the performance of your VDI environment. Through it, the hypervisor can offload certain storage-related tasks to the storage arrays, substantially reducing network traffic and host server overhead.
VMware introduced its vStorage APIs for Array Integration (VAAI) with vSphere 4.1. This integration enables offloading tasks like hardware-assisted locking, full copy, and block zeroing to storage arrays. Tasks such as creating template volumes and deploying VMs from templates benefit substantially from offloading these functions to storage. Additionally, in a typical VDI environment, many VMs share one volume and simultaneously access information from it. With hardware-assisted locking, storage administrators can ensure that the entire volume is not locked by a single VM or a single VMware ESX host at a time, so the performance of shared volumes does not deteriorate. Dell EqualLogic was one of the first storage solutions to implement VAAI, and Dell lab tests have shown that VAAI integration for Dell EqualLogic storage arrays reduced SAN traffic by up to 95% and host CPU overhead by up to 75%.
Storage arrays that support multipath I/O can also help improve performance and availability of your VDI deployments. Dell EqualLogic Multipath Extension Module (MEM) for VMware integrates with VMware vStorage APIs for Multipathing. It provides fault-tolerant load balancing and helps improve storage performance and scalability while automating multipath configuration.
Storage that helps simplify VDI deployments:
I mentioned in the first blog post in this series that your VDI administrators are responsible for provisioning and managing a very large pool of desktop VMs. This is quite unlike virtualizing enterprise applications. Your VDI administrators need ways to simplify and streamline desktop VM provisioning in order to achieve operational efficiencies and eliminate sources of error. Some storage arrays offer ways to rapidly provision multiple desktop VMs. Dell EqualLogic, for example, offers the Virtual Desktop Deployment Tool. It uses EqualLogic Template Volumes and Thin Clones to lower storage needs, with a process flow that substantially reduces the complexity of rapid VM provisioning. The tool is integrated with VMware View and is available at no additional cost, just like all EqualLogic software features. You can watch a quick demonstration of the EqualLogic Virtual Desktop Deployment Tool and see how simple it is to rapidly provision multiple desktop VMs with it.
In general, to simplify VDI deployments it is important to choose a storage solution that automates administrative tasks and simplifies storage management. Not all storage solutions do. Check out these whiteboarding sessions to see how Dell EqualLogic offers tools that take the complexity out of storage management for your VDI.
Choosing the right storage connectivity and architecture:
When it comes to choosing connectivity for your storage infrastructure, you can choose from Fibre Channel (FC) SAN, Internet SCSI (iSCSI) SAN, or Network Attached Storage (NAS). In the past, FC offered a speed advantage, supporting 4 or 8 Gbps. FC, however, requires specialized networking devices and fabric that are expensive compared to Ethernet alternatives, and it requires your SAN administrators to be trained on specialized networking technologies. Today, with 10 Gb Ethernet connectivity, iSCSI SANs can offer network speeds comparable to or better than FC SANs. By using standards-based Ethernet networking technology, iSCSI can lower the cost of your SAN and simplify networking.
Typically, most VDI deployments are implemented in phases. To accommodate this phased roll-out approach, you need a storage architecture that scales easily and non-disruptively. Most traditional SAN architectures are rigid, making it difficult to grow the storage as needs grow.
Dell EqualLogic arrays use a virtualized, scale-out iSCSI SAN architecture that grows easily as your needs grow, without service disruption. This can be invaluable, since it simplifies planning and virtually eliminates the need for forklift upgrades, helping protect your investment.
In this series I highlighted the critical success factors for your VDI project and explained how storage plays a key role in enabling that success. I also illustrated one way to size storage for your VDI so it does not become a bottleneck and delivers an optimal user experience. In this third post I reviewed key storage features that help optimize performance, lower costs, and simplify VDI provisioning. The Dell EqualLogic family of virtualized iSCSI SANs offers the features essential to the success of your VDI projects. You can find several resources to help with your VDI planning at www.dellenterprise.com/getmorevirtual/virtualdesktops
Good luck with your VDI projects!
As April draws to a close, so does Storage Virtualization month. Not to worry; we’ve compiled all of our greatest resources from this past month for quick reference as we move forward. From IT questions to articles, blog posts, and Twitter lists, our wrap-up is a great beginner’s – or dummy’s – guide for moving forward in your storage virtualization implementation.
We’ve written about the dangers of the consumerization of IT before (actually, again and again), but such progress has marched on, despite our earnest protestations, linked arm-in-arm with that golden child, cloud computing. At least, until last week, when both ate some serious crow in the form of an outage and a fresh round of privacy concerns.
Highest profile, of course, was Amazon’s EC2 outage that took out sites like Reddit, FourSquare and, according to one forum poster, cardiac monitoring tools. Lives, then, might literally have been at stake.
In another cloud/consumer blow, Dropbox updated its terms of service, making explicit its willingness to turn your data, hosted on its servers, over to the authorities. Not surprising, but another chip of control taken away from the data owner.
Storage Considerations for Virtual Desktops, Part 2: Sizing your VDI storage to ensure optimal performance
This is a sponsored guest post by Vikram Belapurkar, a solutions marketing manager at Dell focused on storage virtualization and consolidation solutions.
Welcome back, folks. This is the second post in a series of three discussing storage considerations for VDI deployments. In my last post I highlighted the critical success factors for your VDI projects and how your storage infrastructure plays a central role in ensuring a successful VDI roll-out. I also mentioned that one way to meet the desired VDI performance is to increase the number of disks in your storage environment. While this can lead to overprovisioning, it is important to understand how increasing the number of disks can help you meet your performance goals. In this post I will provide a simple example that illustrates how to size storage for your VDI environment to ensure optimal performance and user experience.
Sizing storage for VDI can be especially tricky. On one hand, you need to ensure adequate performance. On the other, you are under pressure to minimize storage footprint in order to lower storage-related costs. When designing a VDI environment, it is critical that your storage does not become a bottleneck: VDI is a highly transactional workload, and its performance is bound by disk I/O. One way storage administrators can meet the I/O demand is by increasing the number of storage disks in their VDI environment. It is important to look at the peak IOPS demand that will be placed on the underlying storage and design your storage to handle it. There is no simple formula for assessing your IOPS needs; they vary for every organization depending on desktop user types and boot and login patterns. I will take a simple example here and work through identifying the peak IOPS needs, and from there the amount of storage required to support those IOPS.
VDI Storage Sizing Exercise
Let us say that we are designing a VDI environment for 100 desktops. We estimate that about 80% of those desktops will boot simultaneously in the morning and then stay online for the rest of the day. The remaining 20% will boot and operate during the night.
Let us assume that each desktop produces a peak of 25 IOPS while booting up, then an average of about 5 IOPS during steady-state operation before logging off. The logoff operation creates a peak of 15 IOPS per desktop.
We also estimate that we will grow our VDI environment by up to 50% in the next 5 years and we are designing the VDI environment to support this growth.
As is evident, the boot operation is the most I/O-intensive, so we should size our storage to support the performance requirements during boot periods.
So the 80 desktops (80% of 100 desktops) will produce 80 × 25 = 2000 IOPS when they boot simultaneously in the morning.
To account for anticipated growth, we will need to support up to 50% more IOPS over the next 5 years as the VDI environment grows. So we need to design our storage to support 2000 × 1.5 = 3000 IOPS to effectively address the peak IOPS requirements for the next 5 years.
Let us assume that we will be using 15K RPM SAS drives to support this VDI environment, and that each 15K SAS drive can support 150 IOPS.
Hence we will need 3000 / 150 = 20 of the 15K SAS drives in our storage to handle the peak IOPS.
This example assumes RAID 0 policy and no spare disks.
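The arithmetic above is mechanical enough to script. Here is a minimal sketch under the same assumptions (boot storm dominates, RAID 0, no spares):

```python
import math

# Sizing inputs from the example above (assumptions, not universal figures).
TOTAL_DESKTOPS = 100
BOOT_FRACTION = 0.80            # 80% of desktops boot simultaneously
BOOT_IOPS_PER_DESKTOP = 25      # peak IOPS per desktop during boot
GROWTH_FACTOR = 1.5             # plan for 50% growth over 5 years
IOPS_PER_15K_SAS_DRIVE = 150    # assumed per-drive capability

peak_iops = int(TOTAL_DESKTOPS * BOOT_FRACTION) * BOOT_IOPS_PER_DESKTOP  # 2000
design_iops = peak_iops * GROWTH_FACTOR                                  # 3000
drives = math.ceil(design_iops / IOPS_PER_15K_SAS_DRIVE)                 # 20

print(f"{peak_iops} peak IOPS, {design_iops:.0f} design IOPS, {drives} drives")
```

Swap in your own boot fractions and per-desktop IOPS figures; in practice, RAID write penalties and spare disks will push the drive count higher than this raw number.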
This is a simple example, but should help illustrate the decision process. The key is to estimate, as accurately as possible, the IOPS patterns for your specific environment. It is important to note that the IOPS patterns vary drastically between organizations. Dell offers consulting services that can help you analyze your VDI needs to meet the required performance goals.
Inadequate storage performance in your VDI can lead to a less-than-acceptable user experience. While sizing storage for your VDI, it is important to design it to deliver adequate performance. VDI storage needs to be sized to handle the peak IOPS demand placed on it by the VDI workloads. I/O patterns vary drastically from organization to organization, and a critical analysis of your end user profiles is essential to accurately estimate the peak demands placed on your storage infrastructure.
It is important to recognize that overprovisioning your storage in order to meet the performance goals is cost-inefficient. In my next and the last post in this series, I will discuss some critical features and considerations for your VDI storage that can help mitigate performance challenges while optimizing storage footprint and enabling a simple, streamlined desktop VM provisioning model. Stay tuned.
The U.S. Department of Energy cites data centers as consuming three percent of U.S. electricity, “amounting to 120 billion kilowatt hours per year, at a cost of $7.4 billion.” As data creation increases year over year, and the need for storage space and support with it, greater efficiency is the best option for IT departments moving forward. From paper usage to data center energy reduction, there’s always a way for IT to lower costs and consumption.
One recent green IT project of note is Facebook’s Open Compute Project, through which the company revealed the specs and design of its custom-built data center in Prineville, OR. The Prineville facility earned an impressive PUE rating of 1.07, versus 1.4 – 1.6 for Facebook’s leased data centers and roughly 1.5 for the national average.
For a company capable of building its own data center, the open-sourced project is a gold mine for green initiatives. For companies less capable, there is the hope that vendors will adopt some of the social network’s resource and energy-saving designs.
Other Green Options
Take advantage of technologies such as virtualization to reduce overall data center costs. Power consumption and server and CPU counts can be reduced by up to 50% in some cases.
Reuse old hardware, or wipe it clean and donate it to a recycling center for parts or to a nonprofit organization. Upgrading to the latest version or overhauling your data center doesn’t mean you have to trash the old gear. Research recycling options that can also serve as a tax write-off.
For further information on how to use virtualization to green your IT department, check out this guide from SearchServerVirtualization.