RTFM Education – Virtualization, VMware, Citrix

Jul 27 2011   10:07AM GMT

Two Days with Dell in Nashua, New Hampshire

Michelle Laverick

 

For the first time in my life, I got to turn left when boarding a plane. I actually turned right in an attempt to locate my seat, not realising I'd been upgraded to First Class on an internal flight. OK, it's hardly business or first on a transatlantic, but I took a photo to record the strange and wonderful experience – I hope to get more of this treatment from United Airlines in the future. One day I will get my 1K stripes!

 

Dell arranged for a limo driver to pick me up from Boston, MA and take me to Nashua, NH. I was way too scared to pick up a hire car and try to navigate myself out of the airport. Those crazy Americans drive on the wrong side of the road, don't you know. My driver, Bob, turned out to be a really interesting guy. In a previous life he was a skydiver who did over 4,000 jumps and became an instructor. He even met the British parachute team "The Red Devils", who he said were the craziest guys he'd met – they invented new tricks to teach the instructors. Bob said the Devils were of that age where you think you're immortal. Anyway, I tried not to behave like a rockstar – there were no wrap-around specs, class-A drugs or groupies in my limo…

This week I spent my first two days in the US with Dell over at their facility in Nashua, New Hampshire. Over the last six months I've grown quite close to the Dell EqualLogic team that's based out there. They reached out to me late last year as I started to ramp up to write the 3rd edition of the VMware SRM book for VMware Press. Anyway, the first day was mostly stuff I could write about, whereas the second day was pretty much futures stuff that will be under embargo for the next couple of weeks or months, depending on the release schedule.

Me and the Dell EqualLogic team near the briefing center at Nashua. I made a joke that the sign above our heads made it sound like we were the modern-day equivalent of the seven dwarfs. Will's "Easy to Use", Dylan's "Flexible", David's "Data Proven" and I'm the dwarf called "Scalable". To my left are William Urban and Dylan Locsin, and to my right is David Glynn. You can follow them on twitter @VirtWillU and @d_glynn

The first day was really useful to me as, in the main, it was directly related to some of the issues associated with the SRM book, and a project that I hope to start at the back end of Q3 and into Q4. I learnt a lot of stuff – in fact some of that stuff was VMware related. It's funny how no matter how much time you spend with a technology (at times to the exclusion of all others!) you still learn something new each day. Some of that stuff was Dell specific, and some of it would apply to any VMware shop on any storage.

Fast Failback on Dell EqualLogic

So first up, I learned a lot more about how the "fast failback" feature works in EqualLogic. This is an option you can enable on a volume, and it relates specifically to what happens when you carry out a failback operation, either manually or with SRM. Basically, when you carry out a fast failback of a volume that has been living in the recovery site for a while, all that needs to be copied back are the differences that have accrued during the outage. It speeds up the process of failback for obvious reasons. I asked why this option wasn't a default for replication, but it quickly became clear: if you have a lot of volumes with a lot of change, then you could potentially run out of free space for those deltas. I really didn't consider the dangers when I was writing up the Dell EqualLogic section for the SRM book. So I've made a note of this, ready to make it 100% clear when my chapters come back from the copy-edit process.
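To get a feel for how quickly those deltas can stack up, here's a crude back-of-envelope sketch in PowerShell – the volume count, change rate and outage length are all made-up numbers of mine, not anything Dell quoted to me:

# Rough estimate (not an EqualLogic tool) of the delta space that could accrue
# while volumes are running at the recovery site with fast failback enabled.
$volumes        = 20      # number of replicated volumes (assumed)
$volumeSizeGB   = 500     # average volume size in GB (assumed)
$dailyChangePct = 0.05    # 5% of each volume changes per day (assumed)
$outageDays     = 4       # how long you run at the recovery site

$deltaPerVolumeGB = $volumeSizeGB * $dailyChangePct * $outageDays
$totalDeltaGB     = $deltaPerVolumeGB * $volumes

"Delta per volume : {0} GB" -f $deltaPerVolumeGB   # 100 GB per volume
"Total delta space: {0} GB" -f $totalDeltaGB       # 2000 GB of free space needed for the deltas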

I've also got another issue to follow through on – something I need to verify in my own environment, and as a consequence do an update to the book. That concerns the new "reprotect" feature. In case you don't know, a reprotect inverts the replication path back from the DR location to the Production location, ready for a failback recovery plan to be executed. It's a great new feature from VMware, and it massively reduces the leg work required to prepare for a failback. Now, the question is this – does the reprotect process just send a one-off synch from the recovery site to the protected site? If so, you could run the risk of the following: you run the reprotect, wait for the synch to complete, but then you walk away and don't run the recovery plan to failback for four days. If the schedule of replication isn't re-established, you run the risk of potentially forcing a synch of four days' worth of data prior to the recovery. The reprotect process has two synchs as the plan is run: a synch with the VMs powered on, and a synch when the VMs have been gracefully powered off. This double-synch process is meant to ensure that your VMs, when they are returned to the protected site, are in exactly the same state as recovered VMs that have been in use for hours, days, weeks, or potentially months. So I need to do two things – validate this logic with the SRM team, and validate how the different storage vendors deal with the replication AFTER the reprotect process….

Using CHAP instead of IQN….

The next tidbit of information I got from the Dell EqualLogic team concerned the best method of allocating iSCSI volumes to ESX. Partly through ignorance and partly because of limited experience, I've been missing out on an easy way to allocate iSCSI volumes to ESX. I'm not sure if this new (to me) way of doing things will apply to all vendors, but it certainly works with EqualLogic. Here's my ass-u-m-ption. I assumed that you HAD to use an IQN to allocate an iSCSI volume to a host, and that CHAP was merely yet another level of security you add to the IQN access-control. Most of the people I've met don't bother with CHAP; they see it as an additional level of security they don't need. Indeed, in early versions of SRM, if you enabled CHAP on the array and host, recoveries would fail. Why? Because the SRA (Storage Replication Adapter) that backs SRM wasn't CHAP aware. Literally, there was no way to configure the CHAP "shared secret" in the SRA. I assume customers who really needed CHAP found a way to make this work with their storage vendor – after all, an SRA is just a bunch of scripts in most cases.

Anyway, so much for the background – here's what I learned. I could dispense with the IQN altogether, and use CHAP ONLY as the authentication method. This way I don't have to edit the ACL on the volume every time I allocate the volume. Let's say you have 10 hosts and 10 volumes; in some systems that would be 100 ACL entries in total – 10 for each and every volume – and what if the host had a combo of IQNs at the physical level as well as a software initiator? So the way the CHAP method works is to set a CHAP label which is applied to the volumes – say "Cluster1". Any host you add to Cluster1 on vSphere is given the CHAP credentials for "Cluster1", with no need to either set or reconfigure the host's IQN, or add that IQN in a host-by-host, volume-by-volume way. Simplez. When I learned this I got a big fat stupid smile on my face because I could see how it would simplify the ACL process – I think it's even easier than doing an ACL by the IP range of VMkernel ports. The next thing I need to work out is whether other iSCSI vendors support a similar method….
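For those who like to script this sort of thing, here's a minimal PowerCLI sketch of how the per-cluster CHAP setting might be pushed to each host's software iSCSI initiator. The vCenter name, cluster name and secret are obviously made up, and I'd sanity-check the Set-VMHostHba CHAP parameters in your own lab before relying on it:

# Assumptions: software iSCSI is already enabled on each host, and a matching
# CHAP account ("Cluster1") exists on the EqualLogic group.
Connect-VIServer -Server vcenter.lab.local      # hypothetical vCenter

$chapName   = "Cluster1"
$chapSecret = "SomeSharedSecret"                # must match the secret on the array

Get-Cluster "Cluster1" | Get-VMHost | ForEach-Object {
    # Find the software iSCSI adapter on this host...
    $hba = Get-VMHostHba -VMHost $_ -Type IScsi | Where-Object { $_.Model -match "Software" }
    # ...and require CHAP with the cluster-wide credentials
    # (check Get-Help Set-VMHostHba for the exact CHAP parameters in your PowerCLI build)
    Set-VMHostHba -IScsiHba $hba -ChapType Required -ChapName $chapName -ChapPassword $chapSecret
}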

The ESX iSCSI Initiator and Static Entries

The next bit of info I learned was more to do with ESX than EqualLogic. This was a case where this VMware vExpert got to learn something about VMware ESX that I didn't know. It turns out the functionality of the iSCSI initiator concerning "dynamic" and "static" entries changed some time ago – a fact that went under my radar. Many moons ago that "static" tab was just for the hardware iSCSI initiators. That's no longer true. What happens now on a rescan is that the initiator uses the "dynamic" entry to interrogate the array, and then automagically populates the static list. This improves the speed of rescans, and I think it helps in the ESX reboot process, as the host mounts the VMFS volumes based on this autogenerated static list. Apparently, this has been the case for some time – it isn't an ESX 5 feature. Knowing this explains some SRM funkiness from the past.
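If you want to see this for yourself, a quick PowerCLI sketch like the one below will list the dynamic (SendTargets) entry you configured alongside the static entries the initiator has auto-populated – the host name is made up:

# Compare what you typed in (the group/discovery address) with what the rescan
# discovered and pinned as static entries.
$hba = Get-VMHost "esx01.lab.local" | Get-VMHostHba -Type IScsi |
       Where-Object { $_.Model -match "Software" }

Get-IScsiHbaTarget -IScsiHba $hba -Type Send     # the "dynamic" discovery address
Get-IScsiHbaTarget -IScsiHba $hba -Type Static   # the auto-populated static list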

In the past it was common, after a test of a recovery plan, for a snapshot of an iSCSI volume to hang around in the datastore view and elsewhere for some time. This happened for a couple of reasons. Firstly, back then the TCP sessions were being handled differently, and it was difficult for these temporary snapshots to be easily removed by the ESX iSCSI initiator – sometimes it would take up to 10 rescans before one would be removed. Secondly, the SRA's job was also to remove these automatically discovered static entries. Not all of them did. Chatting with the guys from EqualLogic, they explained they used to get these ghostly iSCSI snapshots left behind after demos, and they had to go through quite a convoluted process to eventually clear them – ready for the next demo. As for me, I used to just wait until the sessions to these temporary iSCSI snapshots disappeared of their own accord. Now, the good news is that this has all been fixed in ESX 5, and I believe also in ESX 4.1. The iSCSI stack has changed, as have the SRAs, so this situation should never happen again. For me that means a section of my book can be highlighted and deleted – because the issue has been resolved. I must admit I hadn't seen this issue for some time, but I assumed I'd just got lucky. And, ever determined to show things working and not working in equal measure, I was keeping this experience in the book. Now I know I can get rid of it – problem solved.

MEM and ASM

In the afternoon we spent some time with the EqualLogic guys on how their architecture works – groups, pools and members – and then linked that back to their multi-pathing technology and their HIT-VE (Host Integration Tools VMware Edition). I've done a little work with the EqualLogic plug-ins, as I have with the NetApp VSC and EMC VSI, but I have yet to look at their multi-pathing set-up for iSCSI (something I have done with EMC PowerPath). I've also covered the HIT-VE capacity to create new datastores for ESX, and to create new virtual desktop replicas (something these storage vendors' plug-ins all share). But I rather shied away from going into detail on their capacity to do smart copies and manage replicas from vCenter. I was a bit wary that a little knowledge can be a dangerous thing – and given how many folks follow my blog, I didn't want to encourage the use of technology that I myself wasn't 100% comfortable with. Anyway, after these sessions I think I've got enough info to take that on. So probably when this SRM book is put to bed, I will come back to the next generation of these plug-ins for vSphere 5 – and cover the bits I skirted over and try to tease out any new features or functionality.

One issue I did raise with EqualLogic (as I do with the other storage vendors) is that however wonderful these plug-ins are, the average VMware Admin in a large company probably couldn't get them approved by the storage team. Part of that is politics – storage guys just don't like the idea of VMware guys provisioning their own storage. Some of it is RBAC (role-based access controls). A lot of storage vendors don't have very robust permissions models that allow you to restrict access to storage – such that only the VMware guy can touch the RAID groups, aggregates or pools that contain their stuff. Even when they can, and even when they can convince their storage guys that these plug-ins are a good idea, the way the plug-ins currently work is on a shared-user account model. So it's not in any way linked to the login to vCenter. In fact, multiple VMware Admins could log in and provision a new vSphere datastore (NFS, iSCSI, FC), and they would all use the same credentials in the plug-in. Of course that causes problems with track and trace, and audit trails. The problems don't stop there – where your storage vendor allows you to manage replicas and snapshots from their vCenter plug-in, someone else could be rolling back your environment – and you would have no way of stopping that or hunting down the offender.

The ironic thing is that if your storage vendor supports PowerShell, you could actually be authenticating to vCenter via PowerCLI and authenticating to your storage array with the same credentials. So the integration is kind of there at a PowerShell level, but not at the vCenter level. Anyway, I guess what's needed is some kind of improved API by which vCenter credentials could be passed through to 3rd-party plug-ins which are marked as being "trusted". Then we would have a per-user authentication token that could be used in ACLs and filters elsewhere. Until that happens, I have a feeling that storage vendors will need to work out methods of their own. After all, no one really wants to give away their root, nasadmin, or grpadmin account details to another…
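Just to illustrate what I mean by "kind of there at a PowerShell level" – the same credential object could drive both connections today. Note that Connect-ArrayGroup below is a made-up stand-in, not a real vendor cmdlet; substitute whatever your storage vendor actually ships:

# Illustrative only: one credential re-used for vCenter (PowerCLI) and the
# array's PowerShell tooling. Connect-ArrayGroup is hypothetical.
$cred = Get-Credential                                          # e.g. an AD account
Connect-VIServer   -Server vcenter.lab.local  -Credential $cred
Connect-ArrayGroup -GroupAddress 10.0.0.10    -Credential $cred  # hypothetical vendor cmdlet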

Replication and the WAN with Magi Kapoor (Technical Marketing Sr. Advisor)

Dell EqualLogic Auto-Replication: Best Practices and Sizing Guide

Later in the afternoon we did two sessions: one on replication and its impact on the WAN, and another about storage, VDI and the use of hybrid arrays that have a combo of SSD and HDD. Ostensibly this content was similar to the presentations done at the Dell Storage Forum (an event that brought together the different storage platforms they have) earlier this year, and that content was built around some internal tests that Dell carried out with their own equipment and documented in their white papers. So this stuff is firmly in the public domain. In their tests they isolated various variables and measured their impact on replication. I think for the most part this could be applied to any environment – because the key constraint (the network) applies more or less evenly to all vendors. Firstly, Dell did not see a significant difference in the performance of replication based on the RAID level selected. There was a difference between the different levels, but the variances were in the sub-5% range, which makes them insignificant. Secondly, if you have multiple volumes being replicated (say 100GB worth of data changes) there was little difference to be seen between 10Gbps and 1Gbps pipes, but once you went down to the levels of bandwidth experienced with OC3 it became significant – which is kind of understandable. Thirdly, if you have multiple EqualLogic members in multiple pools, there wasn't a huge difference unless you got down to OC3 levels of bandwidth…

Here's the interesting stuff: latency and lost packets. Lost packets make a HUGE difference to the time taken to complete a replication cycle. Even a packet loss of only 1% was, in some cases, capable of doubling the time needed to complete the cycle. The same was true of latency. The evidence proved something I've been saying for a while: when it comes to replication, the size of the pipe is only one consideration – just as with the experience of virtual desktops, latency and lost packets are not only equally important, but arguably more important. Excessive latency and lost packets can be enough to degrade the quality of the service to such a degree that the extra bandwidth (and the extra cost associated with getting that bandwidth) makes no difference whatsoever. I think this will have a big impact on people considering stretched clusters – as well as conventional replication with something like SRM.
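If you want a feel for why loss hurts so much, the well-known Mathis approximation for a single TCP stream (throughput roughly MSS divided by RTT times the square root of the loss rate) gives a rough ceiling. The numbers in this little PowerShell sketch are my own assumptions, not Dell's test figures:

# Rough per-stream ceiling using the Mathis et al. approximation:
#   throughput (bytes/sec) <= MSS / (RTT * sqrt(lossRate))
$mssBytes = 1460
$rttSec   = 0.040      # 40ms round-trip between sites (assumed)
$lossRate = 0.01       # 1% packet loss (assumed)

$throughputBps  = $mssBytes / ($rttSec * [math]::Sqrt($lossRate))
$throughputMbps = ($throughputBps * 8) / 1e6

"Per-stream ceiling: {0:N1} Mbit/s" -f $throughputMbps
# ~2.9 Mbit/s - at that point a 1Gbps pipe buys a single replication
# stream almost nothing over a much smaller one.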

 

Virtual Desktops and SSD with Chhandomay Mandal (Technical Marketing Consultant, Dell)

Sizing and Best Practices for Deploying VMware View 4.5 on VMware vSphere 4.1 with Dell EqualLogic Storage

Those of you who have been looking at virtual desktops for a while will know that one of the biggest barriers to scaling up the number of virtual desktops is the IOPS capability of the array to accommodate events like AV scans, boot storms and log in/out storms. Of course, much can be done to optimize the Windows environment to reduce these penalties of virtual desktops, but there does come a point when the IOPS exceed the capabilities of the array. For this reason many array vendors (and Dell certainly isn't alone in this) have developed arrays that include a combination of SSD and HDD in an effort to take the spindle out of the equation.

Personally, when I see these systems I have trouble not wetting myself in the expectation of improved performance. You know I just hate watching status bars. But I also feel somewhat ambivalent. I love SSD. I've got it in my MBP at home – and that's fine, because as an individual I'm prepared to pay the premium for the experience. But then I start thinking about these Dilbert users, or even superhuman power-users, and I'm wondering why the heck they need SSD to make a half-dozen applications perform well. It's like, in some respects, VDI has created a rod for its own back. Let's centralize the desktop images on a series of arrays (that clearly has some benefits…) and then we go, "Oh dear, there's a lot of IOPS there… How do we fix that?… I know, let's chuck some SSD at it…". Excuse me, but I don't recall Microsoft RDS or Citrix XenApp requiring SSD to make it scale. Anyway, coming off my high horse for a moment – I am beginning to see the economic point of these arrays. Let's say you are doing virtual desktops, and you're facing scalability issues as you head out beyond the PoC and the early adopters. You have a choice: buy a truckload of spindles to absorb those IOPS, or buy fewer spindles and fewer arrays – and go SSD instead. If the choice is between one array with SSD, or three arrays with HDD only – then you do the math (there's a rough sketch of that math below). Anyway, all this leaves me wondering whether virtual desktops will EVER become the mainstream way folks get their desktop. Forget about maturity and the missed opportunity that was Windows 7 (according to Brian Madden) – it's the storage and IOPS that are the barrier. Always has been, and always will be. If it's not the space consumed (answered by linked clones, streaming or de-duplication) then it's the IOPS generated.
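Here's the kind of rough math I mean – every number below is an assumption of mine (desktop count, per-desktop IOPS, per-device IOPS), not anything from Dell's paper:

# Crude spindle-vs-SSD comparison for steady-state VDI IOPS (ignores RAID
# write penalties, boot/login storms, and capacity - it's purely illustrative).
$desktops       = 1000
$iopsPerDesktop = 10       # assumed steady-state figure per desktop
$iopsPer15kDisk = 175      # ballpark for one 15K SAS spindle
$iopsPerSsd     = 5000     # ballpark for a 2011-era SSD

$totalIops = $desktops * $iopsPerDesktop
$spindles  = [math]::Ceiling($totalIops / $iopsPer15kDisk)
$ssds      = [math]::Ceiling($totalIops / $iopsPerSsd)

"Steady-state IOPS   : $totalIops"     # 10,000
"15K spindles needed : $spindles"      # ~58 disks before you even size for storms
"SSDs needed         : $ssds"          # a couple of SSDs in a single array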

Incidentally, this is just me musing – it's not a line from Dell. One interesting aspect of the study was the sizing implications for RAM on the host. In the study they ran out of physical RAM on the hosts before they ran out of IOPS on the array. In fact the array was only 30% utilized – leaving plenty of spare capacity from an IOPS perspective for increasing the workload. This is a classic example of a sizing or bottleneck issue. Once you have SSD, the bottleneck shifts away from the array – and returns to the ESX hosts. Have you bought enough RAM on the ESX hosts such that you saturate both the server and the array assets – with as many VMs as possible without degrading performance?

Summary

There's plenty of other stuff that I want to look at that came out of my meeting with Dell. A quick look at SAN HQ tells me that I need to be talking more about the various vendors' tools for monitoring replication with respect to SRM. There's also some stuff around vStart and VIS that I would like to learn more about. I suggested a ride-along with Dell at their next vStart deployment in the UK, as I would love to see how one of those plays out…

Well, that's about it from me for now. At the moment I'm on a plane to Indy to speak at the Regional VMUG there tomorrow – and I have another article to write for TechTarget before we touch down. Something tells me I will be working on that on the flight back to the UK on Friday. I hope to do more of these big-vendor-style briefings in the future. I've done a couple in the UK – mainly with VMware, and one with EMC at around the time of the vSphere 4 launch. Perhaps with me coming across to the US nearly one week every month this year (and perhaps next), they will become part of my pilgrimage around the States….
