Replacing or upgrading a SAN is no trivial task. There are a few tried-and-true steps to take when replacing a SAN, which I’ll outline in this blog post, including one key step that will ensure a successful switch.
I recently upgraded from an HP MSA 1000 to an IBM DS3400 to improve performance and lower my overall energy costs. One reason I replaced the old SAN is that it is much cheaper to run a single 2U device than the three devices the old SAN required. In addition, I dropped from 42 drives to 12 drives while gaining storage, so minimally my SAN power costs should drop to one third of the original. Capacity grew from 2.5 TB to 3 TB, not a huge increase.
I will know next month if my energy cost reductions have been realized and will report back then.
The steps for replacing a SAN are not all that tricky, but there is a single gotcha that can be avoided with careful planning. To replace a SAN, follow these steps:
1. Plug the new SAN into your existing fabric. Luckily I had a pair of unused fibre connections and GBICs available; otherwise this would have meant another expense and a delay until the cables and GBICs arrived.
2. Find a system on which to install the management console. For the IBM DS3400 I chose my VirtualCenter and VMware Consolidated Backup (VCB) server as the SAN’s management console. There are two ways to manage the IBM DS3400: in-band over the Fibre Channel fabric, or out of band over Ethernet. For out-of-band management, even a VM would suffice, as long as it has network connectivity to the SAN’s management ports. Management software exists for both 64-bit Linux and Microsoft Windows.
3. Create the LUNs on the new SAN. This is a good chance to correct any problems you may have with the LUN configuration on the old SAN. I did a one-to-one mapping, except I slightly increased the size of the LUNs.
4. Present the LUNs to your VMware ESX host(s) and VCB server(s).
5. Rescan the storage adapters for new LUNs using the VMware Infrastructure Client (VI Client) for the first VMware ESX host. Once this is completed, you can then add as many Virtual Machine File Systems (VMFSs) as required.
6. Rescan the storage adapters for new LUNs and VMFSs using the VI Client on all the other ESX hosts.
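On ESX 3.x, the rescan in steps 5 and 6 can also be triggered from the service console instead of the VI Client. A minimal command sketch, assuming the Fibre Channel adapters are vmhba1 and vmhba2 (example names; substitute the vmhba numbers on your own host):

```shell
# Rescan each Fibre Channel HBA for new LUNs from the ESX service console.
# Adapter names below are examples only; check yours in the VI Client first.
for hba in vmhba1 vmhba2; do
    esxcfg-rescan "$hba"
done

# Then refresh the VMFS volume view so any new datastores appear:
vmkfstools -V
```

Either way works; the service console route is handy when you are scripting the rescan across several hosts.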
7. Employ Storage VMotion via the VI Client to migrate VMs from the old LUNs to the new ones. This works well if you have the patience to move the VMs one by one. If not, you can employ other measures, such as copying the VM files across while the VMs are powered off; if you do that, however, you must edit the VMX file of each migrated VM to change the location of its virtual disk files (there are scripts to do this for you), and that approach requires powering off all the VMs. Storage VMotion requires no VM downtime. Be sure to move all files off the LUNs in use.
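If you do take the cold-migration route, the VMX fix-up can be scripted with a simple substitution. A minimal sketch, assuming hypothetical datastore names old-san-lun1 and new-san-lun1 and working on a local sample file (in practice you would edit the copied VMX under /vmfs/volumes on the new datastore):

```shell
OLD_DS="old-san-lun1"   # hypothetical old datastore name
NEW_DS="new-san-lun1"   # hypothetical new datastore name
VMX="myvm.vmx"          # in practice: /vmfs/volumes/<datastore>/<vm>/<vm>.vmx

# Sample line as an absolute disk path might appear in a copied VMX file:
echo "scsi0:0.fileName = \"/vmfs/volumes/${OLD_DS}/myvm/myvm.vmdk\"" > "$VMX"

# Back up the original, then repoint every absolute datastore path
# at the new SAN's datastore.
cp "$VMX" "$VMX.bak"
sed -i "s|/vmfs/volumes/${OLD_DS}/|/vmfs/volumes/${NEW_DS}/|g" "$VMX"
```

Keep the backup until the VM has powered on cleanly from the new location.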
8. For a LUN with an RDM (mine backed a Linux file server), use Storage VMotion to move any VMDKs related to the VM, then map the new RDM to the VM. You will have to reboot the VM to complete the mapping. Next, create a new filesystem on the new RDM and mount it. Then copy all the files from the old RDM to the new one. I used the following command to copy everything from /files to /files2 (note the trailing slash on the source, which copies the directory’s contents, including dotfiles that a bare * would miss):
- rsync -ravlpog /files/ /files2/
9. Then I modified the mount point for /files within /etc/fstab to point at the new location. Finally, I powered off the VM, deleted the old RDM from it, and powered it back on, picking up the new data.
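The fstab change in step 9 is a one-line edit. A minimal sketch, assuming the old RDM appeared in the guest as /dev/sdb1 and the new one as /dev/sdc1 (both hypothetical device names; verify yours with fdisk -l before editing), and working on a local sample file rather than the real /etc/fstab:

```shell
FSTAB="fstab.sample"    # in practice: /etc/fstab

# Sample fstab entry for the old RDM (device and filesystem are examples):
echo "/dev/sdb1  /files  ext3  defaults  1 2" > "$FSTAB"

# Back up, then point the /files mount at the new RDM's device.
cp "$FSTAB" "$FSTAB.bak"
sed -i "s|^/dev/sdb1|/dev/sdc1|" "$FSTAB"
```

After the edit, a reboot (or umount/mount cycle) picks up /files from the new RDM.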
Here is the gotcha. I missed it, but it will be extremely useful for you (and me) going forward: remove the old SAN’s LUNs from each VMware ESX host before disconnecting the old SAN. If you miss this step, when you finally disconnect the old SAN the ESX hosts will constantly attempt to fail over the old LUNs, spewing massive numbers of failures into the log files. If this happens, there is no recourse but to reboot the VMware ESX hosts.
Now the SAN has been replaced. With the exception of dealing with any RDMs, it is possible to migrate to a new SAN without any downtime.