I'm glad I came across this question because it reminded me to document and review the steps we took. Most of the work was verifying that existing systems were fully functional before the storm hit: confirming that all off-site backups had run successfully, that a live copy of any important documentation was hosted off-site and up to date, and that temperature sensors and reporting tools were working, with alert thresholds adjusted to allow more lead time for action given the extra travel time we'd need to reach the site.
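For illustration, here's a minimal sketch of the kind of backup freshness check that implies. The mount points and age threshold are hypothetical placeholders, not our actual tooling:

```python
#!/usr/bin/env python3
"""Pre-storm sanity check: confirm off-site backup copies are fresh.

All paths and the age threshold are hypothetical -- adjust for your
own backup layout and schedule.
"""
import sys
import time
from pathlib import Path

MAX_AGE_HOURS = 24  # a daily backup older than this is a red flag
BACKUP_DIRS = [     # hypothetical mount points for the off-site copies
    Path("/mnt/offsite/fileserver"),
    Path("/mnt/offsite/mail"),
    Path("/mnt/offsite/docs"),
]

def newest_mtime(root: Path) -> float:
    """Most recent modification time under root, or 0 if missing/empty."""
    if not root.is_dir():
        return 0.0
    return max((p.stat().st_mtime for p in root.rglob("*") if p.is_file()),
               default=0.0)

stale = []
for d in BACKUP_DIRS:
    age_hours = (time.time() - newest_mtime(d)) / 3600
    status = "OK" if age_hours <= MAX_AGE_HOURS else "STALE"
    print(f"{d}: newest file is {age_hours:.1f}h old [{status}]")
    if status == "STALE":
        stale.append(d)

sys.exit(1 if stale else 0)  # nonzero exit lets a scheduled job page someone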
Being a small department, we were able to simply talk through things such as who could be where, and do what, if a major problem hit; in a larger organization I would have documented it.
It's important to walk through the process of redirecting traffic if a location goes down. We have four main sites that normally funnel all traffic through the main office; if one goes down, that traffic has to be rerouted elsewhere to keep everything functional. Things to note: how much time do you have to complete the changeover if it comes to that, is the alternate connection you rely on actually functioning properly (a quick test like the sketch below is cheap insurance), and are hands-on "touch" services needed to make the switch?
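On that second point, a minimal Python sketch that confirms the alternate path is usable before you need it. The hostnames and ports here are hypothetical placeholders:

```python
#!/usr/bin/env python3
"""Verify the alternate path is actually usable before you need it.

Hostnames and ports are hypothetical placeholders -- point these at a
router or service reachable only via your backup circuit.
"""
import socket
import time

CHECKS = [
    ("backup circuit gateway", "gw-backup.example.com", 22),
    ("site B via backup path", "siteb-edge.example.com", 443),
]

for name, host, port in CHECKS:
    start = time.monotonic()
    try:
        # TCP connect over the backup path; latency is a rough health signal
        with socket.create_connection((host, port), timeout=5):
            ms = (time.monotonic() - start) * 1000
            print(f"{name}: reachable in {ms:.0f} ms")
    except OSError as exc:
        print(f"{name}: FAILED ({exc})")
```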
Also, for anyone working with end users: any employees whose passwords were about to expire were sent additional reminder emails to head off extra work later, reminders of phone system features that would help users work from home seamlessly were sent out, and users were told to speak to IT before the storm hit if they had VPN issues. Additional laptops were available to lend out to critical personnel.
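The password reminders can be as simple as the sketch below. The user list, SMTP relay, and addresses are placeholders; in practice you'd pull expiry dates from your directory instead of hard-coding them:

```python
#!/usr/bin/env python3
"""Nudge users whose passwords expire during the storm window.

The user list, SMTP host, and addresses are hypothetical stand-ins.
"""
import smtplib
from datetime import date, timedelta
from email.message import EmailMessage

STORM_WINDOW_DAYS = 7
SMTP_HOST = "mail.example.com"  # placeholder relay

# (email, password expiry date) -- stand-in for a directory query
USERS = [
    ("alice@example.com", date(2025, 9, 3)),
    ("bob@example.com", date(2025, 10, 20)),
]

cutoff = date.today() + timedelta(days=STORM_WINDOW_DAYS)

with smtplib.SMTP(SMTP_HOST) as smtp:
    for addr, expires in USERS:
        if expires > cutoff:
            continue  # plenty of time; skip
        msg = EmailMessage()
        msg["Subject"] = "Please change your password before the storm"
        msg["From"] = "it-helpdesk@example.com"
        msg["To"] = addr
        msg.set_content(
            f"Your password expires {expires}. Please change it now so "
            "you aren't locked out while working remotely."
        )
        smtp.send_message(msg)
```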
The most important task, in my mind, is making sure everything is running smoothly before the storm. You don't want to be so busy preparing for disaster that you neglect the critical server that has been crashing, have an issue with that server when touch services aren't available, and see all your other disaster planning come to nothing because of an unrelated issue.
We are somewhat limited in reactive options. Our fabric is our fabric, so we are limited in what resources we can add. We do have virtualization resources that can be used to deploy servers and services as needed, and we can leverage some unused resources to accommodate traffic spikes at the edge. I think planning ahead and provisioning for worst-case scenarios lets you "right size" capacity for emergency situations.
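The "right sizing" is really just headroom math; a toy sketch with made-up numbers to show the idea:

```python
#!/usr/bin/env python3
"""Back-of-the-envelope capacity check for an emergency traffic spike.

All numbers are invented for illustration -- plug in your own baselines.
"""
baseline_mbps = 400          # typical busy-hour edge traffic
spike_multiplier = 2.5       # assumed worst case: most staff remote at once
link_capacity_mbps = 1000    # total edge capacity, including unused links

expected_peak = baseline_mbps * spike_multiplier
headroom = link_capacity_mbps - expected_peak

print(f"Expected emergency peak: {expected_peak:.0f} Mb/s")
if headroom >= 0:
    print(f"OK: {headroom:.0f} Mb/s of headroom remains")
else:
    print(f"SHORT by {-headroom:.0f} Mb/s -- plan extra capacity now")
```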