Posted by: Beth Pariseau
Auto Deploy, vCenter, VMware
One example of the growing criticality of vCenter comes with vSphere 5’s Auto Deploy feature. In certain disaster scenarios, its dependency on vCenter can send users down a ‘rabbit hole’ of availability issues if the environment is not designed correctly, experts say.
Auto Deploy can be used to deliver host configuration information across the network as ESXi hosts boot, while an Auto Deploy server manages state information for each host. It stands to be most appealing in large environments where rapid provisioning of new hosts is a must.
Consider an environment where vCenter, the vCenter database and the Auto Deploy server are all virtualized in the same vCenter datacenter, and all hosts and VMs are totally reliant on Auto Deploy for their state information. Auto Deploy cannot set up vSphere Distributed Switches (vDS) if vCenter Server is unavailable, and a host that can't connect to vCenter Server remains in maintenance mode, where its virtual machines cannot start.
In other words, according to a recent blog post by Forbes Guthrie, a vExpert and infrastructure architect specializing in virtualization:
…here’s the scenario. Everything powers off, all at once. You hit the power button on the servers. The hosts boot up, but stay in Maintenance Mode because they can’t hit the vCenter VM or Auto Deploy VM for their Host Profile. In Maintenance Mode the VMs won’t power on. The vDS switch cannot be created. You can’t power on your vCenter VM. You can’t power on your Auto Deploy VM.
There are a lot of “ifs” necessary to create this scenario. However, Guthrie wrote in a separate email, “Auto Deploy is best suited to large and rapidly changing environments, and it is just those sorts of progressive designs that are likely to virtualize their management servers such as a vCenter, a vCenter database and an Auto Deploy server.”
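The scenario Guthrie describes is, at bottom, a circular dependency. It can be sketched as a small dependency graph; the following Python snippet is a toy model (the node names are illustrative, not VMware identifiers or APIs) that uses a depth-first search to show why nothing in such a design can come up after a full power outage:

```python
# Toy model of the Auto Deploy boot-dependency scenario.
# An edge a -> b means "a cannot finish booting until b is available".

def has_cycle(deps):
    """Return True if the dependency graph contains a cycle (3-color DFS)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in deps}

    def visit(node):
        color[node] = GRAY
        for dep in deps.get(node, []):
            if color.get(dep, WHITE) == GRAY:
                return True          # back edge: cycle found
            if color.get(dep, WHITE) == WHITE and visit(dep):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in deps)

# Everything virtualized in the same cluster: hosts need the vCenter and
# Auto Deploy VMs for their image and host profile, but those VMs can only
# power on once a host is out of maintenance mode.
deps = {
    "esxi-host":     ["vcenter-vm", "autodeploy-vm"],
    "vcenter-vm":    ["esxi-host"],
    "autodeploy-vm": ["esxi-host"],
}
print(has_cycle(deps))  # -> True: no valid power-on order exists
```

Break any one of those edges, and the deadlock disappears, which is exactly what the design approaches below aim to do.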
One way to design around the potential availability issue is to have a separate management cluster, which could be restarted prior to the rest of the environment to ensure the availability of vCenter. “Currently, most VMware customers who are considering Auto Deploy are likely to be large enough that they can absorb the overhead associated with an additional single-purpose cluster,” Guthrie wrote in the email. A separate management cluster running on dedicated hardware can also protect vCenter from performance issues caused by contention from other workloads.
The downside of running a separate management cluster is increased cost: dedicated hardware, plus the ongoing management of multiple vSphere environments.
A remote secondary Auto Deploy instance, or a remote vCenter instance connected with vCenter Heartbeat, is another potential design approach that can mitigate a circular dependency scenario.
Ephemeral ports on the vDS can ensure vCenter always has connectivity to the network, Webster suggested. “[Ephemeral ports are] an option that you can use with a distributed switch when you’ve only got two 10-Gig NICs on the host, and that’s really where most of these problems potentially come in,” he said. “If you’ve got lots of NICs on your hosts, you’re probably going to have a few more vSwitches around, and you might still have your vCenter server connected on a standard switch port group, so you’re not going to run into the dependencies that exist with the VMware distributed switches.
“You’ve also got the option, of course, of using the Cisco Nexus 1000V, which doesn’t have any of these dependencies either,” Webster added.
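Webster's point can be illustrated with another toy dependency model: once the vCenter VM sits on a standard vSwitch port group or an ephemeral vDS port group, hosts no longer need a running vCenter just to bring networking up, the graph becomes acyclic, and a valid power-on order exists. A hypothetical Python sketch (node names are illustrative only):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# With vCenter reachable via a standard switch (or an ephemeral port group),
# the host can boot on its own. Each node maps to its prerequisites.
deps = {
    "vcenter-vm":    ["esxi-host"],   # vCenter runs on a booted host...
    "autodeploy-vm": ["esxi-host"],
    "esxi-host":     [],              # ...but the host no longer needs vCenter
}

# static_order() yields prerequisites before their dependents,
# i.e. a workable cold-start power-on sequence.
order = list(TopologicalSorter(deps).static_order())
print(order)  # esxi-host first, then the management VMs
```

The same check that flagged a cycle in the all-virtualized design succeeds here, which is the whole argument for keeping at least one path to vCenter that does not itself depend on vCenter.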