In discussions with other administrators, the question of how many hosts to put in a cluster elicits a wide range of responses. Although a number of factors are at play, the unfortunately simple answer for every installation is "it depends." Let's tour some of the reasoning behind various hosts-per-cluster configurations.
Similar generations of hardware
Basing cluster membership on similar hardware is a common and logical way to separate clusters, because VI3 does not currently support VMotion migrations between hosts with dissimilar processors. Another common configuration is to keep the cluster from the internal proof of concept separate, a split that keeps the business and internal IT comfortable with VMware virtualization. The 'pilot' equipment generally ends up in its own cluster because it takes on a less important role as new resources are added.
HA and DRS configurations and standby capacity
Depending on the granularity of the VMware HA and DRS configuration, more or fewer hosts may be required once you account for cluster separation and the number of host failures the cluster is permitted to tolerate. These settings generally make the difference of one or two additional hosts in clusters with fewer than ten hosts.
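The arithmetic behind that extra host or two can be sketched as follows. This is a back-of-envelope model, not VMware's actual HA admission-control logic; the function name, the VMs-per-host figure, and the workload numbers are illustrative assumptions.

```python
import math

def min_hosts(total_vms, vms_per_host, host_failures_tolerated):
    """Rough minimum cluster size: enough hosts to run the workload,
    plus spare hosts equal to the number of failures tolerated.
    (A simplification; real HA admission control is more involved.)"""
    hosts_for_workload = math.ceil(total_vms / vms_per_host)
    return hosts_for_workload + host_failures_tolerated

# Hypothetical cluster: 60 similarly sized VMs, ~10 VMs per host
print(min_hosts(60, 10, 1))  # tolerate 1 host failure -> 7 hosts
print(min_hosts(60, 10, 2))  # tolerate 2 host failures -> 8 hosts
```

Raising the failures-tolerated value from one to two adds exactly one host here, which matches the one-or-two-host swing seen in smaller clusters.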
Many administrators implement VI3 as part of the disaster recovery (DR) plan, and there may be a cluster with excess host capacity planned for use in a DR situation. This configuration will either mirror the primary cluster’s host capacity or be at a percentage needed to sustain the DR workload.
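The two DR sizing options above, a full mirror or a percentage of the primary workload, can be expressed as a quick calculation. The function name and the example numbers are hypothetical, chosen only to illustrate the rounding:

```python
import math

def dr_hosts(primary_hosts, dr_workload_fraction):
    """Hosts to reserve for DR: a full mirror (fraction = 1.0) or
    a percentage of the primary workload, rounded up to whole hosts."""
    return math.ceil(primary_hosts * dr_workload_fraction)

print(dr_hosts(8, 1.0))  # full mirror of an 8-host primary -> 8
print(dr_hosts(8, 0.4))  # sustain 40% of the workload -> 4 hosts
```

Rounding up matters: 40% of eight hosts is 3.2, but you cannot provision a fifth of a host, so the DR cluster needs four.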
Separating development, test, QA, and live workloads
Depending on how your organization defines the process for moving internal systems from the conceptual stages to a live workload, separate clusters can map well to these stages. Separating the workloads between clusters provides natural protection from a resource perspective and makes the process more likely to be enforced. By the same token, each cluster should be configured according to its role. For example, the development cluster should not have access to the live network segment or the live storage system.
Internal and external-facing systems
Over on SearchServerVirtualization.com, I posted a blog about putting external-facing VMs on the same hosts that hold internal workloads. Within VI3, these workloads are better suited to a separate cluster, which can lower the risk of a hypervisor vulnerability affecting an internal workload.
Funding situations
Some clusters may be configured and populated purely according to funding arrangements. Large customer projects, consulting entities, or departmental chargeback may dictate how the clusters are configured. This granular approach can create a large amount of underutilized capacity, but it may be unavoidable given the financial realities of an organization.
More powerful hosts or more hosts?
This is a tough one to call, as there are benefits either way: a small number of very capable systems or a larger number of less expensive systems. In ESX host terms, the smaller system might be a dual-socket, dual-core system with 32 GB of RAM, and the larger system a quad-socket, quad-core system with 128 GB of RAM. Both are good ESX host candidates, and each configuration carries its own upfront cost factors.
What is your strategy on cluster configuration?
As you can see, there are many approaches to this topic. VI3 is quite adaptable to most configurations, but upfront planning remains essential all the same. Share your comments below on your cluster configuration.