VMware’s ESXi product is a perfect fit for blade servers, and since most blade solutions can boot from SAN, it makes sense to use a blade center as a large ESX cluster. Remember, in aggregate, a VMware cluster is more powerful than a single server, especially if you use resource pools coupled with the Distributed Resource Scheduler (DRS) and VMotion.
I design systems around this all the time.
VMware will work on blade infrastructure, but the reality is that as you implement more VMware features and functions in the architecture, you may run out of I/O room in any blade implementation.
Take the Ethernet architecture, for example. If you are still doing traditional agent-based backups, the VMware server will need its own EtherChannel for that traffic, since you don’t want to step on your user network. You also need two EtherChannels for your management infrastructure, two to four EtherChannels for your user traffic, and another EtherChannel for FT when it becomes available. So, as you can see, we are at about six to eight EtherChannels, which means six to eight Ethernet switches in one blade chassis. And since you need all of these networks to be VMotion-aware, you also have to manage those switches.
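To make the arithmetic concrete, here is a rough tally of those per-host EtherChannel counts as a sketch. The categories and ranges come straight from the paragraph above; treat it as illustrative, not a sizing tool, since your own design will differ.

```python
# Per-host EtherChannel tally, using the (low, high) counts from the
# discussion above. Illustrative only -- adjust for your own design.
REQUIRED_CHANNELS = {
    "backup (agent-based)": (1, 1),
    "management":           (2, 2),
    "user traffic":         (2, 4),
    "FT (when available)":  (1, 1),
}

low = sum(lo for lo, hi in REQUIRED_CHANNELS.values())
high = sum(hi for lo, hi in REQUIRED_CHANNELS.values())

# Each EtherChannel implies switch ports -- and in a blade chassis,
# switch modules -- that all have to be managed and VMotion-aware.
print(f"EtherChannels needed per host: {low} to {high}")
# → EtherChannels needed per host: 6 to 8
```

That six-to-eight figure is exactly what eats the I/O room in a blade chassis, since every channel has to land on a switch module inside the enclosure.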
Your next worry on a blade chassis is the amount of supported RAM.
Depending on how large your environment is and what kind of network management infrastructure you have, a blade chassis may not be the best fit. If you have 20-100 servers to virtualize, blades may be a good choice; if your environment is larger, I would suggest another route.
Now, if you put your VMware hosts on an x3850 M2 with 24 cores and 32 available RAM slots (before you even scale that server into an x3950 M2), your I/O and memory choices are nearly limitless compared to a blade architecture.
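A quick back-of-the-envelope comparison shows why those 32 slots matter. The DIMM sizes and the eight-slot blade figure below are my assumptions for a typical two-socket blade of that era, not numbers from the discussion above:

```python
# Memory headroom sketch. DIMM_GB and blade_slots are assumptions
# (hypothetical parts), not figures from the original post.
DIMM_GB = 8        # assume 8 GB DIMMs throughout
rack_slots = 32    # x3850 M2, per the discussion above
blade_slots = 8    # typical two-socket blade of the era (assumption)

rack_gb = rack_slots * DIMM_GB
blade_gb = blade_slots * DIMM_GB

print(f"rack server: {rack_gb} GB")   # → rack server: 256 GB
print(f"blade:       {blade_gb} GB")  # → blade:       64 GB
```

With memory usually the first resource an ESX host runs out of, four times the DIMM slots translates directly into more VMs per host.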
Just my 2 cents.