Posted by: Rick Vanover
I was recently asked, “Do you have any visibility into the storage utilization you provide your virtual machines?” I stopped, thought about it, and said, “No.” However, in my situation, this is not yet a problem.
A pitfall in most enterprise server virtualization strategies is that storage is reserved up to the defined maximum, regardless of how much the virtual machine has actually written to the virtualized filesystem. For example, with a base installation of a Windows Server 2003 system, the footprint as I do my server builds will be around 5 GB. My standard build allocation is 32 GB, which makes the system only 15.6% utilized from inception. This rule of thumb applies to most servers; 32 GB is an accepted footprint per system for a standard build.
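That utilization figure is just written-versus-allocated arithmetic. Here is a minimal sketch; the function name is my own, and the 5 GB / 32 GB inputs are the example figures from the standard build above:

```python
def utilization_pct(written_gb: float, allocated_gb: float) -> float:
    """Percentage of an allocated virtual disk that has actually been written."""
    return round(written_gb / allocated_gb * 100, 1)

# Standard build example: ~5 GB footprint on a 32 GB allocation.
print(utilization_pct(5, 32))  # 15.6
```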
Excluding backend storage virtualization and de-duplication strategies, what about systems that have a storage footprint larger than 32 GB? Well, luckily we’ve been down this path before:
The storage is the storage, virtual or physical.
Managing the percentage of utilization for shared storage should be a task of continuing diligence. While I don’t (yet) have a large number of virtual servers with a footprint above the standard build, those systems face the same battles we have fought for years with general-purpose servers. As an example, take a main file and print server that occupies 2 TB on a general-purpose server: from the storage perspective, it will be about 2 TB on a virtual server as well. For large storage footprints using iSCSI or storage-area network (SAN) technologies, the difference in configuration is minimal.
However, how do we address the first question about under-utilized storage footprints on virtualized systems? Is it best to look only at operating system metrics? That may be an adequate solution for each individual operating system, but the aggregation will pull from different sources and output formats. What are you doing to address storage utilization when you are not using storage virtualization?
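For what it’s worth, one minimal OS-level approach (a sketch, not a monitoring product) is to poll each guest’s filesystems with a standard library call and normalize the numbers into one format before aggregating them centrally. The mount point below is just an example; the function and its output layout are my own:

```python
import shutil

def report(mount_points):
    """Return (path, used_gb, total_gb, pct) tuples in one uniform format."""
    gb = 1024 ** 3
    rows = []
    for path in mount_points:
        usage = shutil.disk_usage(path)  # named tuple: total, used, free
        rows.append((path, usage.used / gb, usage.total / gb,
                     round(usage.used / usage.total * 100, 1)))
    return rows

# Run on each guest and ship the rows to one place; '/' is just an example.
for path, used, total, pct in report(["/"]):
    print(f"{path}: {used:.1f} GB of {total:.1f} GB ({pct}%)")
```

Because every guest emits the same tuple shape regardless of operating system, the aggregation problem from different sources largely disappears.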