Posted by: Eric Siebert
Eric Siebert, ESX, Java, Java Virtual Machine, JVM, VMware, VMware ESX
VMware recently released a new white paper on best practices for running a Java virtual machine (JVM) in an ESX virtual machine. This applies to any product that runs on a JVM, including web application servers such as WebSphere, WebLogic and Tomcat. Many applications rely on these servers, and a JVM may be running inside your application without you realizing it, since the JVM process is often renamed to match the application. For example, vCenter Server includes a Windows service called VMware Infrastructure Web Access; under the hood it is a Tomcat application server running on a JVM.
The white paper covers the usual recommendations for memory, CPU and disk I/O, but I was surprised to see a whole section on timekeeping (which we will talk about later). JVMs are often memory hogs, depending on how you set the minimum and maximum heap size, and they read from and write to that memory constantly. They also tend to be heavily multi-threaded, and their disk I/O varies with the type of application they are running. A summary of the best practices for each resource is below:
• As JVMs are very memory intensive, make sure your JVM has access to physical memory at all times by using memory reservations. If a VM on an over-committed host is forced to page memory out to its swap file (.vswp), JVM performance will suffer. Set the memory reservation for a VM running a JVM equal to the amount of memory assigned to the VM. The VM won’t be able to benefit from transparent page sharing (TPS) to save memory on your host, but TPS saves little memory on a VM running a JVM anyway, since heap pages are mostly unique to that JVM.
• Make sure to give your VM enough memory based on the maximum heap size of your JVM. JVMs have minimum and maximum heap size settings and will quickly grow to the maximum. If the VM does not have enough memory assigned, the heap will not be able to grow and performance will suffer. Check your maximum heap size and allocate an additional 512 MB on top of it for Linux virtual machines (VMs) and an additional 1 GB for Windows VMs.
• Use large memory pages if they are supported by both the JVM and the guest OS. See the hyperlinked white paper for info on how to enable them in the OS; in the JVM, use the -Xlp option for IBM JVMs and -XX:+UseLargePages for Sun JVMs.
• Many JVMs run well with one vCPU, depending on how many garbage collection (GC) threads are running, and may not benefit from virtual symmetric multiprocessing. GC is the process that reclaims memory inside the JVM from objects that are no longer used. Tuning it can be tricky and relies on Java-specific resource monitoring tools to see how often collections take place. Check how many GC threads are running on your JVM and either adjust that number to match the VM’s vCPU count, or increase the number of vCPUs to match the number of GC threads. It is often best to start with one vCPU, see how the application performs, and then add another vCPU to see if it improves performance.
• Watch the disk I/O of the application running on the JVM for potential bottlenecks. A JVM that is waiting on disk writes won’t perform as well as it could.
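For reference, the reservation from the first bullet can also be expressed directly in the VM’s .vmx configuration file. A minimal sketch, assuming a Windows VM sized for a 4 GB maximum heap plus the 1 GB overhead mentioned above (values are in MB; setting the reservation through the vSphere Client is the usual route, these are just the keys it writes):

```
memSize = "5120"
sched.mem.min = "5120"
```

With `sched.mem.min` equal to `memSize`, the entire memory of the VM is backed by physical RAM and the JVM’s heap can never be pushed out to the .vswp file.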
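To see the numbers the heap-sizing and GC bullets refer to from inside the guest, the standard `java.lang.management` API will report them. A small sketch using only standard JDK APIs (the collector names printed depend on which GC your JVM is running):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class JvmSizing {
    public static void main(String[] args) {
        // Maximum heap the JVM will grow to (-Xmx); size the VM's memory
        // to this plus ~512 MB (Linux) or ~1 GB (Windows) per the post
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        // vCPUs visible to the JVM; parallel collectors size their
        // GC thread pools from this count by default
        int vcpus = Runtime.getRuntime().availableProcessors();
        System.out.println("Max heap (MB): " + maxHeapMb);
        System.out.println("vCPUs seen:    " + vcpus);
        // Per-collector GC activity, useful when deciding whether GC
        // threads and vCPUs are matched up
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                    + " collections, " + gc.getCollectionTime() + " ms");
        }
    }
}
```

Running this inside the VM is a quick sanity check that the heap ceiling and vCPU count actually match what you configured on the ESX side.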
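The large-pages bullet boils down to one launch flag per JVM vendor. A hedged sketch of the launch lines, with `app.jar` standing in for your own application (on a Linux guest the huge-page pool has to be reserved first; the sysctl value shown assumes a 2 GB heap with 2 MB pages):

```
# Linux guest: reserve huge pages before starting the JVM
sysctl -w vm.nr_hugepages=1024

# Sun JVM
java -XX:+UseLargePages -Xmx2048m -jar app.jar

# IBM JVM
java -Xlp -Xmx2048m -jar app.jar
```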
As mentioned previously, timekeeping is very important to a JVM. First, make sure the clock in your VM is synchronized using VMware Tools, W32Time or another Network Time Protocol (NTP) time source. What matters here is the effect timer interrupts have on a JVM: higher-resolution timer interrupts cause ESX to do more work on behalf of the VM than lower-resolution ones. The guest OS determines the VM’s timer interrupt rate. Most Linux guests allow you to configure the timer interrupt in the OS, but Windows guests must rely on a JVM setting. Due to a quirky bug in the JVM, the -XX:+ForceTimeHighResolution option actually has the opposite effect and lowers the timer resolution.
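You can observe the guest’s timer granularity from Java itself. A small sketch that measures the smallest step `System.currentTimeMillis()` advances by, which on a Windows guest tracks the timer interrupt period (typically around 10–15 ms by default, dropping to 1 ms when something has raised the timer rate):

```java
public class TimerGranularity {
    public static void main(String[] args) {
        long smallest = Long.MAX_VALUE;
        long last = System.currentTimeMillis();
        // Sample 50 clock ticks and record the smallest observed step;
        // busy-waiting is fine here since the run lasts under a second
        for (int ticks = 0; ticks < 50; ) {
            long now = System.currentTimeMillis();
            if (now != last) {
                smallest = Math.min(smallest, now - last);
                last = now;
                ticks++;
            }
        }
        System.out.println("Observed clock granularity: " + smallest + " ms");
    }
}
```

Running this before and after changing timer-related settings is a simple way to confirm whether the interrupt rate actually moved in the direction you expected.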
For more information, read VMware’s white paper on Java virtual machines, and be sure to check out the documents referenced at the end of it.