Posted by: Michael Khanin
Tags: ESX Server 3.5 Update 3, Virtualization, VMware, VMware ESX Server
- What’s New
- Prior Releases of VMware Infrastructure 3
- Before You Begin
- Installation and Upgrade
- Patches Contained in this Release
- Resolved Issues
- Known Issues
- Using Language Packs on the ESX Server Host
Note: In many public documents, VMware ESX Server 3.5 is now known as VMware ESX 3.5, and VMware ESX Server 3i version 3.5 as VMware ESXi 3.5. These release notes continue to use the previous convention to match the product interfaces and documentation. A future release will update the product names.
The following information provides highlights of some of the enhancements available in this release of VMware Infrastructure 3:
Note: Not all combinations of VirtualCenter and ESX Server versions are supported, and not all of the highlighted features are available unless you are using VirtualCenter 2.5 Update 3 with ESX Server 3.5 Update 3. See the ESX Server, VirtualCenter, and Virtual Infrastructure Client Compatibility Matrixes for more information on compatibility.
New features and supported I/O devices:
- Increase in vCPU per Core Limit — The limit on vCPUs per core has been raised from 8 (or 11 for VDI workloads) to 20. This change raises only the supported limit; it does not include additional performance optimizations. Raising the limit gives users more flexibility to configure systems for specific workloads and to take full advantage of increasingly faster processors. The achievable number of vCPUs per core depends on the workload and the specifics of the hardware. Most deployments are expected to remain within the previous range of 8-11 vCPUs per core. For more information, see VI3 Performance Best Practices and Benchmarking Guidelines.
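The effect of the raised ceiling is simple arithmetic. A minimal sketch (the 8/11 and 20 figures come from this release note; the host core count is an illustrative assumption):

```python
# vCPUs-per-core limits: 8 (11 for VDI) before, 20 in ESX Server 3.5 Update 3.
OLD_LIMIT = 8
NEW_LIMIT = 20

def max_vcpus(physical_cores, vcpus_per_core_limit):
    """Upper bound on supported vCPUs for a host at a given per-core limit."""
    return physical_cores * vcpus_per_core_limit

cores = 16  # assumed example: a 4-socket, quad-core host
print(max_vcpus(cores, OLD_LIMIT))  # 128
print(max_vcpus(cores, NEW_LIMIT))  # 320
```

The bound is a supported maximum only; as the note says, achievable consolidation ratios depend on the workload and hardware.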
- HP BL495c support — This release adds support for the HP Blade Server BL495c with all Virtual Connect and I/O options, allowing 1Gb or 10Gb connections to the network (upstream) and 1Gb connections only to the servers (downstream).
- Newly Supported NICs — This release adds support for the following NICs:
- Broadcom 5716 1Gb
- Broadcom 57710 10Gb Adapters
- Broadcom 57711 10Gb Adapters at 1Gb speed only
Note: iSCSI/TOE hardware offloads available with these adapters are not supported by VMware with ESX 3.5.
- Newly Supported SATA Controllers — This release adds support for the following SATA controllers:
- Broadcom HT1000 (supported in native SATA mode only with SATA hard drives and Solid State Disk devices)
- Intel ICH-7 (supported in IDE/ATA mode only with SATA CD/DVD drives)
Note: Storing VMFS data stores on drives connected to these controllers is not supported.
- Newly Supported Guest Operating Systems — Support for the following guest operating systems has been added by VMware during the ESX 3.5 Update 3 release cycle:
- Solaris 10 U5
- Ubuntu 8.04.1
- RHEL 4.7
- Internal SAS networked storage controllers — This release adds experimental support for Intel Modular Server MFSYS25 SAS Storage Control Modules (SCMs). For known issues with this platform and workarounds, see SAS Link and Port Failovers with the Intel Modular Server Running Update 3 and Later Versions of ESX 3.5 and ESXi 3.5 (KB 1007394).
- Interrupt Coalescing (IC) for QLogic 4Gb FC HBAs — Introduced in this release, this feature reduces CPU utilization (and the CPU cost per I/O) and improves the throughput of I/O-intensive workloads by generating a single interrupt for a burst of Fibre Channel frames received within a short period of time, rather than interrupting the CPU each time a frame is received. The feature is enabled by default.
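The savings from coalescing can be sketched with a toy model (this is a conceptual illustration, not the driver's actual implementation; the arrival times and window size are invented for the example): frames that arrive within a short window of the first pending frame share one interrupt.

```python
# Toy model of interrupt coalescing: frames arriving within `window_us`
# microseconds of the frame that opened the window share one interrupt.
# All timings here are illustrative assumptions.

def count_interrupts(arrival_times_us, window_us):
    """Return the number of interrupts raised for the given frame arrivals."""
    interrupts = 0
    window_end = None
    for t in sorted(arrival_times_us):
        if window_end is None or t > window_end:
            interrupts += 1                 # first frame opens a new window
            window_end = t + window_us      # later frames ride along
    return interrupts

# A burst of 8 frames within 40 us, then a lone frame much later.
arrivals = [0, 5, 9, 14, 20, 27, 33, 40, 500]
print(len(arrivals))                    # 9 interrupts, one per frame
print(count_interrupts(arrivals, 100))  # 2 interrupts with coalescing
```

The trade-off is the usual one for coalescing schemes: fewer interrupts per burst at the cost of slightly higher latency for frames that wait inside a window.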
- Experimental Support for the VMDK Recovery Tool — This release adds experimental support for the VMDK Recovery Tool, a script intended to help customers recover VMFS/VMDK data stores after accidental deletion or physical disk corruption. For more information, see VMDK Recovery Tool (ESX 3.5 Update 3) (KB 1007243).
- Small Footprint CIM Broker — The Small Footprint CIM Broker (SFCB) has been updated to version 1.3.0.
- IBM SAN Volume Controller — SVC is now supported with the Fixed multipathing policy as well as the MRU (Most Recently Used) multipathing policy.
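The behavioral difference between the two policies can be sketched conceptually (a simplified model, not ESX Server's actual path-selection code; the path names are invented): Fixed fails back to its preferred path as soon as that path recovers, while MRU keeps using whatever working path it most recently settled on.

```python
# Simplified model of Fixed vs. MRU multipathing path selection.
# Path names and the failover sequence below are illustrative only.

def select_path(policy, paths, preferred, current, alive):
    """Pick the active path after a path-state change.

    policy    -- "fixed" or "mru"
    paths     -- all paths to the LUN, in discovery order
    preferred -- administrator-designated path (meaningful for Fixed)
    current   -- path in use before the state change
    alive     -- set of currently working paths
    """
    if policy == "fixed" and preferred in alive:
        return preferred                  # Fixed always fails back
    if current in alive:
        return current                    # MRU sticks with what works
    for p in paths:                       # otherwise, fail over
        if p in alive:
            return p
    raise RuntimeError("all paths down")

paths = ["vmhba1:0:0", "vmhba2:0:0"]
# Preferred path fails: both policies fail over to the surviving path.
print(select_path("fixed", paths, paths[0], paths[0], {paths[1]}))
print(select_path("mru",   paths, paths[0], paths[0], {paths[1]}))
# Preferred path recovers: Fixed fails back, MRU stays where it is.
print(select_path("fixed", paths, paths[0], paths[1], set(paths)))
print(select_path("mru",   paths, paths[0], paths[1], set(paths)))
```

This asymmetry is why Fixed suits arrays where one path is genuinely preferred, while MRU avoids the extra path switch on recovery.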