
VMware ESX Server 3.5 Update 4 Release Notes

 

VMware ESX Server 3.5 Update 4 | 30 Mar 2009 | Build 153875

Last Document Update: 13 Apr 2009


These release notes include the following topics:

Note: In many public documents, VMware ESX Server 3.5 is now known as VMware ESX 3.5, and VMware ESX Server 3i version 3.5 as VMware ESXi 3.5. These release notes continue to use the previous convention to match the product interfaces and documentation. A future release will update the product names.

What's New

Notes:

  1. Not all combinations of VirtualCenter and ESX Server versions are supported and not all of these highlighted features are available unless you are using VirtualCenter 2.5 Update 4 with ESX Server 3.5 Update 4. See the ESX Server, VirtualCenter, and VMware Infrastructure Client Compatibility Matrixes for more information on compatibility.
  2. This version of ESX Server requires a VMware Tools upgrade.

The following information provides highlights of some of the enhancements available in this release of VMware ESX Server:

Expanded Support for Enhanced vmxnet Adapter – This version of ESX Server includes an updated version of the vmxnet driver (vmxnet enhanced) for the following guest operating systems:

  • Microsoft Windows Server 2003, Standard Edition (32-bit)
  • Microsoft Windows Server 2003, Standard Edition (64-bit)
  • Microsoft Windows Server 2003, Web Edition
  • Microsoft Windows Small Business Server 2003
  • Microsoft Windows XP Professional (32-bit)

The new vmxnet version improves virtual machine networking performance and requires a VMware Tools upgrade.

Enablement of Intel Xeon Processor 5500 Series – Support for the Xeon processor 5500 series has been added. Support includes Enhanced VMotion capabilities. For additional information on previous processor families supported by Enhanced VMotion, see Enhanced VMotion Compatibility (EVC) processor support (KB 1003212).

QLogic Fibre Channel Adapter Driver Update – The driver and firmware for the QLogic Fibre Channel adapters have been updated to versions 7.08-vm66 and 4.04.06, respectively. This release provides interoperability fixes for QLogic Management Tools for FC Adapters and enhanced NPIV support.

Emulex Fibre Channel Adapter Driver Update – The driver for Emulex Fibre Channel adapters has been upgraded to version 7.4.0.40. This release provides support for the HBAnyware 4.0 Emulex management suite.

LSI megaraid_sas and mptscsi Storage Controller Driver Update – The drivers for LSI megaraid_sas and mptscsi storage controllers have been updated to versions 3.19vmw and 2.6.48.18vmw, respectively. The upgrade improves performance and enhances event handling capabilities for these two drivers.

Newly Supported Guest Operating Systems – Support for the following guest operating systems has been added specifically for this release:

  • SUSE Linux Enterprise Server 11 (32-bit and 64-bit)
  • SUSE Linux Enterprise Desktop 11 (32-bit and 64-bit)
  • Ubuntu 8.10 Desktop Edition and Server Edition (32-bit and 64-bit)
  • Windows Preinstallation Environment 2.0 (32-bit and 64-bit)

For more complete information about supported guests included in this release, see the Guest Operating System Installation Guide: http://www.vmware.com/pdf/GuestOS_guide.pdf.

Furthermore, pre-built kernel modules (PBMs) were added in this release for the following guests:

  • Ubuntu 8.10
  • Ubuntu 8.04.2

Newly Supported Management Agents – See VMware ESX Server Supported Hardware Lifecycle Management Agents for the most up-to-date information on supported management agents.

Newly Supported I/O Devices – In-box support for the following on-board processors, IO devices, and storage subsystems:

SAS Controllers and SATA Controllers:

The following SAS and SATA controllers are newly supported:

  • PMC 8011 (for SAS and SATA drives)
  • Intel ICH9
  • Intel ICH10
  • CERC 6/I SATA/SAS Integrated RAID Controller (for SAS and SATA drives)
  • HP Smart Array P700m Controller
Notes:

    1. Some limitations apply in terms of support for SATA controllers. For more information, see SATA Controller Support in ESX 3.5 (KB 1008673).
    2. Storing VMFS datastores on native SATA drives is not supported.

Network Cards: The following are newly supported network interface cards:

  • HP NC375i Integrated Quad Port Multifunction Gigabit Server Adapter
  • HP NC362i Integrated Dual port Gigabit Server Adapter
  • Intel 82598EB 10 Gigabit AT Network Connection
  • HP NC360m Dual 1 Gigabit/NC364m Quad 1 Gigabit
  • Intel Gigabit CT Desktop Adapter
  • Intel 82574L Gigabit Network Connection
  • Intel 10 Gigabit XF SR Dual Port Server Adapter
  • Intel 10 Gigabit XF SR Server Adapter
  • Intel 10 Gigabit XF LR Server Adapter
  • Intel 10 Gigabit CX4 Dual Port Server Adapter
  • Intel 10 Gigabit AF DA Dual Port Server Adapter
  • Intel 10 Gigabit AT Server Adapter
  • Intel 82598EB 10 Gigabit AT CX4 Network Connection
  • NetXtreme BCM5722 Gigabit Ethernet
  • NetXtreme BCM5755 Gigabit Ethernet
  • NetXtreme BCM5755M Gigabit Ethernet
  • NetXtreme BCM5756 Gigabit Ethernet

Expanded Support: The E1000 Intel network interface card (NIC) is now available for NetWare 5 and NetWare 6 guest operating systems.

Onboard Management Processors:

  • IBM system management processor (iBMC)

Storage Arrays:

  • Sun StorageTek 2530 SAS Array
  • Sun Storage 6580 Array
  • Sun Storage 6780 Array


Prior Releases of VMware Infrastructure 3

Features and known issues from prior releases of VMware Infrastructure 3, which include ESX Server 3.x and VirtualCenter 2.x releases, are described in the release notes for each release. To view release notes for prior releases of VMware Infrastructure 3 components, click one of the following links:


Before You Begin

ESX Server, VirtualCenter, and VMware Infrastructure Client Compatibility

The ESX Server, VirtualCenter, and VMware Infrastructure Client Compatibility Matrixes document provides details on the compatibility of current and previous versions of VMware Infrastructure 3 components, including ESX Server, VirtualCenter, and the VI Client.

Hardware Compatibility

•   Learn about hardware compatibility:

         The Hardware Compatibility Lists are now available on the Web-based Compatibility Guide at
         http://www.vmware.com/resources/compatibility. This new format is a single point of access for all VMware
         compatibility guides. The previous PDF versions will no longer be updated. The Web-based Compatibility
         Guide provides the option to search the guides, and save the search results in PDF format.

         Subscribe to the RSS feed to be notified of Compatibility Guide updates.

•   Learn about VMware Infrastructure compatibility:

         VMware Infrastructure Compatibility Matrixes (PDF)

Documentation

All the documentation for ESX Server 3.5 Update 3 applies to ESX Server 3.5 Update 4 as well. For a complete list of manuals and other documentation, see VMware Infrastructure 3 Documentation.

Installation and Upgrade

Read the Installation Guide for step-by-step guidance on installing and configuring ESX Server and VirtualCenter.

Although the installations are straightforward, several subsequent configuration steps are essential. In particular, read the following:

Upgrading or Migrating to ESX Server 3.5 Update 4

This release of ESX Server 3.5 Update 4 allows upgrades only from previously supported versions. The ESX Server 3.5 Update 4 Installer prompts you to perform an upgrade only when a previously supported version of ESX Server is found. See the Installation Guide for installation requirements.

To upgrade your ESX Server host to ESX Server 3.5 Update 4, follow one of these supported upgrade paths:

  • Tarball: supported from ESX Server 3.5 and from ESX Server 3.5 Update 1, Update 2, and Update 3 only. Not supported from ESX Server 2.5.4, 2.5.5, 3.0.1, 3.0.2, or 3.0.3.
  • ISO image: supported from ESX Server 2.5.4 and 2.5.5 (see note 1), ESX Server 3.0.1 and 3.0.2 (see note 2), ESX Server 3.0.3, ESX Server 3.5, and ESX Server 3.5 Update 1, Update 2, and Update 3.
  1. For ESX Server 2.5.1, ESX Server 2.5.2, and ESX Server 2.5.3, upgrade first to ESX Server 2.5.4 and then upgrade to ESX Server 3.5 Update 4. Alternatively, upgrade to ESX Server 3.5 and then upgrade to ESX Server 3.5 Update 4.
  2. For ESX Server 3.0.0, upgrade first to ESX Server 3.0.1 or higher, and then upgrade to ESX Server 3.5 Update 4 by using an ISO image. Alternatively, upgrade to ESX Server 3.5 and then upgrade to ESX Server 3.5 Update 4.

For more information on installation and upgrade methods, see the Upgrade Guide.

Updated RPMs and Security Fixes

For a list of RPMs updated in ESX Server 3.5 Update 4, see Updated RPMs and Security Fixes. This document does not apply to the ESX Server 3i products.

Upgrading VMware Tools

This version of ESX Server requires a VMware Tools upgrade.


Patches Contained in this Release

This release contains all patches for the ESX Server software that were released prior to the release date of this product. See the ESX Server 3.5 Patches download page or click on the name of a patch for more information on the individual patches.

ESX Server 3.5 Update 4 contains the fixes included in all of the following patches:

New Patches in Update 4 (30 Mar 2009 | Build 153875)

Previously Released Patches


Resolved Issues

This release resolves issues in the following subject areas:

Backup

  • Updated: If the catalog file is renamed, restoring the virtual machine using the vcbResAll -a option fails
    If you rename the catalog file and try to restore virtual machines by using the -a option with the vcbResAll command, VCB fails to restore and displays the following error message on the console:
    Error: Cannot read catalog file

    Workaround
    Use the vcbRestore command to restore virtual machines. For more details, refer to the section "Using the vcbRestore Utility to Restore Virtual Machines" in the Virtual Machine Backup Guide.
    Using the vcbResAll command with the -a option is not supported from this release forward.

CIM and API

  • Mandatory properties MaxReadable, NominalReading, NormalMax, NormalMin, and PollingInterval in the CIM_NumericSensor class show incorrect values
    The instances of CIM_NumericSensor have this set of properties:
    MaxReadable, NominalReading, NormalMax, NormalMin. When the actual sensor does not support values for these properties, they are displayed as 0 in CIM responses instead of being reported as unspecified.

    This issue is resolved in this release.
  • On ESX Server 3.5, starting with Update 2, instances of VMware_Account (a subclass of CIM_Account) show a SystemName property with a value of "0"
    The SystemName property should contain the BIOS UUID of the server.

    This issue is resolved in this release.
  • On IBM Athena hardware, some OMC_DiscreteSensor instances are found to have incorrect device IDs (with -1 as the last segment of the device ID)

    This issue is resolved in this release.
  • The OpenIPMI driver has been enhanced to operate the HP IPMI controller through the PCI bus and interrupts, and to support OEM message channels
    These changes are necessary for HP G6 servers, which use OEM messages to report memory problems.
  • Several false "checksum failed" messages are sent to the logs
    In certain situations, messages such as "IpmiIfcFruBoard: checksum failed ..." are sent to the logs. These messages do not indicate an actual error and can be ignored.

    This issue is resolved in this release. Such false "checksum failed" messages are no longer being logged.
  • Log is filled with "storelib Physical Device Device ID" messages
    In some cases, providers using LSI Logic drivers (megaraid2, megaraid_sas, mptscsi) repeatedly log messages such as "... cimprovagt: storelib Physical Device Device ID" and fill up the /var/log/messages log. These messages are part of debug code and should not be logged at this location.
    This release resolves this issue.

Guest Operating Systems

  • Updated: For ESX Server 3.5 Update 4, the default virtual NIC used for Solaris 10 32-bit and Solaris 9 32-bit has changed

    To provide better performance, the default virtual network interface card (NIC) for Solaris 10 32-bit and Solaris 9 32-bit has been changed from vlance (Solaris pcn driver) to Intel E1000 (Solaris e1000g driver).
  • For specific guest operating systems, BusLogic SCSI Adapter is inaccurately listed as supported.
    When using the VI client to view available adapters for the Windows Vista 32-bit or Windows Server 2008 32-bit guest operating systems, the BusLogic SCSI Adapter is listed as supported. In fact, this adapter is not recommended.

    This issue is resolved in this release. Now, when you edit the SCSI controller settings of a virtual machine using Windows Vista 32-bit or Windows Server 2008 32-bit, BusLogic is listed as "Not Recommended."
  • Prior to ESX Server 3.5 Update 4, for 64-bit SMP guests, certain multithreaded applications might experience instability.
    This issue is resolved in this release.
  • Upgrading the Japanese version of VMware Tools from ESX 3.0.2 U1 to ESX Server 3.5 U1 on the guest operating system might fail

    The upgrade fails with an error message similar to the following:

    An error occurred while the conversion. Check if the path is available

    This issue is resolved in this release.
  • PAE-enabled Windows 2000 virtual machines stop responding
    A Physical Address Extension (PAE) enabled Windows 2000 virtual machine might stop responding on reboot, or fail randomly.

    This issue is resolved in this release.
  • VMotion operation might freeze momentarily on RVI-enabled AMD hosts
    When using VMotion to migrate virtual machines on RVI-enabled AMD hosts, the VMotion operation might appear to freeze momentarily before it is completed.
    This issue is resolved in this release.

Networking

  • New: PXE booting a virtual machine fails when rebooting a guest configured to use an e1000 virtual device
    A virtual machine configured to use the e1000 virtual device fails to obtain an IP address upon reboot. This issue has been resolved in ESX Server 3.5 Update 4.
  • Prior to ESX Server 3.5 Update 4, Broadcom bnx2 NICs are sometimes not recognized as having Wake on LAN (WOL) support
    In some cases, even though an onboard Broadcom bnx2 NIC exists, ESX Server 3.5 does not recognize the NIC as being WOL enabled. This issue is resolved in this release.
  • When the watchdog timer resets a NIC using the tg3 driver, the NIC might become unresponsive
    When the watchdog timer resets a BCM5703 card with HP NC7782 Gigabit Ethernet Adapter, using the tg3 driver, the reset process might not be initialized. The NIC becomes unresponsive and the following message appears in the vmkernel log file:
    tg3_init_rings failed, err -12 for device vmnicx

    This issue is resolved in this release.

Server Configuration

  • Opening an FTP client firewall service does not enable FTP transfers
    This issue is resolved in this release: the FTP client firewall service now supports passive mode, which enables FTP transfers.
  • ESX Server host sensor information is not displayed in the VI Client UI
    The ESX Server host sensor information might not be displayed in the Configuration > Health State tab in the VI Client UI. If the information is not displayed after a few minutes, restart Pegasus using the command service pegasus restart from the service console.
  • New: In a LabManager environment, when the copy-on-write (COW) heap is nearly exhausted, virtual machines are powered on but not necessarily with all their disks attached
    This situation occurs when virtual machines are powered on by using LabManager while the COW heap is nearly exhausted. If not enough COW heap remains to power on a virtual machine with all of its disks opened, the virtual machine is still powered on, but with only as many disks as the remaining COW heap can accommodate.
    This issue is resolved in this release. Now, if all the disks attached to a particular virtual machine cannot be opened because the COW heap cannot accommodate the request, the virtual machine is not powered on.
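The corrected admission behavior can be illustrated with a toy model (a sketch only: the function and the fixed per-disk heap costs are invented for this example and are not VMkernel structures):

```python
# Toy model of the corrected power-on policy; names and costs are illustrative,
# not VMkernel structures. Each disk consumes some amount of COW heap.
def try_power_on(disk_costs, heap_remaining):
    """Return remaining heap after power-on, or None if any disk cannot fit."""
    needed = sum(disk_costs)
    if needed > heap_remaining:
        return None  # refuse to power on rather than attach only some disks
    return heap_remaining - needed

print(try_power_on([10, 10, 10], 25))  # None (old behavior attached 2 of 3 disks)
print(try_power_on([10, 10], 25))      # 5
```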

Storage

  • New: Prior to this release, changes to LUN properties or attributes on the storage array side were not handled correctly by ESX Server
    The following example demonstrates how this issue can occur: a VMFS volume is built on the Symm 7 LUN, but Symm 7 is then reconfigured to Symm 6. In this scenario, the change to the LUN is detected, but the LUN UUID is reported incorrectly.
    This issue is resolved in this release.
  • ESX Server 3.5 displays incorrect VMkernel warning messages
    ESX Server 3.5 might display incorrect VMkernel warning messages indicating that a device that does not support SCSI-3 protocol has been detected. However, the device would still work with ESX Server 3.5. The messages might look similar to the following:

    vmkwarning:Dec 12 14:57:11 [host name] vmkernel: 0:00:00:18.539 cpu5:1048)
    WARNING: ScsiUid: 550: Path 'vmhba2:C0:T0:L12' : supports ANSI version 'SCSI-2'
    (0x2). In order to be used with ESX a device must support the SCSI 3 protocol.

    This issue is resolved in this release. The fix provides for logging a warning message similar to the following in the VMkernel logs, if a non-SCSI 3 device is detected:
    Support for SCSI-2 devices may be deprecated in a future release

  • The ESX Server 3.5 Update 3 Emulex driver is, in some cases, incompatible with the Emulex fibre channel adapters

    When Emulex LPe1150 fibre channel adapter with firmware version 1.00a5 or older is used with the Emulex driver shipped in ESX Server 3.5 Update 3, the driver fails to claim the Emulex fibre channel adapter.

    This issue is resolved in this release. The Emulex driver has been updated in ESX Server 3.5 Update 4 and now claims Emulex Fibre Channel adapters irrespective of firmware version, including firmware 1.00a5 and older.
  • VMware ESX Server 3.5 Update 4 introduces an adaptive queue depth algorithm that adjusts the LUN queue depth in the VMkernel I/O stack.
    When congestion is detected, VMkernel throttles the LUN queue depth. VMkernel will attempt to gradually restore the queue depth when congestion conditions subside. By default, this algorithm is disabled. For more information about enabling this algorithm, see Controlling LUN queue depth throttling in VMware ESX for 3PAR Storage Arrays (KB 1008113).
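As a rough illustration of such a scheme (this is not the VMkernel implementation; the real algorithm and its tuning are described in KB 1008113, and the halving and +1 step sizes below are invented), congestion could cut the queue depth sharply while quiet periods restore it one slot at a time:

```python
# Toy AIMD-style sketch of adaptive LUN queue depth throttling. Numbers are
# invented purely for illustration; see KB 1008113 for the actual behavior.
def adjust_queue_depth(depth, congested, max_depth=32, min_depth=1):
    if congested:
        return max(min_depth, depth // 2)  # throttle sharply on congestion
    return min(max_depth, depth + 1)       # restore gradually when calm

depth = 32
for congested in [True, True, False, False, False]:
    depth = adjust_queue_depth(depth, congested)
print(depth)  # 32 -> 16 -> 8 -> 9 -> 10 -> 11
```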
  • During Storage VMotion, moving .vmx file might fail with "Insufficient disk space on datastore" error
    When attempting to move a .vmx file from one datastore to another, temporary snapshots are created for all disks on the source datastore. In some cases, the source datastore might not have room for all of the temporary snapshots. As a result, the move will fail with an "Insufficient disk space" error even though the destination disk has sufficient space. This occurs even though only one disk is moved by the Storage VMotion command.
    This issue is resolved in this release. Temporary snapshots are now made only for the disks being moved by the Storage VMotion command.
  • Guest Operating Systems and Storage Might Not Communicate Properly in RDM
    In Raw Device Mapping (RDM) mode, if the amount of data sent back from storage to the guest operating system is greater than 36 bytes in one SCSI inquiry, guest operating systems and storage might not communicate properly. Products such as Microsoft Volume Shadow Copy Service (VSS) and NetApp SnapDrive might experience issues. This issue is resolved in this release.
  • ESX server host might stop responding when Emulex fibre channel adapters are used
    ESX Server hosts that use Emulex fibre channel adapters might stop responding. Entries similar to the following are logged in the vmkernel file:
    WARNING: SCSI: 2897: CheckUnitReady on vmhba1:1:0 returned Storage initiator error 0x0/0x0 sk 0x0 asc 0x0 ascq 0x0
    This release updates the Emulex driver to resolve the issue.
  • New: In ESX Server 3.5 Update 3, vmkernel logs report a large number of reservation conflict errors
    This issue is resolved in this release. Reservation conflict errors are no longer logged in vmkernel log files. When the system experiences heavy reservation conflicts, the following message is still issued, as in previous releases: Sync CR <count>.

Upgrade and Installation

  • Prior to ESX Server 3.5 Update 4, in specific situations, an installation using a kickstart file could lead to interruptions
    Installation of ESX Server using a kickstart file containing the initlabel command combined with the ignoredisk keyword did not function properly: uninitialized drives specified with the ignoredisk keyword were not actually ignored. This interrupted the automated installation with the following prompt:

    The partition table on device [DISK] was unreadable. To create new partition it must be initialized, causing the loss of ALL DATA

    This issue is resolved in ESX Server 3.5 Update 4. Drives specified with the ignoredisk keyword are now ignored without any interruption to the scripted installation.
  • Prior to ESX Server 3.5 Update 4, the installer accepts invalid subnet mask and continues with the installation
    When installing ESX Server in text mode, if you enter an invalid subnet mask, the installer proceeds with the installation without displaying an error message. In GUI mode, the installer displays an error message only if a numeric value in the entered subnet mask exceeds 255.

    This issue is resolved in this release (ESX Server 3.5 Update 4). With the fix, all network settings are now validated. Therefore, error messages are issued in the following situations:

    • The subnet mask value is invalid. For example, 255.255.255.253.
    • The IP address and the gateway reside on different subnets (with a valid subnet mask).
    • The gateway address is the same as the IP address (with a valid subnet mask).
    • The IP address is the same as the broadcast address (with a valid subnet mask and gateway).
    • The gateway address is the same as the broadcast address (with a valid subnet mask and IP address).
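As an illustration of these checks (a sketch only; the installer's actual validation code is not public, and the function name validate_settings is invented here), the rules above can be expressed with Python's standard ipaddress module:

```python
# Illustrative re-implementation of the installer's network checks described
# above. A sketch only: the real ESX installer code is not public, and
# validate_settings is an invented name for this example.
import ipaddress

def validate_settings(ip, mask, gateway):
    try:
        # A non-contiguous mask such as 255.255.255.253 raises ValueError.
        net = ipaddress.IPv4Network(f"{ip}/{mask}", strict=False)
    except ValueError:
        return "invalid subnet mask"
    ip_a = ipaddress.IPv4Address(ip)
    gw_a = ipaddress.IPv4Address(gateway)
    if gw_a not in net:
        return "IP address and gateway on different subnets"
    if gw_a == ip_a:
        return "gateway equals IP address"
    if ip_a == net.broadcast_address:
        return "IP address equals broadcast address"
    if gw_a == net.broadcast_address:
        return "gateway equals broadcast address"
    return "ok"

print(validate_settings("192.168.1.10", "255.255.255.253", "192.168.1.1"))  # invalid subnet mask
print(validate_settings("192.168.1.10", "255.255.255.0", "192.168.1.1"))    # ok
```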
  • Prior to ESX Server 3.5 Update 4, the installer stalls on systems using Intel Xeon processor 5500 series. This issue is resolved in this release.
  • nForce Ethernet NIC Loses Connectivity on ESX Server 3.5
    The NVIDIA nForce Ethernet chipset might fail to issue interrupts upon completion of packet transmission, resulting in loss of connectivity. Rebooting the host might restore connectivity, but after the server runs for some time, connectivity might be lost again. Error messages similar to the following might be logged in the VMkernel log file:

    Jan 1 12:00:00 esx vmkernel: 9:01:01:10.001 cpu1:1200)WARNING: LinNet: 4288: Watchdog timeout for device vmnic0
    Jan 1 12:00:00 esx vmkernel: 9:01:01:10.001 cpu1:1200)<6>vmnic0: Got tx_timeout. irq: 00000020
    Jan 1 12:00:00 esx vmkernel: 9:01:01:10.001 cpu1:1200)<6>vmnic0: Ring at 6a02800: get 6a02a10 put 6a02a10
    Jan 1 12:00:00 esx vmkernel: 9:01:01:10.001 cpu1:1200)<6>vmnic0: Dumping tx registers


    This issue is resolved in this release.
  • Pegasus error messages after ESX Server upgrade or install

    After you upgrade an ESX Server 2.x or 3.x system to ESX Server 3.5 Update 1, Update 2 or Update 3, or after you perform a new installation of these ESX 3.5 update versions, the Pegasus service might fail with an error message similar to the following:
     
    Processing /var/pegasus/vmware/install_queue/1 [FAILED]
    ERROR: See log - /var/pegasus/vmware/install_queue/1.log
    Processing /var/pegasus/vmware/install_queue/1 [FAILED]
    ERROR: See log - /var/pegasus/vmware/install_queue/1.log
    Starting Pegasus CIMOM (cimserver)... [ OK ]

     
    The startup status of the service might also display other install_queue failure messages during the boot process.

    This issue is resolved in this release.
  • New: During installation, previous updates are listed as missing
    After the VMware Update Manager (VUM) installation of ESX Server 3.5 Update 3, VUM reports Update 1 and Update 2 as non-compliant. In fact, Update 3 obsoletes Updates 1 and 2, so this does not present a problem and can be safely ignored. If you install Update 1, Update 2, or both on top of an Update 3 system, the patch database is updated but no ESX Server software changes occur on the system.

    This issue is resolved in this release in that previous updates are no longer listed as missing.
  • New: While installing a patch bundle, the hostd service fails to start
    The following events trigger this issue:
    1. You apply a patch bundle that requires the ESX Server host to be rebooted but you do not reboot the host.
    2. You install a patch bundle that requires the hostd service to be restarted.
      The hostd service then fails to start and issues an error message similar to the following:
      Signature mismatch between VmkCtl (Jan 10 2009 18:47:26) and VMKernel (_Unknown_)

    This issue is resolved in this release.
  • ESX Server Shows as Unlicensed When an Unserved License File Is Available with the Kickstart File for Scripted ESX Installation
    After installing ESX through a scripted installation, the VI Client shows the ESX host as "Unlicensed", even though the /etc/vmware directory contains a valid license file.
    This issue is resolved in this release.

Virtual Machine Management

  • ESX hosts cannot be registered in a VirtualCenter server

    Networking issues might prevent an ESX Server host from being registered in a VirtualCenter server that is running in a virtual machine with VMware Tools version 3.5 on an ESX Server 3.0.x host. ESX Server hosts can be registered after you downgrade VMware Tools to match the version of the ESX Server host on which the virtual machine is running. An error message similar to the following might be written to the vpxd logs:

    [2008-06-14 18:13:45.407 'App' 2296 error] [VpxVmdbCnx] Failed to connect to [host name]. Check that authd is running correctly (lib/connect error 11)

    This issue is resolved in this release.

  • The default memory size for the Windows EBS 2008 64-bit guest is insufficient
    Prior to ESX Server 3.5 Update 4, the default memory size for the Windows Essential Business Server (EBS) 2008 64-bit guest operating system is 2048MB, which is insufficient and causes the virtual machine to fail to load. This issue is resolved in this release: for ESX Server 3.5 Update 4, the default memory size for the Windows EBS 2008 64-bit guest operating system is 4096MB.


Known Issues

This section describes previously identified known issues in the following subject areas:

Backup

  • VMware Consolidated Backup is not updated in this release
    ESX Server 3.5 Update 4 does not include an updated version of VMware Consolidated Backup. This release ships with version 1.5.0 and contains no changes to VMware Consolidated Backup since the release of ESX Server 3.5 Update 2.

CIM and API

  • VI Client displays incorrect name for power supply redundancy sensor on HP servers

    When you connect to an ESX Server installation on an HP server system by using the VI Client, the VI Client incorrectly displays the power supply redundancy sensor on the server as a physical power supply. For example, for an HP server with a redundancy sensor and two physical power supplies, the VI Client displays the redundancy sensor as Power Supplies for Power Supply 3.
  • Executing a CallMethod query on a CIM_RecordLog instance might fail
    In ESX Server 3.5 Update 2 or later, executing a CallMethod query on a CIM_RecordLog instance might not always succeed. You can, however, clear the system event log through a remote management console or interface.
  • Some CIM classes do not work properly on IBM Multinode systems
    The following anomalies are seen. For the following classes, the EnumerateInstances() operation returns one instance fewer than the EnumerateInstanceNames() operation:
    • CIM_AssociatedSensor
    • CIM_MemberOfCollection
    For the following classes, the GetInstance operation fails for some instances. However, the EnumerateInstances() operation succeeds:
    • CIM_HostedService
    • CIM_Sensor
    • CIM_SystemDevice
    • CIM_Slot
    • CIM_ElementConformsToProfile
    For the following classes, the EnumerateInstances() and EnumerateInstanceNames() operations fail to return any results:
    • CIM_OwningCollectionElement
    • CIM_RedundancySet
  • Changes to the sensor threshold are not reflected immediately
    If you change the sensor threshold through CIM, enumeration of the sensor might not return the new property values immediately. The changes take effect about a minute later.
  • Operation RequestStateChange(RestoreDefaultThresholds) results in error
    In the ESX Server 3.5 release, the operation RequestStateChange(RestoreDefaultThresholds) results in the following error message for some sensors:
    CIM_ERR_FAILED: index out of bounds

    In spite of the error message, the CIMOM does restore the thresholds.
  • Firewall on ESX Server 3.5 interferes with CIM indication support

    Outgoing HTTP connections are blocked by the firewall on ESX Server 3.5. This prevents indications from reaching the indication consumer.

    Resolution: In the Service Console, open an outgoing port for connections to the indication consumer, using the following command:
    esxcfg-firewall -o <port-number>,tcp,out,http

    To close a port for http in the firewall:
    esxcfg-firewall -c <port-number>,tcp,out,http

  • In the ESX Server 3.5 release, invoking the operation Reset() on numeric power supply sensors results in the following error message:
    CIM_ERR_FAILED: index out of bounds
    As a workaround, you can use the RequestStateChange(Reset) operation.
  • Indications do not work on ESX Server 3.5 Update 4 when you use the WS-Man protocol.
  • The ModifyInstance() call to change sensor threshold fails when you use the WS-Man protocol.
  • The chassis intrusion indication is not available for IBM Athena servers.

  • An ESX Server 3.5 host that has been upgraded to ESX Server 3.5 Update 4 does not report CIM_AssociatedSensor instances correctly
    The EnumerateInstance() and Association() queries for CIM_AssociatedSensor do not return any instances.
    Workaround: Do a clean installation of ESX Server 3.5 Update 4.
  • On some Dell MLK hardware, the NumberOfBlocks property for OMC_Memory instance has a value of 0. This issue is under investigation.
  • InvokeMethod(RequestPowerStateChange) and InvokeMethod(RequestStateChange) fail when you use the WS-Man protocol

Guest Operating Systems

  • Windows Guest operating systems might fail to resume from standby or hibernation state
    Virtual machines running Windows Server 2008 or Windows Server 2003 guest operating systems might stop responding when resumed from standby or hibernation.
    See KB 946331 on the Microsoft support Web site.
  • x64-based versions of Windows Vista and Windows Server 2008 guest operating systems require Microsoft hotfix
    x64-based versions of Windows Vista and Windows Server 2008 guest operating systems without Microsoft hotfix http://support.microsoft.com/kb/950772 might stop responding and return the following error:
    MONITOR PANIC: vcpu-3:ASSERT vmcore/vmm/cpu/segment.c:430
  • Linux guest operating systems lose network connectivity after automatic tools upgrade
    If the version of VMware Tools in a Linux guest operating system is out of date and an automatic tools upgrade is performed, the guest operating system loses network connectivity. The upgrade stops the network service in the guest operating system and does not restart it automatically.
    Workaround: Restart the network service in the guest operating system manually or reboot the guest operating system after an automatic tools upgrade.
  • Failure to Import vmx_svga Driver Into Component Designer of Microsoft Windows Embedded Studio

    The vmx_svga driver cannot be imported into Component Designer of Microsoft Windows Embedded Studio, and generates the following warning message: C:\Program Files\VMware\VMware Tools\Drivers\video\vmx_svga.inf: An error occurred while getting the vendors list section name.

    Workaround:
    1. Open C:\Program Files\VMware\VMware Tools\Drivers\video\vmx_svga.inf.
    2. Delete ", NTamd64.5.1, NTx86.6.0, NTamd64.6.0, NTia64" from the [Manufacturer] section of the vmx_svga.inf file.
    3. Import the vmx_svga driver into Component Designer. The driver is now imported successfully.
    4. Save the imported driver as vmx_svga.sid.
    5. Import vmx_svga.sid into Component Database.
  • The cursor disappears after VMware Tools has been configured

    This issue has been observed on ESX Server 3.5 Update 4 with SUSE Linux Enterprise Server 8 (with or without SP4) guests. Immediately after you configure VMware Tools, the mouse cursor is not visible.

    Workaround: Reboot the virtual machine.
  • New: A virtual machine using the Virtual Machine Interface (VMI) might stop responding

    Workaround: When such a virtual machine stops responding, perform the following steps.
    1. Issue the vm-support -x command to determine that virtual machine's World ID.
    2. Issue the vmkload_app -k 9 wid command, where wid is the World ID from step 1, to kill the virtual machine entirely so that it can be restarted.

Internationalization

All fields in the VI Client and VI Web Access support non-ASCII character input, except for the following limitations:

Non-ASCII Character Entry Limitations

  • Specifying a non-ASCII value as an input string is not supported by the remote command line interface (RCLI).
  • The name of the computer on which VMware Infrastructure 3 or any of its components are installed must not contain non-ASCII characters.
  • The computer or virtual machine on which VirtualCenter Server is installed must not have a non-ASCII name. Otherwise, the installation of VirtualCenter Server fails.
  • Use the default installation path names specified in the installer for all components. Do not change the installation path to one that contains non-ASCII or extended-ASCII characters.
  • Datastore names, virtual network names, and image file names (CD, DVD, and floppy drive) are restricted to ASCII characters only.
  • Message of the Day must use only ASCII characters.
  • Logging in to the VirtualCenter Server is supported for user names with ASCII characters only (login account name on Windows).
  • Image customization might fail if non-ASCII characters are used.
  • Custom attribute names and values must use only ASCII characters.
  • To conform to the general Internet practice and protocols, host names, workgroup names, domain names, URLs, email addresses, SMTP server names, and SNMP community strings cannot contain non-ASCII characters.
  • Guest operating system customizations that use ASCII encoding are supported, but customizations that use UTF-8 encoded native characters of Japanese, Chinese, or German have limited support. For customizations with a non-ASCII owner, organization, user name, or password, VirtualCenter and the sysprep tool must be hosted in the same locale as the guest operating system. This includes the use of a UTF-8 encoded answer file.
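Several of the limitations above reduce to one rule: the name must contain only ASCII characters. A minimal shell sketch of a pre-flight check; the helper name and sample names are hypothetical:

```shell
# Flag names that contain non-ASCII bytes before using them as datastore,
# virtual network, or computer names (per the limitations above).
check_ascii() {
  # With LC_ALL=C, the bracket range covers printable ASCII only.
  if printf '%s' "$1" | LC_ALL=C grep -q '[^ -~]'; then
    echo "$1: contains non-ASCII characters, rename before use"
  else
    echo "$1: ASCII-only, OK"
  fi
}

check_ascii 'datastore1'
check_ascii 'datenspeicher-Küche'
```

The same check applies to virtual network names, image file names, and Message of the Day text.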

Non-ASCII Character Display Limitations

  • When managing a VirtualCenter Server with the VI Client running on different languages of Windows, you might see some characters displayed incorrectly because of the difference in language-specific support on Windows.
  • If an error message includes log locations or user names containing non-ASCII characters, it does not display correctly in the localized environment.
  • When you use the Import wizard of VMware Converter, the date and time format is sometimes inconsistent with the current locale.
  • Unicode characters are displayed as '???' under the Status column and the Task Details of the Task View tab in the Japanese locale.
  • The Commands section on the Summary tab is not displayed properly.

Guided Consolidation Limitations

  • The Guided Consolidation tab is available only in the en_US locale.

Translation Issues

The following are known issues with translation in this release:

  • The Upgrade wizard is not translated.
  • Some messages originating from the ESX Server host are not translated.
  • Some localized interface layouts are not yet completed.

Other Internationalization Issues

The following additional issues have been identified:

  • Values used in the Run a Script action for an alarm might not be displayed properly after restarting the VirtualCenter server if the Virtual Infrastructure Client host OS language and VirtualCenter Server/database host OS languages are different.
  • In the Simplified Chinese version of VI Web Access, the Cancel button does not have the correct text, and the text on the button is displayed incorrectly.
  • Connecting to an ESX Server 3.5 Update 2 host through VirtualCenter will not upgrade a localized VI Client
    If you connect to an ESX Server 3.5 Update 2 host through VirtualCenter, the localized VI Client in use is not upgraded to the Update 2 version.
  • The VI Client might override the language preference setting
    The VI Client might override the language preference setting and display some messages in a language other than its primary language. It might display messages that are dynamically generated by the server (VirtualCenter Server and ESX Server) in the primary language set on the server. This issue does not occur if all the software is running in the language that corresponds to the locale of the operating system.
  • The wrong text appears on the reinstall wizard of the German VI Client
    The reinstall wizard displays the wrong text in the German VI Client.
    The reinstall wizard shows the following text:
    Der Installations-Assistent ermöglicht Ihnen, Virtual Infrastructure Client 2.5 zu reparieren oder zu entfernen.
    Instead, the wizard should present the following message:
    The installation wizard will allow you to remove Virtual Infrastructure Client 2.5.
  • Links containing machine-generated virtual machine names do not work
    When you use VMware Web Access to browse the datastore by clicking a link that contains a machine-generated virtual machine name (typically starting with a plus sign and ending with a slash, for example +5paw55qE5qih5p2,/), the Web browser displays a blank page or returns a page not found error. You can, however, access such virtual machines by using the VI Client.

Migration with VMotion

  • Storage VMotion of a Large Number of Virtual Disks Fails
    Migrating a large number of virtual disks at the same time with Storage VMotion might fail with the following error message:
    Received an error from the server: A general system error occurred: failed to reparent/commit disk(s) (vim.fault.Timedout)
    Workaround:

    To migrate a virtual machine with a large number of virtual disks, migrate the disks in batches as follows:
    1. Migrate the virtual machine configuration file and a subset of the virtual disks (no more than five at a time) from the source location to the destination.
    2. Migrate the virtual machine configuration file back to the source location.
    3. Repeat steps 1 and 2 until all the virtual machine disks and the virtual machine configuration file have been migrated to the destination.
  • Migration with VMotion during high memory usage might fail with "Operation Timeout"
    Performing a migration with VMotion on a virtual machine with overcommitted memory and with the swapfile located on a datastore other than the one on which the virtual machine home resides might fail intermittently, with VirtualCenter displaying the error message Operation Timeout.

Miscellaneous

  • The QLogic driver RPM name includes a misleading driver version number
    Driver RPM packages follow a naming convention that usually enables you to determine the driver version from the package name, as such: VMware-esx-drivers-<driver>-<version #>

    However, in ESX Server 3.5 Update 4, the QLogic FC driver RPM package uses the wrong driver version value. The RPM name for the QLogic FC driver is as follows: VMware-esx-drivers-scsi-qla2300-v707_vmw-350.7.8.65-151188.i386.rpm

    This inaccurately indicates that the driver version is 7.8.65. The actual driver version is 7.8.66.
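As an illustration, the hyphen-separated fields of the RPM file name can be pulled apart with shell parameter expansion. This is a hypothetical sketch that assumes the layout shown above; it only shows which field the misleading number sits in:

```shell
# Parse the QLogic FC driver RPM name into its version and build fields.
rpm_name="VMware-esx-drivers-scsi-qla2300-v707_vmw-350.7.8.65-151188.i386.rpm"

base=${rpm_name%.i386.rpm}   # strip architecture and extension
build=${base##*-}            # trailing field: build number (151188)
no_build=${base%-*}          # drop the build field
version=${no_build##*-}      # package version field (350.7.8.65)

echo "version=$version build=$build"
```

The 7.8.65 substring embedded in that version field is what misleads; the driver itself is 7.8.66.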
  • Converter Enterprise Client Plugin Must be Installed and Enabled After Upgrading
    VirtualCenter Server 2.5 Update 2 does not support the earlier versions of the Converter Enterprise Client plugin. You must, therefore, install and enable the Converter Enterprise Client plugin after upgrading to VirtualCenter Server 2.5 Update 2. To install and enable the Converter Enterprise Client plugin, click Manage Plugins on the Plugins menu. In the Plugin Manager window, select the Available tab and click Download and Install.
  • On ESX 3.5, harmless warnings might be issued about ACPI IRQ resource settings.
    While booting, the ESX server parses the ACPI tables in the BIOS to determine the current and possible IRQ resource settings for interrupt link devices on the platform. On certain servers, the current resource setting (_CRS) is not one of the possible resource settings (_PRS). This scenario results in warnings such as the following being issued:

    "WARNING: VMKAcpi: 291: Interrupt (5) from _CRS is not one of _PRS values"

    "WARNING: VMKAcpi: 399: IRQ from _CRS is bad or not in _PRS, will use from _PRS"


    These warnings are harmless. This issue is resolved in this release: the warning messages are no longer issued, and the information is logged instead.
  • Health Status for a Few IPMI Sensors Shows "Unknown" When Multinode IBM System x3950 M2 Server Functions Under Heavy CPU Usage
    If a multinode IBM System x3950 M2 Server under heavy CPU usage hosts more than 80 virtual machines, the Health Status for a few IPMI sensors like Processors, Memory, Storage, Power, System, Chassis, and Watchdog sensors shows "Unknown" for a few minutes.
    To view the Health Status page, click the Health Status link under the Configuration tab in the VI Client.
    Workaround
    To update the sensor status, click Refresh, available on the Health Status page. This update takes approximately 10 minutes.
  • VI Client Shows the Health Status of IBM x Series Servers With LSI 1078, as "Alert" When the Status of the Sensors Is Not Displayed as "Alert"
    VI Client displays the health status of IBM System x3850 M2/x3950 M2 servers with the LSI 1078 IR SAS controller in red ("Alert") even when the sensors and subcomponents are not shown in red.
    To view the Health Status page, click the Health Status link under the Configuration tab in the VI Client.
    Workaround
    Install the latest firmware (version 01.25.82.00 or later) for the LSI 1078 IR SAS controller that is available from IBM Corporation.
  • An ESX Server Web Interface Might Fail to Display the Latest IPMI System Event Log Record
    When the IPMI System Event Log (SEL) is cleared, the IPMI SEL entries available through the ESX Server Web interface at https://<IP address of ESX Server host>/host/ipmi_sel might not include the latest IPMI SEL record.

Networking

  • Updated: NetQueue is enabled by default in the unm_nic driver; however, VMware does not currently support NetQueue

    Workaround:
    1. Unload the driver using the following command:
      vmkload_mod -u unm_nic
    2. Disable NetQueue using the following command:
      esxcfg-module -s multi_ctx=0 unm_nic
    3. Load the driver again using the following command: vmkload_mod unm_nic
      Note: The following command, used instead of esxcfg-module, loads the driver with NetQueue disabled, but the setting does not persist across reboots:
      vmkload_mod unm_nic multi_ctx=0
    4. (Optional) To check if NetQueue is disabled for the driver, use the following command:
      esxcfg-module -g unm_nic
      The following string should be displayed: multi_ctx=0
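The check in step 4 can be scripted. The sketch below is hypothetical, and the sample option string is simulated rather than captured from a host; on a real system it would come from the output of esxcfg-module -g unm_nic:

```shell
# Return success if the module option string shows NetQueue disabled.
netqueue_disabled() {
  case "$1" in
    *multi_ctx=0*) return 0 ;;
    *)             return 1 ;;
  esac
}

# Simulated option string standing in for `esxcfg-module -g unm_nic` output.
opts='unm_nic enabled = 1 options = "multi_ctx=0"'

if netqueue_disabled "$opts"; then
  echo "NetQueue disabled for unm_nic"
else
  echo "NetQueue still enabled for unm_nic"
fi
```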
  • ESX Server hosts become unresponsive during a network broadcast storm

    When a network broadcast storm occurs, ESX Server hosts might become unresponsive because of an issue with the tg3 network driver. During this time, the service console or virtual machines that use the tg3 NIC might lose network connectivity. Rebooting the machine or unloading and reloading the driver restores connectivity but does not resolve the issue.

    ESX Server hosts with a tg3 port cannot send or receive packets after being subjected to a broadcast storm. The following error messages might be logged in the VMkernel log:

    1. WARNING: Net: 1082: Rx storm on 1 queue 3501>3500, run 321>320
    2. VMNIX: WARNING: NetCos: 1086: virtual HW appears wedged (bug number 90831), resetting
  • Duplicate Packets Are Created When Beacon Probing is Used With a VLAN of 4095
    During a virtual machine network operation where the VLAN ID is set to 4095 and the associated vSwitch is configured with beacon probing, duplicate packets are generated.
    Workaround: When using a VLAN ID of 4095, set Network Failover Detection to Link Status only instead of Beacon Probing. (KB 1004373)
  • Microsoft Windows Virtual Machine Fails While Receiving Jumbo Frames
    Microsoft Windows virtual machines fail and display a blue screen while receiving jumbo frames if the ESX Server host has booted in the Debug mode.
  • Extended netperf testing for IPv6 fails
    Running high levels of stress using netperf on Internet Protocol Version 6 (IPv6) enabled virtual machines over 12 hours might result in the shutdown of sockets. The following sockets are known to be affected: TCP_STREAM, UDP_STREAM, TCP_RR, and UDP_RR. Error messages similar to the following might be displayed in the virtual machine console:

    send_tcp_rr: data recv error: Connection timed out
    netperf: cannot shutdown tcp stream socket: Transport endpoint is not connected

    This is a known issue.
  • Wake on LAN (WOL) Is Not Supported for Some NetXen NIC Cards

    Wake on LAN (WOL) is not supported for some NetXen NIC cards. However, executing the # ethtool vmnic* command in the service console displays WOL as supported for all NetXen NIC cards.

    Workaround: Use a different NIC card that supports WOL.
  • New: Even when a NetXen device has no traffic, it shows its speed as 65536 and its duplex as half
    This issue occurs when a corrupt Tx packet is received: NetXen returns a -1 status error and the hardware, the firmware, or both abort. This is a known issue in the NetXen driver.

    Workaround: Unload and load the NetXen driver.
  • Network connectivity is lost when the NIC speed on a NetXen driver is configured manually

    Configuring the speed manually through esxcfg-nics or any other command on a NetXen 3031 NIC might cause loss of network connectivity. NetXen NICs do not support manual speed setting. The speed must be set through auto-negotiation.
  • ESX Server 3.5 using the Broadcom bnx2x NIC might stop responding while booting up in debug mode

    With ESX Server 3.5 in debug mode, while loading the Broadcom bnx2x driver, a message similar to the following is displayed:

    VMware ESX Server [BETAbuild-144117]
    Spin count exceeded (portsetGlobalLock) - possible deadlock


    Workaround:
    While this workaround reduces the frequency of this issue, it does not eliminate the issue entirely.

    This workaround (changing LinkStatePollTimeout configuration option) can adversely affect other features, such as NIC teaming. Therefore, this workaround should be used only if the system stops responding while being booted in debug mode.

    1. Set the /Net/LinkStatePollTimeout configuration parameter to a value between 30000 and 60000.
         For example, the following command sets this value to 30000:
         $ esxcfg-advcfg -s 30000 /Net/LinkStatePollTimeout

      The exact value to use for this configuration option depends on the number of bnx2x NICs on the system. If a large number of bnx2x NICs are present on the system, choose a higher value.
    2. Ensure that the value of this parameter has been set properly:
      For example, the following command displays the current setting of this parameter:
      $ esxcfg-advcfg -g /Net/LinkStatePollTimeout

      A message such as the following is displayed: Value of LinkStatePollTimeout is 30000
    3. Boot the system in debug mode.
    4. After the reboot, revert the LinkStatePollTimeout configuration option to the default value of 5000.
      Reverting the LinkStatePollTimeout configuration option to the default value of 5000 is necessary before the system is booted in normal mode.

  • Ethtool displays firmware version incorrectly

    Ethtool might display the firmware version of a NIC incorrectly if the version number ends in two digits. For example, if the version number is 4.4.14, ethtool displays the firmware version as 4.4.>.

Server Configuration

  • IBM Systems Director 6.1 cannot be installed on ESX Server hosts
    Installation of IBM Systems Director 6.1 Server on ESX Server hosts fails with a message similar to the following:
    Could not find a supported operating system for IBM Systems Director.

    IBM Systems Director 6.1 Server is currently not supported on ESX Server 3.5. IBM Director Server can be installed on any Windows machine, and the IBM Director common agent can be installed on the ESX Server 3.5 host.
  • Error Message on IBM x3850 M2 and x3950 M2 Servers During Boot
    A SysAlert error message similar to the following might be displayed on IBM x3850 M2 and x3950 M2 servers during the boot sequence:
    0:00:01:02.384 cpu19:1291)USB Control/bulk timeout; usb device may not respond
    You might observe a message from each slave node.
    This is a known issue. The errors that are generated during boot will not affect the normal functioning of the system, and can be safely ignored.
  • Configuration Data is Lost Upon Reboot

    If the "/" partition of the ESX Server service console is full, any command that modifies esx.conf results in the loss of all configuration data when the server is rebooted. ESX server does not come up in the network after the reboot.

    Attempts to modify the vSwitch configuration result in a message that the "/" partition is full and there is no space to write the networking configuration information. However, for other configuration changes, you might not be informed that the "/" partition is full.
  • In Troubleshooting mode, the poweroff command shuts down ESX Server but does not remove server power
    This issue applies to all versions of ESX Server, including the current release, ESX Server 3.5 Update 4. When the host is booted into Troubleshooting (Service Console only) mode, the poweroff command is not able to remove power from the host. It performs its normal function of shutting down the host operating system, then displays the message "Power down." However, the Service Console lacks the ACPI support that would allow it to complete the power off operation.

    Workaround:
    After the "Power down" message is displayed, you can safely remove power manually through normal methods, such as by using a remote management console, turning off power switches, or pulling the plug.
  • New: Installing on an HP ProLiant BL 280c G6 server results in a timeout message
    The following message is issued when ESX Server 3.5 Update 4 is installed on an HP ProLiant BL 280c G6 server:
    USB control / bulk_msg: timeout

    This message comes from the Linux service console and indicates that a USB host controller has detected a timeout on an attached USB device. The host controller could be either a high speed (ehci) or a full/low speed controller. The cause could be a loss of connection with any attached device; for example, an ILO cdrom, a keyboard, or a USB flash drive. After this message is issued the device will not function until after a device power event such as a device detach and reattach or an overall system power off and power on. A reboot may or may not clear such a condition. Alternatively, the ehci host controller can be unloaded using rmmod, although this will only clear conditions detected by the high speed host controller.

    This message can be safely ignored. If a device, for example a USB keyboard, is non-responsive a device power event can be implemented by detaching the device and reattaching it (for ILO storage devices this is done via the remote media tab). No other system impact has been observed in relationship with this message.

Storage

  • Core dumps are lost when multiple ESX hosts share a dump partition
    If multiple ESX hosts that share a dump partition fail and save core dumps simultaneously, the core dumps might be lost.
  • LSI jobs and nonconcrete storage pools do not persist between boots
    The persistence scheme implemented by LSI creates a new file on the host operating system for each job and non-concrete storage pool (a storage pool that is not associated with a storage volume). These files will not persist between boots. As a result, non-concrete storage pools will no longer be available after the host operating system is rebooted. In addition, any jobs executed before the reboot will not be visible.
  • Support for 10GbE IP storage (iSCSI and NFS) in the update releases is for connectivity only. Performance levels can vary.
  • Creating a large file on a spanned VMFS datastore might fail if the first datastore extent is smaller than 1GB
    If you try to create a large virtual disk file on a spanned VMFS datastore, the operation might fail. Generally, this problem occurs when the first datastore extent is smaller than 1GB, and is due to a lack of pointer blocks.
    Workaround: If possible, recreate the datastore using the larger partition first and adding the smaller extents later.
  • Serial Attached SCSI (SAS) Disk Arrays Cannot Be Added as Raw Device Mapping Disks Using the VI Client
    When using the VI Client to create virtual machines on raw device mapping disks or add a hard disk to a virtual machine, the Raw Device Mapping option is disabled in the New Virtual Machine Wizard, and in the Add Hardware Wizard.
    Workaround: To add SAS disk arrays as raw device mapping disks:
    1. Run a command with the following syntax in the ESX Server service console:
    # vmkfstools -r <raw_device_path> -a <bus_type> <RDM_file_name>
    2. Use the Add Hardware Wizard to add the newly created virtual disk as an existing disk, to the virtual machine.
  • Capacity of a LUN added to a datastore might not be visible when VI Client is connected to an ESX Server host
    The Storage panel in the Configuration tab of the VI Client allows you to modify the properties of a datastore. When the VI Client is connected to an ESX Server host, if you add LUNs to a datastore by clicking Add Extent in the Properties window of the datastore, the capacity for the added LUN might not be displayed in the Properties window.
    Workaround: Close the Properties window, click the Storage link in the Configuration tab, and re-open the Properties window.
  • ESX host displays benign error message while accessing an optical device

    During mounting or copying from an optical device, the host running the ESX 3.5 Update 4 release displays a "hda: lost interrupt" message on the screen. This error message might also be written to /var/log/messages. This issue does not cause any data corruption, and can be ignored.

Upgrade and Installation

  • Local VMFS volumes are not recognized after upgrading to ESX Server 3.5 Update 3
    On servers with an ICH7 SATA controller configured in ATA mode, after upgrading to ESX Server 3.5 Update 3, the ESX Server host might not recognize local VMFS volumes, and virtual machines located on those volumes are inaccessible. (Note: storing local VMFS volumes is unsupported with ICH7 and HT1000 controllers.) See Upgrading to ESX Server 3.5 Update 3 Causes Local VMFS Volume to be Unrecognized (KB 1007724) for more information.
  • System becomes unusable if esxupdate runs out of disk space
    When esxupdate is used to upgrade to ESX Server 3.5 Update 3, the system becomes unusable if esxupdate runs out of space on any of the /, /boot, or /usr partitions.
    Workaround: Before applying a patch, make sure you have enough space on your /, /boot, and /usr partitions. To determine whether there is enough space for the upgrade, you can perform a test installation as described in the ESX Server 3 Patch Management Guide.
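As a rough pre-flight check, free space on the relevant partitions can be inspected from the service console. This is a hedged sketch: the 100 MB threshold is an assumption for illustration, not a documented requirement; consult the ESX Server 3 Patch Management Guide for actual sizing:

```shell
# Report available space on the partitions esxupdate writes to.
check_free() {
  # Second line, fourth column of POSIX df output is available KB.
  avail_kb=$(df -Pk "$1" 2>/dev/null | awk 'NR==2 {print $4}')
  [ -n "$avail_kb" ] || { echo "$1: not mounted, skipping"; return; }
  if [ "$avail_kb" -lt 102400 ]; then   # 100 MB threshold (assumption)
    echo "$1: ${avail_kb} KB free - may be too low for esxupdate"
  else
    echo "$1: ${avail_kb} KB free"
  fi
}

for mnt in / /boot /usr; do
  check_free "$mnt"
done
```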

ESX Server Upgrade and Installation

  • The esxupdate -l query Command Lists VMware-vpxa As a Newly Installed RPM Package
    When an ESX Server host that is connected to the VirtualCenter Server is upgraded to an ESX Server 3.5 release version by using the esxupdate utility, the esxupdate -l query command lists VMware-vpxa as a newly installed RPM package. This happens because VirtualCenter installs the VMware-vpxa RPM package when it connects to the ESX Server host for the first time.
  • ESX Server Installer Accepts Invalid Subnet Mask and Continues with the Installation
    When you install ESX Server using the text mode of installation, if you enter an invalid subnet mask, the installer proceeds with the installation without displaying an error message. In the GUI mode of installation, the installer displays an error message only if a numeric value in the entered subnet mask exceeds 255.

Other Upgrade and Installation Issues

  • Power Off Button Sometimes Does Not Work in Web Access
    In some cases, the power off button for a virtual machine is not available or does not respond when clicked.
    Workaround: Refresh the browser window and the power off button should work properly.
  • Upgrading VirtualCenter and VMware Update Manager May Fail to Upgrade the Update Manager Database
    You can use the Unified Installer to simultaneously upgrade VirtualCenter and VMware Update Manager, but problems with custom database configurations might occur. VirtualCenter and VMware Update Manager can store information in a single database or in separate databases. If your deployment includes separate databases, and you do not use the Custom option during upgrade, the VMware Update Manager database might not be upgraded. Instead, one of two things can happen:
    • If there is no Update Manager database in the VirtualCenter database instance, a new Update Manager database is created.
    • If there is an existing but unused Update Manager database, that database is upgraded. Unused Update Manager databases can occur when an initial installation is completed and a separate Update Manager database is subsequently established.
    In both cases, the custom Update Manager database that was being used is not updated. After the upgrade, the system uses the incorrect database that the unified installer has either updated or created.

    To avoid this issue, select the Custom type for the installation and specify the Update Manager database your deployment is using.

Virtual Machine Management

  • VMware Tools icon is missing from the Control Panel for 64-bit Windows Vista and Windows 2008 guest operating systems
    Only the 32-bit control panel is available for VMware Tools on Windows Vista and Windows 2008 guest operating systems, so the VMware Tools icon does not appear in the Control Panel of 64-bit operating systems.

    Workaround: Use the 32-bit control panel applet that is available from the VMware Tray or from C:\Program Files (x86)\VMware\VMware Tools\VMControlPanel.cpl or <VMware tools install path>\VMControlPanel.cpl.
  • I/O Might Stall on Virtual Machines During Firmware Upgrade
    When virtual machines are running on a shared LUN that has a heavy I/O workload, and the firmware is upgraded using the storage management utility or the storage controller is restarted, I/O might stall on any of the virtual machines.
    Messages similar to the following might appear in the vmkernel.log file:
    1:01:05:07.275 cpu2:1039)WARNING: FS3: 4785: Reservation error: Not supported
    SCSI: 4506: Cannot find a path to device vmhba1:0:125 in a good state. Trying path vmhba1:0:125.
    1:01:05:10.262 cpu3:1039)ALERT: SCSI: 4506: Cannot find a path to device vmhba1:0:125 in a good state. Trying path vmhba1:0:125.
    1:01:05:40.748 cpu1:1083)<6>mptbase: ioc0: LogInfo(0x30030108): Originator={IOP}, Code={Invalid Page}, SubCode(0x0108)
    1:01:05:40.930 cpu0:1024)Log: 472: Setting loglevel (via VSI) for module 'SCSI' to 5
  • Virtual Machines Might Not Power on After a Failover When the Host Is Isolated
    Virtual machines might not start after a failover when a host is isolated and the isolation response is set to Guest Shutdown, which is the default configuration for a cluster. This might occur on clusters with fewer than five nodes, and can happen with virtual machines that take longer to complete the guest shutdown.
    Workaround
    Set the Isolation Response to either Leave powered on or Power off for clusters which have fewer than five nodes.
    To set the Isolation Response for a virtual machine, select the cluster, click the Edit Settings link, and select Virtual Machine Options under VMware HA. From the Isolation Response pop-up menu, select either Leave powered on or Power off options for the specific virtual machine.

VirtualCenter, VI Client, and Web Access Issues

  • VirtualCenter Server Is Slow to Initialize If the Latest Version of the VMware Capacity Planner Service Is Not Installed
    If the VMware Capacity Planner Service for the VirtualCenter Server 2.5 Update 2 release is not installed, the VirtualCenter Server takes a long time to initialize, and during this time the VI Client cannot connect to the VirtualCenter Server. In addition, the consolidation feature is not available in the VI Client.
    To use the consolidation feature, uninstall any earlier version of the VMware Capacity Planner Service, install the VMware Capacity Planner Service for the VirtualCenter 2.5 Update 2 release, and restart the VirtualCenter Server.
  • A time zone label is different depending on the way ESX Server 3.5 Update 4 was installed
    When you create a kickstart file, ks.cfg, by using Web Access, the time zone name available in the drop-down menu is Asia-Calcutta. However, the same item is listed as Asia-Kolkata when the installation is performed by using either the GUI or text mode. This presents no issue other than inconsistent naming.

VMware High Availability (HA)

  • Virtual Machine Migrations Are Not Recommended When the ESX Server Host Is Entering the Maintenance or Standby Mode
    VMware HA does not recommend or perform (when in fully automated mode) virtual machine migrations off a host entering maintenance or standby mode if the VMware HA failover level would be violated after the host enters the requested mode. This restriction applies whether or not strict HA admission control is enabled.
  • VMware HA Health Monitoring Does Not Show Rebooting in the Console After a Host Failover
    After a host failure, the VMware console displays an empty window when Health Monitoring is enabled for an HA cluster. The console does not display the virtual machines rebooting.

    Workaround
    You must open a new console to see the virtual machines restarting after the failover.
  • HA Network Compliance Check
    During the configuration of HA in VirtualCenter 2.5 Update 2, the Tasks & Events tab might display the following error message and recommendation:

    HA agent on <esxhostname> in cluster <clustername> in <datacenter> has an error Incompatible HA Network:
    Consider using the Advanced Cluster Settings das.allowNetwork to control network usage.


    Starting with VirtualCenter 2.5 Update 2, HA performs an enhanced network compliance check to increase cluster reliability. The check helps ensure correct cluster-wide heartbeat network paths. See KB 1006606 for details.
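
    If the compliance check reports an incompatible HA network for port groups you use intentionally, the das.allowNetwork advanced options let you name the permitted networks explicitly. The following values are a hypothetical sketch, entered under Cluster Settings > VMware HA > Advanced Options; the port group names are placeholders for your own, and the exact option syntax should be confirmed against KB 1006606:

       das.allowNetwork0 = Service Console
       das.allowNetwork1 = Service Console 2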

Top of Page

Using Language Packs on the ESX Server Host

For German, Japanese, or Simplified Chinese language support when you use Virtual Infrastructure Web Access or the VI Client with your ESX Server host, the language pack must be on the host and the default locale set to your desired language. Starting with ESX Server 3.5 Update 3, the language pack files are installed on all hosts and do not need to be copied to the host before changing the locale.

To set the default locale on the host:

  1. Edit the /etc/vmware/hostd/config.xml file to enable the correct default locale. Find the following lines in config.xml:
       <locale>
          <DefaultLocale>en_US</DefaultLocale>
       </locale>

    For German, replace the lines with the following:

       <locale>
          <DefaultLocale>de_DE</DefaultLocale>
       </locale>

    For Japanese, replace the lines with the following:

       <locale>
          <DefaultLocale>ja_JP</DefaultLocale>
       </locale>

    For Simplified Chinese, replace the lines with the following:

       <locale>
          <DefaultLocale>zh_CN</DefaultLocale>
       </locale>

  2. Type the following commands to restart VI Web Access and host agent services:

    service mgmt-vmware restart
    service vmware-webAccess restart
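
For hosts configured by script, the edit in step 1 can be automated. The following sketch is not from the release notes: it demonstrates the substitution on a temporary copy rather than the live /etc/vmware/hostd/config.xml, assumes the file contains a single DefaultLocale element, and the service names should be checked against /etc/init.d on your host.

```shell
# Demonstration of the DefaultLocale substitution on a temporary copy.
# On a live host, point CONFIG at /etc/vmware/hostd/config.xml and follow
# with: service mgmt-vmware restart; service vmware-webAccess restart
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
<locale>
   <DefaultLocale>en_US</DefaultLocale>
</locale>
EOF

NEW_LOCALE=ja_JP    # one of en_US, de_DE, ja_JP, zh_CN
sed -i "s|<DefaultLocale>[^<]*</DefaultLocale>|<DefaultLocale>${NEW_LOCALE}</DefaultLocale>|" "$CONFIG"

grep DefaultLocale "$CONFIG"    # the element now carries ja_JP
```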

Top of Page