
VMware ESX 4.1 Update 1 Release Notes

ESX 4.1 Update 1 | 10 February 2011 | Build 348481
VMware Tools | 10 February 2011 | Build 341836

Last Document Update: 18 July 2011

These release notes include the following topics:

  • What's New
  • Earlier Releases of ESX 4.1 Update 1
  • Before You Begin
  • Patches Contained in this Release
  • Resolved Issues
  • Known Issues

What's New

The following information describes some of the enhancements available in this release of VMware ESX:

  • Improvement in scalability — ESX 4.1 Update 1 supports up to 160 logical processors.
  • Support for additional guest operating systems — ESX 4.1 Update 1 adds support for RHEL 6, RHEL 5.6, SLES 11 SP1 for VMware, Ubuntu 10.10, and Solaris 10 Update 9 guest operating systems. For a complete list of guest operating systems supported with this release, see the VMware Compatibility Guide.
  • Inclusion of additional drivers — ESX 4.1 Update 1 includes the 3ware SCSI 2.26.08.036vm40 and Neterion vxge 2.0.28.21239-p3.0.1.2 drivers. In earlier releases, these drivers were available only as separate downloads.

In addition, this release delivers a number of bug fixes that are documented in the Resolved Issues section.


Earlier Releases of ESX 4.1 Update 1

The earlier release of ESX 4.1 Update 1 is ESX 4.1. Features and known issues of ESX 4.1 are described in the release notes available at VMware vSphere 4.1 Release Notes—ESX Edition.


Before You Begin

ESX, vCenter Server, and vSphere Client Version Compatibility

The VMware vSphere Compatibility Matrixes provide details of the compatibility of current and earlier versions of VMware vSphere components, including ESX, vCenter Server, the vSphere Client, and other VMware products.

ESX, vCenter Server, and VDDK Compatibility

Virtual Disk Development Kit (VDDK) 1.2 adds support for ESX 4.1 Update 1 and vCenter Server 4.1 Update 1 releases. For more information about VDDK, see http://www.vmware.com/support/developer/vddk/.

Hardware Compatibility

  • Learn about hardware compatibility

    The Hardware Compatibility Lists are available in the Web-based Compatibility Guide. The Web-based Compatibility Guide is a single point of access for all VMware compatibility guides and provides the option to search the guides and to save the search results in PDF format. For example, you can use this guide to verify whether your server, I/O devices, storage, and guest operating systems are compatible.

    Subscribe to be notified of Compatibility Guide updates through the RSS feed.

  • Learn about vSphere compatibility:

    VMware vSphere Compatibility Matrixes (PDF)

Installation and Upgrade

Read the ESX and vCenter Server Installation Guide for step-by-step guidance about installing and configuring ESX and vCenter Server.

After successful installation, you must perform several configuration steps, particularly for licensing, networking, and security. Refer to the vSphere documentation for guidance on these configuration tasks.

If you have VirtualCenter 2.x installed, see the vSphere Upgrade Guide for instructions about installing vCenter Server on a 64-bit operating system and preserving your VirtualCenter database.

Management Information Base (MIB) files related to ESX are not bundled with vCenter Server. Only MIB files related to vCenter Server are shipped with vCenter Server 4.0.x. You can download all MIB files from the VMware Web site at http://www.vmware.com/download.

Upgrading VMware Tools

VMware ESX 4.1 Update 1 contains the latest version of VMware Tools. VMware Tools is a suite of utilities that enhances the performance of the virtual machine's guest operating system. Refer to the VMware Tools Resolved Issues section for a list of VMware Tools issues resolved in this release of ESX.

To determine an installed VMware Tools version, see Verifying a VMware Tools build version (KB 1003947).

Upgrading or Migrating to ESX 4.1 Update 1

ESX 4.1 Update 1 offers the following options for upgrading:

Supported Upgrade Paths for Host Upgrade to ESX 4.1 Update 1:

  • Upgrade type: ESX-4.1.0-update01-348481.iso
    Upgrade tools supported:
      • VMware vCenter Update Manager with ESX host upgrade baseline
      • esxupgrade.sh
    Upgrade from ESX 3.5 Update 5a: Yes
    Upgrade from ESX 4.0 (including ESX 4.0 Update 1, Update 2, and Update 3): No
    Upgrade from ESX 4.1: No

  • Upgrade type: upgrade-from-esx4.0-to-4.1-update01-348481.ZIP
    Upgrade tools supported:
      • VMware vCenter Update Manager with host upgrade baseline
      • esxupdate
      • vihostupdate
        Note: Install the pre-upgrade bundle (pre-upgrade-from-esx4.0-to-4.1-update01-348481.ZIP) and run the vihostupdate utility or the esxupdate utility to perform the upgrade.
    Upgrade from ESX 3.5 Update 5a: No
    Upgrade from ESX 4.0 (including ESX 4.0 Update 1, Update 2, and Update 3): Yes
    Upgrade from ESX 4.1: No

  • Upgrade type: update-from-esx4.1-4.1_update01.ZIP
    Upgrade tools supported:
      • VMware vCenter Update Manager with patch baseline
      • esxupdate
      • vihostupdate
    Upgrade from ESX 3.5 Update 5a: No
    Upgrade from ESX 4.0 (including ESX 4.0 Update 1, Update 2, and Update 3): No
    Upgrade from ESX 4.1: Yes

  • Upgrade type: ESX 4.1 to 4.1.x using the patch definitions downloaded from the VMware portal (online)
    Upgrade tools supported:
      • VMware vCenter Update Manager with patch baseline
    Upgrade from ESX 3.5 Update 5a: No
    Upgrade from ESX 4.0 (including ESX 4.0 Update 1, Update 2, and Update 3): No
    Upgrade from ESX 4.1: Yes


Updated RPMs and Security Fixes

For a list of RPMs updated in ESX 4.1 Update 1, see Updated RPMs and Security Fixes. This document does not apply to the ESXi products.

Patches Contained in this Release

This release contains all bulletins for ESX that were released prior to the release date of this product. See the VMware Download Patches page for more information about the individual bulletins.

Patch Release ESX410-Update01 contains the following individual bulletins:

ESX410-201101201-SG Updates ESX 4.1 Core and CIM components, krb5, openldap, and pam-krb5
ESX410-201101202-UG Updates ESX 4.1 VMware-webCenter-esx
ESX410-201101203-UG Updates ESX 4.1 vmware-esx-esxupdate
ESX410-201101204-UG Updates ESX 4.1 mptsas device driver
ESX410-201101206-UG Updates ESX 4.1 bnx2xi device driver
ESX410-201101207-UG Updates ESX 4.1 bnx2x device driver
ESX410-201101208-UG Updates ESX 4.1 sata device driver
ESX410-201101211-UG Updates ESX 4.1 VMware-esx-remove-rpms
ESX410-201101213-UG Updates vmware-esx-drivers-net-enic
ESX410-201101214-UG Updates vmware-esx-drivers-scsi-qla4xxx
ESX410-201101215-UG Updates ESX 4.1 vmware-esx-net-nx-nic
ESX410-201101216-UG Updates ESX 4.1 vmware-esx-vaai
ESX410-201101217-UG Updates vmware-esx-drivers-net-e1000e
ESX410-201101218-UG Updates net-cdc-ether, net-usbnet driver
ESX410-201101219-UG Updates vmware-esx-drivers-net-e1000
ESX410-201101220-UG Updates net-igb, net-tg3, scsi-fnic
ESX410-201101221-UG Updates ESX 4.1 HP SAS Controllers
ESX410-201101222-UG Updates mptsas, mptspi device drivers
ESX410-201101225-UG Updates vmware-esx-pam-config library
ESX410-201101226-SG Updates glibc packages

ESX 4.1 Update 1 also contains all fixes in the following previously released bundles:

ESX410-201010401-SG Updates vmkernel64, VMX, CIM

ESX410-201010402-SG Updates GnuTLS, NSS, and openSSL
ESX410-201010404-SG Updates NSS_db package
ESX410-201010405-BG Updates VMware Tools
ESX410-201010409-SG Updates Tar package
ESX410-201010410-SG Updates Curl RPM
ESX410-201010412-SG Updates Perl RPM
ESX410-201010413-SG Updates GNU cpio package
ESX410-201010414-SG Updates vmware-esx-pam-config
ESX410-201010415-BG Updates Cisco fnic driver
ESX410-201010419-SG Updates likewisekrb5, likewiseopenldap

See the documentation listed on the download page for more information on contents of each patch.

Resolved Issues

This section describes the resolved issues in this release, grouped by subject area.

Resolved issues previously documented as known issues are marked with the † symbol.

Backup

  • Cannot take quiesced snapshots of Microsoft Windows Server 2008 R2 virtual machine running vCenter Server 4.1
    When creating a snapshot of a Microsoft Windows Server 2008 R2 virtual machine that has vCenter Server 4.1 installed, the snapshot operation might fail to complete. This issue occurs on Microsoft Windows Server 2008 R2 virtual machines when the ADAM database is installed. The issue is resolved in this release.

CIM and API

  • vCenter Server incorrectly reports the service tag of the blade chassis
    For blade servers that run ESX, vCenter Server incorrectly reports the service tag of the blade chassis instead of that of the blade. On a Dell or IBM blade server that is managed by vCenter Server, the service tag number is listed in the System section of the Configuration tab, under Processors. This issue occurs because of an incorrect value for the SerialNumber property of the fixed CIM OMC_Chassis instance. The issue is resolved in this release.

Guest Operating System

  • Windows guest operating system might fail with vmx_fb.dll error
    Windows guest operating systems installed with the VMware Windows XP display driver model (XPDM) driver might fail with a vmx_fb.dll error and display a blue screen. The issue is resolved in this release.
  • CPUID information returned by virtual hardware differs from CPUID of physical hardware
    Software running on guest operating systems might use CPUID information to determine characteristics of underlying (virtual or physical) CPU hardware. In some instances, CPUID information returned by virtual hardware differs from that for physical hardware. Based upon these differences, certain components of guest software might malfunction. In this release, the fix causes certain CPUID responses to more closely match the ones that physical hardware would return.
  • Installation of VMware Tools Operating System Specific Packages (OSPs) on SUSE Linux 9 results in screen resolution error
    When you install the VMware Tools OSP in SUSE Linux 9, the operating system configures the /etc/X11/XF86Config file of the X Window System and replaces the vesa video driver with the VMware video driver. If you are using a video driver other than vesa, the VMware Tools OSP does not replace it. Attempts to change the screen resolution by using the GUI or the Linux xrandr command fail. The issue is resolved in this release.
  • Hot-add feature might fail if the ESX host has insufficient available physical memory
    When you try to hot-add memory or CPU to a virtual machine whose reserved memory or CPU is more than half the available physical memory or CPU of the host machine, the operation might fail. After you apply this fix, hot-add might still fail occasionally when the available physical memory on an ESX host is less than twice the overhead memory of the virtual machine.


    • To view the available physical memory or CPU in vSphere Client, select an ESX host and click the Resource Allocation link. The available physical memory is displayed under Memory > Available Capacity. The available CPU is displayed under CPU > Available Capacity.
    • To view the overhead memory of a virtual machine, select the virtual machine and click the Resource Allocation link. The overhead memory is displayed under Memory > Overhead.

  • RPM installer for VMware Tools for RHEL3, RHEL4, and SLES9 fails to verify the package signature
    When you use the RPM installer to install VMware Tools for RHEL3, RHEL4, or SLES9, signature verification fails with a
    V3 RSA/MD5 signature: NOKEY, key ID 66fd4949 warning message. This condition occurs because earlier versions of RPM cannot verify RSA signatures created by newer versions of RPM. To perform the signature verification, download the VMWARE-PACKAGING-GPG-DSA-KEY public key file from http://www.vmware.com/download/packages.html. After you import the key file, the warning message does not appear.
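    The import-and-verify flow would look like this sketch; the key file name and the Tools package file name are illustrative, while --import and -K are standard RPM commands:

    # rpm --import VMWARE-PACKAGING-GPG-DSA-KEY.pub
    # rpm -K VMwareTools-<version>.rpm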

  • Updates for the WDDM driver
    In this release, the Windows Display Driver Model (WDDM) driver is updated to fix some infrequent issues where a Windows virtual machine fails and displays a blue screen.


  • Memory sizes in the virtual machine default settings of RHEL 32-bit and 64-bit guest operating systems updated
    The minimum, default, and maximum recommended memory sizes in the virtual machine default settings for RHEL 32-bit and 64-bit guest operating systems are updated as per the latest RHEL 6 operating system specifications at http://www.redhat.com/.


  • DOS-based client software might fail to boot a virtual machine after upgrading to ESX 4.1
    When DOS-based client software (for example, Altiris Deployment Solution) uses PXE to boot a DOS image, the PXE boot sequence might fail while bootstrapping the image from the server, and the boot loader might display a Status: 0xc0000001 error. The issue is resolved in this release.

  • SCSI WRITE_SAME CDB issued from guest operating systems fails even though storage array supports CDB
    Applications running on guest operating systems on an ESX 4.1 host fail after displaying error messages. This issue occurs only on ESX 4.1 when applications use the SCSI WRITE_SAME CDB. Performance is degraded when applications use an alternate write command. The issue is resolved in this release.

Miscellaneous

  • Warning displayed in /var/log/vmkwarning when ESX starts
    Warning messages similar to the following might appear in
    /var/log/vmkwarning when an ESX host starts:
    WARNING: AcpiShared: 194 SSDTd+: Table length 11108 > 4096
    This warning is generated when an ESX host reads an ACPI table from the BIOS where the table size is more than 4096 bytes. The warning is harmless and you can ignore it. This fix downgrades the warning to a log message.
  • lokkit command might cause ESX hosts to fail
    Do not run the lokkit command. If you do run it, click Cancel to ensure that it does not harm the service console. If you run the lokkit command from the service console of an ESX host, a Firewall Configuration window containing security level or network options is displayed with three buttons: OK, Customize, and Cancel. If you click OK without changing the security level or network options, a setenforce: SELinux is disabled message might appear on the service console. The ESX host fails and does not reboot. This issue occurs because of the presence of the lokkit command in an obsolete system-config-securitylevel-tui package, which was installed on all ESX hosts prior to ESX 4.1 Update 1. The issue is resolved in this release.
  • Performance chart data for networking displays incorrect information
    The stacked per-virtual-machine performance chart data for networking displays incorrect information. You can access the chart from Chart Options in the Advanced Settings on the Performance tab. The network transmit and receive statistics of a virtual machine connected to the Distributed Virtual Switch (DVS) are interchanged and incorrectly displayed. The fix in ESX 4.1 Update 1 ensures that the host agent on ESX hosts collects the correct statistics and passes them to the performance charts UI. This fix also resolves the receive, transmit, and usage network statistics at the host level. Before this fix, the values reported for each of these statistics were zero.
  • Asynchronous drivers added to ESX 4.1 Update 1
    Asynchronous drivers are now integrated into the ESX 4.1 Update 1 ISO image and patch bundles.

  • vSphere Client displays incorrect BIOS version and release date
    The vSphere Client displays an incorrect BIOS version and release date on the Processors page in the Configuration tab. The issue is resolved in this release.

Networking

  • ESX hosts might not boot, or some devices might become inaccessible, when more than six bnx2 ports are used
    An error message similar to the following is displayed on the service console of the ESX host:
    CPU10:4118 - intr vector: 290:out of interrupt vectors. Before this fix, bnx2 devices in MSI-X mode with a jumbo frame configuration support only six ports. The issue is resolved in this release: the bnx2 driver now allocates only one RX queue in MSI-X mode, supports 16 ports, and saves memory resources.
  • ESX might fail on HP systems with loading 32.networking-drivers error
    ESX fails on some HP systems, such as the DL 980 G7 containing HP NC522SFP Dual Port 10GbE Server Adapters, and displays a message similar to loading 32.networking-drivers. This issue occurs when ESX starts the NetXen driver, usually while ESX hosts boot or while network drivers are installed, and depends on the HP system configuration. After you apply this fix, you can use NetXen NICs for Gigabit Ethernet connectivity with ESX hosts.
  • e1000e 1.1.2-NAPI driver added
    In earlier releases, Intel e1000e 1.1.2-NAPI driver was not bundled with ESX but provided separately for download. In this release, e1000e 1.1.2-NAPI driver is bundled with ESX.
  • ESX hosts might fail with bnx2x
    If you use VMware ESX 4.1 with Broadcom bnx2x (in-box driver version 1.54.1.v41.1-1vmw), you might see the following symptoms:
    • The ESX host might frequently disconnect from the network.
    • The ESX host might stop responding with a purple diagnostic screen that displays messages similar to the following:
      [0x41802834f9c0]bnx2x_rx_int@esx:nover: 0x184f stack: 0x580067b28, 0x417f80067b97, 0x
      [0x418028361880]bnx2x_poll@esx:nover: 0x1cf stack: 0x417f80067c64, 0x4100bc410628, 0x
      [0x41802825013a]napi_poll@esx:nover: 0x10d stack: 0x417fe8686478, 0x41000eac2b90, 0x4
    • The ESX host might stop responding with a purple diagnostic screen that displays messages similar to the following:
      0:18:56:51.183 cpu10:4106)0x417f80057838:[0x4180016e7793]PktContainerGetPkt@vmkernel:nover+0xde stack: 0x1
      0:18:56:51.184 cpu10:4106)0x417f80057868:[0x4180016e78d2]Pkt_SlabAlloc@vmkernel:nover+0x81 stack: 0x417f800578d8
      0:18:56:51.184 cpu10:4106)0x417f80057888:[0x4180016e7acc]Pkt_AllocWithUseSizeNFlags@vmkernel:nover+0x17 stack: 0x417f800578b8
      0:18:56:51.185 cpu10:4106)0x417f800578b8:[0x41800175aa9d]vmk_PktAllocWithFlags@vmkernel:nover+0x6c stack: 0x1
      0:18:56:51.185 cpu10:4106)0x417f800578f8:[0x418001a63e45]vmklnx_dev_alloc_skb@esx:nover+0x9c stack: 0x4100aea1e988
      0:18:56:51.185 cpu10:4106)0x417f80057918:[0x418001a423da]__netdev_alloc_skb@esx:nover+0x1d stack: 0x417f800579a8
      0:18:56:51.186 cpu10:4106)0x417f80057b08:[0x418001b6c0cf]bnx2x_rx_int@esx:nover+0xf5e stack: 0x0
      0:18:56:51.186 cpu10:4106)0x417f80057b48:[0x418001b7e880]bnx2x_poll@esx:nover+0x1cf stack: 0x417f80057c64
      0:18:56:51.187 cpu10:4106)0x417f80057bc8:[0x418001a6513a]napi_poll@esx:nover+0x10d stack: 0x417fc1f0d078
    • The bnx2x driver or firmware sends panic messages and writes a backtrace with messages similar to the following in the /var/log/vmkernel log file:
      vmkernel: 0:00:34:23.762 cpu8:4401)<3>[bnx2x_attn_int_deasserted3:3379(vmnic0)]MC assert!
      vmkernel: 0:00:34:23.762 cpu8:4401)<3>[bnx2x_attn_int_deasserted3:3384(vmnic0)]driver assert
      vmkernel: 0:00:34:23.762 cpu8:4401)<3>[bnx2x_panic_dump:634(vmnic0)]begin crash dump


    The issue is resolved in this release.
  • ESX hosts might not boot or might cause some devices to become inaccessible when using NetXen 1G NX3031 devices or multiple 10G NX2031 devices
    When using a NetXen 1G NX3031 or multiple 10G NX2031 devices, you might see an error message similar to the following on the service console of ESX 4.1 hosts after you upgrade from ESX 4.0:
    Out of Interrupt vectors. On ESX hosts where NetXen 1G and NX2031 10G devices do not support NetQueue, the ESX host might run out of MSI-X interrupt vectors. This issue can render ESX hosts unbootable or other devices (such as storage devices) inaccessible. The issue is resolved in this release.

Security

Storage

  • Creation of large .vmdk files on NFS might fail
    When you create a virtual disk (.vmdk file) with a large size, for example, more than 1TB, on NFS storage, the creation process might fail with the error:
    A general system error occurred: Failed to create disk: Error creating disk. This issue occurs when the NFS client does not wait long enough for the NFS storage array to initialize the virtual disk, and the RPC parameter of the NFS client times out. By default, the timeout value is 10 seconds. This fix provides a configuration option to tune the RPC timeout parameter by using the
    esxcfg-advcfg -s <Timeout> /NFS/SetAttrRPCTimeout command.
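    For example, to raise the timeout to 30 seconds and read the value back (the value 30 is illustrative; esxcfg-advcfg -g reads an advanced configuration option):

    # esxcfg-advcfg -s 30 /NFS/SetAttrRPCTimeout
    # esxcfg-advcfg -g /NFS/SetAttrRPCTimeout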
  • "Not supported" SCSI warning messages logged in vmkernel log
    SCSI warnings similar to the following are written to /var/log/vmkernel:
    Apr 29 04:10:55 localhost vmkernel: 0:00:01:08.161 cpu0:4096)WARNING: ScsiHost: 797: SCSI command failed on handle 1072: Not supported.
    You can ignore these messages. They appear because certain SCSI commands are not supported by the storage array. In this release, the warning messages are suppressed in /var/log/vmkwarning to reduce support calls.
  • Messages logged in VMkernel log files when storage arrays are rescanned from vSphere Client
    ESX hosts might log messages similar to the following in the VMkernel log files for LUNs not mapped to ESX hosts: 0:22:30:03.046 cpu8:4315)ScsiScan: 106: Path 'vmhba0:C0:T0:L0': Peripheral qualifier 0x1 not supported. Such messages are logged when ESX hosts start, when you initiate a rescan operation of the storage arrays from the vSphere Client, or every 5 minutes after ESX hosts boot. In this release, the messages are no longer logged.
  • Error messages logged when scanning for LUNs from iSCSI storage array
    ESX hosts might fail with a
    NOT_REACHED bora/modules/vmkernel/tcpip2/freebsd/sys/support/vmk_iscsi.c:648 message on a purple screen when you scan for LUNs from iSCSI storage array by using the esxcfg-swiscsi command from the service console or through vSphere Client (Inventory > Configuration > Storage Adapters > iSCSI Software Adapter). This issue might occur if the tcp.window.size parameter in /etc/vmware/vmkiscsid/iscsid.conf is modified manually. This fix resolves the issue and also logs warning messages in /var/log/vmkiscsid.log for ESX if the tcp.window.size parameter is modified to a value lower than its default.
  • ESX hosts might fail when using LSI SAS HBAs connected to SATA disks
    Data loss might occur on ESX hosts using LSI SAS HBAs connected to SATA disks. This issue occurs when the maximum I/O size is set to more than 64KB in the mptsas driver and LSI SAS HBAs are connected to SATA disks. The issue is resolved in this release.
  • VMW_PSP_RR set as default path selection policy for NetApp storage arrays that support SATP_ALUA
    The VMW_PSP_RR policy is set as the default path selection policy for NetApp storage arrays that support SATP_ALUA. You can set this policy by using vCenter Server or through the command-line interface (CLI).
    To set this policy through vCenter Server:
    1. Click the Configuration tab.
    2. In the left panel under Hardware Adapters, select Storage Adapters.
    3. On the right panel, select the vmhba that connects to the NetApp LUNs.
    4. Right-click the LUN whose path policy you want to change, and select Manage Paths.
    5. In the resulting dialog box, under Policy, set Path Selection to Round Robin.

    To set this policy through the CLI, run the following commands at the service console:
    # esxcli nmp satp addrule --satp="VMW_SATP_ALUA" --psp="VMW_PSP_RR" --claim-option="tpgs_on" --vendor="NETAPP" --description="NetApp arrays with ALUA support"
    # esxcli corestorage claimrule load
    # esxcli corestorage claimrule run

  • VMW_PSP_RR set as default path selection policy for IBM 2810XIV storage arrays
    The VMW_PSP_RR policy is set as the default path selection policy for IBM 2810XIV storage arrays. You can set this policy by using vCenter Server or through the command-line interface (CLI).
    To set this policy through vCenter Server:
    1. Click the Configuration tab.
    2. In the left panel under Hardware Adapters, select Storage Adapters.
    3. On the right panel, select the vmhba that connects to the IBM LUNs.
    4. Right-click the LUN whose path policy you want to change, and select Manage Paths.
    5. In the resulting dialog box, under Policy, set Path Selection to Round Robin.

    To set this policy through the CLI, run the following commands at the service console:
    # esxcli nmp satp addrule --satp="VMW_SATP_ALUA" --psp="VMW_PSP_RR" --claim-option="tpgs_on" --vendor="IBM" --model="2810XIV" --description="IBM 2810XIV arrays with ALUA support"
    # esxcli nmp satp addrule --satp="VMW_SATP_DEFAULT_AA" --psp="VMW_PSP_RR" --claim-option="tpgs_off" --vendor="IBM" --model="2810XIV" --description="IBM 2810XIV arrays without ALUA support"
    # esxcli corestorage claimrule load

    # esxcli corestorage claimrule run
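    After loading and running the claim rules with either of the command sequences above, you can check which rules and policies are in effect. A minimal verification sketch (output formats vary by build):

    # esxcli nmp satp listrules
    # esxcli nmp device list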

  • Target information for some LUNs is missing in the vCenter Server UI
    Target information for LUNs is sometimes not displayed in the vCenter Server UI.
    In releases earlier than ESX 4.1 Update 1, some iSCSI LUNs do not show the target information. To view this information in the Configuration tab, perform the following steps:

    1. Click Storage Adapters under Hardware.
    2. Click iSCSI Host Bus Adapter in the Storage Adapters pane.
    3. Click Paths in the Details pane.

  • txtsetup.oem file on floppy disk points to incorrect location of PVSCSI driver
    Installation of Microsoft Windows Server 2003 guest operating systems on VMware Paravirtual SCSI (PVSCSI) virtual hard disk fails with the following error:
    Insert the disk labeled:
    VMware PVSCSI Controller Disk
    into drive A:
    The error occurs because the txtsetup.oem file on a floppy disk points to the incorrect location of the PVSCSI driver. In this release, the location is corrected.
  • Some virtual machines stop responding during storage rescan operation when any LUN on the host is in an all-paths-down (APD) state
    During a storage rescan operation, some virtual machines stop responding when any LUN on the host is in an all-paths-down (APD) state. For more information, see KB 1016626 at http://kb.vmware.com/kb/1016626. To work around the problem described in the KB, manually set the advanced configuration option /VMFS3/FailVolumeOpenIfAPD to 1 before issuing the rescan, and then reset it to 0 after the rescan operation completes. The issue is resolved in this release: you no longer need to apply this workaround, and virtual machines on non-APD volumes no longer fail during a rescan operation, even if some LUNs are in an all-paths-down state.
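    For reference, the former workaround looks like the following sketch, using the esxcfg-advcfg syntax shown elsewhere in these notes:

    1. Before initiating the rescan, run:
       # esxcfg-advcfg -s 1 /VMFS3/FailVolumeOpenIfAPD
    2. Perform the rescan.
    3. After the rescan completes, run:
       # esxcfg-advcfg -s 0 /VMFS3/FailVolumeOpenIfAPD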
  • Consistent decrease in the size of the available memory when a virtual machine is powered on and powered off
    Powering on and powering off a single virtual machine, or running I/O on the virtual machine, causes a consistent decrease in the available memory. The VMkernel log contains memory allocation error messages. The issue is resolved in this release.

Supported Hardware

  • Power consumption graph does not display information on some ESX hosts
    The power consumption graph displayed from the vSphere Client for ESX 4.1 hosts does not appear on some hosts from certain vendors; the chart shows consumption as 0 watts. To view the power consumption graph from the vSphere Client, click Host, click the Performance tab, and select Power from the drop-down menu. In this release, the power consumption graph is updated to support additional hosts from Bull, Dell, HP, Mitsubishi, NEC, and Toshiba.


  • Additional support for Dell iDRAC devices
    In this release, the Dell iDRAC device (device ID 413C:a102) is supported.

Upgrade and Installation

  • esxupdate dependency resolution is not consistent with bulletin policy
    In the esxupdate utility, the dependency resolution does not match the bulletin building policy. During installation on ESX hosts, you do not see any errors or warning messages. After the installation, you can check the installed VIB information by running the esxupdate --vib-view query command in the service console. The bulletin delivery policy should use the lowest version of VIB needed, so that you can control the fixes you install and avoid unknown or unexpected updates to ESX hosts.

Virtual Machine Management

  • Virtual machines fail to power on in some cases even when swap space exists on ESX 4.1 hosts
    Powering on virtual machines on an ESX 4.1 host fails and logs an Insufficient COS swap to power on error message in /var/log/vmware/hostd.log, even though the service console has 800MB of free space with swap enabled and running the free -m command on the service console shows more than 20MB free. After you apply this fix, you can power on virtual machines when swap space exists on ESX 4.1 hosts.
  • After a virtual machine is migrated, USB devices on the destination host might incorrectly show up as assigned to the virtual machine
    After you migrate a virtual machine to a destination host that contains USB devices and then add additional USB devices to the migrated virtual machine on the destination host, USB devices on the destination host might show up as assigned to the virtual machine even though they have not been assigned to it.


  • Cannot perform pass-through of certain devices
    A warning message similar to the following is logged in the VMkernel log:
    x:x:x.x: Cannot change ownership to PASSTHRU (non-ACS capable switch in hierarchy), where x:x:x.x is the PCI device address.
    This warning message is logged because certain devices cannot perform pass-through when you perform a direct assignment of a device to a virtual machine. Access Control Services (ACS) is introduced by PCI SIG to address potential data corruption with direct assignment of devices. In this release, pass-through of devices that are behind PCI Express (PCIe) switches and without ACS capability is not allowed.

  • Resuming a 64-bit Windows virtual machine from suspended state might cause the applications running on the virtual machine to stop responding
    If a 64-bit Windows virtual machine is resumed from the suspended state or is migrated to an ESX 4.1 host, the applications running on the virtual machine might stop responding, and the Microsoft Windows Event Logs might display error messages similar to the following:
    .NET Runtime
    .NET Runtime version * Fatal Execution Engine Error *
    Application Error:
    Faulting application name: oobe.exe *
    Faulting module name: mscorwks.dll *
    Exception code: 0xc00000005

    The issue is resolved in this release.

vMotion and Storage vMotion

  • Cannot revert to snapshots created on ESX 3.5 hosts
    ESX hosts cannot revert virtual machines to an earlier snapshot after you upgrade from ESX 3.5 Update 4 to ESX 4.1 Update 1. The following message might be displayed in vCenter Server:
    The features supported by the processor(s) in this machine are different from the features supported by the processor(s) in the machine on which the checkpoint was saved. Please try to resume the snapshot on a machine where the processors have the same features. This issue might occur when you create virtual machines on ESX 3.0 hosts, perform vMotion and suspend virtual machines on ESX 3.5 hosts, and resume them on ESX 4.x hosts. In this release, you can revert to snapshots created on ESX 3.5 hosts, and resume the virtual machines on ESX 4.x hosts.
  • Swap file of virtual machine increases in size after completion of storage vMotion
    After a running virtual machine with a memory reservation is moved to a different datastore by using storage vMotion, the virtual machine has a swap file equal in size to its configured memory. Messages similar to the following might be logged in the
    vmware.log file of the virtual machine:
    May 25 16:42:38.756: vmx| FSR: Decreasing CPU reservation by 750 MHz, due to atomic CPU reservation transfer of that amount. New reservation is 0 MHz.FSR: Decreasing memory reservation by 20480 MB, due to atomic memory reservation transfer of that amount. New reservation is 0 pages. CreateVM: Swap: generating normal swap file name.
    When ESX hosts perform storage vMotion, the swap file size of virtual machines increases to memsize. With this release, the swap file size remains the same after storage vMotion.
  • ESX hosts might fail when a storage vMotion task is cancelled while relocating a powered-on virtual machine
    Cancelling a storage vMotion task while relocating a powered-on virtual machine that contains multiple disks on the same datastore to a different datastore on the same host might cause the ESX 4.1 host to fail with the following error:
    Exception: NOT_IMPLEMENTED bora/lib/pollDefault/pollDefault.c:2059. The issue is resolved in this release.

VMware Tools

  • Error displayed when VMware Tools is installed while the Print Spooler service is stopped
    If you install VMware Tools on a virtual machine on which the Print Spooler service is stopped (Administrative Tools > Services > Print Spooler), and if you select the Thin Print feature (Install VMware Tools > Typical or Custom and select Thin Print under Custom Setup > VMware Device Drivers), uninstalling VMware Tools results in the following error message:
    Runtime Error! Program: C:\Program Files\VMware\VMware Tools\TPVCGateway.exe. This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information. Click OK to remove the error message and uninstall VMware Tools. In this release, the error message does not appear.
  • Update Tools button in VMware Tools Properties window disabled for performing VMware Tools upgrade
    The Update Tools button for performing a VMware Tools upgrade from a Windows guest operating system is disabled for non-administrators. The Update Tools button is available under the Options tab of the VMware Tools Properties window. Options in the Shrink and Scripts tabs of the VMware Tools Properties window are also disabled for non-administrators. To block VMware Tools upgrades for all users, set the isolation.tools.autoinstall.disable parameter to TRUE in the VMX file. This release contains only a UI change that disables the Update Tools button for non-administrators; it does not block upgrades from custom applications.
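    For example, blocking automatic Tools upgrades for all users amounts to adding this line to the virtual machine's .vmx file while the virtual machine is powered off:

    isolation.tools.autoinstall.disable = "TRUE"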

  • Installation of vmware-open-vm-tools-xorg-utilities might fail
    If the /usr and / directories are mounted on different devices and you are installing vmware-open-vm-tools-xorg-utilities on guest operating systems, an error similar to the following is displayed:

    failed: ln: creating hard link
    `/usr/lib/vmware-tools/libconf/etc/fonts/fonts.conf' =>
    `/etc/fonts/fonts.conf': Invalid cross-device link

    For example, if you run zypper (the SLES package manager) to install vmware-open-vm-tools-xorg-utilities, you might see this error on the screen. When vmware-open-vm-tools-xorg-utilities tries to create a hard link to /etc/fonts/fonts.conf, a cross-device link issue might occur if the /usr and / directories are mounted on different devices. After applying this fix, you can install vmware-open-vm-tools-xorg-utilities.
  • Creation of quiesced snapshots might not work on non-English versions of Microsoft Windows guest operating systems
    The issue occurs when a Windows known folder path contains non-ASCII characters, for example, in the case of the application data folder in Czech Windows guest operating systems. This issue causes the snapshot operation to fail. The issue is resolved in this release.

  • Creation of quiesced snapshots might fail on some non-English versions of Windows guest operating systems
    Quiesced snapshots might fail on some non-English versions of Windows guest operating systems, such as French versions of Microsoft Windows Server 2008 R2 and Microsoft Windows 7 guest operating systems. This issue occurs because the VMware Snapshot Provider service does not get registered properly as a Windows service or as a COM+ application on some non-English versions of Microsoft Windows guest operating systems. This issue causes the whole snapshot operation to fail, and as a result, no snapshot is created. The issue is resolved in this release.


Known Issues

This section describes the known issues in this release, grouped by subject area.

Known issues not previously documented are marked with the * symbol.

Backup

  • VCB service console commands generate error messages in ESX service console
    When you run VCB service console commands in the service console of ESX hosts, an error message similar to the following might be displayed:
    Closing Response processing in unexpected state:3. You can ignore this message; it does not impact the results of VCB service console commands.

    Workaround: None.

CIM and API

  • SFCC library does not set the SetBIOSAttribute method in the generated XML file
    When the Small Footprint CIM Client (SFCC) library tries to run the SetBIOSAttribute method of the CIM_BIOSService class through SFCC, an XML file containing the following error is returned by SFCC: ERROR CODE="13" DESCRIPTION="The value supplied is incompatible with the type". This issue occurs because the old SFCC library does not support setting the method parameter type in the generated XML file. Because of this issue, you cannot invoke the SetBIOSAttribute method. The SFCC library in ESX 4.1 hosts does not set the method parameter type in the socket stream XML file that is generated.

    A few suggested workarounds are:
    • IBM updates the CIMOM version
    • IBM patches the CIMOM version with this fix
    • IBM uses their own version of SFCC library

Guest Operating System

  • Guest operating system might become unresponsive after you hot-add memory beyond 3GB *
    The Redhat 5.4-64 guest operating system might become unresponsive if it is started with an IDE device attached and you perform a hot-add operation to increase memory from less than 3GB to more than 3GB.

    Workaround: Do not use hot-add to change the virtual machine's memory size from less than or equal to 3072MB to more than 3072MB. Power off the virtual machine to perform this reconfiguration. If the guest operating system is already unresponsive, restart the virtual machine. This problem occurs only when the 3GB mark is crossed while the operating system is running.
  • Windows NT guest operating system installation error with hardware version 7 virtual machines *
    When you install Windows NT 3.51 in a virtual machine that has hardware version 7, the installation process stops responding. This happens immediately after the blue startup screen with the Windows NT 3.51 version appears. This is a known issue in the Windows NT 3.51 kernel. Virtual machines with hardware version 7 contain more than 34 PCI buses, and the Windows NT kernel supports a maximum of 8 PCI buses.

    Workaround: If this is a new installation, delete the existing virtual machine and create a new one. During virtual machine creation, select hardware version 4. You must use the New Virtual Machine wizard to select a custom path for changing the hardware version. If you created the virtual machine with hardware version 4 and then upgraded it to hardware version 7, use VMware vCenter Converter to downgrade the virtual machine to hardware version 4.
  • Installing VMware Tools OSP packages on SLES 11 guest operating systems displays a message stating that the packages are not supported *
    When you install VMware Tools OSP packages on a SUSE Linux Enterprise Server 11 guest operating system, an error message similar to the following is displayed:
    The following packages are not supported by their vendor.

    Workaround: Ignore the message. The OSP packages do not contain a tag that marks them as supported by the vendor. However, the packages are supported.
  • Compiling modules for VMware kernel is supported only for the running kernel *
    VMware currently supports compiling kernel modules only for the currently running kernel.

    Workaround: Boot the kernel before compiling modules for it.


  • No network connectivity after deploying and powering on a virtual machine
    If you deploy a virtual machine created by using the Customization Wizard on an ESX host, and power on the virtual machine, the virtual machine might lose network connectivity.

    Workaround:
    After deploying each virtual machine on the ESX host, select the Connect at power on option in the Virtual Machine Properties window before you power on the virtual machine.

Miscellaneous

  • Running resxtop or esxtop for extended periods might result in memory problems *
    Memory usage by
    resxtop or esxtop might increase over time, depending on what happens on the ESX host being monitored. For example, if the default delay of 5 seconds between two displays is used, resxtop or esxtop might shut down after around 14 hours.

    Workaround: Although you can use the -n option to change the total number of iterations, you should consider running resxtop only when you need the data. If you do have to collect resxtop or esxtop statistics over a long time, shut down and restart resxtop or esxtop periodically instead of running one resxtop or esxtop instance for weeks or months.
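    For bounded collections, you can run resxtop in batch mode instead; a sketch that collects one hour of data at the default 5-second delay (the --server value is a placeholder):

    resxtop --server esxhost.example.com -b -d 5 -n 720 > esxtop-stats.csv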
  • Group ID length in vSphere Client shorter than group ID length in vCLI *
    If you specify a group ID using the vSphere Client, you can use only nine characters. In contrast, you can specify up to ten characters if you specify the group ID by using the
    vicfg-user vCLI command.

    Workaround: None.


  • Warning message appears when you run esxcfg-pciid command
    When you try to run the esxcfg-pciid command in the service console to list the Ethernet controllers and adapters, you might see a warning message similar to the following:
    Vendor short name AMD Inc does not match existing vendor name Advanced Micro Devices [AMD]
    kernel driver mapping for device id 1022:7401 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown
    kernel driver mapping for device id 1022:7409 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown
    kernel driver mapping for device id 1022:7411 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown
    kernel driver mapping for device id 1022:7441 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown


    This issue occurs when both the platform device-descriptor files and the driver-specific descriptor files contain descriptions for the same device.

    Workaround: You can ignore this warning message.

Networking

  • Network connectivity and system fail while control operations are running on physical NICs *
    In some cases, when multiple X-Frame II s2io NICs are sharing the same PCI-X bus, control operations, such as changing the MTU, on the physical NIC cause network connectivity to be lost and the system to fail.

    Workaround: Avoid having multiple X-Frame II s2io NICs in slots that share the same PCI-X bus. In situations where such a configuration is necessary, avoid performing control operations on the physical NICs while virtual machines are doing network I/O.
  • Poor TCP performance might occur in traffic-forwarding virtual machines with LRO enabled *
    Some Linux modules cannot handle LRO-generated packets. As a result, having LRO enabled on a VMXNET2 or VMXNET3 device in a traffic-forwarding virtual machine that is running a Linux guest operating system can cause poor TCP performance. LRO is enabled by default on these devices.

    Workaround: In traffic-forwarding virtual machines running Linux guest operating systems, set the module load time parameter for the VMXNET2 or VMXNET3 Linux driver to include disable_lro=1.
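    A minimal sketch of one way to set this in a Linux guest, assuming the vmxnet3 driver and a distribution that reads /etc/modprobe.d (the file name is arbitrary); the module must be reloaded for the option to take effect:

    echo "options vmxnet3 disable_lro=1" > /etc/modprobe.d/vmxnet3-lro.conf
    rmmod vmxnet3 && modprobe vmxnet3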
  • Memory problems occur when a host uses more than 1016 dvPorts on a vDS *
    Although the maximum number of allowed dvPorts per host on vDS is 4096, memory problems can start occurring when the number of dvPorts for a host approaches 1016. When this occurs, you cannot add virtual machines or virtual adapters to the vDS.

    Workaround: Configure a maximum of 1016 dvPorts per host on a vDS.
  • Reconfiguring VMXNET3 NIC might cause virtual machine to wake up *
    Reconfiguring a VMXNET3 NIC while Wake-on-LAN is enabled and the virtual machine is asleep causes the virtual machine to resume.

    Workaround: Put the virtual machine back into sleep mode manually after reconfiguring (for example, after performing a hot-add or hot-remove) a VMXNET3 vNIC.
  • Recently created VMkernel and service console network adapters disappear after a power cycle *
    If an ESX host is power cycled within an hour of creating a new VMkernel or service console adapter on a vDS, the new adapter might disappear.

    Workaround: If you need to power cycle an ESX host within an hour of creating a VMkernel or service console adapter, run
    esxcfg-boot -r in the host's CLI before starting the host.

Storage

  • Cannot configure iSCSI over NIC with long logical-device names
    Running the command
    esxcli swiscsi nic add -n command from a remote command-line interface or from a service console does not configure iSCSI operation over a VMkernel NIC whose logical-device name exceeds 8 characters. Third-party NIC drivers that use vmnic and vmknic names containing more than 8 characters cannot work with the iSCSI port binding feature on ESX hosts and might display exception error messages in the remote command-line interface. Commands such as esxcli swiscsi nic list, esxcli swiscsi nic add, and esxcli swiscsi vmnic list fail when run from the service console because they cannot handle the long vmnic names created by the third-party drivers.

    Workaround: Third-party NIC drivers must restrict their vmnic names to 8 bytes or fewer to be compatible with the iSCSI port binding requirement.
    Note: If the driver is not used for iSCSI port binding, the driver can still use names of up to 32 bytes. This also works with iSCSI without the port binding feature.
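    For reference, a usage sketch of the port binding command named above, with illustrative adapter names (vmk1, vmhba33):

    # esxcli swiscsi nic add -n vmk1 -d vmhba33
    # esxcli swiscsi nic list -d vmhba33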


  • Large number of storage-related messages in VMkernel log file *
    When ESX starts on a host with several physical paths to storage devices, the VMkernel log file records a large number of storage-related messages similar to the following:

    Nov 3 23:10:19 vmkernel: 0:00:00:44.917 cpu2:4347)Vob: ImplContextAddList:1222: Vob add (@&!*@*@(vob.scsi.scsipath.add)Add path: %s) failed: VOB context overflow
    The system might log similar messages during storage rescans. The messages are expected behavior and do not indicate any failure. You can safely ignore them.

    Workaround: Turn off logging if you do not want to see the messages.
  • Persistent reservation conflicts on shared LUNs might cause ESX hosts to take longer to boot *
    You might experience significant delays while starting hosts that share LUNs on a SAN. This might be because of conflicts between the LUN SCSI reservations.

    Workaround: To resolve this issue and speed up the boot process, change the timeout for synchronous commands during boot time by setting the Scsi.CRTimeoutDuringBoot parameter to 1.

    To modify the parameter from the vSphere Client:
    1. In the vSphere Client inventory panel, select the host, click the Configuration tab, and click Advanced Settings under Software.
    2. Select SCSI.
    3. Change the Scsi.CRTimeoutDuringBoot parameter to 1.

    Also, see KB 1016106 at http://kb.vmware.com/kb/1016106.
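    The same option can presumably be set from the service console with esxcfg-advcfg, assuming the vSphere Client grouping SCSI maps to the /Scsi option path as advanced settings generally do:

    # esxcfg-advcfg -s 1 /Scsi/CRTimeoutDuringBoot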

Server Configuration

  • Upgrading to ESX 4.1 Update 1 fails when LDAP is configured on the host and the LDAP server is not reachable
    Upgrade from ESX 4.x to ESX 4.1 Update 1 fails when you have configured LDAP on the ESX host and the LDAP server is not reachable.

    Workaround: To work around this issue, perform one of the following tasks:

    • Set the following parameters in the /etc/ldap.conf file (sample entries appear after this list):
      • To allow connections to the LDAP server to timeout, set bind_policy to soft.
      • To set the LDAP server connect timeout duration in seconds, set bind_timelimit to 30.
      • To set the LDAP per query timeout duration in seconds, set timelimit to 30.


    • Disable and then enable LDAP after the upgrade is completed.
      1. Disable LDAP by running the esxcfg-auth --disableldap command from the service console before the upgrade.
      2. Enable LDAP by running the esxcfg-auth --enableldap --enableldapauth --ldapserver=xx.xx.xx.xx --ldapbasedn=xx.xx.xx command from the service console after the upgrade.
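    For the first option, the corresponding /etc/ldap.conf entries would look like this sketch (standard nss_ldap option syntax assumed; the 30-second values are the ones suggested above):

    bind_policy soft
    bind_timelimit 30
    timelimit 30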

Supported Hardware

  • ESX might fail to boot when allowInterleavedNUMANodes boot option is FALSE
    On an IBM eX5 host with a MAX 5 extension, ESX fails to boot and displays a
    SysAbort message on the service console. This issue might occur when the allowInterleavedNUMANodes boot option is not set to TRUE. The default value for this option is FALSE.

    Workaround: Set the
    allowInterleavedNUMANodes boot option to TRUE. See KB 1021454 at http://kb.vmware.com/kb/1021454 for more information about how to configure the boot option for ESX hosts.
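    As a sketch, the option can likely be set from the service console with the VMkernel load-time option interface of esxcfg-advcfg (-k to set, -j to read back); see KB 1021454 for the authoritative procedure:

    # esxcfg-advcfg -k TRUE allowInterleavedNUMANodes
    # esxcfg-advcfg -j allowInterleavedNUMANodes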
  • PCI device mapping errors on HP ProLiant DL370 G6 *
    When you run I/O operations on the HP ProLiant DL370 G6 server, you might encounter a purple screen or see alerts about Lint1 Interrupt or NMI on the console. The HP ProLiant DL370 G6 server has two Intel I/O hubs (IOHs) and a BIOS defect in the ACPI Direct Memory Access remapping (DMAR) structure definitions, which causes some PCI devices to be described under the wrong DMA remapping unit. Any DMA access by such incorrectly described PCI devices triggers an IOMMU fault, and the device receives an I/O error. Depending on the device, this I/O error might result either in a Lint1 Interrupt or NMI alert message on the console, or in a system failure with a purple screen.


    Workaround: Update the BIOS to 2010.05.21 or a later version.
  • ESX installations on HP systems require the HP NMI driver *
    ESX 4.1 instances on HP systems require the HP NMI driver to ensure proper handling of non-maskable interrupts (NMIs). The NMI driver ensures that NMIs are properly detected and logged. Without this driver, NMIs, which signal hardware faults, are ignored on HP systems running ESX.
    Caution: Failure to install this driver might result in silent data corruption.

    Workaround: Download and install the NMI driver. The driver is available as an offline bundle from the HP Web site. Also, see KB 1021609 at http://kb.vmware.com/kb/1021609.
  • Virtual machines might become read-only when run on an iSCSI datastore deployed on EqualLogic storage *
    Virtual machines might become read-only if you use an EqualLogic array with an earlier firmware version. The firmware might occasionally drop I/O from the array queue, causing virtual machines to become read-only after the I/O is marked as failed.


    Workaround: Upgrade EqualLogic Array Firmware to version 4.1.4 or later.
  • After you upgrade a storage array, the status for hardware acceleration in the vSphere Client changes to supported after a short delay *
    When you upgrade a storage array's firmware to a version that supports VAAI functionality, vSphere 4.1 does not immediately register the change. The vSphere Client temporarily displays Unknown as the status for hardware acceleration.


    Workaround: This delay is harmless. The hardware acceleration status changes to supported after a short period of time.
  • Slow performance during virtual machine power-on or disk I/O on ESX on the HP G6 Platform with P410i or P410 Smart Array Controller *
    Some hosts might show slow performance during virtual machine power-on or while generating disk I/O. The major symptom is degraded I/O performance, causing large numbers of error messages similar to the following to be logged to
    /var/log/messages:
    Mar 25 17:39:25 vmkernel: 0:00:08:47.438 cpu1:4097)scsi_cmd_alloc returned NULL
    Mar 25 17:39:25 vmkernel: 0:00:08:47.438 cpu1:4097)scsi_cmd_alloc returned NULL
    Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)NMP: nmp_CompleteCommandForPath: Command 0x28 (0x410005060600) to NMP device
    "naa.600508b1001030304643453441300100" failed on physical path "vmhba0:C0:T0:L1" H:0x1 D:0x0 P:0x0 Possible sense data: 0x
    Mar 25 17:39:26 0 0x0 0x0.
    Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)WARNING: NMP: nmp_DeviceRetryCommand: Device
    "naa.600508b1001030304643453441300100": awaiting fast path state update for failoverwith I/O blocked. No prior reservation
    exists on the device.
    Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)NMP: nmp_CompleteCommandForPath: Command 0x28 (0x410005060700) to NMP device
    "naa.600508b1001030304643453441300100" failed on physical path "vmhba0:C0:T0:L1" H:0x1 D:0x0 P:0x0 Possible sense data: 0x
    Mar 25 17:39:26 0 0x0 0x0


    Workaround: Install the HP 256MB P-series Cache Upgrade module from http://h30094.www3.hp.com/product.asp?mfg_partno=462968-B21&pagemode=ca&jumpid=in_r3924/kc.

Upgrade and Installation

  • New: Host upgrade to ESX/ESXi 4.1 Update 1 fails if you upgrade by using Update Manager 4.1 *
    For more information, see KB 1035436 at http://kb.vmware.com/kb/1035436.

  • Installation of the vSphere Client might fail with an error *
    When you install vSphere Client, the installer might attempt to upgrade an out-of-date Microsoft Visual J# runtime. The upgrade is unsuccessful and the vSphere Client installation fails with the error: The Microsoft Visual J# 2.0 Second Edition installer returned error code 4113.

    Workaround: Uninstall all earlier versions of Microsoft Visual J#, and then install the vSphere Client. The installer includes an updated Microsoft Visual J# package.

  • ESX service console displays error messages when upgrading from ESX 4.0 or ESX 4.1 to ESX 4.1 Update 1
    When you upgrade from ESX 4.0 or ESX 4.1 release to ESX 4.1 Update 1, the service console might display error messages similar to the following:
    On the ESX 4.0 host: Error during version check: The system call API checksum doesn't match
    On the ESX 4.1 host: Vmkctl & VMkernel Mismatch, Signature mismatch between Vmkctl & Vmkernel

    You can ignore the messages.

    Workaround: Reboot the ESX 4.1 Update 1 host.

  • esxupdate -a command output does not display inbox drivers when upgrading ESX host from ESX 4.0 Update 2 to ESX 4.1 Update 1
    When you upgrade the ESX host from ESX 4.0 Update 2 to ESX 4.1 Update 1 by using the esxupdate utility, the esxupdate -a command output does not display inbox drivers.

    Workaround: Run the esxupdate -b <ESX410-Update01> info command to view information about all inbox and asynchronous driver bulletins available for the ESX 4.1 Update 1 release.

  • Upgrading to ESX 4.1 Update 1 fails when an earlier version of IBM Management Agent 6.2 is configured on the host
    When you upgrade a host from ESX 4.x to ESX 4.1 Update 1, the upgrade fails, and error messages appear in ESX and VUM:

    • In ESX, the host logs the following message in the esxupdate.log file: DependencyError: VIB rpm_vmware-esx-vmware-release-4_4.1.0-0.0.260247@i386 breaks host API vmware-esx-vmware-release-4 <= 4.0.9.
    • In VUM, the Task & Events tab displays the following message: Remediation did not succeed : SingleHostRemediate: esxupdate error, version: 1.30, "operation: 14: There was an error resolving dependencies.

    This issue occurs when the ESX 4.x host is running an earlier version of IBM Management Agent 6.2.

    Workaround: Install IBM Management Agent 6.2 on the ESX 4.x host and then upgrade it to ESX 4.1 Update 1.

  • Scanning the ESX host against the ESX410-Update01 or ESX410-201101226-SG bulletin displays an incompatible status message
    When you use VUM to perform a scan against an ESX host containing the ESX410-Update01 or ESX410-201101226-SG bulletin, the scan result might show the status as incompatible.

    Workaround:
    • Ignore the incompatible status message and continue with the remediation process.
    • Remove the incompatible status message by installing the ESX410-201101203-UG bulletin, and then perform the scan again.

vMotion and Storage vMotion

  • Hot-plug operations fail after the swap file is relocated *
    Hot-plug operations fail for powered-on virtual machines in a DRS cluster or on a standalone host, and result in the error failed to resume destination; VM not found after the swap file location is changed.

    Workaround: Perform one of the following tasks:
    • Reboot the affected virtual machines to register the new swap file location with them, and then perform the hot-plug operations.
    • Migrate the affected virtual machines using vMotion.
    • Suspend the affected virtual machines.


  • Running vicfg-snmp -r or vicfg-snmp -D on ESX systems fails *
    On an ESX system, when you try to reset the current SNMP settings by running the vicfg-snmp -r command, or to disable the SNMP agent by running the vicfg-snmp -D command, the command fails. The failure occurs because the command tries to run the esxcfg-firewall command, which becomes locked and stops responding. With esxcfg-firewall not responding, the vicfg-snmp -r or vicfg-snmp -D command times out and returns an error. The problem does not occur on ESXi systems.

    Workaround: Restarting the ESX system removes the lock file and applies the previously executed vicfg-snmp command that caused the lock. However, attempts to run vicfg-snmp -r or vicfg-snmp -D still result in an error.

VMware Tools

  • VMware Tools does not perform auto upgrade when a Microsoft Windows 2000 virtual machine is restarted
    When you configure VMware Tools for auto-upgrade during a power cycle by selecting the Check and upgrade Tools before each power-on option under the Advanced pane of the Virtual Machine Properties window, VMware Tools does not perform an auto-upgrade in Microsoft Windows 2000 guest operating systems.


    Workaround:
    Manually upgrade VMware Tools in the Microsoft Windows 2000 guest operating system.
