
VMware ESX 4.1 Update 3 Release Notes

ESX 4.1 Update 3 | 30 Aug 2012 | Build 800380
VMware Tools | 30 Aug 2012 | Build 784891

Last Document Update: 4 Apr 2013

These release notes include the following topics:

  • What's New
  • Earlier Releases of ESX 4.1
  • Before You Begin
  • Patches Contained in this Release
  • Resolved Issues

What's New

The following information describes some of the enhancements available in this release of VMware ESX:

  • Support for additional guest operating systems. This release updates support for many guest operating systems. For a complete list of guest operating systems supported with this release, see the VMware Compatibility Guide.
  • Resolved Issues. This release also delivers a number of bug fixes that are documented in the Resolved Issues section.


Earlier Releases of ESX 4.1

Features and known issues from earlier releases of ESX 4.1 are described in the release notes for each release.


Before You Begin

ESX, vCenter Server, and vSphere Client Version Compatibility

The VMware Product Interoperability Matrix provides details about the compatibility of current and earlier versions of VMware vSphere components, including ESX, VMware vCenter Server, the vSphere Client, and optional VMware products.

ESX, vCenter Server, and VDDK Compatibility

Virtual Disk Development Kit (VDDK) 1.2.2 adds support for ESX 4.1 Update 3 and vCenter Server 4.1 Update 3 releases. For more information about VDDK, see http://www.vmware.com/support/developer/vddk/.

Hardware Compatibility

  • Learn about hardware compatibility

    The Hardware Compatibility Lists are available in the Web-based Compatibility Guide. The Web-based Compatibility Guide is a single point of access for all VMware compatibility guides, and provides the option to search the guides and save the search results in PDF format. For example, with this guide, you can verify whether your server, I/O devices, storage, and guest operating systems are compatible.

    Subscribe to be notified of Compatibility Guide updates through the RSS feed.

  • Learn about vSphere compatibility:

    VMware Product Interoperability Matrix

Installation and Upgrade

Read the ESX and vCenter Server Installation Guide for step-by-step guidance about installing and configuring ESX and vCenter Server.

After successful installation, you must perform several configuration steps, particularly for licensing, networking, and security. Refer to the following guides in the vSphere documentation for guidance on these configuration tasks.

If you have VirtualCenter 2.x installed, see the vSphere Upgrade Guide for instructions about installing vCenter Server on a 64-bit operating system and preserving your VirtualCenter database.

Management Information Base (MIB) files related to ESX are not bundled with vCenter Server. Only MIB files related to vCenter Server are shipped with vCenter Server 4.0.x. You can download all MIB files from the VMware Web site at http://www.vmware.com/download.

Upgrading VMware Tools

VMware ESX 4.1 Update 3 contains the latest version of VMware Tools. VMware Tools is a suite of utilities that enhances the performance of the virtual machine's guest operating system. Refer to the VMware Tools Resolved Issues for a list of issues resolved in this release of ESX related to VMware Tools.

To determine an installed VMware Tools version, see Verifying a VMware Tools build version (KB 1003947).

Upgrading or Migrating to ESX 4.1 Update 3

ESX 4.1 Update 3 offers the following options for upgrading:

  • VMware vCenter Update Manager. vSphere module that supports direct upgrades from ESX 3.5 Update 5a and later, ESX 4.0.x, ESX 4.1, ESX 4.1 Update 1, and ESX 4.1 Update 2 to ESX 4.1 Update 3. For more details, see the VMware vCenter Update Manager Administration Guide.
  • vihostupdate. Command-line utility that supports direct upgrades from ESX 4.0.x, ESX 4.1, ESX 4.1 Update 1, and ESX 4.1 Update 2 to ESX 4.1 Update 3. This utility requires the vSphere CLI. For more details, see vSphere Upgrade Guide.
  • esxupdate. Command-line utility that supports direct upgrades from ESX 4.0.x, ESX 4.1, ESX 4.1 Update 1, and ESX 4.1 Update 2 to ESX 4.1 Update 3. For more details, see ESX 4.1 Patch Management Guide.
  • esxupgrade.sh script. Script that supports upgrades from ESX 3.5 Update 5a and later. For more details, see Knowledge Base article 1009440 and vSphere Upgrade Guide.

Supported Upgrade Paths for Host Upgrade to ESX 4.1 Update 3:

  • ESX-4.1.0-update3-800380.iso
    Upgrade tools supported: VMware vCenter Update Manager with ESX host upgrade baseline; the esxupgrade.sh script
    Upgrade from ESX 3.5 Update 5a: Yes
    Upgrade from ESX 4.0 (includes ESX 4.0 Update 1, Update 2, Update 3, and Update 4): No
    Upgrade from ESX 4.1 (includes ESX 4.1 Update 1 and Update 2): No
  • upgrade-from-esx4.0-to-4.1-update03-800380.zip
    Upgrade tools supported: VMware vCenter Update Manager with host upgrade baseline; esxupdate; vihostupdate
    Note: Install the pre-upgrade bundle (pre-upgrade-from-esx4.0-to-4.1-update03-800380.zip) first if you are using the vihostupdate utility or the esxupdate utility to perform the upgrade.
    Upgrade from ESX 3.5 Update 5a: No
    Upgrade from ESX 4.0: Yes
    Upgrade from ESX 4.1: No
  • update-from-esx4.1-4.1_update03.zip
    Upgrade tools supported: VMware vCenter Update Manager with patch baseline; esxupdate; vihostupdate
    Upgrade from ESX 3.5 Update 5a: No
    Upgrade from ESX 4.0: No
    Upgrade from ESX 4.1: Yes
  • Patch definitions downloaded from the VMware portal (online), for upgrading ESX 4.1 to 4.1.x
    Upgrade tools supported: VMware vCenter Update Manager with patch baseline
    Upgrade from ESX 3.5 Update 5a: No
    Upgrade from ESX 4.0: No
    Upgrade from ESX 4.1: Yes


Updated RPMs and Security Fixes

For a list of RPMs updated in ESX 4.1 Update 3, see the Updated RPMs and Security Fixes document. This document does not apply to the ESXi products.

Upgrading vSphere Client

After you upgrade vCenter Server or the ESX/ESXi host to vSphere 4.1 Update 3, you must upgrade the vSphere Client to vSphere Client 4.1 Update 3. Use the upgraded vSphere Client to access vSphere 4.1 Update 3.

Patches Contained in this Release

This release contains all bulletins for ESX that were released prior to the release date of this product. See the VMware Download Patches page for more information about the individual bulletins.

Patch Release ESX410-Update03 contains the following individual bulletins:

ESX410-201208201-UG: Updates ESX 4.1 Core and CIM Components
ESX410-201208202-UG: Updates the ESX 4.1 Megaraid SAS driver
ESX410-201208203-UG: Updates the ESX 4.1 scsi-hpsa driver
ESX410-201208204-UG: Updates the ESX 4.1 e1000e driver
ESX410-201208205-UG: Updates the ESX 4.1 usbcore driver
ESX410-201208206-UG: Updates the ESX 4.1 usb-storage driver
ESX410-201208207-UG: Updates the ESX 4.1 e2fsprogs package

Patch Release ESX410-Update03 Security-only contains the following individual bulletins:

ESX410-201208101-SG: Updates ESX 4.1 Core and CIM Components
ESX410-201208102-SG: Updates ESX 4.1 libxml2-python package
ESX410-201208103-SG: Updates the ESX 4.1 openssl Component
ESX410-201208104-SG: Updates the ESX 4.1 glibc-nscd package
ESX410-201208105-SG: Updates ESX 4.1 popt and rpm-python libs
ESX410-201208106-SG: Updates the ESX 4.1 gnutls package
ESX410-201208107-SG: Updates the ESX 4.1 perl package

Resolved Issues

This section describes resolved issues in this release in the following subject areas:

CIM and API

  • The present upper limit of 256 file descriptors for the Emulex CIM provider is insufficient
    The Emulex CIM provider exceeds the SFCB allocation of 256 file descriptors, resulting in the exhaustion of socket resources on ESX hosts.

    This issue is resolved by increasing the socket limit and optimizing the preallocated socket pairs.
  • ESX 4.1 Update 2 System Event Log (SEL) is empty on certain servers
    The System Event Log in the vSphere Client might be empty if ESX 4.1 Update 2 runs on certain physical servers. The host's IPMI logs (/var/log/ipmi/0/sel) might also be empty.
    An error message similar to the following might be written to /var/log/messages:
    Dec 8 10:36:09 esx-200 sfcb-vmware_raw[3965]: IpmiIfcSelReadAll: failed call to IpmiIfcSelReadEntry cc = 0xff

    This issue is resolved in this release.

Guest Operating System

  • SMP virtual machine fails with monitor panic while running kexec
    When a Linux kernel crashes, the Linux kexec feature can be used to boot into a special kdump kernel and gather crash dump files. An SMP Linux guest configured with kexec might cause the virtual machine to fail with a monitor panic during this reboot. Error messages such as the following might be logged:

    vcpu-0| CPU reset: soft (mode 2)
    vcpu-0| MONITOR PANIC: vcpu-0:VMM fault 14: src=MONITOR rip=0xfffffffffc28c30d regs=0xfffffffffc008b50


    This issue is resolved in this release.
  • Guest operating system of a virtual machine reports a kernel panic error when you install Solaris 10 with the default memory size on some ESX versions
    When you try to install Solaris 10 on ESX, the guest operating system of the virtual machine reports a kernel panic error with the following message:
    panic[cpu0]/thread=fffffffffbc28340 ..page_unlock:...

    This issue is resolved in this release by increasing the default memory size to 3GB.

Miscellaneous

  • ESX host fails when SCSI commands do not return values to the service console
    When completed SCSI commands do not return values to the service console, the service console displays the following error message and the ESX host fails:
    COS Panic: Lost heartbeat @ esxsc_panic+0x43/0x4f error message

    This issue is resolved in this release.
  • On ESX, the iSCSI initiator login timeout value allocated for software iSCSI and dependent iSCSI adapters is insufficient
    When multiple logins are attempted simultaneously on an ESX host, the login process fails because the login timeout value is too low.

    This issue is resolved by allowing users to configure the login timeout value.
  • ESX host does not capture vdf output for the vm-support utility on visorfs file systems
    The option to capture vdf output is not available in ESX. Without this output, users cannot determine ramdisk space usage.

    This issue is resolved by including the vdf -h command in vm-support.
  • ESX host becomes unresponsive due to USB log spew for IBM devices
    An ESX host might become unresponsive due to constant spew of USB log messages similar to the following for non-passthrough IBM devices such as RSA2 or RNDIS/CDC Ether. This issue occurs even if no virtual machine is configured to use the USB passthrough option.

    USB messages: usb X-Y: usbdev_release : USB passthrough device opened for write but not in use: 0, 1

    This issue is resolved in this release.
  • Hot-removal of SCSI disk fails with error
    After you hot-add a SCSI disk successfully, hot-removing the same disk might fail with a disk not present error. Error messages similar to the following are written to the vmx log file:

    2010-06-22T19:40:26.214Z| vmx| scsi2:11: Cannot retrieve shares: A one-of constraint has been violated (-19)
    2010-06-22T19:40:26.214Z| vmx| scsi2:11: Cannot retrieve sched/limit/: A one-of constraint has been violated (-19)
    2010-06-22T19:40:26.214Z| vmx| scsi2:11: Cannot retrieve sched/bandwidthCap/: A one-of constraint has been violated (-19)
    2010-06-22T19:40:33.285Z| vmx| [msg.disk.hotremove.doesntexist] scsi2:11 is not present.
    2010-06-22T19:40:33.285Z| vmx| [msg.disk.hotremove.Failed] Failed to remove scsi2:11.


    This issue is resolved in this release.
  • Cannot join ESX host to Active Directory when DNS domain suffix differs from the Active Directory domain name

    This issue is resolved in this release.

Networking

  • Virtual machine network limits do not work correctly when the limit is set to a value higher than 2048Mbps
    On an ESX host, if you configure the Network I/O Control (NetIOC) to set the Host Limit for Virtual Machine Traffic to a value higher than 2048Mbps, the bandwidth limit is not enforced.

    This issue is resolved in this release.
  • ESX host fails with a purple screen after a failed vMotion operation
    An ESX host might fail with a purple diagnostic screen that displays an Exception 14 error after a failed vMotion operation.

    @BlueScreen: #PF Exception 14 in world 4362:vemdpa IP 0x418006cf1edc addr 0x588
    3:06:49:28.968 cpu8:4362)Code start: 0x418006c00000 VMK uptime: 3:06:49:28.968
    3:06:49:28.969 cpu8:4362)0x417f80857ac8:[0x418006cf1edc]Port_BlockNotify@vmkernel:nover+0xf stack: 0x4100afa10000
    3:06:49:28.969 cpu8:4362)0x417f80857af8:[0x418006d5c81d]vmk_PortLinkStatusSet@vmkernel:nover+0x58 stack: 0x417fc88e3ad8
    3:06:49:28.970 cpu8:4362)0x417f80857b18:[0x41800723a310]svs_set_vnic_link_state@esx:nover+0x27 stack: 0x4100afb3f530
    3:06:49:28.971 cpu8:4362)0x417f80857b48:[0x418007306a9f]sf_platform_set_link_state@esx:nover+0x96 stack: 0x417f80857b88
    3:06:49:28.971 cpu8:4362)0x417f80857b88:[0x41800725a31e]sf_set_port_admin_state@esx:nover+0x149 stack: 0x41800000002c
    3:06:49:28.972 cpu8:4362)0x417f80857cb8:[0x4180072bb5f0]sf_handle_dpa_call@esx:nover+0x657 stack: 0x417f80857cf8


    This issue has been observed in environments where the Cisco Nexus 1000v switch is used.

    This issue is resolved in this release.
  • IP address range is not displayed for VLANs
    If you run the esxcfg-info command, Network Hint does not display some VLAN IP address ranges on a physical NIC. The IP address range is also not displayed on the vCenter Server UI. An error message similar to the following is written to vmkernel.log:
    Dec 17 03:38:31 vmmon2 vmkernel: 8:19:26:44.179 cpu6:4102)NetDiscover: 732: Too many vlans for srcPort 0x2000002; won't track vlan 273

    This issue is resolved in this release.

  • PCI device driver e1000e does not support alternate MAC address feature on Intel 82571EB Serializer-Deserializer
    The PCI device Intel 82571EB Serializer-Deserializer with Device ID 1060 supports the alternate MAC address feature; however, the e1000e device driver for this device does not support the feature.

    This issue is resolved in this release.
  • IBM server fails with purple diagnostic screen while trying to inject slow path packets
    If the metadata associated with a slow path packet is copied without checking whether enough data is mapped, the metadata moves outside the mapped frame area and causes a page fault. This issue is resolved by mapping enough data to include the metadata before copying it.
  • When you disable coalescing on ESX, the host fails with a purple screen
    In ESX, when vmxnet3 is used as the vNIC in some virtual machines and you turn off packet coalescing, the ESX host fails with a purple screen while the virtual machine boots.

    Correcting the coalescing checking and assertion logic resolves this issue.
  • vmkernel fails to send Reverse Address Resolution Protocol packets when Load Based Teaming changes vNIC port mapping
    If the teaming policy of a dvSwitch port group is Route based on physical NIC load, and the vNIC-to-pNIC mapping changes while some pNICs are saturated, vmkernel fails to send RARP packets to notify the physical switch of the change. As a result, virtual machines lose network connectivity.

    This issue is resolved in this release.
  • vSwitch configuration appears blank on ESX host
    The networking configuration for an ESX host might appear blank on the vSphere Client. Running the command esxcfg-vswitch -l from the local Tech Support Mode console fails with the error:

    Failed to read advanced option subtree UserVars: Error interacting with configuration file
    /etc/vmware/esx.conf: Unlock of ConfigFileLocker failed : Error interacting with configuration file /etc/vmware/esx.conf: I am being asked to delete a .LOCK file that I'm not sure is mine. This is a bad thing and I am going to fail. Lock should be released by (0)


    Error messages similar to the following are written to hostd.log:

    [2011-04-28 14:22:09.519 49B40B90 verbose 'App'] Looking up object with name = "firewallSystem" failed.
    [2011-04-28 14:22:09.610 49B40B90 verbose 'NetConfigProvider'] FetchFn: List of pnics opted out
    [2011-04-28 14:22:09.618 49B40B90 info 'HostsvcPlugin'] Failed to read advanced option subtree UserVars: Error interacting with configuration file /etc/vmware/esx.conf: Unlock of ConfigFileLocker failed : Error interacting with configuration file /etc/vmware/esx.conf: I am being asked to delete a .LOCK file that I'm not sure is mine. This is a bad thing and I am going to fail. Lock should be released by (0)


    This issue is resolved in this release.
  • Network connectivity to a virtual machine configured to use IPv6 might fail after installing VMware Tools
    Network connectivity to guest operating systems using kernel versions 2.6.34 and higher, and configured to use IPv6 might not work after you install VMware Tools.

    This issue is resolved in this release.
  • vSphere Client might not display IPv6 addresses on some guest operating systems
    On some guest operating systems, IPv6 addresses might not be displayed in the vSphere Client or by the vmware-vim-cmd command.

    This issue is resolved in this release.
  • Running the esxcli network connection list command on an ESX host results in an error message
    The esxcli network connection list command might result in an error message similar to the following when the ESX host is running raw IP connections, such as vSphere HA (FDM) agent and ICMP ping:

    terminate called after throwing an instance of 'VmkCtl::Lib::SysinfoException' what(): Sysinfo error on operation returned status : Bad parameter. Please see the VMkernel log for detailed error information Aborted

    This issue is resolved in this release.

Security

  • Update to Apache Tomcat 6.0.35 addresses multiple security issues
    Apache Tomcat has been updated to version 6.0.35 to address multiple security issues.
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2011-3190, CVE-2011-3375, CVE-2011-4858, and CVE-2012-0022 to these issues.
  • Update to ESX service console Perl RPM addresses multiple security issues
    The ESX service console Perl RPM is updated to perl-5.8.8.32.1.8999.vmw to resolve multiple security issues.
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2010-2761, CVE-2010-4410, and CVE-2011-3597 to these issues.
  • Update to ESX service console popt, rpm, rpm-libs, and rpm-python RPMs addresses multiple security issues
    The ESX service console popt, rpm, rpm-libs, and rpm-python RPMs are updated to the following versions to resolve multiple security issues:
    • popt-1.10.2.3-28.el5_8
    • rpm-4.4.2.3-28.el5_8
    • rpm-libs-4.4.2.3-28.el5_8
    • rpm-python-4.4.2.3-28.el5_8
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2012-0060, CVE-2012-0061, and CVE-2012-0815 to these issues.
  • Update to ESX service console GnuTLS RPM addresses multiple security issues
    The ESX service console GnuTLS RPM is updated to version 1.4.1-7.el5_8.2 to resolve multiple security issues.
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2011-4128, CVE-2012-1569, and CVE-2012-1573 to these issues.
  • Update to service console OpenSSL RPM addresses a security issue
    The service console OpenSSL RPM is updated to version 0.9.8e-22.el5_8.3 to resolve a security issue.
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2012-2110 to this issue.
  • Update to ThinPrint agent removes DLL call
    This update removes a call to a non-existing ThinPrint DLL as a security hardening measure.
    VMware would like to thank Moshe Zioni from Comsec Consulting for reporting this issue to us.
  • Update to ESX service console libxml2 RPMs addresses a security issue
    The ESX service console libxml2 RPMs are updated to libxml2-2.6.26-2.1.15.el5_8.2 and libxml2-python-2.6.26-2.1.15.el5_8.2 to resolve a security issue.
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2012-0841 to this issue.

Server Configuration

  • ESX host on which page-sharing is disabled fails with a purple screen
    If you perform a vMotion operation to an ESX host on which the boot-time option page-sharing is disabled, the ESX host might fail with a purple screen.
    Disabling page-sharing severely affects performance of the ESX host. Because page-sharing should never be disabled, starting with this release, the page-sharing configuration option is removed.
  • ESX host logs an incorrect C1E state
    The vmkernel.log file and the dmesg command might show the message C1E enabled by the BIOS even when C1E is disabled in the BIOS, and might not show the message even when C1E is enabled in the BIOS.

Storage

  • Storage log messages for PSA components need enhancement
    The error logging mechanism on an ESX host does not log all storage error messages, which makes troubleshooting storage issues difficult.

    This issue is resolved by enhancing the log messages for the PSA components.
  • Reverting to a snapshot fails when virtual machines reference a shared VMDK file
    In an environment with two powered on virtual machines on the same ESX host that reference a shared VMDK file, attempts to revert to a snapshot on either virtual machine might fail and the vSphere Client might display a File lock error. This issue occurs with both VMFS and NFS datastores.

    This issue is resolved in this release.
  • Corrupt VMFS volume causes VMFS heap memory exhaustion
    When an ESX host encounters a corrupt VMFS volume, the VMFS driver might leak memory, causing VMFS heap exhaustion. This stops all VMFS operations, causing orphaned virtual machines and missing datastores. vMotion operations might not work, and attempts to start new virtual machines might fail with errors about missing files and memory exhaustion. This issue might affect all ESX hosts that share the corrupt LUN and have running virtual machines on that LUN.

    This issue is resolved in this release.
  • VirtualCenter Agent Service fails during cold migration
    VirtualCenter Agent Service (vpxa) might fail during cold migration of a virtual machine. Error messages similar to the following are written to vpxd.log:

    [2011-11-02 12:06:34.557 03296 info 'App' opID=CFA0C344-00000198] [VpxLRO] -- BEGIN task-342826 -- vm-2851 -- vim.VirtualMachine.relocate -- 8D19CD22-FD15-44B9-9384-1DB4C1A7F7A2(ED8C34F5-CE61-4260-A8C1-D9CA5C2A1C4B)
    [2011-11-02 12:20:05.509 03296 error 'App' opID=CFA0C344-00000198] [VpxdVmprovUtil] Unexpected exception received during NfcCopy
    [2011-11-02 12:20:05.535 03296 error 'App' opID=CFA0C344-00000198] [migrate] (SINITAM02) Unexpected exception (vmodl.fault.HostCommunication) while relocating VM. Aborting.


    This issue is resolved in this release.
  • VMW_SATP_LSI plug-in timeout issue results in path thrashing
    Under certain circumstances, logical units (LUs) on storage controllers claimed by the VMW_SATP_LSI plug-in might not respond to path evaluation commands issued by the plug-in within its timeout period of 5 seconds. When two or more vSphere hosts share access to the affected LUs, path thrashing might result (see Understanding Path Thrashing).

    In this release, the timeout value in the VMW_SATP_LSI plug-in is increased to 10 seconds. Before installing this update, consult your storage vendor to determine the guest operating system I/O timeout value.
  • Cannot create a datastore larger than 2TB-512B on an ESX 4.x host using the vSphere Client
    Prior to this release, it was possible to create a datastore larger than 2TB-512B using the vSphere Command-Line Interface. However, this is not a supported configuration.

    Now an attempt to create a datastore larger than 2TB-512B using the vSphere CLI fails gracefully.
  • Warning messages are logged during heartbeat reclaim operation
    VMFS might issue I/Os to a volume when a VMFS heartbeat reclaim operation is in progress or a virtual reset operation is performed on an underlying device. As a result, warning messages similar to the following are logged:

    WARNING: ScsiDeviceIO: 2360: Failing WRITE command (requiredDataLen=512 bytes) to write-quiesced partition naa.9999999999

    Further, an alert message is reported on the ESX console.

    These warnings and alerts are harmless and can be ignored.

    In this release, the alert messages are removed and warnings are changed to log messages.
  • Updated: Installing certain versions of VMware Tools results in log spew
    When you install certain versions of VMware Tools such as version 8.3.7, a spew of messages similar to the following might be written to vmkernel.log:

    Nov 22 11:55:06 [hostname] vmkernel: 107:01:39:59.667 cpu12:21263)VSCSIFs: 329: handle 9267(vscsi0:0):Invalid Opcode (0xd1)
    Nov 22 11:55:06 [hostname] vmkernel: 107:01:39:59.687 cpu5:9487)VSCSIFs: 329: handle 9261(vscsi0:0):Invalid Opcode (0xd1)


    This issue is resolved in this release.
  • Default SATP Plugin is changed for ALUA supported LSI Arrays
    On ESX 4.1 Update 2 hosts, the default Storage Array Type Plug-in (SATP) for LSI arrays was VMW_SATP_LSI, which did not support Asymmetric Logical Unit Access (ALUA) functionality. Starting with this release, the default SATP for LSI arrays that support ALUA is changed to VMW_SATP_ALUA, so that TPGS/ALUA arrays are automatically claimed by the VMW_SATP_ALUA plug-in. The following storage arrays (listed as vendor, model, and description) are claimed by VMW_SATP_ALUA:
    • LSI INF-01-00
    • IBM ^1814* DS4000
    • IBM ^1818* DS5100/DS5300
    • IBM ^1746* IBM DS3512/DS3524
    • DELL MD32xx Dell MD3200
    • DELL MD32xxi Dell MD3200i
    • DELL MD36xxi Dell MD3600i
    • DELL MD36xxf Dell MD3600f
    • SUN LCSM100_F
    • SUN LCSM100_I
    • SUN LCSM100_S
    • SUN STK6580_6780 Sun StorageTek 6580/6780
    • SUN SUN_6180 Sun Storage 6180
    • SGI IS500 SGI InfiniteStorage 4000/4100
    • SGI IS600 SGI InfiniteStorage 4600

  • ESX host might report a corrupted VMFS volume when you delete files from directories that contain more than 468 files
    An attempt to delete a file from a directory containing more than 468 files, or to delete such a directory itself, might fail, and the ESX host might erroneously report that the VMFS volume is corrupted. The ESX host logs error messages similar to the following to the /var/log/vmkernel file:

    cpu10:18599)WARNING: Fil3: 10970: newLength 155260 but zla 2
    cpu10:18599)Fil3: 7054: Corrupt file length 155260, on FD <70, 93>, not truncating

    This issue is resolved in this release.
  • ESX host might fail with a purple diagnostic screen due to an issue in the VMFS module
    An ESX host might fail with a purple diagnostic screen that displays error messages similar to the following because of an issue in the VMFS module:

    @BlueScreen: #PF Exception 14 in world 8008405:vmm0:v013313 IP 0x418001562b6d addr 0x28
    34:15:27:55.853 cpu9:8008405)Code start: 0x418000e00000 VMK uptime: 34:15:27:55.853
    34:15:27:55.853 cpu9:8008405)0x417f816af398:[0x418001562b6d]PB3_Read@esx:nover+0xf0 stack: 0x41000e1c9b60
    34:15:27:55.854 cpu9:8008405)0x417f816af468:[0x4180015485df]Fil3ExtendHelper@esx:nover+0x172 stack: 0x0
    34:15:27:55.854 cpu9:8008405)0x417f816af538:[0x41800154ded4]Fil3_SetFileLength@esx:nover+0x383 stack: 0xa00000001
    34:15:27:55.854 cpu9:8008405)0x417f816af5a8:[0x41800154e0ea]Fil3_SetFileLengthWithRetry@esx:nover+0x6d stack: 0x417f816af5e8
    34:15:27:55.854 cpu9:8008405)0x417f816af638:[0x41800154e38b]Fil3_SetAttributes@esx:nover+0x246 stack: 0x41027fabeac0
    34:15:27:55.854 cpu9:8008405)0x417f816af678:[0x41800101de7e]FSS_SetFileAttributes@vmkernel:nover+0x3d stack: 0x1000b000
    34:15:27:55.855 cpu9:8008405)0x417f816af6f8:[0x418001434418]COWUnsafePreallocateDisk@esx:nover+0x4f stack: 0x4100a81b4668
    34:15:27:55.855 cpu9:8008405)0x417f816af728:[0x418001434829]COWIncrementFreeSector@esx:nover+0x68 stack: 0x3
    34:15:27:55.855 cpu9:8008405)0x417f816af7b8:[0x418001436b1a]COWWriteGetLBNAndMDB@esx:nover+0x471 stack: 0xab5db53a0
    34:15:27:55.855 cpu9:8008405)0x417f816af908:[0x41800143761f]COWAsyncFileIO@esx:nover+0x8aa stack: 0x41027ff88180
    34:15:27:55.855 cpu9:8008405)0x417f816af9a8:[0x41800103d875]FDS_AsyncIO@vmkernel:nover+0x154 stack: 0x41027fb585c0
    34:15:27:55.856 cpu9:8008405)0x417f816afa08:[0x4180010376cc]DevFSFileIO@vmkernel:nover+0x13f stack: 0x4100077c3fc8


    This issue is resolved in this release.
  • ESX host stops responding when VMW_SATP_LSI module runs out of heap memory
    This issue occurs on servers that have access to LUNs claimed by the VMW_SATP_LSI module. A memory leak in the VMW_SATP_LSI module causes it to run out of memory. Error messages similar to the following are logged to the vmkernel.log file:

    Feb 22 14:18:22 [host name] vmkernel: 2:03:59:01.391 cpu5:4192)WARNING: Heap: 2218: Heap VMW_SATP_LSI already at its maximumSize. Cannot expand.
    Feb 22 14:18:22 [host name] vmkernel: 2:03:59:01.391 cpu5:4192)WARNING: Heap: 2481: Heap_Align(VMW_SATP_LSI, 316/316 bytes, 8 align) failed. caller: 0x41800a9e91e5
    Feb 22 14:18:22 [host name] vmkernel: 2:03:59:01.391 cpu5:4192)WARNING: VMW_SATP_LSI: satp_lsi_IsInDualActiveMode: Out of memory.


    The memory leak in the VMW_SATP_LSI module has been resolved in this release.
  • ESX host might fail with a purple screen while resignaturing a VMFS volume
    An ESX host might fail with a purple diagnostic screen that displays error messages similar to the following during a VMFS volume resignaturing operation.

    #DE Exception 0 in world 20519269:helper22-6 @ 0x418024b26a33
    117:05:20:07.444 cpu11:20519269)Code start: 0x418024400000 VMK uptime: 117:05:20:07.444
    117:05:20:07.444 cpu11:20519269)0x417f84b2f290:[0x418024b26a33]Res3_ExtendResources@esx:nover+0x56 stack: 0x4100ab400040
    117:05:20:07.445 cpu11:20519269)0x417f84b2f2e0:[0x418024af9a58]Vol3_Extend@esx:nover+0x9f stack: 0x0
    117:05:20:07.445 cpu11:20519269)0x417f84b2f4f0:[0x418024afd3f6]Vol3_Open@esx:nover+0xdc9 stack: 0x417f84b2f668
    117:05:20:07.446 cpu11:20519269)0x417f84b2f6a0:[0x4180246225d1]FSS_Probe@vmkernel:nover+0x3ec stack: 0x417f84b2f6f0
    117:05:20:07.446 cpu11:20519269)0x417f84b2f6f0:[0x41802463d0e6]FDS_AnnounceDevice@vmkernel:nover+0x1dd stack: 0x3133306161336634


    This issue is resolved in this release.
  • ESX host fails with purple screen and the Out of memory for timers error message during VMware View recompose operation
    An ESX host might fail with a purple diagnostic screen that displays error messages and stack trace similar to the following when you perform a recompose operation on VMware View:

    @BlueScreen: Out of memory for timers
    0:20:06:44.618 cpu38:4134)Code start: 0x418033600000 VMK uptime: 0:20:06:44.618
    0:20:06:44.619 cpu38:4134)0x417f80136cf8:[0x418033658726]Panic@vmkernel:nover+0xa9 stack: 0x417f80136d78
    0:20:06:44.619 cpu38:4134)0x417f80136d28:[0x41803367958e]TimerAlloc@vmkernel:nover+0x10d stack: 0x9522bf175903
    0:20:06:44.619 cpu38:4134)0x417f80136d78:[0x418033679fbb]Timer_AddTC@vmkernel:nover+0x8a stack: 0x4100b8317660
    0:20:06:44.620 cpu38:4134)0x417f80136e08:[0x41803384d964]SCSIAsyncDeviceCommandCommon@vmkernel:nover+0x2f7 stack: 0x41037db8c
    0:20:06:44.620 cpu38:4134)0x417f80136e58:[0x41803383fbed]FDS_CommonAsyncIO@vmkernel:nover+0x48 stack: 0x410092dea0e8


    This issue is resolved in this release.
  • Data corruption occurs with the Emulex LPe12000 driver when handling the 4GB DMA boundary address
    When the Emulex LPe12000 driver fails to set the dma_boundary value in the host template, the value defaults to zero. This causes scatter-gather (SG) list addresses to cross the address boundary defined for the driver, resulting in data corruption.

    This issue is resolved in this release.

Supported Hardware

  • Cannot change power policy of an ESX host on IBM BladeCenter HX5 UEFI server
    When you try to change the power policy of an ESX host running on an IBM BladeCenter HX5 UEFI server, the Power Management Settings page in the vSphere Client displays the following message:

    Technology: Not Available
    Active Policy: Not Supported.


    This issue is resolved in this release.

Upgrade and Installation

  • Installing ESX in graphical mode results in misaligned display on Dell PowerEdge 12G servers
    When you install ESX in graphical mode on Dell PowerEdge 12G servers, the installation screen is misaligned. This issue is observed on all 12G servers. Using the vesa driver for the installer resolves this issue.

    This issue is resolved in this release.

vCenter Server, vSphere Client, and vSphere Web Access

  • Hostd and vpxa services fail and ESX host disconnects from vCenter Server
    An sfcb-vmware_base TIMEOUT error might cause the hostd and vpxa services to fail and the ESX host to disconnect intermittently from vCenter Server. Error messages similar to the following are written to /var/log/messages:

    Jan 30 12:25:17 sfcb-vmware_base[2840729]: TIMEOUT DOING SHARED SOCKET RECV RESULT (2840729)
    Jan 30 12:25:17 sfcb-vmware_base[2840729]: Timeout (or other socket error) waiting for response from provider
    Jan 30 12:25:17 sfcb-vmware_base[2840729]: Request Header Id (1670) != Response Header reqId (0) in request to provider 685 in process 3. Drop response.
    Jan 30 12:25:17 vmkernel: 7:19:02:45.418 cpu32:2836462)User: 2432: wantCoreDump : hostd-worker -enabled : 1


    This issue is resolved in this release.
  • vSphere Client displays incorrect data for a virtual machine
    The vSphere Client overview performance charts might display data for a virtual machine even for the period when the virtual machine was powered off.

    This issue is resolved in this release.

Virtual Machine Management

  • VMX file might become corrupted during quiesced snapshot operation
    When you create a quiesced snapshot of a virtual machine by using the VSS service, the VMware Tools SYNC driver, or a backup agent, hostd writes to the .vmx file. As a result, the .vmx file becomes blank.

    This issue is resolved in this release.
  • Virtual machine fails with monitor panic if paging is disabled
    An error message similar to the following is written to vmware.log:

    Aug 16 14:17:39.158: vcpu-0| MONITOR PANIC: vcpu-1:VMM64 fault 14: src=MONITOR rip=0xfffffffffc262277 regs=0xfffffffffc008c50

    This issue is resolved in this release.
  • Windows 2003 virtual machine on ESX host with NetBurst-based CPU takes a long time to restart
    Restarting a Windows 2003 Server virtual machine that has shared memory pages takes approximately 5 to 10 minutes if you have a NetBurst-based CPU installed on the ESX host. However, you can shut down and power on the same virtual machine without experiencing any delay.

    This issue is resolved in this release.
  • Virtual machine reconfiguration task sometimes fails because of a deadlock
    In some scenarios, the reconfiguration task of a virtual machine fails because of a deadlock that occurs while the reconfigure and datastore change operations are executed.

    This issue is resolved in this release.
  • Deleting a virtual machine results in removal of unassociated virtual disks
    If you create a virtual machine snapshot and later delete the virtual machine, independent or non-independent virtual disks that were detached from the virtual machine earlier might also be deleted.

    This issue is resolved in this release.
  • PCI configuration space values for VMDirectIO between ESX and virtual machine are inconsistent
    When you set the VMDirectIO path for a network interface adapter in pass-through mode and assign it to a virtual machine, the state of the Device Control register's Interrupt Disable bit (INTx) is displayed as enabled for the virtual machine and disabled for ESX. This is incorrect, because the INTx value should be enabled in both cases.

    This issue is resolved in this release.
  • Bridge Protocol Data Unit frames sent from bridged NIC disables physical uplink
    When you enable BPDU guard on the physical switch port, BPDU frames sent from the bridged virtual NIC cause the switch to disable the physical uplink, and the uplink goes down.
    Identify the host that sent the BPDU packets and run esxcfg-advcfg -s 1 /Net/BlockGuestBPDU on that host. This filters out and blocks BPDU packets from the virtual NIC. For the filter to take effect, power on the virtual machines with bridged virtual NICs only after the filter is turned on.

    This issue is resolved in this release.
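    As a sketch of the workaround above, run the following in the service console of the host that sent the BPDU frames; the option path is taken from the text, and the -g verification step is an assumption based on standard esxcfg-advcfg usage:

```shell
# Enable the guest BPDU filter on this host
# (blocks BPDU frames originating from virtual NICs).
esxcfg-advcfg -s 1 /Net/BlockGuestBPDU

# Verify the setting took effect (expected value: 1).
esxcfg-advcfg -g /Net/BlockGuestBPDU

# Power on the virtual machines with bridged virtual NICs
# only after the filter is enabled.
```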
  • Cannot remove the extraConfig settings for a virtual machine through API
    This issue is resolved in this release.

VMware HA and Fault Tolerance

  • Secondary FT virtual machine running on ESX host might fail
    On an ESX host, a secondary Fault Tolerance virtual machine configured with a VMXNET 3 adapter might fail. Error messages similar to the following are written to vmware.log:

    Dec 15 16:11:25.691: vmx| GuestRpcSendTimedOut: message to toolbox timed out.
    Dec 15 16:11:25.691: vmx| Vix: [115530 guestCommands.c:2468]: Error VIX_E_TOOLS_NOT_RUNNING in VMAutomationTranslateGuestRpcError(): VMware Tools are not running in the guest
    Dec 15 16:11:30.287: vcpu-0| StateLogger::Commiting suicide: Statelogger divergence
    Dec 15 16:11:31.295: vmx| VTHREAD watched thread 4 "vcpu-0" died


    This issue does not occur on virtual machines configured with an E1000 adapter.

    This issue is resolved in this release.

vMotion and Storage vMotion

  • Quiesced snapshots fail after you live migrate Windows 2008 virtual machines from ESX 4.0 to ESX 4.1 and then perform a Storage vMotion
    A Storage vMotion operation on ESX 4.1 by default sets disk.enableUUID to true for a Windows 2008 virtual machine, thus enabling application quiescing. Subsequent quiesced snapshot operations fail until the virtual machine undergoes a power cycle.

    This issue is resolved in this release.

VMware Tools

  • VMware Snapshot Provider service (vmvss) is not removed while uninstalling VMware Tools on Windows 2008 R2 guest operating systems

    This issue is resolved in this release.
  • Certain SLES virtual machines do not restart after VMware Tools upgrade
    After you upgrade VMware Tools on certain SLES virtual machines such as SLES 10 SP4 and SLES 11 SP2, attempts to restart the virtual machine might fail with a waiting for sda2....... not responding error message. This issue occurs because the INITRD_MODULES options in /etc/sysconfig/kernel are deleted during the VMware Tools uninstall process.

    This issue is resolved in this release. However, the issue might still occur if you upgrade from an earlier version of VMware Tools to the version of VMware Tools available in this release. See Technical Information Document (TID) 7005233 on the Novell website.
  • VMware Tools upgrade times out on ESX 4.1 Update 1
    On virtual machines running on ESX 4.1 Update 1, attempts to upgrade VMware Tools might time out. Error messages similar to the following are written to vmware.log:

    Nov 30 15:36:34.839: vcpu-0| TOOLS INSTALL finished copying upgrader binary into guest. Starting Upgrader in guest.
    Nov 30 15:36:34.840: vcpu-0| TOOLS INSTALL Sending "upgrader.create 1"
    Nov 30 15:36:34.902: vcpu-0| TOOLS INSTALL Received guest file root from upgrader during unexpected state...ignoring.
    Nov 30 15:36:34.904: vcpu-0| GuestRpc: Channel 6, unable to send the reset rpc.
    Nov 30 15:36:34.905: vcpu-0| GuestRpc: Channel 6 reinitialized.


    This issue is resolved in this release.
  • VMware Tools service fails while starting Windows 2008 R2 virtual machine
    The VMware Tools service (vmtoolsd.exe) fails during the Windows 2008 R2 guest operating system start up process. However, you can start this service manually after the operating system start up process is complete.

    This issue is resolved in this release.
  • Esxtop fails while attempting a batch capture on a server with 128 CPUs
    When you attempt a batch capture on a server with 128 logical CPUs, esxtop fails because the header buffer size is limited. Increasing the header buffer size resolves this issue.
  • Uninstalling or upgrading VMware Tools removes custom entries in modprobe.conf file
    Any changes that you make to the /etc/modprobe.conf file might be overwritten when you uninstall or upgrade VMware Tools.

    This issue is resolved in this release.
  • Windows Server 2008 R2 64-bit Remote Desktop IP virtualization might not work on ESX 4.0 Update 1
    IP virtualization, which allows you to allocate unique IP addresses to RDP sessions, might not work on a Windows Server 2008 R2 64-bit guest operating system running on ESX 4.0 Update 1. This happens because the vsock DLLs are registered by separate 32-bit and 64-bit executable files, which causes the catalog IDs to be out of sync between the 32-bit and 64-bit Winsock catalogs for the vSock LSP.

    This issue is resolved in this release.
  • VMware Tools upgrade does not replace VMCI driver required for Remote Desktop IP virtualization
    When you upgrade VMware Tools from an earlier version to a later version, IP virtualization fails. This happens because the ESX host fails to check for the new VMCI driver version and is unable to install the vsock DLL files.

Top of Page

Known Issues

This section describes known issues in this release in the following subject areas:

Known issues not previously documented are marked with the * symbol.

Backup

  • VCB service console commands generate error messages in ESX service console
    When you run VCB service console commands in the service console of ESX hosts, error messages similar to the following might be displayed:

    Closing Response processing in unexpected state:3
    canceling invocation: server=TCP:localhost:443, moref=vim.SessionManager:ha-sessionmgr, method=logout

    Closing Response processing in unexpected state:3
    [.... error 'App'] SSLStreamImpl::BIORead (58287920) failed: Thread pool shutdown in progress
    [.... error 'App'] SSLStreamImpl::DoClientHandshake (58287920) SSL_connect failed with BIO Erro


    You can ignore these messages. These messages do not impact the results of VCB service console commands.

    Workaround: None.

CIM and API

  • SFCC library does not set the SetBIOSAttribute method parameter type in the generated XML file
    When the Small Footprint CIM Client (SFCC) library tries to run the SetBIOSAttribute method of the CIM_BIOSService class, SFCC returns an XML file containing the following error: ERROR CODE="13" DESCRIPTION="The value supplied is incompatible with the type". This issue occurs because the SFCC library in ESX 4.1 hosts does not support setting the method parameter type in the socket stream XML file that is generated. As a result, you cannot invoke the SetBIOSAttribute method.

    A few suggested workarounds are:
    • IBM updates the CIMOM version
    • IBM patches the CIMOM version with this fix
    • IBM uses their own version of SFCC library

Guest Operating System

  • Installer window is not displayed properly during RHEL 6.1 guest operating system installation (KB 2003588).
  • Guest operating system might become unresponsive after you hot-add memory to more than 3GB
    The Red Hat Enterprise Linux 5.4 64-bit guest operating system might become unresponsive if it starts with an IDE device attached and you perform a hot-add operation to increase memory from less than 3GB to more than 3GB.

    Workaround: Do not use hot-add to change the virtual machine's memory size from less than or equal to 3072MB to more than 3072MB. Power off the virtual machine to perform this reconfiguration. If the guest operating system is already unresponsive, restart the virtual machine. This problem occurs only when the 3GB mark is crossed while the operating system is running.
  • Windows NT guest operating system installation error with hardware version 7 virtual machines
    When you install Windows NT 3.51 in a virtual machine that has hardware version 7, the installation process stops responding. This happens immediately after the blue startup screen with the Windows NT 3.51 version appears. This is a known issue in the Windows NT 3.51 kernel. Virtual machines with hardware version 7 contain more than 34 PCI buses, and the Windows NT kernel supports hosts that have a limit of 8 PCI buses.

    Workaround: If this is a new installation, delete the existing virtual machine and create a new one. During virtual machine creation, select hardware version 4. You must use the New Virtual Machine wizard to select a custom path for changing the hardware version. If you created the virtual machine with hardware version 4 and then upgraded it to hardware version 7, use VMware vCenter Converter to downgrade the virtual machine to hardware version 4.
  • Installing VMware Tools OSP packages on SLES 11 guest operating systems displays a message stating that the packages are not supported
    When you install VMware Tools OSP packages on a SUSE Linux Enterprise Server 11 guest operating system, an error message similar to the following is displayed:
    The following packages are not supported by their vendor.

    Workaround: Ignore the message. The OSP packages do not contain a tag that marks them as supported by the vendor. However, the packages are supported.

Miscellaneous

  • An ESX/ESXi 4.1 U2 host with vShield Endpoint 1.0 installed fails with a purple diagnostic screen mentioning VFileFilterReconnectWork (KB 2009452).

  • Running resxtop or esxtop for extended periods might result in memory problems
    Memory usage by resxtop or esxtop might increase over time, depending on what happens on the ESX host being monitored. For example, when the default delay of 5 seconds between two displays is used, resxtop or esxtop might shut down after approximately 14 hours.

    Workaround: Although you can use the -n option to change the total number of iterations, you should consider running resxtop only when you need the data. If you do have to collect resxtop or esxtop statistics over a long time, shut down and restart resxtop or esxtop periodically instead of running one resxtop or esxtop instance for weeks or months.
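    As a sketch of a bounded capture using the -n option mentioned above, the following uses standard esxtop batch-mode flags; the output file name is an example:

```shell
# Capture 720 samples at the default 5-second interval (about 1 hour),
# then exit, instead of leaving esxtop running for weeks or months.
esxtop -b -d 5 -n 720 > esxtop-stats.csv
```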
  • Group ID length in vSphere Client shorter than group ID length in vCLI
    If you specify a group ID by using the vSphere Client, you can use only nine characters. In contrast, you can specify up to ten characters if you specify the group ID by using the vicfg-user vCLI command.

    Workaround: None


  • Warning message appears when you run esxcfg-pciid command
    When you try to run the esxcfg-pciid command in the service console to list the Ethernet controllers and adapters, you might see a warning message similar to the following:
    Vendor short name AMD Inc does not match existing vendor name Advanced Micro Devices [AMD]
    kernel driver mapping for device id 1022:7401 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown
    kernel driver mapping for device id 1022:7409 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown
    kernel driver mapping for device id 1022:7411 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown
    kernel driver mapping for device id 1022:7441 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown


    This issue occurs when both the platform device-descriptor files and the driver-specific descriptor files contain descriptions for the same device.

    Workaround: You can ignore this warning message.

Networking

  • Certain versions of VMXNET 3 driver fail to initialize the device when the number of vCPUs is not a power of two (KB 2003484).

  • Network connectivity and system fail while control operations are running on physical NICs
    In some cases, when multiple X-Frame II s2io NICs are sharing the same PCI-X bus, control operations, such as changing the MTU, on the physical NIC cause network connectivity to be lost and the system to fail.

    Workaround: Avoid having multiple X-Frame II s2io NICs in slots that share the same PCI-X bus. In situations where such a configuration is necessary, avoid performing control operations on the physical NICs while virtual machines are doing network I/O.
  • Poor TCP performance might occur in traffic-forwarding virtual machines with LRO enabled
    Some Linux modules cannot handle LRO-generated packets. As a result, having LRO enabled on a VMXNET2 or VMXNET3 device in a traffic-forwarding virtual machine that is running a Linux guest operating system can cause poor TCP performance. LRO is enabled by default on these devices.

    Workaround: In traffic-forwarding virtual machines running Linux guest operating systems, set the module load time parameter for the VMXNET2 or VMXNET3 Linux driver to include disable_lro=1.
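    The workaround above can be applied in the Linux guest as follows; the disable_lro=1 option is taken from the text, while the use of /etc/modprobe.conf and the vmxnet3 module name are assumptions that depend on the guest's distribution and virtual NIC type:

```shell
# Persist the option for the VMXNET3 driver (use vmxnet for VMXNET2).
echo "options vmxnet3 disable_lro=1" >> /etc/modprobe.conf

# Reload the driver so the setting takes effect immediately.
modprobe -r vmxnet3
modprobe vmxnet3
```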
  • Memory problems occur when a host uses more than 1016 dvPorts on a vDS
    Although the maximum number of allowed dvPorts per host on vDS is 4096, memory problems can start occurring when the number of dvPorts for a host approaches 1016. When this occurs, you cannot add virtual machines or virtual adapters to the vDS.

    Workaround: Configure a maximum of 1016 dvPorts per host on a vDS.
  • Reconfiguring VMXNET3 NIC might cause virtual machine to wake up
    Reconfiguring a VMXNET3 NIC while Wake-on-LAN is enabled and the virtual machine is asleep causes the virtual machine to resume.

    Workaround: Put the virtual machine back into sleep mode manually after reconfiguring (for example, after performing a hot-add or hot-remove) a VMXNET3 vNIC.
  • Recently created VMkernel and service console network adapters disappear after a power cycle
    If an ESX host is power cycled within an hour of creating a new VMkernel or service console adapter on a vDS, the new adapter might disappear.

    Workaround: If you need to power cycle an ESX host within an hour of creating a VMkernel or service console adapter, run
    esxcfg-boot -r in the host's CLI before starting the host.

Server Configuration

  • Upgrading to ESX 4.1.x fails when LDAP is configured on the host and the LDAP server is not reachable
    Upgrade from ESX 4.x to ESX 4.1.x fails when you have configured LDAP on the ESX host and the LDAP server is not reachable.

    Workaround: To work around this issue, perform one of the following tasks:

    • Set the following parameters in the /etc/ldap.conf file.
      • To allow connections to the LDAP server to timeout, set bind_policy to soft.
      • To set the LDAP server connect timeout duration in seconds, set bind_timelimit to 30.
      • To set the LDAP per query timeout duration in seconds, set timelimit to 30.


    • Disable and then enable LDAP after the upgrade is completed.
      1. Disable LDAP by running the esxcfg-auth --disableldap command from the service console before the upgrade.
      2. Enable LDAP by running the esxcfg-auth --enableldap --enableldapauth --ldapserver=xx.xx.xx.xx --ldapbasedn=xx.xx.xx command from the service console after the upgrade.
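    The two alternatives above can be sketched as the following service console commands; the parameter names and esxcfg-auth flags are taken from the text, and the here-document form for editing /etc/ldap.conf is an assumption (any editor works):

```shell
# Option 1: relax the LDAP timeouts in /etc/ldap.conf before the upgrade.
cat >> /etc/ldap.conf <<'EOF'
bind_policy soft
bind_timelimit 30
timelimit 30
EOF

# Option 2: disable LDAP before the upgrade...
esxcfg-auth --disableldap
# ...then re-enable it after the upgrade (server and base DN are placeholders).
esxcfg-auth --enableldap --enableldapauth --ldapserver=xx.xx.xx.xx --ldapbasedn=xx.xx.xx
```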

Storage

  • ESX host fails with a purple screen when a LUN is added to certain guest operating systems at 10 second intervals *
    The purple diagnostic screen displays error messages similar to the following:

    20:16:16:49.575 cpu0:4120)Code start: 0x41801d800000 VMK uptime: 20:16:16:49.575
    20:16:16:49.576 cpu0:4120)0x417f800c7c18:[0x41801ddb5cf4]fnic_fcpio_cmpl_handler@esx:nover+0x8ef stack: 0x1dc0
    20:16:16:49.577 cpu0:4120)0x417f800c7c68:[0x41801ddb4aa0]fnic_wq_copy_cmpl_handler@esx:nover+0xaf stack: 0x417f800c7d18
    20:16:16:49.577 cpu0:4120)0x417f800c7c88:[0x41801ddb97dd]fnic_isr_msix_wq_copy@esx:nover+0x18 stack: 0x22
    20:16:16:49.578 cpu0:4120)0x417f800c7cc8:[0x41801dc7fd64]Linux_IRQHandler@esx:nover+0x43 stack: 0x83
    20:16:16:49.578 cpu0:4120)0x417f800c7d58:[0x41801d8323e1]IDTDoInterrupt@vmkernel:nover+0x348 stack: 0x4100b1a77ed0
    20:16:16:49.579 cpu0:4120)0x417f800c7d98:[0x41801d8326ba]IDT_HandleInterrupt@vmkernel:nover+0x85 stack: 0x12927df33a3a96
    20:16:16:49.580 cpu0:4120)0x417f800c7db8:[0x41801d83300d]IDT_IntrHandler@vmkernel:nover+0xc4 stack: 0x417f800c7ec0


    This issue is known to occur in environments where the Cisco UCS M81KR Virtual Interface Card (VIC) is used.

    Workaround: Use the async driver. Consult Cisco to determine the correct version.

  • Cannot configure iSCSI over NIC with long logical-device names
    Running the esxcli swiscsi nic add -n command from a remote command line interface or from a service console does not configure iSCSI operation over a VMkernel NIC whose logical-device name exceeds 8 characters. Third-party NIC drivers that use vmnic and vmknic names containing more than 8 characters cannot work with the iSCSI port binding feature on ESX hosts and might display exception error messages in the remote command line interface. Commands such as esxcli swiscsi nic list, esxcli swiscsi nic add, and esxcli swiscsi vmnic list fail from the service console because they cannot handle the long vmnic names created by the third-party drivers.

    Workaround: Third-party NIC drivers must restrict their vmnic names to 8 bytes or fewer to be compatible with the iSCSI port binding requirement.
    Note: If the driver is not used for iSCSI port binding, the driver can still use names of up to 32 bytes. This also works with iSCSI without the port binding feature.
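    With compliant (8-character or shorter) device names, port binding works as described; the vmk1 and vmhba33 names below are example placeholders:

```shell
# Bind a VMkernel NIC to the software iSCSI adapter
# (both names are within the 8-byte limit).
esxcli swiscsi nic add -n vmk1 -d vmhba33

# List the VMkernel NICs currently bound to the adapter.
esxcli swiscsi nic list -d vmhba33
```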


  • Large number of storage-related messages in VMkernel log file
    When ESX starts on a host with several physical paths to storage devices, the VMkernel log file records a large number of storage-related messages similar to the following:

    Nov 3 23:10:19 vmkernel: 0:00:00:44.917 cpu2:4347)Vob: ImplContextAddList:1222: Vob add (@&!*@*@(vob.scsi.scsipath.add)Add path: %s) failed: VOB context overflow
    The system might log similar messages during storage rescans. The messages are expected behavior and do not indicate any failure. You can safely ignore them.

    Workaround: Turn off logging if you do not want to see the messages.
  • Persistent reservation conflicts on shared LUNs might cause ESX hosts to take longer to boot
    You might experience significant delays while starting hosts that share LUNs on a SAN. This might be because of conflicts between the LUN SCSI reservations.

    Workaround: To resolve this issue and speed up the boot process, change the timeout for synchronous commands during boot time to 10 seconds by setting the Scsi.CRTimeoutDuringBoot parameter to 10000.

    To modify the parameter from the vSphere Client:
    1. In the vSphere Client inventory panel, select the host, click the Configuration tab, and click Advanced Settings under Software.
    2. Select SCSI.
    3. Change the Scsi.CRTimeoutDuringBoot parameter to 10000.
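    The same parameter can presumably also be set from the service console; the advanced-option path below mirrors the vSphere Client setting name and is an assumption:

```shell
# Set the timeout for synchronous commands during boot
# to 10 seconds (10000 milliseconds).
esxcfg-advcfg -s 10000 /Scsi/CRTimeoutDuringBoot

# Confirm the new value.
esxcfg-advcfg -g /Scsi/CRTimeoutDuringBoot
```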

Supported Hardware

  • ESX might fail to boot when allowInterleavedNUMANodes boot option is FALSE
    On an IBM eX5 host with a MAX 5 extension, ESX fails to boot and displays a
    SysAbort message on the service console. This issue might occur when the allowInterleavedNUMANodes boot option is not set to TRUE. The default value for this option is FALSE.

    Workaround: Set the
    allowInterleavedNUMANodes boot option to TRUE. See KB 1021454 for more information about how to configure the boot option for ESX hosts.
  • PCI device mapping errors on HP ProLiant DL370 G6
    When you run I/O operations on the HP ProLiant DL370 G6 server, you might encounter a purple screen or see alerts about Lint1 Interrupt or NMI on the console. The HP ProLiant DL370 G6 server has two Intel I/O hubs (IOHs) and a BIOS defect in the ACPI Direct Memory Access remapping (DMAR) structure definitions, which causes some PCI devices to be described under the wrong DMA remapping unit. Any DMA access by such incorrectly described PCI devices triggers an IOMMU fault, and the device receives an I/O error. Depending on the device, this I/O error might result either in a Lint1 Interrupt or NMI alert message on the console, or in a system failure with a purple screen.


    Workaround: Update the BIOS to 2010.05.21 or a later version.
  • ESX installations on HP systems require the HP NMI driver
    ESX 4.1 instances on HP systems (G7 and earlier) require the HP NMI driver to ensure proper handling of non-maskable interrupts (NMIs). The NMI driver ensures that NMIs are properly detected and logged to IML. Without this driver, NMIs, which signal hardware faults, are ignored on HP systems running ESX.
    Caution: Failure to install this driver might result in NMI events being ignored by the OS. Ignoring NMI events may lead to system instability.

    Workaround: Download and install the NMI driver. The driver is available as an offline bundle from the HP Web site. Also, see KB 1021609.
  • Virtual machines might become read-only when run on an iSCSI datastore deployed on EqualLogic storage
    Virtual machines might become read-only if you use an EqualLogic array with firmware earlier than version 4.1.4. The firmware might occasionally drop I/O from the array queue, causing virtual machines to become read-only after marking the I/O as failed.


    Workaround: Upgrade EqualLogic Array Firmware to version 4.1.4 or later.
  • After you upgrade a storage array, the status for hardware acceleration in the vSphere Client changes to supported after a short delay
    When you upgrade a storage array's firmware to a version that supports VAAI functionality, vSphere 4.1 does not immediately register the change. The vSphere Client temporarily displays Unknown as the status for hardware acceleration.


    Workaround: This delay is harmless. The hardware acceleration status changes to supported after a short period of time.
  • Slow performance during virtual machine power-on or disk I/O on ESX on the HP G6 Platform with P410i or P410 Smart Array Controller
    Some hosts might show slow performance during virtual machine power-on or while generating disk I/O. The major symptom is degraded I/O performance, causing large numbers of error messages similar to the following to be logged to
    /var/log/messages:
    Mar 25 17:39:25 vmkernel: 0:00:08:47.438 cpu1:4097)scsi_cmd_alloc returned NULL
    Mar 25 17:39:25 vmkernel: 0:00:08:47.438 cpu1:4097)scsi_cmd_alloc returned NULL
    Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)NMP: nmp_CompleteCommandForPath: Command 0x28 (0x410005060600) to NMP device
    "naa.600508b1001030304643453441300100" failed on physical path "vmhba0:C0:T0:L1" H:0x1 D:0x0 P:0x0 Possible sense data: 0x
    Mar 25 17:39:26 0 0x0 0x0.
    Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)WARNING: NMP: nmp_DeviceRetryCommand: Device
    "naa.600508b1001030304643453441300100": awaiting fast path state update for failoverwith I/O blocked. No prior reservation
    exists on the device.
    Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)NMP: nmp_CompleteCommandForPath: Command 0x28 (0x410005060700) to NMP device
    "naa.600508b1001030304643453441300100" failed on physical path "vmhba0:C0:T0:L1" H:0x1 D:0x0 P:0x0 Possible sense data: 0x
    Mar 25 17:39:26 0 0x0 0x0


    This issue is caused by the lack of a battery-backed cache module in the host.
    Without the battery-backed cache module, the controller operates in Zero Memory Raid mode, severely limiting the number of simultaneous commands that can be processed by the controller.

    Workaround: Install the HP 256MB P-series Cache Upgrade module from the HP website.

Upgrade and Installation

  • Upgrade to ESX 4.1 Update 2 fails if you apply the pre-upgrade bulletin pre-upgrade-from-ESX4.0-to-4.1.0-1.4.348481-release.zip
    If you apply the pre-upgrade bulletin pre-upgrade-from-ESX4.0-to-4.1.0-1.4.348481-release.zip on hosts with patches or updates released after September 2010 (including ESX400-201009001 and ESX 4.0 Update 3), a subsequent upgrade to ESX 4.1 Update 2 fails with the following error:

    Encountered error RunCommandError:
    This is an unexpected error. Please report it as a bug.
    Error Message - Command '['/usr/bin/vim-cmd', 'hostsvc/runtimeinfo']'
    terminated due to signal 6


    This issue does not occur if you apply the pre-upgrade bulletin pre-upgrade-from-esx4.0-to-4.1-502767.zip.

    Workaround: Apply the esxupdate bulletin pre-upgrade-from-esx4.0-to-4.1-502767.zip before applying the upgrade bundle.

    Note: Use this bulletin only if you are performing an upgrade using the esxupdate utility. You do not need to apply this bulletin for an upgrade using the VMware Update Manager.
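    As a sketch, the esxupdate-based workaround amounts to applying the bulletin before the upgrade bundle; the exact esxupdate invocation may vary by version, and the upgrade-bundle file name below is a placeholder:

```shell
# Apply the correct pre-upgrade bulletin first.
esxupdate --bundle=pre-upgrade-from-esx4.0-to-4.1-502767.zip update

# Then apply the ESX 4.1 Update 2 upgrade bundle (file name is an example).
esxupdate --bundle=upgrade-bundle-ESX-4.1.0-update02.zip update
```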

  • Host upgrade to ESX/ESXi 4.1 Update 1 fails if you upgrade by using Update Manager 4.1 (KB 1035436).

  • Installation of the vSphere Client might fail with an error
    When you install vSphere Client, the installer might attempt to upgrade an out-of-date Microsoft Visual J# runtime. The upgrade is unsuccessful and the vSphere Client installation fails with the error: The Microsoft Visual J# 2.0 Second Edition installer returned error code 4113.

    Workaround: Uninstall all earlier versions of Microsoft Visual J#, and then install the vSphere Client. The installer includes an updated Microsoft Visual J# package.

  • ESX service console displays error messages when upgrading from ESX 4.0 or ESX 4.1 to ESX 4.1.x
    When you upgrade from ESX 4.0 or ESX 4.1 release to ESX 4.1.x, the service console might display error messages similar to the following:
    On the ESX 4.0 host: Error during version check: The system call API checksum doesn't match
    On the ESX 4.1 host: Vmkctl & VMkernel Mismatch, Signature mismatch between Vmkctl & Vmkernel

    You can ignore the messages.

    Workaround: Reboot the ESX 4.1.x host.

  • esxupdate -a command output does not display inbox drivers when upgrading ESX host from ESX 4.0 Update 2 to ESX 4.1.x
    When you upgrade the ESX host from ESX 4.0 Update 2 to ESX 4.1.x by using the esxupdate utility, the esxupdate -a command output does not display inbox drivers.

    Workaround:
    Run the esxupdate -b <ESX410-Update01> info command to view information about all inbox and asynchronous driver-bulletins available for the ESX 4.1.x release.

  • Upgrading to ESX 4.1.x fails when an earlier version of IBM Management Agent 6.2 is configured on the host
    When you upgrade a host from ESX 4.x to ESX 4.1.x, the upgrade fails and error messages similar to the following appear in ESX and VUM:

    • In ESX, the host logs the following message in the esxupdate.log file: DependencyError: VIB rpm_vmware-esx-vmware-release-4_4.1.0-0.0.260247@i386 breaks host API vmware-esx-vmware-release-4 <= 4.0.9.
    • In VUM, the Task & Events tab displays the following message: Remediation did not succeed : SingleHostRemediate: esxupdate error, version: 1.30, "operation: 14: There was an error resolving dependencies.

    This issue occurs when the ESX 4.x host is running an earlier version of IBM Management Agent 6.2.

    Workaround: Install IBM Management Agent 6.2 on the ESX 4.x host and then upgrade it to ESX 4.1.x.

  • Scanning the ESX host against the ESX410-Update01 or ESX410-201101226-SG bulletin displays an incompatible status message
    When you use VUM to perform a scan against an ESX host containing the ESX410-Update01 or ESX410-201101226-SG bulletin, the scan result might show the status as incompatible.

    Workaround:
    • Ignore the incompatible status message and continue with the remediation process.
    • Remove the incompatible status message by installing the ESX410-201101203-UG bulletin and performing the scan again.

vMotion and Storage vMotion

  • Hot-plug operations fail after the swap file is relocated
    Hot-plug operations fail for powered-on virtual machines in a DRS cluster or on a standalone host, and result in the error failed to resume destination; VM not found after the swap file location is changed.

    Workaround: Perform one of the following tasks:
    • Reboot the affected virtual machines to register the new swap file location with them, and then perform the hot-plug operations.
    • Migrate the affected virtual machines using vMotion.
    • Suspend the affected virtual machines.

vSphere Command-Line Interface

  • Running vicfg-snmp -r or vicfg-snmp -D on ESX systems fails
    On an ESX system, when you try to reset the current SNMP settings by running the vicfg-snmp -r command, or try to disable the SNMP agent by running the vicfg-snmp -D command, the command fails. The failure occurs because the command tries to run the esxcfg-firewall command, which becomes locked and stops responding. With esxcfg-firewall not responding, the vicfg-snmp -r or vicfg-snmp -D command times out and returns an error. The problem does not occur on ESXi systems.

    Workaround: Restarting the ESX system removes the lock file and applies the previously executed vicfg-snmp command that caused the lock. However, attempts to run vicfg-snmp -r or vicfg-snmp -D still result in an error.

VMware Tools

  • Unable to use the VMXNET network interface card after installing VMware Tools in RHEL3 with the latest errata kernel on ESX 4.1 U1
    Some of the driver modules pre-built for RHEL 3.9 in VMware Tools do not function correctly with the 2.4.21-63 kernel because of ABI incompatibility. As a result, some device drivers, such as vmxnet and vsocket, do not load when you install VMware Tools on RHEL 3.9.

    Workaround: Boot into the 2.4.21-63 kernel. Install the kernel-source and gcc packages for the 2.4.21-63 kernel. Run the vmware-config-tools.pl --compile command. This compiles the modules for the 2.4.21-63 kernel; the resulting modules should work with the running kernel.
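
    The workaround steps can be summarized as console commands (the use of up2date to fetch the packages is an assumption based on RHEL 3 tooling; use your site's package source if it differs):

        # Confirm that the 2.4.21-63 kernel is the running kernel
        uname -r

        # Install the kernel source and compiler matching the running kernel
        up2date kernel-source gcc

        # Recompile the VMware Tools kernel modules for the running kernel
        vmware-config-tools.pl --compile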

  • Windows guest operating systems display incorrect NIC device status after a virtual hardware upgrade
    When you upgrade an ESX host from ESX 3.5 to ESX 4.1 and upgrade the virtual hardware version from 4 to 7, Windows guest operating systems display the NIC device status as
    This hardware device is not connected to the computer (Code 45).

    Workaround: Uninstall and reinstall the NIC. Also uninstall any corresponding NICs that appear as ghosted devices in Device Manager by following the steps in http://support.microsoft.com/kb/315539.

Top of Page