
VMware ESX 4.1 Update 2 Release Notes

ESX 4.1 Update 2 | 27 OCT 2011 | Build 502767
VMware Tools | 27 OCT 2011 | Build 493255

Last Document Update: 12 DEC 2011

These release notes include the following topics:

What's New

The following information describes some of the enhancements available in this release of VMware ESX:

  • Support for new processors – ESX 4.1 Update 2 supports AMD Opteron 6200 series (Interlagos) and AMD Opteron 4200 series (Valencia).

    Note: For the AMD Opteron 6200 and 4200 series (Family 15h) processors, ESX/ESXi 4.1 Update 2 treats each core within a compute unit as an independent core, except while applying licenses. For the purpose of licensing, ESX/ESXi treats each compute unit as a core. For example, a processor with 8 compute units can provide the processor equivalent of 16 cores on ESX/ESXi 4.1 Update 2. However, ESX/ESXi 4.1 Update 2 only requires an 8 core license for each 16-core processor.
  • Support for additional guest operating systems – ESX 4.1 Update 2 adds support for the Ubuntu 11.10 guest operating system. For a complete list of guest operating systems supported with this release, see the VMware Compatibility Guide.

Resolved Issues – In addition, this release delivers a number of bug fixes that are documented in the Resolved Issues section.


Earlier Releases of ESX 4.1

Features and known issues from earlier releases of ESX 4.1 are described in the release notes for each release.


Before You Begin

ESX, vCenter Server, and vSphere Client Version Compatibility

The VMware Product Interoperability Matrix provides details about the compatibility of current and earlier versions of VMware vSphere components, including ESX, VMware vCenter Server, the vSphere Client, and optional VMware products.

ESX, vCenter Server, and VDDK Compatibility

Virtual Disk Development Kit (VDDK) 1.2.1 adds support for ESX 4.1 Update 2 and vCenter Server 4.1 Update 2 releases. For more information about VDDK, see http://www.vmware.com/support/developer/vddk/.

Hardware Compatibility

  • Learn about hardware compatibility

    The Hardware Compatibility Lists are available in the Web-based Compatibility Guide. The Web-based Compatibility Guide is a single point of access for all VMware compatibility guides, and provides the option to search the guides and save the search results in PDF format. For example, with this guide you can verify whether your server, I/O devices, storage, and guest operating systems are compatible.

    Subscribe to be notified of Compatibility Guide updates through the RSS feed.

  • Learn about vSphere compatibility:

    VMware vSphere Compatibility Matrixes (PDF)

Installation and Upgrade

Read the ESX and vCenter Server Installation Guide for step-by-step guidance about installing and configuring ESX and vCenter Server.

After successful installation, you must perform several configuration steps, particularly for licensing, networking, and security. Refer to the following guides in the vSphere documentation for guidance on these configuration tasks.

If you have VirtualCenter 2.x installed, see the vSphere Upgrade Guide for instructions about installing vCenter Server on a 64-bit operating system and preserving your VirtualCenter database.

Management Information Base (MIB) files related to ESX are not bundled with vCenter Server. Only MIB files related to vCenter Server are shipped with vCenter Server 4.0.x. You can download all MIB files from the VMware Web site at http://www.vmware.com/download.

Upgrading VMware Tools

VMware ESX 4.1 Update 2 contains the latest version of VMware Tools. VMware Tools is a suite of utilities that enhances the performance of the virtual machine's guest operating system. Refer to the VMware Tools Resolved Issues for a list of issues resolved in this release of ESX related to VMware Tools.

To determine an installed VMware Tools version, see Verifying a VMware Tools build version (KB 1003947).
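A quick way to check from inside a Linux guest, assuming the vmware-toolbox-cmd utility shipped with VMware Tools is on the PATH (see KB 1003947 for other methods):

```shell
# Print the installed VMware Tools version from inside a Linux guest.
# vmware-toolbox-cmd is installed with VMware Tools; fall back to a
# message if the utility is not present on this system.
command -v vmware-toolbox-cmd >/dev/null 2>&1 \
    && vmware-toolbox-cmd -v \
    || echo "VMware Tools not installed"
```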

Upgrading or Migrating to ESX 4.1 Update 2

ESX 4.1 Update 2 offers the following options for upgrading:

  • VMware vCenter Update Manager. vSphere module that supports direct upgrades from ESX 3.5 Update 5a and later, ESX 4.0.x, and ESX 4.1 and ESX 4.1 Update 1 to ESX 4.1 Update 2. For more details, see VMware vCenter Update Manager Administration Guide.
  • vihostupdate. Command-line utility that supports direct upgrades from ESX 4.0.x, ESX 4.1, and ESX 4.1 Update 1 to ESX 4.1 Update 2. This utility requires the vSphere CLI. For more details, see vSphere Upgrade Guide.
  • esxupdate. Command-line utility that supports direct upgrades from ESX 4.0.x, ESX 4.1, and ESX 4.1 Update 1 to ESX 4.1 Update 2. For more details, see ESX 4.1 Patch Management Guide.
  • esxupgrade.sh script. Script that supports upgrades from ESX 3.5 Update 5a and later. For more details, see Knowledge Base article 1009440 and vSphere Upgrade Guide.
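As a sketch of the vihostupdate path for an ESX 4.0.x host (the host name is a placeholder, and the bundle names are the ones listed in the upgrade table that follows; the pre-upgrade bundle must be installed first, as noted there):

```shell
# Sketch: upgrade an ESX 4.0.x host to 4.1 Update 2 with vihostupdate
# (vSphere CLI). The host name esx01.example.com is a placeholder.
vihostupdate --server esx01.example.com --install \
    --bundle pre-upgrade-from-esx4.0-to-4.1-502767.zip
vihostupdate --server esx01.example.com --install \
    --bundle upgrade-from-esx4.0-to-4.1-update02-502767.zip
# After the host reboots, list installed bulletins to verify.
vihostupdate --server esx01.example.com --query
```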

Supported Upgrade Paths for Host Upgrade to ESX 4.1 Update 2:

  • ESX-4.1.0-update02-502767.iso
    Upgrade tools supported: VMware vCenter Update Manager with ESX host upgrade baseline; esxupgrade.sh
    Supported upgrade paths: ESX 3.5 Update 5a – Yes; ESX 4.0.x (including Updates 1, 2, and 3) – No; ESX 4.1.x (including Update 1) – No

  • upgrade-from-esx4.0-to-4.1-update02-502767.zip
    Upgrade tools supported: VMware vCenter Update Manager with host upgrade baseline; esxupdate; vihostupdate
    Note: Install the pre-upgrade bundle (pre-upgrade-from-esx4.0-to-4.1-502767.zip) first if you are using the vihostupdate utility or the esxupdate utility to perform the upgrade.
    Supported upgrade paths: ESX 3.5 Update 5a – No; ESX 4.0.x (including Updates 1, 2, and 3) – Yes; ESX 4.1.x (including Update 1) – No

  • update-from-esx4.1-4.1_update02.zip
    Upgrade tools supported: VMware vCenter Update Manager with patch baseline; esxupdate; vihostupdate
    Supported upgrade paths: ESX 3.5 Update 5a – No; ESX 4.0.x (including Updates 1, 2, and 3) – No; ESX 4.1.x (including Update 1) – Yes


Updated RPMs and Security Fixes

For a list of RPMs updated in ESX 4.1 Update 2, see Updated RPMs and Security Fixes. This document does not apply to the ESXi products.

Upgrading vSphere Client

After you upgrade vCenter Server or the ESX/ESXi host to vSphere 4.1 Update 2, you are prompted to upgrade the vSphere Client to vSphere Client 4.1 Update 2. The vSphere Client upgrade is mandatory. You must use only the upgraded vSphere Client to access vSphere 4.1 Update 2.

Note: You must use vSphere Client 4.1 Update 2 to access vCenter Servers that are part of a linked mode group with at least one vCenter Server 4.1 Update 2 instance.

Patches Contained in this Release

This release contains all bulletins for ESX that were released earlier to the release date of this product. See the VMware Download Patches page for more information about the individual bulletins.

Patch Release ESX410-Update02 contains the following individual bulletins:

ESX410-201110201-SG: Updates ESX 4.1 Core and CIM Components
ESX410-201110203-UG: Updates the ESX 4.1 bnx2x device driver
ESX410-201110204-SG: Updates the ESX 4.1 Openssl Component
ESX410-201110205-UG: Updates the ESX 4.1 bnx2 device driver
ESX410-201110206-SG: Updates the ESX 4.1 libuser rpm
ESX410-201110207-SG: Updates the ESX 4.1 pam rpm
ESX410-201110208-UG: Updates the ESX 4.1 parted rpm
ESX410-201110209-UG: Updates the ESX 4.1 vaai Component
ESX410-201110210-UG: Updates ESX 4.1 qlogic-fchba-provider
ESX410-201110211-UG: Updates the ESX 4.1 sata-ahci drivers
ESX410-201110212-UG: Updates the ESX 4.1 scsi-aic79xx driver
ESX410-201110213-UG: Updates the ESX 4.1 Megaraid SAS driver
ESX410-201110214-SG: Updates ESX 4.1 nss and nspr Libraries
ESX410-201110215-UG: Updates the ESX 4.1 tg3 network driver
ESX410-201110216-UG: Updates the ESX 4.1 igb network driver
ESX410-201110217-UG: Updates the ESX 4.1 scsi-qla2xxx driver
ESX410-201110219-UG: Updates the ESX 4.1 tzdata rpm
ESX410-201110221-UG: Updates ESX 4.1 esxupdate Package
ESX410-201110222-UG: Updates the ESX 4.1 dhcp-cos Component
ESX410-201110223-UG: Updates ESX 4.1 nx-nic device driver
ESX410-201110224-SG: Updates ESX 4.1 mptsas device driver
ESX410-201110225-SG: Updates the ESX e1000 and e1000e driver
ESX410-201110226-UG: Updates the ESX scsi-hpsa driver
ESX410-201110227-UG: Updates sudo

Resolved Issues

This section describes resolved issues in this release in the following subject areas:

Resolved issues previously documented as known issues are marked with the † symbol.

For a list of resolved issues that might occur if you upgrade from vSphere 4.1 Update 2 to vSphere 5.0, see KB 2007404.

CIM and API

  • RefreshDatastore API does not display any error when invoked on a datastore that is offline
    When you disconnect a storage cable from an ESX 4.0 host, browse to the datastore created on that FC SAN by using the Managed Object Browser (MOB), and invoke the RefreshDatastore method on the MoRef (Managed Object Reference) of the defunct datastore, the RefreshDatastore API does not display any error.

    This issue is resolved in this release.
  • ESX host displays the memory type as unknown
    If you check the Hardware Status of an ESX host on an HP DL980 G7 server connected through vCenter 4.1, the memory type is displayed as Unknown. This happens because the Common Information Model (CIM) does not support the DDR3 memory type.

    This issue is resolved in this release.
  • Installing ESX 4.1 Update 1 embedded and rebooting the system results in a user world core dump
    After installing ESX 4.1 Update 1 and performing a reboot, a user world core dump is generated and the following error message is displayed on the Alt-F11 console screen:

    CoreDump: 1480:Userworld sfcb-qlgc /var/core/sfcb-qlgc-zdump.003.

    The qlogic-fchba-provider-410.1.3.5-260247 (version 1.3.5) shipped with ESX 4.1 Update 2 resolves this issue.
  • CIM server sends invalid alerts to IBM Systems Director
    The CIM server (sfcbd) process on the ESX host might send invalid OMC_IpmiAlertIndication alerts related to missing CPUs to IBM Systems Director software. This issue has been observed on IBM blade servers such as IBM LS22 7901-AC1.

  • The filter_deserialization and ws_deserialize_duration functions in Openwsman require fixes as per a partner request
    A VMware partner requested that VMware fix the filter_deserialization and ws_deserialize_duration functions in Openwsman in the next ESX release. This release resolves these Openwsman issues.

    This issue is resolved in this release.
  • IPMI logs in ESX/ESXi 4.x consume excessive disk space after a BMC reset or firmware update (KB 2000089).

  • The vSphere Client Health Status tab displays incorrect voltages and temperatures on Nehalem-EX based servers
    On Nehalem-EX based servers, the Health Status tab of the vSphere Client displays incorrect voltages and temperatures because of IPMI driver timeouts. To change the default values in the IPMI driver, add the esxcfg-advcfg properties CIMOemIpmiRetries and CIMOemIpmiRetryTime, and then restart the sfcbd service, as shown in the following example:

    esxcfg-advcfg -A CIMOemIpmiRetries --add-desc 'oem ipmi retries' --add-default 4 --add-type int --add-min 0 --add-max 100
    esxcfg-advcfg -A CIMOemIpmiRetryTime --add-desc 'oem ipmi retry time' --add-default 1000 --add-type int --add-min 0 --add-max 10000
    /etc/init.d/sfcbd-watchdog restart

Guest Operating System

  • vMotion intermittently fails with a timeout message
    vMotion intermittently fails with a timeout message and logs the following error message in hostd.log file:

    error -1: Failed to launch virtual machine process. Failed to launch peer process.

    This message is logged because the name resolution call is performed before the signal handlers are installed.

    This issue is resolved in this release. The fix performs the name resolution call after the signal handlers are installed.

  • Memory Hotplug feature does not work on SLES 11 32-bit guest operating systems
    Memory Hotplug is not a supported feature on SLES 11 32-bit guest operating systems. The Memory/CPU Hotplug option at Edit Settings > Options is disabled in this release.

  • Certain Linux virtual machines configured with the clock=pit parameter fail to boot
    Certain Linux virtual machines such as SLES 10 SP2 configured with the clock=pit parameter might fail to boot. This issue is known to occur when you use hardware virtualization on Intel systems.

    This issue is resolved in this release.

  • Compiling modules for VMware kernel is supported only for the running kernel

    This issue is resolved in this release.

  • No network connectivity after deploying and powering on a virtual machine
    If you deploy a virtual machine created by using the Customization Wizard on an ESX host, and power on the virtual machine, the virtual machine might lose network connectivity.

    This issue is resolved in this release.

Miscellaneous

  • Cannot log in to the ESX 4.1 host using non-local user credentials
    This issue occurs because of changes made to the /etc/security/access.conf file in ESX 4.1. The -:ALL:ALL entry in the access.conf file causes NIS and any other non-local user authentication to fail.

    This issue is resolved in this release by adding a new plugins.vimsvc.shellAccessForAllUsers parameter in the vSphere Client. You can now enable non-local user authentication by setting plugins.vimsvc.shellAccessForAllUsers to true and reconnecting to vCenter Server.
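For illustration, the pre-fix behavior corresponds to a pam_access rule of this shape in /etc/security/access.conf (a sketch; the surrounding entries on a given host may differ):

```
# /etc/security/access.conf (ESX 4.1, before the fix)
# The catch-all deny on the last line rejects every account that is not
# explicitly allowed above it, which is why NIS and other non-local
# logins fail.
+:root:ALL
-:ALL:ALL
```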
  • Adds memory imbalance warning for ESX hosts on which NUMA nodes are enabled
    Performance of ESX hosts on which NUMA nodes are enabled might be affected if the memory associated with one NUMA node is over 30% larger than other nodes. Starting with ESX 4.1 Update 2, the following warning appears if an ESX host encounters a memory imbalance:

    A significant memory (DRAM) imbalance (more than 30 percent) was detected between NUMA nodes x(a MB) and y(b MB). This may impact performance.
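The 30 percent threshold can be reproduced with simple integer arithmetic; a minimal sketch with hypothetical node sizes (the values below are invented for illustration):

```shell
# Hypothetical NUMA node memory sizes in MB (not from a real host).
node_x=49152
node_y=32768
# Percentage by which node_x exceeds node_y; the new warning appears
# when this exceeds 30.
imbalance=$(( (node_x - node_y) * 100 / node_y ))
echo "imbalance=${imbalance}%"   # prints imbalance=50%
```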
  • When you reset a Windows 2003 Service Pack 2 R2 virtual machine using vSphere 4.1, the virtual machine fails with a blue screen
    In the vSphere 4.1 client, if you select the Restart Guest option for a symmetric multiprocessing virtual machine with Windows 2003 Service Pack 2 R2 running on a uniprocessor kernel, the virtual machine fails and displays a blue screen.

    This issue is resolved in this release.
  • The datastore browser is unable to manage symbolic links on NFS volumes when connected to ESX host through vSphere Client 4.1
    When you connect to the ESX host through vSphere Client 4.1, the Datastore browser displays incorrect or inconsistent files for symlinks on NFS volumes.

    This issue is resolved in this release. The symlink path is no longer canonicalized.
  • ESX host fails with a purple diagnostic screen when you run 'less /proc/scsi/*/*' from the service console
    Some SCSI drivers ported from Linux provide entries under /proc/scsi/ and write out more characters than the kernel requested, leading to memory corruption when you run the command less /proc/scsi/*/* from the service console. This causes the ESX host to fail with a purple diagnostic screen.

    This issue is resolved in this release.
  • When one storage controller takes over another, the datastores that reside on LUNs of the taken-over controller might become inactive
    The datastores that reside on LUNs of the taken-over controller might become inactive and remain inactive until you perform a manual rescan.

    This issue is resolved in this release. A manual rescan of datastores is not required when controllers change.
  • In ESX 4.1, the hostd process might fail frequently
    Objects shared between a task and the internationalization filter cause the hostd process to fail frequently.

    This issue is resolved in this release. The fix in ESX 4.1 Update 2 clones the object instead of sharing the object.
  • Modifying snapshots using the command vim-cmd fails in certain snapshot tree structures
    Modifying snapshots using the commands vim-cmd vmsvc/snapshot.remove or vim-cmd vmsvc/snapshot.revert fails when applied against certain snapshot tree structures.

    This issue is resolved in this release. Now a unique identifier, snapshotId, is created for every snapshot associated to a virtual machine. You can get the snapshotId by running the command vim-cmd vmsvc/snapshot.get <vmid>. You can use the following new syntax when working with the same commands:

    Revert to snapshot: vim-cmd vmsvc/snapshot.revert <vmid> <snapshotId> [suppressPowerOff/suppressPowerOn]
    Remove a snapshot: vim-cmd vmsvc/snapshot.remove <vmid> <snapshotId>
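Put together, a sketch of the new workflow on the service console (the VM id and snapshotId below are placeholders you would read from the command output):

```shell
# List registered virtual machines to find the vmid (42 is a placeholder).
vim-cmd vmsvc/getallvms
# List the snapshots and their snapshotId values for that VM.
vim-cmd vmsvc/snapshot.get 42
# Revert to snapshotId 1, then remove it (both ids are placeholders).
vim-cmd vmsvc/snapshot.revert 42 1 suppressPowerOff
vim-cmd vmsvc/snapshot.remove 42 1
```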
  • Virtual machine configured with fixed passthrough device fails to power on
    A virtual machine configured with 14 or more PCIe devices might fail to power on, if one of them is a fixed passthrough (FPT) device. Sometimes the virtual machine would boot successfully once, but fail to power on in subsequent reboots. An error message similar to the following is written to vmware.log:

    Mar 25 20:56:17.659: vcpu-0| Msg_Post: Error
    Mar 25 20:56:17.659: vcpu-0| [msg.pciPassthru.mmioOutsidePCIHole] PCIPassthru 005:00.0: Guest tried to map 32 device pages (with base address of 0x0) to a range occupied by main memory. This is outside of the PCI Hole. Add pciHole.start = "0" to the configuration file and then power on the VM.


    This issue is resolved in this release.

  • Cold migration fails between ESX 4.1 hosts that are in two different clusters
    When you perform cold migration between two ESX 4.1 hosts present on two different clusters, the migration fails.
    Enabling VCB in the ESX firewall resolves this issue.

  • CPU shares are not enforced in an under-committed ESX host
    In an under-committed ESX host, the CPU shares for a virtual machine in a resource pool having limits on its CPU resources are not enforced. This might lead to performance issues.

    This issue is resolved in this release.

  • esxtop tool displays abnormal values in its memory view
    In the memory view of esxtop tool, you might observe abnormal values for the amount of memory allocated on a NUMA node for a virtual machine. For example, for a virtual machine with 6GB memory, the total remote memory or the total local memory might be shown as 4TB.

    This issue is resolved in this release.

  • ESX host fails with Usage error in dlmalloc error
    An ESX host might fail with a purple diagnostic screen that displays an error message similar to the following:

    #0 Panic (fmt=0x41800c703488 "PANIC %s:%d - Usage error in dlmalloc")

    You might see this error in the UserEventfdClose path of the backtrace information.

    This issue is resolved in this release.

  • Some HP servers running ESX/ESXi 4.1 hosts with Automatic Power On option disabled in BIOS power on automatically when the power cable is connected
    Some HP Servers running ESX/ESXi 4.1 hosts with the Automatic Power On option disabled in BIOS automatically power on when you connect the power cable. This issue has been observed on HP Servers DL380 G5/G6/G7 and DL360 G7.

    This issue is resolved in this release.

  • Updates the sudo package
    This release updates the sudo package for the ESX service console to sudo-1.7.2p1-9.el5_5. This resolves an issue where a cache error occurs when a user name or group name that uses mixed case is fetched from an Active Directory Server through LDAP or QAS.

  • Updates the tzdata package
    The updated tzdata package (tzdata-2011h-2.el5.x86_64.rpm) provided in this release addresses the changes in daylight saving time (DST) observation for Newfoundland, Labrador and the Russian Federation.

  • The Machine Check Exception purple diagnostic screen does not guide users in case of certain hardware issues
    This issue is resolved in this release. Now the following message displays in the Machine Check Exception purple diagnostic screen:

    System has encountered a Hardware Error - Please contact the hardware vendor.


  • Kerberos user authentication fails on certain ESX hosts
    You might not be able to use Kerberos user credentials to log in to an ESX host configured for Kerberos user authentication. This issue occurs in environments where only the TCP 88 port is enabled for Kerberos authentication.

    This issue is resolved in this release.

  • ESX host sometimes fails due to vmkiscsid.log file size
    When you run VMware ESX 4.x with a software iSCSI initiator, the /var/log/vmkiscsid.log file might not get cleared, unlike other syslog-generated log files. As a result, this file grows to a very large size if the ESX host cannot communicate with the iSCSI storage.

    This issue is resolved in this release. After you install this update, when vmkiscsid.log exceeds 100KB in size, the logrotate utility creates a new log file, keeping up to six files. After the sixth file, the first file is overwritten with new messages.
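The described rotation corresponds to a logrotate stanza of roughly this shape (a sketch matching the stated 100KB, six-file behavior, not the literal configuration shipped with the update):

```
/var/log/vmkiscsid.log {
    # Rotate once the log exceeds 100KB, as described above.
    size 100k
    # Keep up to six files before the oldest is overwritten.
    rotate 6
}
```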

  • Updates to the openldap COS RPMs
    In this release, the openldap Service Console RPMs are updated to version 2.3.43-12.el5_6.7 for the latest bug fixes.

  • Updates Tomcat
    In this release, Tomcat is updated to version 6.0.32 for the latest bug fixes from the Apache Software Foundation.

Networking

  • ESX host displays an error message when you delete a vNetwork Distributed Switch
    Attempts to remove a vNetwork Distributed Switch from the configuration section result in an error message similar to the following:

    Call "HostNetworkSystem.UpdateNetworkConfig" for object "networkSystem-3739" on vCenter Server "loganvc29.eng.vmware.com" failed.
    Operation failed, diagnostics report: DVS DvsPortset-0 has 1 shadow or zombie port.


    This issue is resolved in this release.
  • The first packet that e1000 vNIC sends has an invalid MAC address
    The latest guest operating system drivers write zeros to RAL/RAH before setting a valid MAC address. As a result, the first packet that the e1000 vNIC sends has the following MAC address: 00:00:00:xx:xx:xx.

    This issue is resolved in this release. The e1000 vNIC now sends out a packet only after a valid MAC address (RAL is nonzero) is set.
  • ESX host configured with vNetwork Distributed Switch (vDS) disconnects from the vCenter Server and does not reconnect even after multiple attempts
    The portset global lock is present for port enable, but not for port disable. When port disable modifies vDS properties, it conflicts with other port states that are modifying vDS properties. As a result of the conflict, the network connection is lost and the ESX host disconnects from the vCenter Server. Also, a Limit exceeded error message appears in the VMkernel log.

    This issue is resolved in this release. The fix adds a portset global lock for port disable.
  • Network connectivity might fail when VLANs are configured with physical NICs using be2net or ixgbe drivers
    On a vNetwork Distributed Switch, when you configure a single VLAN or a VLAN ID range for dvUplink portgroups, network connectivity might fail for the single VLAN or for the VLAN that is assigned the highest VLAN ID from the VLAN ID range, if you have installed a be2net or ixgbe driver for a physical NIC.

    This issue is resolved in this release.

  • Upgrades the igb driver to version 2.1.11.1.

  • ESX host might fail with a purple diagnostic screen due to an issue with the NetXen adapter
    Due to an issue with the NetXen adapter driver software, if adapter firmware auto-reset and NetQueue delete operations run concurrently, the host might fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • Network connectivity might fail if VLANS and PVLANs are configured on the same vNetwork Distributed Switch
    Virtual machines on a vNetwork Distributed Switch (vDS) configured with VLANs might lose network connectivity upon boot if you configure Private VLANs on the vDS. However, disconnecting and reconnecting the uplink solves the problem. This issue has been observed on be2net and ixgbe NICs.

    This issue is resolved in this release.

  • Purple diagnostic screen with vShield or third-party vSphere integrated firewall products (KB 2004893).

  • Upgrades the tg3 driver to version 3.110h.v41.1.

  • DHCP server runs out of IP addresses
    An issue in the DHCP client might cause the DHCP client to send DHCPRELEASE messages using the broadcast MAC address of the server. However, the release messages might be dropped by an intermediate DHCP proxy router. Eventually the DHCP server runs out of IP addresses.

    This issue is resolved in this release. Now the release messages are sent using unicast addresses.

Security

  • Resolves an integer overflow issue in the SFCB
    This release resolves an integer overflow issue in the SFCB that arises when the httpMaxContentLength value is changed from its default to 0 in /etc/sfcb/sfcb.cfg. The integer overflow could allow remote attackers to cause a denial of service (heap memory corruption) or possibly execute arbitrary code via a large integer in the Content-Length HTTP header.

    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2010-2054 to this issue.
  • Updates the pam COS RPM
    In this release, the Service Console pam RPM is updated to version 1.1.0-1.5814, which resolves multiple security issues with PAM modules. 

    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2010-3316, CVE-2010-3435, and CVE-2010-3853 to these issues.
  • Updates the openssl COS RPM 
    In this release, the Service Console openssl RPM is updated to openssl-0.9.8e.12.el5_5.7 which resolves two security issues. 

    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2008-7270 and CVE-2010-4180 to these issues.
  • Updates the NSS and NSPR COS RPM 
    In this release, the license text in the source files of the Network Security Services (NSS) and Netscape Portable Runtime (NSPR) libraries is updated to distribute these packages under the terms of the GNU Lesser General Public License (LGPL) 2.1 instead of the Mozilla Public License (MPL). In addition, the Service Console NSS and NSPR RPMs are updated to nspr-4.8.6-1 and nss-3.12.8-4, which address multiple security issues.

    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2010-3170 and CVE-2010-3173 to these issues.
  • Updates the libuser COS RPM    
    In this release, the Service Console libuser RPM is updated to version 0.54.7-2.1.el5_5.2 to resolve a security issue.

    The Common Vulnerabilities and Exposures Project (cve.mitre.org) has assigned the name CVE-2011-0002 to this issue.
  • Updates to the third-party glibc library
    Updates the glibc third-party library to 2.5-58.el5_6.2 to resolve multiple security issues.
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2010-0296, CVE-2011-0536, CVE-2011-1071, CVE-2011-1095, CVE-2011-1658, and CVE-2011-1659 to these issues.

  • Update to mpt2sas driver
    In this release, the mpt2sas driver is updated to resolve multiple security issues that allow local user privilege escalation.

    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2011-1494 and CVE-2011-1495 to these issues.
  • Updates the Intel e1000 and e1000e drivers
    This resolves a security issue in the e1000 and e1000e Linux drivers for Intel PRO/1000 adapters that allows a remote attacker to bypass packet filters and send manipulated packets.

    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2009-4536 to this issue.

  • Updates JRE
    In this release, JRE is updated to version 1.6.0_24, which addresses multiple security issues that existed in earlier releases.

    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the following names to the security issues fixed in JRE 1.6.0_24: CVE-2010-4422, CVE-2010-4447, CVE-2010-4448, CVE-2010-4450, CVE-2010-4451, CVE-2010-4452, CVE-2010-4454, CVE-2010-4462, CVE-2010-4463, CVE-2010-4465, CVE-2010-4466, CVE-2010-4467, CVE-2010-4468, CVE-2010-4469, CVE-2010-4470, CVE-2010-4471, CVE-2010-4472, CVE-2010-4473, CVE-2010-4474, CVE-2010-4475, CVE-2010-4476.

    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the following names to the security issues fixed in JRE 1.6.0_22: CVE-2010-1321, CVE-2010-3541, CVE-2010-3548, CVE-2010-3549, CVE-2010-3550, CVE-2010-3551, CVE-2010-3552, CVE-2010-3553, CVE-2010-3554, CVE-2010-3555, CVE-2010-3556, CVE-2010-3557, CVE-2010-3558, CVE-2010-3559, CVE-2010-3560, CVE-2010-3561, CVE-2010-3562, CVE-2010-3563, CVE-2010-3565, CVE-2010-3566, CVE-2010-3567, CVE-2010-3568, CVE-2010-3569, CVE-2010-3570, CVE-2010-3571, CVE-2010-3572, CVE-2010-3573, CVE-2010-3574.

  • Updates ESX Service Console Operating System kernel
    In this release, the ESX Service Console Operating System (COS) kernel is updated to kernel-2.6.18-238.9.1.el5 which fixes multiple security issues in the COS kernel.

    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2010-1083, CVE-2010-2492, CVE-2010-2798, CVE-2010-2938, CVE-2010-2942, CVE-2010-2943, CVE-2010-3015, CVE-2010-3904, CVE-2010-3066, CVE-2010-3067, CVE-2010-3078, CVE-2010-3086, CVE-2010-3477, CVE-2010-3432, CVE-2010-3442, CVE-2010-3699, CVE-2010-3858, CVE-2010-3859, CVE-2010-3865, CVE-2010-3876, CVE-2010-3880, CVE-2010-4083, CVE-2010-4157, CVE-2010-4161, CVE-2010-4242, CVE-2010-4247, CVE-2010-4248, CVE-2010-3296, CVE-2010-3877, CVE-2010-4072, CVE-2010-4073, CVE-2010-4075, CVE-2010-4080, CVE-2010-4081, CVE-2010-4158, CVE-2010-4238, CVE-2010-4243, CVE-2010-4255, CVE-2010-4263, CVE-2010-4343, CVE-2010-4526, CVE-2010-4249, CVE-2010-4251, CVE-2010-4655, CVE-2010-4346, CVE-2011-0521, CVE-2011-0710, CVE-2011-1010, CVE-2011-1090, and CVE-2011-1478 to these issues.

Storage

  • FalconStor host bus adapter (HBA) failover results in All Paths Down (APD) state, when using QLogic QME2472 host bus adapters with ESX 4.0
    The failover with a FalconStor host bus adapter results in an All Paths Down state when one of the IPStor controllers is powered off. QLogic has released an updated driver that addresses WWPN spoofing, which FalconStor arrays use for handling failover.

    This issue is resolved in this release.
  • ESX/ESXi host fails to detect the iSCSI capabilities of the NC382i adapter after an upgrade
    If you do not configure software iSCSI by using the Broadcom dependent adapter and then upgrade ESX/ESXi from 4.0 to 4.1, the ESX/ESXi host fails to detect the iSCSI capabilities of the NC382i adapter.

    This issue is resolved in this release.
  • The link status of a Fibre Channel switch is not restored when you reboot the switch on an ESX/ESXi host
    When you force the QLogic ISP2532 HBA to work in 4Gb mode and reboot the Fibre Channel switch, the link status of the Fibre Channel switch is not restored.

    This issue is resolved in this release.
  • The target information for Fibre Channel logical unit numbers (LUNs) of a 3PAR array connected to an ESX host is not displayed in vSphere
    When viewing multipath information from the VMware Infrastructure Client connected to an ESX host or vCenter Server, you might not see the target information for some paths of working LUNs.

    This issue is resolved in this release by replacing the empty values for port numbers with the new path information.
  • In ESX/ESXi, virtual machines cannot detect the raw device mapping files that reside on Dell MD36xxi storage array
    Virtual machines cannot detect the raw device mapping files that reside on Dell MD36xxi storage array if the Dell MD36xxi storage array is not added to the claim rule set.

    This issue is resolved in this release. The fix adds DELL MD36xxi, MD3600f, and MD3620f storage arrays to the claim rule set of the Native Multipathing Plug-in's (NMP) Storage Array Type Plug-in (SATP). Also, these storage arrays are handled by VMW_SATP_LSI SATP. For more information, see KB 1037925.
  • Snapshots of upgraded VMFS volumes fail to mount on ESX 4.x hosts
    Snapshots of VMFS3 volumes upgraded from VMFS2 with a block size greater than 1MB might fail to mount on ESX 4.x hosts. The esxcfg-volume -l command to list the detected VMFS snapshot volumes fails with the following error message:

    ~ # esxcfg-volume -l
    Error: No filesystem on the device


    This issue is resolved in this release. Now you can mount or re-signature snapshots of VMFS3 volumes upgraded from VMFS2.
  • ESX hosts with the QLogic Fibre Channel in-box driver fail with a purple diagnostic screen when scanning for LUNs
    While processing the reported data, the LUN discovery function does not check whether the number of LUNs reported exceeds the maximum number of LUNs that are supported. As a result, ESX hosts with QLogic fibre channel in-box driver might encounter an Exception 14 error and fail with a purple diagnostic screen.

    This issue is resolved in this release. The LUN discovery function now checks whether the number of LUNs reported exceeds the maximum number of LUNs supported (256).
  • The Use Active Unoptimized paths (useANO) setting for devices is not persistent across system reboots
    For devices using Native Multipathing Plug-in's (NMP) round robin path selection policy, if you set the value of the useANO setting to TRUE, it is reset to FALSE after a system reboot.

    This issue is resolved in this release. The useANO setting persists after the system reboot.
  • ESX 4.1 continuously logs SATA internal error messages on a Dell PowerEdge system
    On a Dell PowerEdge R815 or R715 system that uses a SATA SB600, SB700, or SB800 controller, ESX 4.1 continuously logs SATA internal error messages similar to the following to the /var/log/messages file:

    cpu2:4802)<6>ata1:soft resetting port
    cpu1:4802)<6>ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
    cpu0:4802)<6>ata1.00: configured for UDMA/100
    cpu0:4802)<6>ata1: EH complete
    cpu0:4802)<3>ata1.00: exception Emask 0x40 SAct 0x0 SErr 0x800 action 0x2
    cpu0:4802)<3>ata1.00: (irq_stat 0x40000001)
    cpu0:4802)<3>ata1.00: tag 0 cmd 0xa0 Emask 0x40 stat 0x51 err 0x24 (internal error)


    If media is not ready or not present in the SATA CD-ROM drive, an internal error invokes the interrupt handler resulting in excessive logging.

    This issue is resolved in this release.
  • ESX host connected to a tape drive accessed using the aic79xx driver fails
    An ESX host connected to a tape drive accessed using the aic79xx driver might fail with a purple screen and display an error message similar to the following when the driver tries to access a freed memory area:

    Loop 1 frame=0x4100c059f950 ip=0x418030a936d9 cr2=0x0 cr3=0x400b9000

    This issue is resolved in this release.
  • Path status of logical unit numbers (LUNs) connected to an ESX/ESXi host is not updated even after they are reconnected to the ESX/ESXi host
    For a LUN that has multiple connections to the ESX host, if one of the cables is unplugged, the path connecting the ESX host and storage becomes inactive and remains in that state even after the cable is reconnected. A refresh or a rescan does not update the path status. Only a reboot activates the connection.

    This issue is resolved in this release.

  • ESX host connected through an Adaptec card with AIC79xx driver might fail with a purple diagnostic screen
    An ESX host with an Adaptec U320 HBA that uses the AIC79xx driver might fail with a purple diagnostic screen due to an issue with the driver in handling hardware error interrupts. An error message similar to the following might be written to vmkernel.log:

    hwerrint, Discard Timer has timed out

    This issue is resolved in this release. Now the ESX host does not fail. However, in case of HBA hardware failure, the hwerrint, Discard Timer has timed out error message might still be logged.

  • Deleting a large file on the NFS datastore with ESX succeeds, but reports the error: Error caused by file /vmfs/volumes/<datastore-uuid>/<filename> (KB 1035332).

  • Virtual machine might not power on after failover in an HA cluster
    A virtual machine might not power on after failing over in an HA cluster. The power-on process stops responding when the virtual machine accesses the vswap file. Messages similar to the following might be written to vmware.log:

    Aug 02 06:48:41.572: vmx| CreateVM: Swap: generating normal swap file name.
    Aug 02 06:48:41.573: vmx| Swap file path: '/vmfs/volumes/<swap file name>'


    This issue is resolved in this release.

Supported Hardware

  • MAI KEYLOK II USB device attached to an ESX host is not accessible on Linux virtual machines
    When you attach a MAI KEYLOK II USB device to an ESX host, Linux virtual machines running CentOS 5.5, RHEL 5.4, Ubuntu 9.10, or Ubuntu 10.04 cannot access the device. The device is visible in the guest operating system, but applications in the guest cannot use it.

    This issue is resolved in this release.

Upgrade and Installation

  • During a fresh installation of ESX 4.1 Update 1 or earlier releases, you cannot change the block size or the size of the VMFS volume
    When you install ESX/ESXi 4.1 Update 1 or earlier releases with the advanced setup, you do not have an option to modify the partition size and the VMFS block size. By default, the VMFS volume is created for the full partition.

    This issue is resolved in this release. ESX/ESXi now allows you to specify the VMFS block size during GUI, text, and kickstart installation.
  • When you upgrade ESX from 3.5 to 4.1, the CPU utilization increases in NetWare 5.1 Service Pack 8 virtual machines
    Immediately after you upgrade ESX from 3.5 to 4.1, the CPU utilization in a NetWare 5.1 Service Pack 8 virtual machine increases by more than 50 percent.

    This issue is resolved in this release.
  • Megaraid_sas driver has been upgraded from 4.0.14.1-18 to 5.30
    The megaraid_sas driver has been upgraded from 4.0.14.1-18 to 5.30. The upgrade adds 11 new PCI IDs to the megaraid_sas driver and resolves the iMR chip and gen2 chip issues.
  • ESX 4.1 installation fails on a Hitachi BS320 AC51A3 blade server with the LSI SAS Mezzanine controller (LSI 1064E)
    This issue occurs due to an experimental feature introduced in ESX 4.1, a FireWire serial bus scan. See https://www.vmware.com/support/policies/experimental.html for the official VMware policy on experimental features.

    This issue is resolved in this release by disabling FireWire. FireWire is not officially supported in ESX 4.1 and later.

  • Upgrade from ESX 4.0.x to ESX 4.1 using VMware Update Manager fails on IBM x3650 M2 servers
    Upgrade from ESX 4.0.x to ESX 4.1 using VMware Update Manager might fail on IBM x3650 M2 servers with an error message similar to the following:

    Software or system configuration of <host name> is incompatible. Check scan results for details.

    This issue is resolved in this release.

vCenter Server, vSphere Client, and vSphere Web Access

  • vSphere Client does not display the service tag of ESX/ESXi hosts based on certain platforms
    vSphere Client and vSphere PowerCLI might not display the service tag of ESX/ESXi hosts if the hosts are not based on Dell hardware platforms.

    This issue is resolved in this release.

Virtual Machine Management

  • vSphere Client displays invalid virtual machine statistics
    After you create, power on, and delete a virtual machine, statistics such as CPU and memory usage of that virtual machine displayed in the vSphere Client performance chart might be invalid.

    This issue is resolved in this release.

  • After upgrading ESX 4.0 to ESX 4.0 Update 1, only one port on a multiport serial PCI card works
    When you upgrade an ESX host from ESX 4.0 to ESX 4.0 Update 1, only one port of the serial card works, even though all the ports on the serial card were working before the upgrade. When you power on the virtual machine, the console might display the following error message:
    "serial0: The "/dev/ttyS1" file does not appear to be a serial port: Input/output error. Virtual device serial0 will start disconnected."

    This issue is resolved in this release.
  • Customization of a virtual machine hot clone running Windows 2008 R2 guest operating system fails and the clone reboots continuously
    The customization of the hot clone of a Windows 2008 R2 guest operating system fails with the "auto check not found" error message, and the virtual machine reboots continuously.

    This issue is resolved in this release.
  • In vCenter Server, a cloned Windows 2000 Professional virtual machine displays Windows 2000 as the guest operating system in the vmx file instead of Windows 2000 Professional
    In vCenter Server, create a new Windows 2000 Professional virtual machine, clone the virtual machine by using vCenter Server, and check the vmx file of the new virtual machine. The guest operating system is displayed as Windows 2000, whereas the cloned virtual machine should display Windows 2000 Professional as the guest operating system.

    This issue is resolved in this release. The fix interchanges the entries for the operating systems.

  • In ESX 3.5/4.0, when you browse ESX host through Managed Object Browser (MOB), the CPU and Memory reservation value are displayed as “unset”
    When you browse an ESX 3.5/4.0 host through the Managed Object Browser (MOB) (Content > ha-folder-root > ha-datacenter > ha-folder-vm > <virtual machine> > summary > config) instead of connecting the host to vCenter Server, the CPU reservation value [virtualMachineConfigSummary.cpuReservation] and the memory reservation value [virtualMachineConfigSummary.memoryReservation] for the virtual machines are displayed as “unset”.

    This issue is resolved by retrieving the reservation info from a configuration file.

  • In ESX/ESXi 4.0, the maxSample performance statistic property in PerfQuerySpec displays incorrect value
    When you query for performance statistics, the maxSample property in PerfQuerySpec returns two values instead of one. This happens even after you set the maxSample property to return a single value.

    This issue is resolved in this release.
  • vSphere Client displays incorrect provisioned space for a powered-off virtual machine
    The ESX host does not consider the memory reservation while calculating the provisioned space for a powered-off virtual machine. As a result, the vSphere Client might display a discrepancy in the values for provisioned space while the virtual machine is powered-on or powered-off.

    This issue is resolved in this release.
  • Removing a snapshot causes the VMware hostd management agent to fail
    If you remove a virtual machine snapshot, the VMware hostd management agent might fail and display a backtrace similar to the following:

    [2010-02-23 09:26:36.463 F6525B90 error 'App']
    Exception: Assert Failed: "_config != __null && _view != __null" @ bora/vim/hostd/vmsvc/vmSnapshot.cpp:1494


    This is because the <vm_name>-aux.xml located in the same directory as the virtual machine configuration file is empty. When a virtual machine is created or registered on a host, the contents of the <vm_name>-aux.xml file is read and the _view object is populated. If the XML file is empty the _view object is not populated. This results in an error when consolidating the snapshot.

    This issue is resolved in this release.
  • ESX host stops responding when SNMP queries are sent using a MIB file
    An ESX host might stop responding if you enable the embedded SNMP agent on the host and send SNMP queries using the VMWARE-VMINFO-MIB.mib MIB file to virtual machines that are being migrated, cloned, created, or deleted.

    This issue is resolved in this release.
  • Virtual machine running on snapshots becomes unresponsive if the Limit IOPS value for the virtual disk is changed
    If you change the Limit IOPS value for a virtual disk from Unlimited to any other value on a virtual machine that is running on snapshots or is creating a snapshot, the virtual machine might become unresponsive every few seconds.

    This issue is resolved in this release.
  • VMware hostd service might fail during a quiesced snapshot operation

    This issue is resolved in this release.

  • Virtual machine sometimes powers off while creating or deleting snapshots
    While performing snapshot operations, if you simultaneously perform another task such as browsing a datastore, the virtual machine might be abruptly powered off. Error messages similar to the following are written to vmware.log:

    vmx| [msg.disk.configureDiskError] Reason: Failed to lock the file
    vmx| Msg_Post: Error
    vmx| [msg.checkpoint.continuesync.fail] Error encountered while restarting virtual machine after taking snapshot. The virtual machine will be powered off.


    This issue occurs when another process accesses the same file that is required by the virtual machine for one operation.

    This issue is resolved in this release.

vMotion and Storage vMotion

  • When you perform vMotion on multiple virtual machines, the ESX host displays Out of Memory warning messages
    When you perform vMotion on multiple virtual machines that are present on two ESX hosts, the memory on one ESX host becomes overcommitted and the page allocation fails. This results in repeated warnings similar to the following in the VMkernel log:

    WARNING: vmklinux26: AllocPages: gfp_mask=0xd4, order=0x0, vmk_PktSlabAllocPage returned 'Out of memory' in the vmkernel log during vMotion

    This issue is resolved in this release.
  • vMotion operations fail with an Out of Memory error when hosts in a High Availability (HA) and Distributed Resource Scheduler (DRS) cluster enter maintenance mode
    When you perform concurrent vMotion operations, or use vCenter Server 4.1 or vSphere 4.1 to put an ESX 4.1 host that is part of a DRS-enabled cluster into maintenance mode, evacuation of virtual machines fails with the following error messages:

    Migration to host <> failed with error Out of memory (195887124). vMotion migration [184468762:1286284186972455] write function failed.

    This issue is resolved in this release.
  • vMotion fails due to locked swap file
    A vMotion operation might fail with error messages indicating locked swap files in working directory under NAS datastore.

    This issue is resolved in this release.

  • Virtual machine with large amounts of RAM (32GB and higher) loses pings during vMotion

    This issue is resolved in this release.

  • NUMA imbalance after vMotion of virtual machines (KB 2000740).

VMware Tools

  • Installation of VMware Tools on a virtual machine with the Windows NT 4.0 operating system results in an incorrect tools status
    Attempts to install VMware Tools on a virtual machine with the Windows NT 4.0 operating system succeed. However, the vSphere Client displays the tools status as VMware Tools: Out of date.

    This issue is resolved in this release.

  • VMware Tools upgrade fails because some folders in /tmp are deleted in some Linux guest operating systems
    When you upgrade VMware Tools from ESX 3.5 to ESX 4.0, the upgrade might fail because some Linux distributions periodically delete old files and folders in /tmp. The VMware Tools upgrade requires this directory in /tmp for auto upgrades.

    This issue is resolved in this release.

  • Windows virtual machine loses network connectivity after upgrading VMware Tools
    When you upgrade VMware Tools, which has Host Guest File System (HGFS) installed, from ESX 3.5 to ESX 4.x the HGFS driver might not be uninstalled properly. As a result, the Windows virtual machine's network Provider Order tab under Network Connections > Advanced > Advanced Settings displays incorrect information, and the virtual machine might lose network connectivity.

    This issue is resolved in this release. Now the earlier version of the HGFS driver and all related registry entries are uninstalled properly during upgrade.
  • When you take a quiesced snapshot of a Windows 2008 R2 virtual machine, an additional disk in the virtual machine fails
    On a Windows 2008 R2 virtual machine, when you add a dynamic disk and take a quiesced snapshot, the disk manager displays a failed disk or missing disk message. This issue also applies to the following Windows operating systems:
    • Windows 2003
    • Windows Vista
    • Windows 2008
    • Windows 7

    This issue is resolved in this release.
  • Windows HGFS provider causes a deadlock, if an application concurrently calls WNetAddConnection2 API from multiple threads
    The Windows HGFS provider Dll results in a deadlock for applications such as eEye Retina due to incorrect provider implementation of Windows WNetAddConnection2 or WNetCancelConnection APIs in a multi-threaded environment.

    This issue is resolved in this release.
  • Installation of VMware Tools on an RHEL 2.1 virtual machine fails with an error message
    When you try to install VMware Tools on an RHEL 2.1 virtual machine running on ESX 4.1 by running the vmware-install.pl script, the install process fails with the following error message:

    Creating a new initrd boot image for the kernel. Error opening /tmp/vmware-fonts2/system_fonts.conf Execution aborted.

    This issue is resolved in this release.
  • Extraneous errors are displayed when restarting Linux virtual machines after installing VMware Tools
    After you install VMware Tools for Linux and restart the guest operating system, the device manager for the Linux kernel (udev) might report extraneous errors similar to the following:

    May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'SUBSYSTEMS'
    May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'ATTRS{vendor}'
    May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'ATTRS{model}'
    May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'SUBSYSTEMS'
    May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'ATTRS{vendor}'
    May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'AT


    This issue is resolved in this release. Now the VMware Tools Installer for Linux detects the device and only writes system-specific rules.
  • Configuration file entries are overwritten on Linux virtual machines while installing VMware Tools
    When you install or update VMware Tools on Linux virtual machines, the VMware Tools installer might overwrite any entries in the configuration files (such as /etc/updated.conf file for Redhat and Ubuntu, and /etc/sysconfig/locate for SUSE) made by third party development tools. This might affect cron jobs running updatedb on these virtual machines.

    This issue is resolved in this release.
  • Disabled CUPS (Common UNIX Printing System) service starts automatically when VMware Tools is installed or upgraded on a SUSE Linux Enterprise Server 10 Service Pack 3 x86 virtual machine
    By default, the CUPS service on a virtual machine is disabled. However, when you start the VMware Tools upgrade process on a SUSE Linux Enterprise Server 10 Service Pack 3 x86 guest operating system, the CUPS service starts automatically. Disable the CUPS service in SUSE Linux Enterprise 10 and Red Hat Enterprise Linux 5 by using the following commands:
    • On SUSE Linux Enterprise 10, run the following commands to disable the CUPS service:
      chkconfig --level 2345 cups off
      chkconfig --level 2345 cupsrenice off

      Check the service status by using the commands service cups status and chkconfig -l | grep -i cups, and make sure that the service is disabled.

    • On Red Hat Enterprise Linux 5, run the following commands:
      chkconfig --level 2345 cups off
      system-config-services
  • Kernel modules of VMware Tools are not loaded while switching kernels
    When you install VMware Tools and switch between kernels, the vsock and vmmemctl modules are not loaded at boot. The following error message appears when you run the dmesg command or when you try to manually load modules that were built for a different kernel:

    vmmemctl: disagrees about version of symbol module_layout
    vsock: disagrees about version of symbol module_layout

    This issue is resolved in this release. The fix in ESX 4.1 Update 2 rebuilds the VMware Tools modules while switching kernels.

  • Virtual Machine Communication Interface (VMCI) Socket on a Linux guest operating system stops responding when a queue pair is detached
    If a peer of a VMCI stream socket connection (for example, stream socket server running in a server virtual machine) detaches from a queue pair while the connection state is connecting, then the other peer (for example, stream socket client with a blocking connect) might fail to connect.
    The peer detach might take place if one of the following events occurs:
    • The virtual machine is unavailable.
    • There is a busmem invalidation failure reported from within the guest operating system.

    This issue is resolved in this release. The fix in ESX 4.1 Update 2 treats the peer detach as a reset and propagates the following error message to the other peer:
    Connection reset by peer
  • VMware Tools service (vmtoolsd) fails on 64-bit Windows virtual machines when the virtual memory allocation order is forced top-down by using a registry key
    In Windows, VirtualAlloc can be forced to allocate memory top-down by using the AllocationPreference registry key, as described at the following link: http://www.microsoft.com/whdc/system/platform/server/PAE/PAEdrv.mspx.
    On such virtual machines, VMware Tools service fails.

    This issue is resolved in this release.
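    For reference, the top-down allocation preference described in the linked Microsoft article is controlled by a registry value under the standard Memory Management key; a sketch of the setting (the value 0x100000 corresponds to MEM_TOP_DOWN per that guidance):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
"AllocationPreference"=dword:00100000
```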
  • VMware Tools service (vmtoolsd) might fail after you install VMware Tools on a Linux guest with long operating system name
    If a Linux guest operating system reports full operating system name that is greater than 100 characters, VMware Tools service might fail. For more information, see KB 1034633.

    This issue is resolved in this release. The fix increases the maximum allowed size of operating system name to 255 characters.
  • X11 configuration is changed after installing VMware Tools
    After you install VMware Tools on a SUSE Linux Enterprise Server (SLES) virtual machine, the X11 configuration is changed. As a result, the keyboard locale setting is changed to Albanian, the mouse and monitor configuration is blank, and VNC fails.

    This issue is resolved in this release.

  • VMware Tools installation fails to complete on Solaris 10 64-bit virtual machines
    Installing VMware Tools on a Solaris 10 64-bit virtual machine fails. Attempts to start the VMware Tools service by running the VMware Tools configuration script (vmware-config-tools.pl) fail with an error message similar to the following:

    Guest operating system daemon:Killed
    Unable to start services for VMware Tools
    Execution aborted.


    Attempts to start the VMware Tools daemon from command line fail with an error message similar to the following:

    ./vmtoolsd-wrapper
    ld.so.1: vmtoolsd: fatal: libvmtools.so: open failed: No such file or directory


    This issue is resolved in this release.

  • The Windows virtual machine with VMware Tools reports Event ID 105 in the Event Viewer tab (KB 1037755).

  • VMware Tools does not perform auto upgrade when a Microsoft Windows 2000 virtual machine is restarted
    When you configure VMware Tools for auto upgrade during power cycling by selecting the Check and upgrade Tools before each power-on option in the Advanced pane of the Virtual Machine Properties window, VMware Tools does not perform an auto upgrade in Microsoft Windows 2000 guest operating systems.

    This issue is resolved in this release.

Top of Page

Known Issues

This section describes known issues in this release in the following subject areas:

Known issues not previously documented are marked with the * symbol.

Backup

  • VCB service console commands generate error messages in ESX service console
    When you run VCB service console commands in the service console of ESX hosts, error messages similar to the following might be displayed:

    Closing Response processing in unexpected state:3
    canceling invocation: server=TCP:localhost:443, moref=vim.SessionManager:ha-sessionmgr, method=logout

    Closing Response processing in unexpected state:3
    [.... error 'App'] SSLStreamImpl::BIORead (58287920) failed: Thread pool shutdown in progress
    [.... error 'App'] SSLStreamImpl::DoClientHandshake (58287920) SSL_connect failed with BIO Erro


    You can ignore these messages. These messages do not impact the results of VCB service console commands.

    Workaround: None.

CIM and API

  • SFCC library does not set the SetBIOSAttribute method in the generated XML file
    When the Small Footprint CIM Client (SFCC) library tries to run the SetBIOSAttribute method of the CIM_BIOSService class through SFCC, an XML file containing the following error is returned by SFCC: ERROR CODE="13" DESCRIPTION="The value supplied is incompatible with the type". This issue occurs because the old SFCC library does not support setting the method parameter type in the generated XML file. Because of this issue, you cannot invoke the SetBIOSAttribute method. The SFCC library on ESX 4.1 hosts does not set the method parameter type in the socket stream XML file that is generated.

    Workaround: Perform one of the following tasks:
    • IBM updates the CIMOM version
    • IBM patches the CIMOM version with this fix
    • IBM uses their own version of SFCC library

Guest Operating System

  • Installer window is not displayed properly during RHEL 6.1 guest operating system installation (KB 2003588). *
  • Guest operating system might become unresponsive after you hot add more than 3GB of memory
    A Red Hat Enterprise Linux 5.4 64-bit guest operating system might become unresponsive if you start it with an IDE device attached and perform a hot-add operation to increase memory from less than 3GB to more than 3GB.

    Workaround: Do not use hot-add to change the virtual machine's memory size from less than or equal to 3072MB to more than 3072MB. Power off the virtual machine to perform this reconfiguration. If the guest operating system is already unresponsive, restart the virtual machine. This problem occurs only when the 3GB mark is crossed while the operating system is running.
  • Windows NT guest operating system installation error with hardware version 7 virtual machines
    When you install Windows NT 3.51 in a virtual machine that has hardware version 7, the installation process stops responding. This happens immediately after the blue startup screen with the Windows NT 3.51 version appears. This is a known issue in the Windows NT 3.51 kernel. Virtual machines with hardware version 7 contain more than 34 PCI buses, and the Windows NT kernel supports hosts that have a limit of 8 PCI buses.

    Workaround: If this is a new installation, delete the existing virtual machine and create a new one. During virtual machine creation, select hardware version 4. You must use the New Virtual Machine wizard to select a custom path for changing the hardware version. If you created the virtual machine with hardware version 4 and then upgraded it to hardware version 7, use VMware vCenter Converter to downgrade the virtual machine to hardware version 4.
  • Installing VMware Tools OSP packages on SLES 11 guest operating systems displays a message stating that the packages are not supported
    When you install VMware Tools OSP packages on a SUSE Linux Enterprise Server 11 guest operating system, an error message similar to the following is displayed:
    The following packages are not supported by their vendor.

    Workaround: Ignore the message. The OSP packages do not contain a tag that marks them as supported by the vendor. However, the packages are supported.

Miscellaneous

  • An ESX/ESXi 4.1 U2 host with vShield Endpoint 1.0 installed fails with a purple diagnostic screen mentioning VFileFilterReconnectWork (KB 2009452). *

  • Running resxtop or esxtop for extended periods might result in memory problems
    Memory usage by resxtop or esxtop might increase over time, depending on what happens on the ESX host being monitored. That means that if the default delay of 5 seconds between two displays is used, resxtop or esxtop might shut down after around 14 hours.

    Workaround: Although you can use the -n option to change the total number of iterations, you should consider running resxtop only when you need the data. If you do have to collect resxtop or esxtop statistics over a long time, shut down and restart resxtop or esxtop periodically instead of running one resxtop or esxtop instance for weeks or months.
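    The periodic-restart approach can be sketched as follows; the capture length is illustrative, and the esxtop invocation is shown commented out because it runs only on an ESX host (a sketch, not a supported collection tool):

```shell
# Sketch: collect esxtop batch output in hourly captures instead of one
# long-running session. -b (batch), -d (delay), and -n (iterations) are the
# documented esxtop options; the file name and durations are illustrative.
DELAY=5                  # seconds between samples (esxtop default)
CAPTURE_SECONDS=3600     # restart the collector every hour
ITERATIONS=$(( CAPTURE_SECONDS / DELAY ))
echo "$ITERATIONS"       # iterations per one-hour capture; prints 720
# esxtop -b -d "$DELAY" -n "$ITERATIONS" > "esxtop-$(date +%Y%m%d-%H%M).csv"
```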
  • Group ID length in vSphere Client shorter than group ID length in vCLI
    If you specify a group ID by using the vSphere Client, you can use only nine characters. In contrast, you can specify up to ten characters if you specify the group ID by using the vicfg-user vCLI command.

    Workaround: None.
  • Warning message appears when you run esxcfg-pciid command
    When you try to run the esxcfg-pciid command in the service console to list the Ethernet controllers and adapters, you might see a warning message similar to the following:
    Vendor short name AMD Inc does not match existing vendor name Advanced Micro Devices [AMD]
    kernel driver mapping for device id 1022:7401 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown
    kernel driver mapping for device id 1022:7409 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown
    kernel driver mapping for device id 1022:7411 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown
    kernel driver mapping for device id 1022:7441 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown


    This issue occurs when both the platform device-descriptor files and the driver-specific descriptor files contain descriptions for the same device.

    Workaround: You can ignore this warning message.

Networking

  • Certain versions of VMXNET 3 driver fail to initialize the device when the number of vCPUs is not a power of two (KB 2003484). *

  • Network connectivity is lost and the system fails while control operations are running on physical NICs
    In some cases, when multiple X-Frame II s2io NICs share the same PCI-X bus, control operations on a physical NIC, such as changing the MTU, cause network connectivity to be lost and the system to fail.

    Workaround: Avoid having multiple X-Frame II s2io NICs in slots that share the same PCI-X bus. In situations where such a configuration is necessary, avoid performing control operations on the physical NICs while virtual machines are doing network I/O.
  • Poor TCP performance might occur in traffic-forwarding virtual machines with LRO enabled
    Some Linux modules cannot handle LRO-generated packets. As a result, having LRO enabled on a VMXNET2 or VMXNET3 device in a traffic-forwarding virtual machine that is running a Linux guest operating system can cause poor TCP performance. LRO is enabled by default on these devices.

    Workaround: In traffic-forwarding virtual machines running Linux guest operating systems, set the module load time parameter for the VMXNET2 or VMXNET3 Linux driver to include disable_lro=1.
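    A minimal sketch of how the workaround might be made persistent, assuming the guest driver module is named vmxnet3 and the distribution reads option files from /etc/modprobe.d (both the file path and name are assumptions that vary by distribution):

```
# Illustrative entry, e.g. /etc/modprobe.d/vmxnet3.conf (path is an assumption;
# some distributions use /etc/modprobe.conf instead). Disables LRO for the
# VMXNET3 guest driver at module load time:
options vmxnet3 disable_lro=1
```

    The module must be reloaded (or the guest restarted) for the option to take effect.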
  • Memory problems occur when a host uses more than 1016 dvPorts on a vDS
    Although the maximum number of allowed dvPorts per host on vDS is 4096, memory problems can start occurring when the number of dvPorts for a host approaches 1016. When this occurs, you cannot add virtual machines or virtual adapters to the vDS.

    Workaround: Configure a maximum of 1016 dvPorts per host on a vDS.
  • Reconfiguring VMXNET3 NIC might cause virtual machine to wake up
    Reconfiguring a VMXNET3 NIC while Wake-on-LAN is enabled and the virtual machine is asleep causes the virtual machine to resume.

    Workaround: Put the virtual machine back into sleep mode manually after reconfiguring (for example, after performing a hot-add or hot-remove) a VMXNET3 vNIC.
  • Recently created VMkernel and service console network adapters disappear after a power cycle
    If an ESX host is power cycled within an hour of creating a new VMkernel or service console adapter on a vDS, the new adapter might disappear.

    Workaround: If you need to power cycle an ESX host within an hour of creating a VMkernel or service console adapter, run esxcfg-boot -r in the host's CLI before rebooting the host.

Server Configuration

  • Upgrading to ESX 4.1.x fails when LDAP is configured on the host and the LDAP server is not reachable
    Upgrade from ESX 4.x to ESX 4.1.x fails when you have configured LDAP on the ESX host and the LDAP server is not reachable.

    Workaround: Perform one of the following tasks:

    • Set the following parameters in the /etc/ldap.conf file.
      • To allow connections to the LDAP server to timeout, set bind_policy to soft.
      • To set the LDAP server connect timeout duration in seconds, set bind_timelimit to 30.
      • To set the LDAP per query timeout duration in seconds, set timelimit to 30.


    • Disable and then enable LDAP after the upgrade is completed.
      1. Disable LDAP by running the esxcfg-auth --disableldap command from the service console before the upgrade.
      2. Enable LDAP by running the esxcfg-auth --enableldap --enableldapauth --ldapserver=xx.xx.xx.xx --ldapbasedn=xx.xx.xx command from the service console after the upgrade.
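
    The first workaround's settings correspond to the following /etc/ldap.conf fragment (a sketch of the three parameters listed above):

    # /etc/ldap.conf on the ESX host
    bind_policy soft     # let connections to the LDAP server time out
    bind_timelimit 30    # server connect timeout, in seconds
    timelimit 30         # per-query timeout, in seconds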

Storage

  • Cannot configure iSCSI over NIC with long logical-device names
    Running the esxcli swiscsi nic add -n command from a remote command-line interface or from a service console does not configure iSCSI operation over a VMkernel NIC whose logical-device name exceeds 8 characters. Third-party NIC drivers that use vmnic and vmknic names longer than 8 characters cannot work with the iSCSI port binding feature on ESX hosts and might display exception error messages in the remote command-line interface. Commands such as esxcli swiscsi nic list, esxcli swiscsi nic add, and esxcli swiscsi vmnic list fail from the service console because they cannot handle the long vmnic names that the third-party drivers create.

    Workaround: Third-party NIC drivers must restrict their vmnic names to 8 bytes or fewer to be compatible with the iSCSI port binding requirement.
    Note: If the driver is not used for iSCSI port binding, it can still use names of up to 32 bytes, and iSCSI continues to work without the port binding feature.


  • Large number of storage-related messages in VMkernel log file
    When ESX starts on a host with several physical paths to storage devices, the VMkernel log file records a large number of storage-related messages similar to the following:

    Nov 3 23:10:19 vmkernel: 0:00:00:44.917 cpu2:4347)Vob: ImplContextAddList:1222: Vob add (@&!*@*@(vob.scsi.scsipath.add)Add path: %s) failed: VOB context overflow
    The system might log similar messages during storage rescans. The messages are expected behavior and do not indicate any failure. You can safely ignore them.

    Workaround: Turn off logging if you do not want to see the messages.
  • Persistent reservation conflicts on shared LUNs might cause ESX hosts to take longer to boot
    You might experience significant delays while starting hosts that share LUNs on a SAN. This might be because of conflicts between the LUN SCSI reservations.

    Workaround: To resolve this issue and speed up the boot process, change the timeout for synchronous commands during boot time to 10 seconds by setting the Scsi.CRTimeoutDuringBoot parameter to 10000.

    To modify the parameter from the vSphere Client:
    1. In the vSphere Client inventory panel, select the host, click the Configuration tab, and click Advanced Settings under Software.
    2. Select SCSI.
    3. Change the Scsi.CRTimeoutDuringBoot parameter to 10000.
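
    Alternatively, the same setting can typically be changed from the service console with esxcfg-advcfg (a sketch; the option path is assumed to mirror the vSphere Client name, so verify it on your host):

    # Set the synchronous command timeout during boot to 10 seconds (10000 ms)
    esxcfg-advcfg -s 10000 /Scsi/CRTimeoutDuringBoot
    # Verify the new value
    esxcfg-advcfg -g /Scsi/CRTimeoutDuringBoot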

Supported Hardware

  • ESX might fail to boot when allowInterleavedNUMANodes boot option is FALSE
    On an IBM eX5 host with a MAX 5 extension, ESX fails to boot and displays a SysAbort message on the service console. This issue might occur when the allowInterleavedNUMANodes boot option is not set to TRUE. The default value for this option is FALSE.

    Workaround: Set the allowInterleavedNUMANodes boot option to TRUE. See KB 1021454 for more information about how to configure the boot option for ESX hosts.
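
    As a sketch of the service console approach described in KB 1021454 (flag usage assumed from that article; verify on your build), VMkernel boot options can be set with esxcfg-advcfg:

    # Set the VMkernel boot option and confirm it
    esxcfg-advcfg -k TRUE allowInterleavedNUMANodes
    esxcfg-advcfg -j allowInterleavedNUMANodes

    Reboot the host for the boot option to take effect.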
  • PCI device mapping errors on HP ProLiant DL370 G6
    When you run I/O operations on the HP ProLiant DL370 G6 server, you might encounter a purple screen or see alerts about Lint1 Interrupt or NMI on the console. The HP ProLiant DL370 G6 server has two Intel I/O hubs (IOHs) and a BIOS defect in the ACPI Direct Memory Access remapping (DMAR) structure definitions, which causes some PCI devices to be described under the wrong DMA remapping unit. Any DMA access by such incorrectly described PCI devices triggers an IOMMU fault, and the device receives an I/O error. Depending on the device, this I/O error might result either in a Lint1 Interrupt or NMI alert message on the console, or in a system failure with a purple screen.


    Workaround: Update the BIOS to 2010.05.21 or a later version.
  • ESX installations on HP systems require the HP NMI driver
    ESX 4.1 instances on HP systems require the HP NMI driver to ensure proper handling of non-maskable interrupts (NMIs). The NMI driver ensures that NMIs are properly detected and logged. Without this driver, NMIs, which signal hardware faults, are ignored on HP systems running ESX.
    Caution: Failure to install this driver might result in silent data corruption.

    Workaround: Download and install the NMI driver. The driver is available as an offline bundle from the HP Web site. Also, see KB 1021609.
  • Virtual machines might become read-only when run on an iSCSI datastore deployed on EqualLogic storage
    Virtual machines might become read-only if you use an EqualLogic array with an earlier firmware version. The firmware might occasionally drop I/O from the array queue and mark it as failed, causing virtual machines to become read-only.


    Workaround: Upgrade EqualLogic Array Firmware to version 4.1.4 or later.
  • After you upgrade a storage array, the status for hardware acceleration in the vSphere Client changes to supported after a short delay
    When you upgrade a storage array's firmware to a version that supports VAAI functionality, vSphere 4.1 does not immediately register the change. The vSphere Client temporarily displays Unknown as the status for hardware acceleration.


    Workaround: This delay is harmless. The hardware acceleration status changes to supported after a short period of time.
  • Slow performance during virtual machine power-on or disk I/O on ESX on the HP G6 Platform with P410i or P410 Smart Array Controller
    Some hosts might show slow performance during virtual machine power-on or while generating disk I/O. The major symptom is degraded I/O performance, causing large numbers of error messages similar to the following to be logged to /var/log/messages:
    Mar 25 17:39:25 vmkernel: 0:00:08:47.438 cpu1:4097)scsi_cmd_alloc returned NULL
    Mar 25 17:39:25 vmkernel: 0:00:08:47.438 cpu1:4097)scsi_cmd_alloc returned NULL
    Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)NMP: nmp_CompleteCommandForPath: Command 0x28 (0x410005060600) to NMP device "naa.600508b1001030304643453441300100" failed on physical path "vmhba0:C0:T0:L1" H:0x1 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
    Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)WARNING: NMP: nmp_DeviceRetryCommand: Device "naa.600508b1001030304643453441300100": awaiting fast path state update for failover with I/O blocked. No prior reservation exists on the device.
    Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)NMP: nmp_CompleteCommandForPath: Command 0x28 (0x410005060700) to NMP device "naa.600508b1001030304643453441300100" failed on physical path "vmhba0:C0:T0:L1" H:0x1 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0


    Workaround: Install the HP 256MB P-series Cache Upgrade module from the HP website.

Upgrade and Installation

  • Upgrade to ESX 4.1 Update 2 fails if you apply the pre-upgrade bulletin pre-upgrade-from-ESX4.0-to-4.1.0-1.4.348481-release.zip *
    After you apply the pre-upgrade bulletin pre-upgrade-from-ESX4.0-to-4.1.0-1.4.348481-release.zip on hosts with patches or updates released after September 2010 (including ESX400-201009001 and ESX 4.0 Update 3), the subsequent upgrade to ESX 4.1 Update 2 fails with the following error:

    Encountered error RunCommandError:
    This is an unexpected error. Please report it as a bug.
    Error Message - Command '['/usr/bin/vim-cmd', 'hostsvc/runtimeinfo']'
    terminated due to signal 6


    This issue does not occur if you apply the pre-upgrade bulletin pre-upgrade-from-esx4.0-to-4.1-502767.zip.

    Workaround: Apply the esxupdate bulletin pre-upgrade-from-esx4.0-to-4.1-502767.zip before applying the upgrade bundle.

    Note: Use this bulletin only if you are performing an upgrade using the esxupdate utility. You do not need to apply this bulletin for an upgrade using the VMware Update Manager.

  • Host upgrade to ESX/ESXi 4.1 Update 1 fails if you upgrade by using Update Manager 4.1 (KB 1035436).

  • Installation of the vSphere Client might fail with an error
    When you install vSphere Client, the installer might attempt to upgrade an out-of-date Microsoft Visual J# runtime. The upgrade is unsuccessful and the vSphere Client installation fails with the error: The Microsoft Visual J# 2.0 Second Edition installer returned error code 4113.

    Workaround: Uninstall all earlier versions of Microsoft Visual J#, and then install the vSphere Client. The installer includes an updated Microsoft Visual J# package.

  • ESX service console displays error messages when upgrading from ESX 4.0 or ESX 4.1 to ESX 4.1.x
    When you upgrade from an ESX 4.0 or ESX 4.1 release to ESX 4.1.x, the service console might display error messages similar to the following:
    On the ESX 4.0 host: Error during version check: The system call API checksum doesn't match
    On the ESX 4.1 host: Vmkctl & VMkernel Mismatch, Signature mismatch between Vmkctl & Vmkernel

    You can ignore the messages.

    Workaround: Reboot the ESX 4.1.x host.

  • esxupdate -a command output does not display inbox drivers when upgrading an ESX host from ESX 4.0 Update 2 to ESX 4.1.x
    When you upgrade an ESX host from ESX 4.0 Update 2 to ESX 4.1.x by using the esxupdate utility, the esxupdate -a command output does not display inbox drivers.

    Workaround: Run the esxupdate -b <ESX410-Update01> info command to view information about all inbox and asynchronous driver bulletins available for the ESX 4.1.x release.

  • Upgrading to ESX 4.1.x fails when an earlier version of IBM Management Agent 6.2 is configured on the host
    When you upgrade a host from ESX 4.x to ESX 4.1.x, the upgrade fails and error messages appear in ESX and VUM:

    • In ESX, the host logs the following message in the esxupdate.log file: DependencyError: VIB rpm_vmware-esx-vmware-release-4_4.1.0-0.0.260247@i386 breaks host API vmware-esx-vmware-release-4 <= 4.0.9.
    • In VUM, the Task & Events tab displays the following message: Remediation did not succeed : SingleHostRemediate: esxupdate error, version: 1.30, "operation: 14: There was an error resolving dependencies.

    This issue occurs when the ESX 4.x host is running an earlier version of IBM Management Agent 6.2.

    Workaround: Install IBM Management Agent 6.2 on the ESX 4.x host, and then upgrade the host to ESX 4.1.x.

  • Scanning the ESX host against the ESX410-Update01 or ESX410-201101226-SG bulletin displays an incompatible status message
    When you use VUM to perform a scan against an ESX host containing the ESX410-Update01 or ESX410-201101226-SG bulletin, the scan result might show the status as incompatible.

    Workaround:
    • Ignore the incompatible status message and continue with the remediation process.
    • Remove the incompatible status message by installing the ESX410-201101203-UG bulletin, and then perform the scan again.

vMotion and Storage vMotion

  • Hot-plug operations fail after the swap file is relocated
    Hot-plug operations fail for powered-on virtual machines in a DRS cluster or on a standalone host, and result in the error failed to resume destination; VM not found after the swap file location is changed.

    Workaround: Perform one of the following tasks:
    • Reboot the affected virtual machines to register the new swap file location with them, and then perform the hot-plug operations.
    • Migrate the affected virtual machines using vMotion.
    • Suspend the affected virtual machines.

vSphere Command-Line Interface

  • Running vicfg-snmp -r or vicfg-snmp -D on ESX systems fails
    On an ESX system, running the vicfg-snmp -r command to reset the current SNMP settings, or the vicfg-snmp -D command to disable the SNMP agent, fails. The failure occurs because the command tries to run the esxcfg-firewall command, which becomes locked and stops responding. With esxcfg-firewall not responding, the vicfg-snmp -r or vicfg-snmp -D command times out and results in an error. The problem does not occur on ESXi systems.

    Workaround: Restarting the ESX system removes the lock file and applies the previously executed vicfg-snmp command that caused the lock. However, attempts to run vicfg-snmp -r or vicfg-snmp -D still result in an error.

VMware Tools

  • Unable to use VMXNET network interface card after installing VMware Tools in RHEL3 with latest errata kernel on ESX 4.1 U1 *
    Some drivers in VMware Tools pre-built against RHEL 3.9 modules do not function correctly with the 2.4.21-63 kernel because of ABI incompatibility. As a result, some device drivers, such as vmxnet and vsocket, do not load when you install VMware Tools on RHEL 3.9.

    Workaround: Boot into the 2.4.21-63 kernel. Install the kernel-source and gcc packages for the 2.4.21-63 kernel. Run the vmware-config-tools.pl --compile command. This compiles the modules for this kernel; the resulting modules should work with the running kernel.
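
    The workaround can be sketched as the following guest-side command sequence (the prerequisite check is illustrative; install the RPMs matching your errata kernel):

    # Booted into the 2.4.21-63 kernel on the RHEL 3.9 guest:
    rpm -q kernel-source gcc             # confirm build prerequisites are installed
    vmware-config-tools.pl --compile     # recompile the Tools modules for this kernel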

  • Windows guest operating systems display incorrect NIC device status after a virtual hardware upgrade *
    When you upgrade an ESX host from ESX 3.5 to ESX 4.1 and upgrade the virtual hardware version from 4 to 7, Windows guest operating systems display the NIC device status as This hardware device is not connected to the computer (Code 45).

    Workaround: Uninstall and reinstall the NIC. Also uninstall any corresponding NICs that appear as ghosted in Device Manager by following the steps in http://support.microsoft.com/kb/315539.

Top of Page