VMware

VMware ESXi 4.1 Update 2 Release Notes

ESXi 4.1 Update 2 Installable | 27 OCT 2011 | Build 502767
ESXi 4.1 Update 2 Embedded | 27 OCT 2011 | Build 502767
VMware Tools | 27 OCT 2011 | Build 493255

Last Document Update: 12 DEC 2011

These release notes include the following topics:

What's New

The following information describes some of the enhancements available in this release of VMware ESXi:

  • Support for new processors – ESXi 4.1 Update 2 supports AMD Opteron 6200 series (Interlagos) and AMD Opteron 4200 series (Valencia).

    Note: For the AMD Opteron 6200 and 4200 series (Family 15h) processors, ESX/ESXi 4.1 Update 2 treats each core within a compute unit as an independent core, except while applying licenses. For the purpose of licensing, ESX/ESXi treats each compute unit as a core. For example, a processor with 8 compute units can provide the processor equivalent of 16 cores on ESX/ESXi 4.1 Update 2. However, ESX/ESXi 4.1 Update 2 only requires an 8 core license for each 16-core processor.
  • Support for additional guest operating systems – ESXi 4.1 Update 2 adds support for the Ubuntu 11.10 guest operating system. For a complete list of guest operating systems supported with this release, see the VMware Compatibility Guide.

In addition, this release delivers a number of bug fixes that are documented in the Resolved Issues section.


Earlier Releases of ESXi 4.1

Features and known issues from earlier releases of ESXi 4.1 are described in the release notes for each release.


Before You Begin

ESXi, vCenter Server, and vSphere Client Version Compatibility

The VMware Product Interoperability Matrix provides details about the compatibility of current and earlier versions of VMware vSphere components, including ESXi, VMware vCenter Server, the vSphere Client, and optional VMware products.

ESXi, vCenter Server, and VDDK Compatibility

Virtual Disk Development Kit (VDDK) 1.2.1 adds support for ESXi 4.1 Update 2 and vCenter Server 4.1 Update 2 releases. For more information about VDDK, see http://www.vmware.com/support/developer/vddk/.

Hardware Compatibility

  • Learn about hardware compatibility

    The Hardware Compatibility Lists are available in the Web-based Compatibility Guide. The Web-based Compatibility Guide is a single point of access for all VMware compatibility guides, and provides the option to search the guides and save the search results in PDF format. For example, with this guide, you can verify whether your server, I/O devices, storage, and guest operating systems are compatible.

    Subscribe to be notified of Compatibility Guide updates through the RSS feed.

  • Learn about vSphere compatibility:

    VMware vSphere Compatibility Matrixes (PDF)

Installation and Upgrade

Read the ESXi Installable and vCenter Server Setup Guide for step-by-step guidance on installing and configuring ESXi Installable and vCenter Server or the ESXi Embedded and vCenter Server Setup Guide for step-by-step guidance on setting up ESXi Embedded and vCenter Server.

After successful installation of ESXi Installable or successful boot of ESXi Embedded, several configuration steps are essential, in particular for licensing, networking, and security. Refer to the guides in the vSphere documentation for guidance on these configuration tasks.

If you have VirtualCenter 2.x installed, see the vSphere Upgrade Guide for instructions about installing vCenter Server on a 64-bit operating system and preserving your VirtualCenter database.

Management Information Base (MIB) files related to ESXi are not bundled with vCenter Server. Only MIB files related to vCenter Server are shipped with vCenter Server 4.0.x. All MIB files can be downloaded from the VMware Web site at http://www.vmware.com/download.

Upgrading VMware Tools

VMware ESXi 4.1 Update 2 contains the latest version of VMware Tools. VMware Tools is a suite of utilities that enhances the performance of the virtual machine's guest operating system. Refer to the VMware Tools Resolved Issues for a list of issues resolved in this release of ESXi related to VMware Tools.

To determine an installed VMware Tools version, see Verifying a VMware Tools build version (KB 1003947).
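
For a quick check, the Tools version can also be queried directly. A minimal sketch, assuming a Linux guest with VMware Tools installed and Tech Support Mode access on the host; the VM ID 42 is a placeholder obtained from vim-cmd vmsvc/getallvms:

    vmware-toolbox-cmd -v                                 # run inside the Linux guest
    vim-cmd vmsvc/get.guest 42 | grep -i toolsVersion     # run on the ESXi host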

Upgrading or Migrating to ESXi 4.1 Update 2

ESXi 4.1 Update 2 offers the following options for upgrading:

  • VMware vCenter Update Manager. vSphere module that supports direct upgrades from ESXi 3.5 Update 5, ESXi 4.0.x, and ESXi 4.1 to ESXi 4.1 Update 2.
  • vihostupdate. Command-line utility that supports direct upgrades from ESXi 4.0.x and ESXi 4.1.x to ESXi 4.1 Update 2. This utility requires the vSphere CLI. For more details, see the vSphere Upgrade Guide. To apply the VEM bundle, use the vihostupdate utility as a workaround; this enables you to add an ESXi 4.1 Update 2 Embedded host to a Cisco Nexus 1000V AV.2 vDS. See the example after this list.
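
    For example, a minimal sketch of applying the ESXi 4.1 patch bundle with vihostupdate from a vSphere CLI system; the host name and local path are placeholders:

    vihostupdate.pl --server <ESXi-host-name> -i -b <local path>/update-from-esxi4.1-4.1_update02.zip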

Supported Upgrade Paths for Host Upgrade to ESXi 4.1 Update 2:

  • upgrade-from-ESXi3.5-to-4.1_update02.502767.zip
    Supported upgrade tool: VMware vCenter Update Manager with host upgrade baseline
    Supported upgrade path: ESXi 3.5 Update 5 only (not ESXi 4.0.x or ESXi 4.1.x)

  • upgrade-from-esxi4.0-to-4.1-update02-502767.zip
    Supported upgrade tools: VMware vCenter Update Manager with host upgrade baseline; vihostupdate
    Supported upgrade path: ESXi 4.0 only, including ESXi 4.0 Update 1, Update 2, and Update 3 (not ESXi 3.5 Update 5 or ESXi 4.1.x)

  • update-from-esxi4.1-4.1_update02.zip
    Supported upgrade tools: VMware vCenter Update Manager with patch baseline; vihostupdate
    Supported upgrade path: ESXi 4.1 only, including ESXi 4.1 Update 1 (not ESXi 3.5 Update 5 or ESXi 4.0.x)


Upgrading vSphere Client

After you upgrade vCenter Server or the ESX/ESXi host to vSphere 4.1 Update 2, you are prompted to upgrade the vSphere Client to vSphere Client 4.1 Update 2. The vSphere Client upgrade is mandatory. You must use only the upgraded vSphere Client to access vSphere 4.1 Update 2.

Note: You must use vSphere Client 4.1 Update 2 to access vCenter Servers that are part of a linked mode group with at least one vCenter Server 4.1 Update 2 instance.

Patches Contained in this Release

This release contains all bulletins for ESXi that were released prior to the release date of this product. See the VMware Download Patches page for more information about the individual bulletins.

In addition to the ZIP file format, the ESXi 4.1 Update 2 release, both embedded and installable, is distributed as a patch that can be applied to existing installations of ESXi 4.1 software.

Patch Release ESXi410-Update02 contains the following individual bulletins:

ESXi410-201110201-SG: Updates ESXi 4.1 Firmware
ESXi410-201110202-UG: Updates ESXi 4.1 Tools

Resolved Issues

This section describes resolved issues in this release in the following subject areas:

Resolved issues previously documented as known issues are marked with the † symbol.

For a list of issues that might occur if you upgrade from vSphere 4.1 Update 2 to vSphere 5.0, see KB 2007404.

CIM and API

  • RefreshDatastore API does not display any error when invoked on a datastore that is offline
    If you disconnect a storage cable from an ESXi 4.0 host, browse to the datastore created on the FC SAN by using the Managed Object Browser (MOB), and invoke the RefreshDatastore method on the MoRef (Managed Object Reference) of the defunct datastore, the RefreshDatastore API does not display any error.

    This issue is resolved in this release.
  • ESXi host displays the memory type as unknown
    If you check the Hardware Status of an ESXi host on an HP DL980 G7 server connected through vCenter Server 4.1, the memory type is displayed as Unknown. This happens because the Common Information Model (CIM) does not support the DDR3 memory type.

    This issue is resolved in this release.
  • Installing ESXi 4.1 Update 1 embedded and rebooting the system results in a user world core dump
    After installing ESXi 4.1 Update 1 and performing a reboot, a user world core dump is generated and the following error message is displayed on the Alt-F11 console screen:

    CoreDump: 1480:Userworld sfcb-qlgc /var/core/sfcb-qlgc-zdump.003.
    The qlogic-fchba-provider-410.1.3.5-260247 (version 1.3.5) shipped with ESXi 4.1 Update 2 resolves this issue.
  • CIM server sends invalid alerts to IBM Systems Director
    The CIM server (sfcbd) process on the ESXi host might send invalid OMC_IpmiAlertIndication alerts related to missing CPUs to IBM Systems Director software. This issue has been observed on IBM blade servers such as IBM LS22 7901-AC1.

    This issue is resolved in this release.

  • vCenter Server displays only one ProviderEnabled configuration item for all installed OEM VIBs
    After installing OEM provider VIBs, vCenter Server displays only the ProviderEnabled configuration item /UserVars/CIMoemProviderEnabled for all OEM provider VIBs that you have installed.

    This issue is resolved in this release. Now if you install OEM provider VIBs, /UserVars/CIMoem-<originalname>ProviderEnabled configuration items are created for each VIB. You can enable/disable each provider separately.

  • Fixes to the filter_deserialzation and ws_deserialize_duration functions in Openwsman
    At the request of a VMware partner, this release fixes issues in the filter_deserialzation and ws_deserialize_duration functions of Openwsman.

    This issue is resolved in this release.

  • IPMI logs in ESX/ESXi 4.x consume excessive disk space after a BMC reset or firmware update (KB 2000089).

  • The vSphere Client Health Status tab displays incorrect voltages and temperatures on Nehalem-EX based server
    On Nehalem-EX based servers, the Health Status tab of the vSphere Client displays incorrect voltages and temperatures due to an IPMI driver timeout. To change the default values in the IPMI driver, add the esxcfg-advcfg properties CIMOemIpmiRetries and CIMOemIpmiRetryTime and then restart the sfcbd service, as shown in the following example:

    esxcfg-advcfg -A CIMOemIpmiRetries --add-desc 'oem ipmi retries' --add-default 4 --add-type int --add-min 0 --add-max 100
    esxcfg-advcfg -A CIMOemIpmiRetryTime --add-desc 'oem ipmi retry time' --add-default 1000 --add-type int --add-min 0 --add-max 10000
    /etc/init.d/sfcbd-watchdog restart
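
    After the options are added, their current values can be inspected; a sketch, assuming esxcfg-advcfg exposes them as user variables under /UserVars:

    esxcfg-advcfg -g /UserVars/CIMOemIpmiRetries
    esxcfg-advcfg -g /UserVars/CIMOemIpmiRetryTime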

Guest Operating System

  • vMotion intermittently fails with a timeout message
    vMotion intermittently fails with a timeout message and logs the following error message in the hostd.log file:

    error -1: Failed to launch virtual machine process. Failed to launch peer process.

    This message is logged because the name resolution call is performed before the signal handlers are installed.

    This issue is resolved in this release. The fix performs the name resolution call after the signal handlers are installed.

  • Certain Linux virtual machines configured with the clock=pit parameter fail to boot
    Certain Linux virtual machines such as SLES 10 SP2 configured with the clock=pit parameter might fail to boot. This issue is known to occur when you use hardware virtualization on Intel systems.

    This issue is resolved in this release.

Miscellaneous

  • If the number of CPU cores per socket in a physical machine increases, the ESXi host takes more time to complete certain tasks
    On some server platforms, the number of CPU cores per processor socket that is visible to software can be adjusted through BIOS configuration options. As this number increases, ESXi 4.1.x, Installable or Embedded, running on such a server requires more time to complete the following tasks:
    • Booting the host machine
    • Adding the host to an HA cluster
    • Collecting vm-support diagnostic data

    The additional time required for these tasks becomes more noticeable as the number of cores increases to 8 or more per processor socket.

    This issue is resolved in this release. For more information, see KB 2006081.
  • When you reset a Windows Server 2003 R2 Service Pack 2 virtual machine by using vSphere 4.1, the virtual machine fails with a blue screen
    In the vSphere 4.1 Client, if you select the Restart Guest option for a symmetric multiprocessing (SMP) virtual machine running Windows Server 2003 R2 Service Pack 2 on a uniprocessor kernel, the virtual machine fails and displays a blue screen.

    This issue is resolved in this release.
  • Large syslog messages are truncated on ESXi hosts
    On ESXi hosts, log messages in excess of about 2048 bytes might not be written to /var/log/messages.

    This issue is resolved in this release.
  • The datastore browser is unable to manage symbolic links on NFS volumes when connected to an ESXi host through vSphere Client 4.1
    When you connect to the ESXi host through vSphere Client 4.1, the datastore browser displays incorrect or inconsistent files for symlinks on NFS volumes.

    This issue is resolved in this release. Symlink paths are no longer canonicalized.
  • Messages in vpxa log files are split into multiple lines
    In ESXi 4.1, the syslog server is unable to automatically process vpxa syslog messages because the messages are split into multiple lines.

    This issue is resolved in this release.
  • The vSphere Client does not display an alert message if syslog is not configured
    If the configuration information is not found while loading the syslog.conf file, an alert message related to configuration issues is not displayed in the Summary tab of the vSphere Client.

    This issue is resolved in this release. You can now see the following message in the Summary tab of the vSphere Client if Syslog is not configured:

    Warning: Syslog not configured. Please check Syslog options under Configuration.Software.Advanced Settings in vSphere client.
  • When one controller takes over another controller, the datastores that reside on LUNs of the taken-over controller might become inactive
    The datastores that reside on LUNs of the taken-over controller might become inactive and remain inactive until you perform a rescan manually.

    This issue is resolved in this release. A manual rescan of datastores is not required when controllers change.
  • In ESXi 4.1, the hostd process might frequently fail
    Objects shared between the task and internationalization filters result in frequent failures of the hostd process.

    This issue is resolved in this release. The fix in ESXi 4.1 Update 2 clones the object instead of sharing the object.
  • Modifying snapshots by using the vim-cmd command fails for certain snapshot tree structures
    Modifying snapshots by using the vim-cmd vmsvc/snapshot.remove or vim-cmd vmsvc/snapshot.revert commands fails when applied to certain snapshot tree structures.

    This issue is resolved in this release. Now a unique identifier, snapshotId, is created for every snapshot associated to a virtual machine. You can get the snapshotId by running the command vim-cmd vmsvc/snapshot.get <vmid>. You can use the following new syntax when working with the same commands:

    Revert to snapshot: vim-cmd vmsvc/snapshot.revert <vmid> <snapshotId> [suppressPowerOff/suppressPowerOn]
    Remove a snapshot: vim-cmd vmsvc/snapshot.remove <vmid> <snapshotId>
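
    For example, a sketch that assumes a virtual machine with ID 42 (as listed by vim-cmd vmsvc/getallvms) and a snapshot with snapshotId 1:

    vim-cmd vmsvc/snapshot.get 42
    vim-cmd vmsvc/snapshot.revert 42 1 suppressPowerOn
    vim-cmd vmsvc/snapshot.remove 42 1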
  • vCenter Server displays warning message when Remote Tech Support Mode (SSH) or Local Tech Support Mode is enabled
    When you enable Remote Tech Support Mode (SSH) or Local Tech Support Mode, a warning message similar to the following is displayed in vCenter Server:

    Configuration Issues
    The Local Tech Support Mode for the host localhost.localdomain has been enabled.
    Remote Tech Support Mode(SSH) for the host <server> has been enabled


    This issue is resolved in this release. Now you can disable this warning by setting the UserVars.SuppressShellWarning parameter in vCenter Server.
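
    For example, a sketch of suppressing the warning from the host's Tech Support Mode shell, assuming the option is also exposed through esxcfg-advcfg:

    esxcfg-advcfg -s 1 /UserVars/SuppressShellWarning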
  • Virtual machine configured with fixed passthrough device fails to power on
    A virtual machine configured with 14 or more PCIe devices might fail to power on, if one of them is a fixed passthrough (FPT) device. Sometimes the virtual machine would boot successfully once, but fail to power on in subsequent reboots. An error message similar to the following is written to vmware.log:

    Mar 25 20:56:17.659: vcpu-0| Msg_Post: Error
    Mar 25 20:56:17.659: vcpu-0| [msg.pciPassthru.mmioOutsidePCIHole] PCIPassthru 005:00.0: Guest tried to map 32 device pages (with base address of 0x0) to a range occupied by main memory. This is outside of the PCI Hole. Add pciHole.start = "0" to the configuration file and then power on the VM.


    This issue is resolved in this release.

  • CPU shares are not enforced in an under-committed ESXi host
    In an under-committed ESXi host, the CPU shares for a virtual machine in a resource pool that has limits on its CPU resources are not enforced. This might lead to performance issues.

    This issue is resolved in this release.

  • The esxtop tool displays abnormal values in its memory view
    In the memory view of the esxtop tool, you might observe abnormal values for the amount of memory allocated on a NUMA node for a virtual machine. For example, for a virtual machine with 6GB of memory, the total remote memory or the total local memory might be shown as 4TB.

    This issue is resolved in this release.

  • ESXi host fails with Usage error in dlmalloc error
    An ESXi host might fail with a purple diagnostic screen that displays an error message similar to the following:

    #0 Panic (fmt=0x41800c703488 "PANIC %s:%d - Usage error in dlmalloc")

    You might see this error in the UserEventfdClose path of the backtrace information.

    This issue is resolved in this release.

  • Some HP servers running ESX/ESXi 4.1 hosts with Automatic Power On option disabled in BIOS power on automatically when the power cable is connected
    Some HP Servers running ESX/ESXi 4.1 hosts with the Automatic Power On option disabled in BIOS automatically power on when you connect the power cable. This issue has been observed on HP Servers DL380 G5/G6/G7 and DL360 G7.

    This issue is resolved in this release.

  • Hostd and vpxa log messages are duplicated on /var/log/messages
    The log messages from the hostd and vpxa management services on an ESXi host might fill up /var/log/messages. As a result, you might not find sufficient kernel logs to troubleshoot kernel issues.

    This issue is resolved in this release. Starting with this release, vpxa and hostd logs are not written to /var/log/messages. You can re-enable them by setting logall=1 (any nonzero value) in the /etc/syslog.conf file and restarting the syslogd service.
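
    A sketch of re-enabling the combined logging from the host's Tech Support Mode shell; the service restart step is an assumption, and rebooting the host also works:

    echo 'logall=1' >> /etc/syslog.conf
    services.sh restart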

  • The Machine Check Exception purple diagnostic screen does not guide users in case of certain hardware issues
    This issue is resolved in this release. Now the following message displays in the Machine Check Exception purple diagnostic screen:

    System has encountered a Hardware Error - Please contact the hardware vendor.

  • Updates to the third-party mpt2sas driver
    In this release, the third-party mpt2sas driver is updated.

  • Updates to the third-party openldap libraries 
    In this release, the third-party openldap libraries are updated to version 2.3.43-12.el5_6.7 for the latest bug fixes.

Networking

  • When you delete a vNetwork Distributed Switch, the ESXi host displays an error message
    Attempts to remove a vNetwork Distributed Switch from the configuration section result in an error message similar to the following:

    Call "HostNetworkSystem.UpdateNetworkConfig" for object "networkSystem-3739" on vCenter Server "loganvc29.eng.vmware.com" failed.
    Operation failed, diagnostics report: DVS DvsPortset-0 has 1 shadow or zombie port.


    This issue is resolved in this release.
  • The first packet that e1000 vNIC sends has an invalid MAC address
    The latest guest operating system drivers write zeros to RAL/RAH before setting a valid MAC address. As a result, the first packet that the e1000 vNIC sends has the following MAC address: 00:00:00:xx:xx:xx.

    This issue is resolved in this release. The e1000 vNIC now sends out a packet only after a valid MAC address (RAL is nonzero) is set.
  • ESXi host configured with vNetwork Distributed Switch (vDS) disconnects from the vCenter Server and does not reconnect even after multiple attempts
    The portset global lock is present for port enable, but not for port disable. When port disable modifies vDS properties, it conflicts with other port states that are modifying vDS properties. As a result of the conflict, the network connection is lost and the ESXi host disconnects from the vCenter Server. Also, a Limit exceeded error message appears in the VMkernel log.

    This issue is resolved in this release. The fix adds a portset global lock for port disable.
  • Network connectivity might fail when VLANs are configured with physical NICs using be2net or ixgbe drivers
    On a vNetwork Distributed Switch, when you configure a single VLAN or a VLAN ID range for dvUplink portgroups, network connectivity might fail for the single VLAN or for the VLAN that is assigned the highest VLAN ID from the VLAN ID range, if you have installed a be2net or ixgbe driver for a physical NIC.

    This issue is resolved in this release.

  • Upgrades the igb driver to version 2.1.11.1.

  • ESXi host might fail with a purple diagnostic screen due to an issue with the NetXen adapter
    Due to an issue with the NetXen adapter driver software, if adapter firmware auto-reset and NetQueue delete operations run concurrently, the host might fail with a purple diagnostic screen.

    This issue is resolved in this release.

  • Network connectivity might fail if VLANS and PVLANs are configured on the same vNetwork Distributed Switch
    Virtual machines on a vNetwork Distributed Switch (vDS) configured with VLANs might lose network connectivity upon boot if you configure Private VLANs on the vDS. However, disconnecting and reconnecting the uplink solves the problem. This issue has been observed on be2net NICs and ixgbe vNICs.

    This issue is resolved in this release.

  • Purple diagnostic screen with vShield or third-party vSphere integrated firewall products (KB 2004893).

  • Upgrades the tg3 driver to version 3.110h.v41.1.

  • DHCP server runs out of IP addresses
    An issue in the DHCP client might cause the DHCP client to send DHCPRELEASE messages using the broadcast MAC address of the server. However, the release messages might be dropped by an intermediate DHCP proxy router. Eventually the DHCP server runs out of IP addresses.

    This issue is resolved in this release. Now the release messages are sent using unicast addresses.

Security

  • Resolves an integer overflow issue in the SFCB
    This release resolves an integer overflow issue in the SFCB that arises when the httpMaxContentLength is changed from its default value to 0 in /etc/sfcb/sfcb.cfg. The integer overflow could allow remote attackers to cause a denial of service (heap memory corruption) or possibly execute arbitrary code via a large integer in the Content-Length HTTP header.

    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2010-2054 to this issue.
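
    To check whether a host carries the risky setting, a quick sketch from the Tech Support Mode shell:

    grep httpMaxContentLength /etc/sfcb/sfcb.cfg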
  • Updates to the third-party glibc library
    In this release, the glibc third-party library is updated to 2.5-58.el5_6.2 to resolve multiple security issues.  
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2011-0536, CVE-2010-0296, CVE-2011-1071, and CVE-2011-1095 to these issues.
  • Updates the Intel e1000 and e1000e drivers
    This resolves a security issue in the e1000 and e1000e Linux drivers for Intel PRO/1000 adapters that allows a remote attacker to bypass packet filters and send manipulated packets.

    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2009-4536 to this issue.

Server Configuration

  • Syslog operations might fail when the remote host is inaccessible
    The syslog service does not start if it is configured to log to a remote host that cannot be resolved through DNS when the syslogd daemon is started. This causes both remote logging and local logging to fail.

    This issue is resolved in this release. Now if the remote host cannot be resolved, local logging is unaffected. When the syslogd daemon starts, it retries resolving and connecting to the remote host every 10 minutes.

Storage

  • FalconStor host bus adapter (HBA) failover results in All Paths Down (APD) state, when using QLogic QME2472 host bus adapters with ESXi 4.0
    Failover with the FalconStor host bus adapter results in an All Paths Down state when one of the IPStor controllers is powered off. QLogic has released an updated driver that addresses WWPN spoofing, the method that FalconStor arrays use for handling failover.

    This issue is resolved in this release.
  • ESX/ESXi host fails to detect the iSCSI capabilities of NC382i adapter after an upgrade
    If you do not configure software iSCSI (swiscsi) by using the Broadcom dependent adapter and upgrade ESX/ESXi from 4.0 to 4.1, the ESX/ESXi host fails to detect the iSCSI capabilities of the NC382i adapter.

    This issue is resolved in this release.
  • When you reboot a Fibre Channel switch, the link status of the switch is not restored on the ESX/ESXi host
    When you force a QLogic ISP2532 HBA to work in 4Gb mode and reboot the Fibre Channel switch, the link status of the Fibre Channel switch is not restored.

    This issue is resolved in this release.
  • The target information for Fibre Channel logical unit numbers (LUNs) of a 3PAR array connected to an ESXi host is not displayed in vSphere
    When viewing multipath information from the vSphere Client connected to the ESXi host or vCenter Server, you might not see the target information for some paths of working LUNs.

    This issue is resolved in this release by replacing the empty values for port numbers with the new path information.
  • In ESX/ESXi, virtual machines cannot detect the raw device mapping files that reside on Dell MD36xxi storage array
    Virtual machines cannot detect the raw device mapping files that reside on Dell MD36xxi storage array if the Dell MD36xxi storage array is not added to the claim rule set.

    This issue is resolved in this release. The fix adds DELL MD36xxi, MD3600f, and MD3620f storage arrays to the claim rule set of the Native Multipathing Plug-in's (NMP) Storage Array Type Plug-in (SATP). Also, these storage arrays are handled by VMW_SATP_LSI SATP. For more information, see KB 1037925.
  • Snapshots of upgraded VMFS volumes fail to mount on ESXi 4.x hosts
    Snapshots of VMFS3 volumes upgraded from VMFS2 with a block size greater than 1MB might fail to mount on ESXi 4.x hosts. The esxcfg-volume -l command to list the detected VMFS snapshot volumes fails with the following error message:

    ~ # esxcfg-volume -l
    Error: No filesystem on the device


    This issue is resolved in this release. Now you can mount or re-signature snapshots of VMFS3 volumes upgraded from VMFS2.
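
    For example, a sketch of listing and then mounting or resignaturing such a volume, assuming the standard esxcfg-volume options on ESXi 4.x:

    esxcfg-volume -l
    esxcfg-volume -M <VMFS UUID or label>    # mount the snapshot volume persistently, or:
    esxcfg-volume -r <VMFS UUID or label>    # resignature the volume instead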
  • ESXi hosts with the QLogic Fibre Channel in-box driver fail with a purple diagnostic screen when scanning for LUNs
    While processing the reported data, the LUN discovery function does not check whether the number of LUNs reported exceeds the maximum number of LUNs that are supported. As a result, ESXi hosts with the QLogic Fibre Channel in-box driver might encounter an Exception 14 and fail with a purple diagnostic screen.

    This issue is resolved in this release. The LUN discovery function now checks whether the number of LUNs reported exceeds the maximum number of LUNs supported (256).
  • The Use Active Unoptimized paths (useANO) setting for devices is not persistent across system reboots
    For devices using Native Multipathing Plug-in's (NMP) round robin path selection policy, if you set the value of the useANO setting to TRUE, it is reset to FALSE after a system reboot.

    This issue is resolved in this release. The useANO setting persists after the system reboot.
  • ESXi 4.1 continuously logs SATA internal error messages on a Dell PowerEdge system
    On a Dell PowerEdge R815 or R715 system that uses the SATA SB600, SB700, or SB800 controller, ESXi 4.1 continuously logs SATA internal error messages similar to the following to the /var/log/messages file:

    cpu2:4802)<6>ata1:soft resetting port
    cpu1:4802)<6>ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
    cpu0:4802)<6>ata1.00: configured for UDMA/100
    cpu0:4802)<6>ata1: EH complete
    cpu0:4802)<3>ata1.00: exception Emask 0x40 SAct 0x0 SErr 0x800 action 0x2
    cpu0:4802)<3>ata1.00: (irq_stat 0x40000001)
    cpu0:4802)<3>ata1.00: tag 0 cmd 0xa0 Emask 0x40 stat 0x51 err 0x24 (internal error)


    If media is not ready or not present in the SATA CD-ROM drive, an internal error invokes the interrupt handler resulting in excessive logging.

    This issue is resolved in this release.
  • ESXi host connected to a tape drive accessed using the aic79xx driver fails
    An ESXi host connected to a tape drive accessed using the aic79xx driver might fail with a purple screen and display an error message similar to the following when the driver tries to access a freed memory area:

    Loop 1 frame=0x4100c059f950 ip=0x418030a936d9 cr2=0x0 cr3=0x400b9000

    This issue is resolved in this release.
  • Path status of logical unit numbers (LUNs) connected to ESX/ESXi host is not updated even after they are reconnected to the ESX/ESXi host
    For a LUN that has multiple paths to the ESXi host, if one of the cables is unplugged, the path connecting the ESXi host and storage becomes inactive and remains inactive even after the cable is reconnected. A refresh or a rescan does not update the path status; only a reboot reactivates the connection.

    This issue is resolved in this release.

  • ESXi host connected through an Adaptec card with AIC79xx driver might fail with a purple diagnostic screen
    An ESXi host with an Adaptec U320 HBA that uses the AIC79xx driver might fail with a purple diagnostic screen due to an issue with the driver in handling hardware error interrupts. An error message similar to the following might be written to vmkernel.log:

    hwerrint, Discard Timer has timed out

    This issue is resolved in this release. Now the ESXi host does not fail. However, in case of HBA hardware failure, the hwerrint, Discard Timer has timed out error message might still be logged.

  • Deleting a large file on the NFS datastore with ESX succeeds, but reports the error: Error caused by file /vmfs/volumes/<datastore-uuid>/<filename> (KB 1035332).

  • Virtual machine might not power on after failover in an HA cluster
    A virtual machine might not power on after failing over in an HA cluster. The power-on process stops responding when the virtual machine accesses the vswap file. Messages similar to the following might be written to vmware.log:

    Aug 02 06:48:41.572: vmx| CreateVM: Swap: generating normal swap file name.
    Aug 02 06:48:41.573: vmx| Swap file path: '/vmfs/volumes/<swap file name>'


    This issue is resolved in this release.

Supported Hardware

  • MAI KEYLOK II USB device attached to an ESXi host is not accessible on Linux virtual machines
    When you attach a MAI KEYLOK II USB device to an ESXi host, Linux virtual machines running CentOS 5.5, RHEL 5.4, Ubuntu 9.10, or Ubuntu 10.04 cannot access the device. The device is visible in the guest operating system, but the virtual machines cannot use it.

    This issue is resolved in this release.

Upgrade and Installation

  • During a fresh installation of ESXi 4.1 Update 1 or earlier releases, you cannot change the block size or the size of the VMFS volume
    When you install ESX/ESXi 4.1 Update 1 or an earlier release with advanced setup, you do not have an option to modify the partition size and the VMFS block size. By default, the VMFS volume is created for the full partition.

    This issue is resolved in this release. ESX/ESXi now allows you to specify the VMFS block size during GUI, text, and kickstart installation.
  • ESXi 4.1 installation fails on a Hitachi BS320 AC51A3 blade server with the LSI SAS Mezzanine controller (LSI 1064E)
    The issue occurs due to an experimental feature introduced in ESXi 4.1, a FireWire serial bus scan. See https://www.vmware.com/support/policies/experimental.html for the official VMware stance on experimental features in our products.

    This issue is resolved in this release by disabling FireWire. FireWire is not officially supported in ESXi 4.1 and later.
  • Warning message displays during ESXi 4.1 U2 scripted installation if you specify the --fstype command in the kickstart file
    In earlier releases of ESXi 4.1, the --fstype option was optional for ESXi scripted installation, and you could assign only vmfs3 as the value. Starting with this release, the --fstype option is removed from scripted installation. Now a warning message similar to the following displays if you specify the --fstype option in the kickstart file during scripted installation:
     
    warning:nfs:<host name>/ks.cfg:line xxx: argument "--fstype" to command "part" does not take a value
     
    However, the installation completes successfully.

  • Upgrade from ESXi 4.0.x to ESXi 4.1 using VMware Update Manager fails on IBM x3650 M2 servers
    Upgrade from ESXi 4.0.x to ESXi 4.1 using VMware Update Manager might fail on IBM x3650 M2 servers with an error message similar to the following:

    Software or system configuration of <host name> is incompatible. Check scan results for details.

    This issue is resolved in this release.

  • Megaraid_sas driver has been upgraded from 4.0.14.1-18 to 5.30
    The megaraid_sas driver has been upgraded from 4.0.14.1-18 to 5.30. The upgrade adds 11 new PCI IDs to the megaraid_sas driver and resolves the iMR chip and gen2 chip issues.

vCenter Server, vSphere Client, and vSphere Web Access

  • vSphere Client does not display the service tag of ESX/ESXi hosts based on certain platforms
    vSphere Client and vSphere PowerCLI might not display the service tag of ESX/ESXi hosts if the hosts are not based on Dell hardware platforms.

    This issue is resolved in this release.

Virtual Machine Management

  • vSphere Client displays invalid virtual machine statistics
    After you create, power on, and delete a virtual machine, statistics such as CPU and memory usage of that virtual machine displayed in the vSphere Client performance chart might be invalid.

    This issue is resolved in this release.

  • Customization of a virtual machine hot clone running Windows 2008 R2 guest operating system fails and the clone reboots continuously
    The customization of the hot clone of a Windows 2008 R2 guest operating system fails with the "auto check not found" error message, and the virtual machine reboots continuously.

    This issue is resolved in this release.
  • In vCenter Server, a cloned Windows 2000 Professional virtual machine displays Windows 2000 as the guest operating system in the vmx file instead of Windows 2000 Professional
    In vCenter Server, create a new Windows 2000 Professional virtual machine, clone it by using vCenter Server, and check the .vmx file of the new virtual machine. The guest operating system is displayed as Windows 2000, although the cloned virtual machine should display Windows 2000 Professional as the guest operating system.

    This issue is resolved in this release. The fix interchanges the entries for the operating systems.

  • In ESXi 3.5/4.0, when you browse an ESXi host through the Managed Object Browser (MOB), the CPU and memory reservation values are displayed as “unset”
    When you browse an ESXi 3.5/4.0 host through the Managed Object Browser (MOB) (Content > ha-folder-root > ha-datacenter > ha-folder-vm > <virtual machine> > summary > config) instead of connecting to it through vCenter Server, the CPU reservation value [virtualMachineConfigSummary.cpuReservation] and the memory reservation value [virtualMachineConfigSummary.memoryReservation] for the virtual machines are displayed as “unset”.

    This issue is resolved by retrieving the reservation info from a configuration file.
  • In ESX/ESXi 4.0, the maxSample performance statistic property in PerfQuerySpec displays incorrect value
    When you query for performance statistics, the maxSample property in PerfQuerySpec returns two values instead of one. This happens even after you set the maxSample property to return a single value.

    This issue is resolved in this release.
  • vSphere Client displays incorrect provisioned space for a powered-off virtual machine
    The ESXi host does not consider the memory reservation while calculating the provisioned space for a powered-off virtual machine. As a result, the vSphere Client might display a discrepancy in the provisioned space values when the virtual machine is powered on and when it is powered off.

    This issue is resolved in this release.
  • Removing a snapshot causes the VMware hostd management agent to fail
    If you remove a virtual machine snapshot, the VMware hostd management agent might fail and display a backtrace similar to the following:

    [2010-02-23 09:26:36.463 F6525B90 error 'App']
    Exception: Assert Failed: "_config != __null && _view != __null" @ bora/vim/hostd/vmsvc/vmSnapshot.cpp:1494


    This occurs because the <vm_name>-aux.xml file, located in the same directory as the virtual machine configuration file, is empty. When a virtual machine is created or registered on a host, the contents of the <vm_name>-aux.xml file are read and the _view object is populated. If the XML file is empty, the _view object is not populated. This results in an error when consolidating the snapshot.

    This issue is resolved in this release.
  • ESXi host stops responding when SNMP queries are sent using a MIB file
    An ESXi host might stop responding if you enable the embedded SNMP agent on the host and send SNMP queries using the VMWARE-VMINFO-MIB.mib MIB file to virtual machines that are being migrated, cloned, created, or deleted.

    This issue is resolved in this release.
  • Virtual machine running on snapshots becomes unresponsive if the Limit IOPS value for the virtual disk is changed
    If you change the Limit IOPS value for a virtual disk from Unlimited to any other value on a virtual machine that is running with snapshots or creating a snapshot, the virtual machine might become unresponsive every few seconds.

    This issue is resolved in this release.

  • VMware hostd service might fail during a quiesced snapshot operation

    This issue is resolved in this release.

  • Virtual machine sometimes powers off while creating or deleting snapshots
    While performing snapshot operations, if you simultaneously perform another task such as browsing a datastore, the virtual machine might be abruptly powered off. Error messages similar to the following are written to vmware.log:

    vmx| [msg.disk.configureDiskError] Reason: Failed to lock the file
    vmx| Msg_Post: Error
    vmx| [msg.checkpoint.continuesync.fail] Error encountered while restarting virtual machine after taking snapshot. The virtual machine will be powered off.


    This issue occurs when another process accesses the same file that is required by the virtual machine for one operation.

    This issue is resolved in this release.

vMotion and Storage vMotion

  • When you perform vMotion on multiple virtual machines, the ESXi host displays Out of Memory warning messages
    When you perform vMotion on multiple virtual machines across two ESXi hosts, the memory on one ESXi host becomes overcommitted and page allocation fails. This results in a log spew with the following warning:

    WARNING: vmklinux26: AllocPages: gfp_mask=0xd4, order=0x0, vmk_PktSlabAllocPage returned 'Out of memory' in the vmkernel log during vMotion

    This issue is resolved in this release.
  • Putting a host that is part of a High Availability (HA) and Distributed Resource Scheduler (DRS) cluster into maintenance mode, or performing a vMotion operation, fails with an Out of Memory error message
    When you perform concurrent vMotion operations, or use vCenter Server 4.1 or vSphere Client 4.1 to put a 4.1 host that is part of a DRS-enabled cluster into maintenance mode, evacuation of virtual machines fails with the following error messages:

    Migration to host <> failed with error Out of memory (195887124). vMotion migration [184468762:1286284186972455] write function failed.

    This issue is resolved in this release.
  • vMotion fails due to locked swap file
    A vMotion operation might fail with error messages indicating locked swap files in working directory under NAS datastore.

    This issue is resolved in this release.

  • Virtual machine with large amounts of RAM (32GB and higher) loses pings during vMotion

    This issue is resolved in this release.

  • NUMA imbalance after vMotion of virtual machines (KB 2000740).

VMware Tools

  • Installation of VMware Tools on a virtual machine with the Windows NT 4.0 operating system results in an incorrect Tools status
    Attempts to install VMware Tools on a virtual machine with the Windows NT 4.0 operating system succeed. However, the vSphere Client displays the Tools status as VMware Tools: Out of date.

    This issue is resolved in this release.

  • VMware Tools upgrade fails because some folders in /tmp are deleted by some Linux guest operating systems
    When you upgrade VMware Tools from ESXi 3.5 to ESXi 4.0, the upgrade might fail because some Linux distributions periodically delete old files and folders in /tmp. The VMware Tools upgrade requires its directory in /tmp for automatic upgrades.

    This issue is resolved in this release.

  • Windows virtual machine loses network connectivity after upgrading VMware Tools
    When you upgrade VMware Tools, which has the Host Guest File System (HGFS) installed, from ESXi 3.5 to ESXi 4.x, the HGFS driver might not be uninstalled properly. As a result, the Windows virtual machine's network Provider Order tab (Network Connections > Advanced > Advanced Settings) displays incorrect information, and the virtual machine might lose network connectivity.

    This issue is resolved in this release. Now the earlier version of the HGFS driver and all related registry entries are uninstalled properly during upgrade.
  • When you take a quiesced snapshot of a Windows 2008 R2 virtual machine, the additional disk in the virtual machine fails
    On a Windows 2008 R2 virtual machine, when you add a dynamic disk and take a quiesced snapshot, Disk Manager displays a failed disk or missing disk message. This issue also applies to the following Windows operating systems:
    • Windows 2003
    • Windows Vista
    • Windows 2008
    • Windows 7

    This issue is resolved in this release.
  • Windows HGFS provider causes a deadlock, if an application concurrently calls WNetAddConnection2 API from multiple threads
    The Windows HGFS provider DLL causes a deadlock for applications such as eEye Retina because of an incorrect provider implementation of the Windows WNetAddConnection2 or WNetCancelConnection APIs in a multithreaded environment.

    This issue is resolved in this release.
  • Installation of VMware Tools on an RHEL 2.1 virtual machine fails with an error message
    When you try to install VMware Tools on an RHEL 2.1 virtual machine running on ESXi 4.1 by running the vmware-install.pl script, the install process fails with the following error message:

    Creating a new initrd boot image for the kernel. Error opening /tmp/vmware-fonts2/system_fonts.conf Execution aborted.

    This issue is resolved in this release.
  • Extraneous errors are displayed when restarting Linux virtual machines after installing VMware Tools
    After you install VMware Tools for Linux and restart the guest operating system, the device manager for the Linux kernel (udev) might report extraneous errors similar to the following:

    May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'SUBSYSTEMS'
    May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'ATTRS{vendor}'
    May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'ATTRS{model}'
    May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'SUBSYSTEMS'
    May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'ATTRS{vendor}'
    May 4 16:06:11 rmavm401 udevd[23137]: add_to_rules: unknown key 'AT


    This issue is resolved in this release. Now the VMware Tools Installer for Linux detects the device and only writes system-specific rules.
  • Configuration file entries are overwritten on Linux virtual machines while installing VMware Tools
    When you install or update VMware Tools on Linux virtual machines, the VMware Tools installer might overwrite entries made by third-party development tools in configuration files (such as the /etc/updatedb.conf file for Red Hat and Ubuntu, and /etc/sysconfig/locate for SUSE). This might affect cron jobs running updatedb on these virtual machines.

    This issue is resolved in this release.
  • Disabled CUPS (Common UNIX Printing System) service starts automatically when VMware Tools is installed or upgraded on a SUSE Linux Enterprise Server 10 Service Pack 3, x86 virtual machine
    By default, the CUPS service on a virtual machine is disabled. However, when you start the VMware Tools upgrade process on the SUSE Linux Enterprise Server 10 Service Pack 3 x86 guest operating system, the CUPS service starts automatically. Disable the CUPS service in SUSE Linux Enterprise 10 and Red Hat Enterprise Linux 5 by using the following commands:
    • On SUSE Linux Enterprise 10, run the following commands to disable the CUPS service:
      chkconfig --level 2345 cups off
      chkconfig --level 2345 cupsrenice off

      Check the service status and make sure that the service is disabled by using the following commands:
      service cups status
      chkconfig -l | grep -i cups

    • On Red Hat Enterprise Linux 5, run the following commands:
      chkconfig --level 2345 cups off
      system-config-services
  • Kernel modules of VMware Tools are not loaded while switching kernels
    When you install VMware Tools and switch between kernels, vsock and vmmemctl modules are not loaded at boot. The following error message appears when you run a dmesg command or when you try to manually load the modules for the wrong kernel:

    vmmemctl: disagrees about version of symbol module_layout
    vsock: disagrees about version of symbol module_layout

    This issue is resolved in this release. The fix in ESXi 4.1 Update 2 rebuilds the VMware Tools modules while switching kernels.

  • Virtual Machine Communication Interface (VMCI) Socket on a Linux guest operating system stops responding when a queue pair is detached
    If a peer of a VMCI stream socket connection (for example, a stream socket server running in a server virtual machine) detaches from a queue pair while the connection state is connecting, the other peer (for example, a stream socket client with a blocking connect) might fail to connect. The peer detach might take place if one of the following events occurs:
    • The virtual machine is unavailable.
    • There is a busmem invalidation failure reported from within the guest operating system.

    This issue is resolved in this release. The fix in ESXi 4.1 Update 2 treats the peer detach as a reset and propagates the following error message to the other peer:
    Connection reset by peer
  • VMware Tools service (vmtoolsd) fails on 64-bit Windows virtual machines when the virtual memory allocation order is forced from top down by using a registry key
    In Windows, VirtualAlloc can be forced to allocate memory from top down by using the AllocationPreference registry key, as described at http://www.microsoft.com/whdc/system/platform/server/PAE/PAEdrv.mspx. On such virtual machines, the VMware Tools service fails.

    This issue is resolved in this release.
  • VMware Tools service (vmtoolsd) might fail after you install VMware Tools on a Linux guest with long operating system name
    If a Linux guest operating system reports a full operating system name that is longer than 100 characters, the VMware Tools service might fail. For more information, see KB 1034633.

    This issue is resolved in this release. The fix increases the maximum allowed size of operating system name to 255 characters.
  • X11 configuration is changed after installing VMware Tools
    After you install VMware Tools on a SUSE Linux Enterprise Server (SLES) virtual machine, the X11 configuration is changed. As a result, the keyboard locale setting is changed to Albanian, the mouse and monitor configuration is blank, and VNC fails.

    This issue is resolved in this release.

  • VMware Tools installation fails to complete on Solaris 10 64-bit virtual machines
    Installing VMware Tools on a Solaris 10 64-bit virtual machine fails. Attempts to start the VMware Tools service by running the VMware Tools configuration script (vmware-config-tools.pl) fail with an error message similar to the following:

    Guest operating system daemon:Killed
    Unable to start services for VMware Tools
    Execution aborted.


    Attempts to start the VMware Tools daemon from command line fail with an error message similar to the following:

    ./vmtoolsd-wrapper
    ld.so.1: vmtoolsd: fatal: libvmtools.so: open failed: No such file or directory


    This issue is resolved in this release.

  • The Windows virtual machine with VMware Tools reports Event ID 105 in the Event Viewer tab (KB 1037755).

 


Known Issues

This section describes known issues in this release in the following subject areas:

Known issues not previously documented are marked with the * symbol.

CIM and API

  • The configuration item /UserVars/CIMoemProviderEnabled is not deleted when you upgrade to ESXi 4.1 Update 2 *
    Workaround: Delete /UserVars/CIMoemProviderEnabled by running the command:

    esxcfg-advcfg -L /UserVars/CIMoemProviderEnabled

  • OEM ProviderEnabled configuration items are enabled by default when you upgrade to ESXi 4.1 Update 2 *
    Workaround:
    1. Run the following command to disable OEM Providers:
       esxcfg-advcfg -s 0 /UserVars/CIMoem-<originalname>ProviderEnabled  
    2. Restart the sfcbd service by running the command:
        /etc/init.d/sfcbd-watchdog restart

  • SFCC library does not set the SetBIOSAttribute method in the generated XML file
    When the Small Footprint CIM Client (SFCC) library tries to run the SetBIOSAttribute method of the CIM_BIOSService class through SFCC, an XML file containing the following error is returned by SFCC: ERROR CODE="13" DESCRIPTION="The value supplied is incompatible with the type". This issue occurs because the old SFCC library does not support setting the method parameter type in the generated XML file. Because of this issue, you cannot invoke the SetBIOSAttribute method. The SFCC library in ESXi 4.1 hosts does not set the method parameter type in the socket stream XML file that is generated.

    A few suggested workarounds are:
    • IBM updates the CIMOM version
    • IBM patches the CIMOM version with this fix
    • IBM uses their own version of SFCC library

Guest Operating System

  • Installer window is not displayed properly during RHEL 6.1 guest operating system installation (KB 2003588). *

  • Guest operating system might become unresponsive after you hot-add memory to more than 3GB
    The RedHat 5.4-64 guest operating system might become unresponsive if you start it with an IDE device attached and perform a hot-add operation to increase memory from less than 3GB to more than 3GB.

    Workaround: Do not use hot-add to change the virtual machine's memory size from less than or equal to 3072MB to more than 3072MB. Power off the virtual machine to perform this reconfiguration. If the guest operating system is already unresponsive, restart the virtual machine. This problem occurs only when the 3GB mark is crossed while the operating system is running.
  • Windows NT guest operating system installation error with hardware version 7 virtual machines
    When you install Windows NT 3.51 in a virtual machine that has hardware version 7, the installation process stops responding. This happens immediately after the blue startup screen with the Windows NT 3.51 version appears. This is a known issue in the Windows NT 3.51 kernel. Virtual machines with hardware version 7 contain more than 34 PCI buses, and the Windows NT kernel supports hosts that have a limit of 8 PCI buses.

    Workaround: If this is a new installation, delete the existing virtual machine and create a new one. During virtual machine creation, select hardware version 4. You must use the New Virtual Machine wizard to select a custom path for changing the hardware version. If you created the virtual machine with hardware version 4 and then upgraded it to hardware version 7, use VMware vCenter Converter to downgrade the virtual machine to hardware version 4.
  • Installing VMware Tools OSP packages on SLES 11 guest operating systems displays a message stating that the packages are not supported
    When you install VMware Tools OSP packages on a SUSE Linux Enterprise Server 11 guest operating system, an error message similar to the following is displayed:
    The following packages are not supported by their vendor.

    Workaround: Ignore the message. The OSP packages do not contain a tag that marks them as supported by the vendor. However, the packages are supported.
  • Compiling modules for VMware kernel is supported only for the running kernel
    VMware currently supports compiling kernel modules only for the currently running kernel.

    Workaround: Boot the kernel before compiling modules for it.


  • No network connectivity after deploying and powering on a virtual machine
    If you deploy a virtual machine created by using the Customization Wizard on an ESXi host, and power on the virtual machine, the virtual machine might lose network connectivity.

    Workaround:
    After deploying each virtual machine on the ESXi host, select the Connect at power on option in the Virtual Machine Properties window before you power on the virtual machine.

Miscellaneous

  • An ESX/ESXi 4.1 U2 host with vShield Endpoint 1.0 installed fails with a purple diagnostic screen mentioning VFileFilterReconnectWork (KB 2009452). *

  • Running resxtop or esxtop for extended periods might result in memory problems
    Memory usage by resxtop or esxtop might increase over time, depending on what happens on the ESXi host being monitored. That means that if the default delay of 5 seconds between two displays is used, resxtop or esxtop might shut down after around 14 hours.

    Workaround: Although you can use the -n option to change the total number of iterations, you should consider running resxtop only when you need the data. If you do have to collect resxtop or esxtop statistics over a long time, shut down and restart resxtop or esxtop periodically instead of running one resxtop or esxtop instance for weeks or months.
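
    For example, a sketch of collecting about an hour of data in batch mode and then exiting, instead of leaving one interactive instance running; the host name is a placeholder:

    resxtop --server <esxi-host> -b -d 5 -n 720 > esxtop-stats.csv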
  • Group ID length in vSphere Client shorter than group ID length in vCLI
    If you specify a group ID by using the vSphere Client, you can use only nine characters. In contrast, you can specify up to ten characters if you specify the group ID by using the vicfg-user vCLI.

    Workaround: None


  • Warning message appears when you run esxcfg-pciid command
    When you try to run the esxcfg-pciid command to list the Ethernet controllers and adapters, you might see a warning message similar to the following:
    Vendor short name AMD Inc does not match existing vendor name Advanced Micro Devices [AMD]
    kernel driver mapping for device id 1022:7401 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown
    kernel driver mapping for device id 1022:7409 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown
    kernel driver mapping for device id 1022:7411 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown
    kernel driver mapping for device id 1022:7441 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown


    This issue occurs when both the platform device-descriptor files and the driver-specific descriptor files contain descriptions for the same device.

    Workaround: You can ignore this warning message.
  • Adding ESXi 4.1.x Embedded host into Cisco Nexus 1000V Release 4.0(4)SV1(3a) fails
    You might not be able to add an ESXi 4.1.x Embedded host to a Cisco Nexus 1000V Release 4.0(4)SV1(3a) through vCenter Server.

    Workaround:
    To add an ESXi 4.1.x Embedded host into Cisco Nexus 1000V Release 4.0(4)SV1(3a), use the vihostupdate utility to apply the VEM bundle on ESXi hosts.
    Perform the following steps to add an ESXi 4.1.x Embedded host:
    1. Set up Cisco Nexus 1000V Release 4.0(4)SV1(3a).
    2. Set up vCenter Server with VUM plug-in installed.
    3. Connect Cisco Nexus 1000V Release 4.0(4)SV1(3a) to vCenter Server.
    4. Create a datacenter and add ESXi 4.1.x Embedded host to vCenter Server.
    5. Add ESXi 4.1.x compatible AV.2 VEM bits to an ESXi host by running the following command from vSphere CLI (a concrete example appears after these steps):
      vihostupdate.pl --server <Server IP> -i -b <VEM offline metadata path>
      The following prompts will be displayed on the vCLI:
      Enter username:
      Enter password:
      Please wait patch installation is in progress ...
    6. After the update of patches, navigate to Networking View in vCenter Server, and add the host in Cisco Nexus 1000V Release 4.0(4)SV1(3a).
    7. Verify that ESXi 4.1.x host is added to Cisco Nexus 1000V Release 4.0(4)SV1(3a).
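    A concrete invocation of the command in step 5 might look like the following; the server address and bundle path are hypothetical placeholders for your environment:

      vihostupdate.pl --server 192.0.2.10 -i -b /tmp/VEM-offline-bundle.zip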

Networking

  • Certain versions of VMXNET 3 driver fail to initialize the device when the number of vCPUs is not a power of two (KB 2003484). *
  • Network connectivity and system fail while control operations are running on physical NICs
    In some cases, when multiple X-Frame II s2io NICs share the same PCI-X bus, control operations on a physical NIC, such as changing the MTU, cause network connectivity to be lost and the system to fail.

    Workaround: Avoid having multiple X-Frame II s2io NICs in slots that share the same PCI-X bus. In situations where such a configuration is necessary, avoid performing control operations on the physical NICs while virtual machines are doing network I/O.
  • Poor TCP performance might occur in traffic-forwarding virtual machines with LRO enabled
    Some Linux modules cannot handle LRO-generated packets. As a result, having LRO enabled on a VMXNET2 or VMXNET3 device in a traffic forwarding virtual machine running a Linux guest operating system can cause poor TCP performance. LRO is enabled by default on these devices.

    Workaround: In traffic-forwarding virtual machines running Linux guest operating systems, set the module load time parameter for the VMXNET2 or VMXNET3 Linux driver to include disable_lro=1.
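    For instance, on a Linux guest you might reload the driver with LRO disabled. This is a sketch; whether the parameter is honored depends on the driver version shipped with your VMware Tools:

      # Reload the VMXNET3 driver with LRO disabled:
      rmmod vmxnet3
      modprobe vmxnet3 disable_lro=1
      # To persist the setting across reboots, add the following line to
      # /etc/modprobe.conf (or a file under /etc/modprobe.d/):
      #   options vmxnet3 disable_lro=1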
  • Memory problems occur when a host uses more than 1016 dvPorts on a vDS
    Although the maximum number of allowed dvPorts per host on vDS is 4096, memory problems can start occurring when the number of dvPorts for a host approaches 1016. When this occurs, you cannot add virtual machines or virtual adapters to the vDS.

    Workaround: Configure a maximum of 1016 dvPorts per host on a vDS.
  • Reconfiguring VMXNET3 NIC might cause virtual machine to wake up
    Reconfiguring a VMXNET3 NIC while Wake-on-LAN is enabled and the virtual machine is asleep causes the virtual machine to resume.

    Workaround: Put the virtual machine back into sleep mode manually after reconfiguring (for example, after performing a hot-add or hot-remove) a VMXNET3 vNIC.

Storage

  • Cannot configure iSCSI over NIC with long logical-device names
    Running the command esxcli swiscsi nic add -n from a vSphere Command-Line Interface (vCLI) does not configure iSCSI operation over a VMkernel NIC whose logical-device name exceeds 8 characters. Third-party NIC drivers that use vmnic and vmknic names containing more than 8 characters cannot work with the iSCSI port binding feature in ESXi hosts and might display exception error messages in the remote command line interface. Commands such as esxcli swiscsi nic list, esxcli swiscsi nic add, and esxcli swiscsi vmnic list from the vCLI fail because they cannot handle the long vmnic names created by the third-party drivers.

    Workaround: Third-party NIC drivers must restrict their vmnic names to 8 bytes or fewer to be compatible with the iSCSI port binding requirement.
    Note: If the driver is not used for iSCSI port binding, it can still use names of up to 32 bytes. Such names also work with iSCSI when the port binding feature is not used.
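    For reference, binding a compliant, short-named VMkernel NIC to the software iSCSI adapter from the vCLI looks like the following; the vmknic and vmhba names are examples, and connection options (--server, credentials) are omitted:

      # Bind VMkernel NIC vmk1 to the software iSCSI adapter vmhba33:
      esxcli swiscsi nic add -n vmk1 -d vmhba33
      # Verify the binding:
      esxcli swiscsi nic list -d vmhba33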


  • Large number of storage-related messages in /var/log/messages log file
    When ESXi starts on a host with several physical paths to storage devices, the VMkernel log file records a large number of storage-related messages similar to the following:

    Nov 3 23:10:19 vmkernel: 0:00:00:44.917 cpu2:4347)Vob: ImplContextAddList:1222: Vob add (@&!*@*@(vob.scsi.scsipath.add)Add path: %s) failed: VOB context overflow
    The system might log similar messages during storage rescans. The messages are expected behavior and do not indicate any failure. You can safely ignore them.

    Workaround: Turn off logging if you do not want to see the messages.
  • Persistent reservation conflicts on shared LUNs might cause ESXi hosts to take longer to boot
    You might experience significant delays while starting hosts that share LUNs on a SAN. This might be because of conflicts between the LUN SCSI reservations.

    Workaround: To resolve this issue and speed up the boot process, change the timeout for synchronous commands during boot time to 10 seconds by setting the Scsi.CRTimeoutDuringBoot parameter to 10000.

    To modify the parameter from the vSphere Client (a command-line alternative follows these steps):
    1. In the vSphere Client inventory panel, select the host, click the Configuration tab, and click Advanced Settings under Software.
    2. Select SCSI.
    3. Change the Scsi.CRTimeoutDuringBoot parameter to 10000.
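    As a command-line alternative, the same parameter can be set from Tech Support Mode on the host; a minimal sketch, assuming the usual slash-delimited option path used by esxcfg-advcfg:

      # Set the boot-time synchronous-command timeout to 10000 ms (10 seconds):
      esxcfg-advcfg -s 10000 /Scsi/CRTimeoutDuringBoot
      # Confirm the new value:
      esxcfg-advcfg -g /Scsi/CRTimeoutDuringBoot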

Supported Hardware

  • ESXi might fail to boot when allowInterleavedNUMANodes boot option is FALSE
    On an IBM eX5 host with a MAX 5 extension, ESXi fails to boot and displays a SysAbort message. This issue might occur when the allowInterleavedNUMANodes boot option is not set to TRUE. The default value for this option is FALSE.

    Workaround: Set the allowInterleavedNUMANodes boot option to TRUE. See KB 1021454 for more information about how to configure the boot option for ESXi hosts.
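    As a sketch of how the option is typically applied (KB 1021454 has the authoritative procedure):

      # One-time boot: press Shift+O at the ESXi boot prompt and append:
      #   allowInterleavedNUMANodes=TRUE
      # Persistent: in Tech Support Mode, append the option to the kernelopt
      # line in /bootbank/boot.cfg, for example:
      #   kernelopt=allowInterleavedNUMANodes=TRUE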
  • PCI device mapping errors on HP ProLiant DL370 G6
    When you run I/O operations on the HP ProLiant DL370 G6 server, you might encounter a purple screen or see alerts about Lint1 Interrupt or NMI. The HP ProLiant DL370 G6 server has two Intel I/O hubs (IOHs) and a BIOS defect in its ACPI Direct Memory Access remapping (DMAR) structure definitions, which causes some PCI devices to be described under the wrong DMA remapping unit. Any DMA access by such incorrectly described PCI devices triggers an IOMMU fault, and the device receives an I/O error. Depending on the device, this I/O error might result either in a Lint1 Interrupt or NMI alert message, or in a system failure with a purple screen.

    Workaround: Update the BIOS to version 2010.05.21 or later.
  • ESXi installations on HP systems require the HP NMI driver
    ESXi 4.1 instances on HP systems require the HP NMI driver to ensure proper handling of non-maskable interrupts (NMIs). The NMI driver ensures that NMIs are properly detected and logged. Without this driver, NMIs, which signal hardware faults, are ignored on HP systems running ESXi.

    Caution: Failure to install this driver might result in silent data corruption.

    Workaround: Download and install the NMI driver. The driver is available as an offline bundle from the HP Web site. Also, see KB 1021609.
  • Virtual machines might become read-only when run on an iSCSI datastore deployed on EqualLogic storage
    Virtual machines might become read-only if you use an EqualLogic array with an earlier firmware version. The firmware might occasionally drop I/O from the array queue, marking the I/O as failed and causing virtual machines to become read-only.

    Workaround: Upgrade the EqualLogic array firmware to version 4.1.4 or later.
  • After you upgrade a storage array, the status for hardware acceleration in the vSphere Client changes to supported after a short delay
    When you upgrade a storage array's firmware to a version that supports VAAI functionality, vSphere 4.1 does not immediately register the change. The vSphere Client temporarily displays Unknown as the status for hardware acceleration.


    Workaround: This delay is harmless. The hardware acceleration status changes to supported after a short period of time.
  • Slow performance during virtual machine power-on or disk I/O on ESXi on the HP G6 Platform with P410i or P410 Smart Array Controller
    Some hosts might show slow performance during virtual machine power-on or while generating disk I/O. The major symptom is degraded I/O performance, causing large numbers of error messages similar to the following to be logged to /var/log/messages:

    Mar 25 17:39:25 vmkernel: 0:00:08:47.438 cpu1:4097)scsi_cmd_alloc returned NULL
    Mar 25 17:39:25 vmkernel: 0:00:08:47.438 cpu1:4097)scsi_cmd_alloc returned NULL
    Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)NMP: nmp_CompleteCommandForPath: Command 0x28 (0x410005060600) to NMP device "naa.600508b1001030304643453441300100" failed on physical path "vmhba0:C0:T0:L1" H:0x1 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.
    Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)WARNING: NMP: nmp_DeviceRetryCommand: Device "naa.600508b1001030304643453441300100": awaiting fast path state update for failover with I/O blocked. No prior reservation exists on the device.
    Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)NMP: nmp_CompleteCommandForPath: Command 0x28 (0x410005060700) to NMP device "naa.600508b1001030304643453441300100" failed on physical path "vmhba0:C0:T0:L1" H:0x1 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.


    Workaround: Install the HP 256MB P-series Cache Upgrade module from the HP website.

Upgrade and Installation

  • Multi-path Upgrade from ESXi 3.5 to ESXi 4.0.x to ESXi 4.1 Update 2 using VMware vCenter Update Manager fails *
    After you upgrade from ESXi 3.5 to ESXi 4.0.x by using VMware vCenter Update Manager, attempts to upgrade the ESXi installation to ESXi 4.1 Update 2 fail with an error message similar to the following:

    VMware vCenter Update Manager had an unknown error. Check the Tasks and Events tab and log files for details

    The upgrade fails for the following upgrade paths:

    • ESXi 3.5 to ESXi 4.0 Update 1 to ESXi 4.1 Update 2
    • ESXi 3.5 to ESXi 4.0 Update 2 to ESXi 4.1 Update 2
    • ESXi 3.5 to ESXi 4.0 Update 3 to ESXi 4.1 Update 2
    • ESXi 3.5 to ESXi 4.0 to ESXi 4.1 Update 2

    Workaround: Restart the host after upgrading to ESXi 4.0.x and then upgrade to ESXi 4.1 Update 2.

  • Host upgrade to ESX/ESXi 4.1 Update 1 fails if you upgrade by using Update Manager 4.1 (KB 1035436)

  • Installation of the vSphere Client might fail with an error
    When you install vSphere Client, the installer might attempt to upgrade an out-of-date Microsoft Visual J# runtime. The upgrade is unsuccessful and the vSphere Client installation fails with the error: The Microsoft Visual J# 2.0 Second Edition installer returned error code 4113.

    Workaround: Uninstall all earlier versions of Microsoft Visual J#, and then install the vSphere Client. The installer includes an updated Microsoft Visual J# package.
  • Simultaneous access to two ESXi installations on USB flash drives causes the system to display panic messages
    If you boot a system that has access to multiple installations of ESXi with the same build number on two different USB flash drives, the system displays panic messages.

    Workaround: Detach one of the USB flash drives and reboot the system.

vMotion and Storage vMotion

  • vMotion is disabled after a reboot of ESXi 4.1 host
    If you enable vMotion on an ESXi host and reboot the ESXi host, vMotion is no longer enabled after the reboot process is completed.


    Workaround: To resolve the issue, reinstall the latest version of the ESXi image provided by your system vendor.

  • Hot-plug operations fail after the swap file is relocated
    Hot-plug operations fail for powered-on virtual machines in a DRS cluster or on a standalone host, and result in the error failed to resume destination; VM not found after the swap file location is changed.

    Workaround: Perform one of the following tasks:
    • Reboot the affected virtual machines to register the new swap file location with them, and then perform the hot-plug operations.
    • Migrate the affected virtual machines using vMotion.
    • Suspend the affected virtual machines.

VMware Tools

  • Unable to use VMXNET network interface card after installing VMware Tools in RHEL3 with latest errata kernel on ESXi 4.1 U1 *
    Some drivers in VMware Tools that are pre-built against RHEL 3.9 modules do not function correctly with the 2.4.21-63 kernel because of ABI incompatibility. As a result, some device drivers, such as vmxnet and vsocket, do not load when you install VMware Tools on RHEL 3.9.

    Workaround: Boot into the 2.4.21-63 kernel. Install the kernel-source and gcc packages for the 2.4.21-63 kernel. Run the command vmware-config-tools.pl --compile. This compiles the modules for the running kernel; the resulting modules should work with it.
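    The workaround as a command sequence might look like this; a sketch, since the way you install packages depends on how your RHEL 3 system obtains them:

      # Confirm that the errata kernel is running:
      uname -r    # should report 2.4.21-63 or similar
      # With the matching kernel-source and gcc packages installed,
      # recompile the VMware Tools modules for the running kernel:
      vmware-config-tools.pl --compile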

  • Windows guest operating systems display incorrect NIC device status after a virtual hardware upgrade *
    When you upgrade a host from ESXi 3.5 to ESXi 4.1 and upgrade the virtual hardware version from 4 to 7, Windows guest operating systems display the device status of the NIC as This hardware device is not connected to the computer (Code 45).

    Workaround: Uninstall and reinstall the NIC. Also uninstall any corresponding NICs that are displayed as ghosted in Device Manager by following the steps in http://support.microsoft.com/kb/315539.
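    Per that Microsoft article, ghosted devices can be made visible from a command prompt in the guest, for example:

      set devmgr_show_nonpresent_devices=1
      start devmgmt.msc
      REM In Device Manager, select View > Show hidden devices, then
      REM uninstall the ghosted NIC entries before reinstalling the NIC.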

  • VMware Tools does not perform an auto upgrade when a Microsoft Windows 2000 virtual machine is restarted
    When you configure VMware Tools to upgrade automatically during a power cycle by selecting the Check and upgrade Tools before each power-on option on the Advanced pane of the Virtual Machine Properties window, VMware Tools does not perform the auto upgrade in Microsoft Windows 2000 guest operating systems.

    Workaround: Manually upgrade VMware Tools in the Microsoft Windows 2000 guest operating system.


Top of Page