VMware ESXi 4.1 Update 3 Release Notes

ESXi 4.1 Update 3 Installable | 30 Aug 2012 | Build 800380
ESXi 4.1 Update 3 Embedded | 30 Aug 2012 | Build 800380
VMware Tools | 30 Aug 2012 | Build 784891

Last Document Update: 4 Apr 2013

These release notes include the following topics:

What's New

The following information describes some of the enhancements available in this release of VMware ESXi:

  • Support for additional guest operating systems. This release updates support for many guest operating systems. For a complete list of guest operating systems supported with this release, see the VMware Compatibility Guide.
  • Resolved Issues. This release also delivers a number of bug fixes that have been documented in the Resolved Issues section.


Earlier Releases of ESXi 4.1

Features and known issues from earlier releases of ESXi 4.1 are described in the release notes for each release.


Before You Begin

ESXi, vCenter Server, and vSphere Client Version Compatibility

The VMware Product Interoperability Matrix provides details about the compatibility of current and earlier versions of VMware vSphere components, including ESXi, VMware vCenter Server, the vSphere Client, and optional VMware products.

ESXi, vCenter Server, and VDDK Compatibility

Virtual Disk Development Kit (VDDK) 1.2.2 adds support for ESXi 4.1 Update 3 and vCenter Server 4.1 Update 3 releases. For more information about VDDK, see http://www.vmware.com/support/developer/vddk/.

Hardware Compatibility

  • Learn about hardware compatibility

    The Hardware Compatibility Lists are available in the Web-based Compatibility Guide. The Web-based Compatibility Guide is a single point of access for all VMware compatibility guides and provides options to search the guides and save the search results in PDF format. For example, you can use this guide to verify whether your server, I/O devices, storage, and guest operating systems are compatible.

    Subscribe to be notified of Compatibility Guide updates through the RSS feed.

  • Learn about vSphere compatibility:

    VMware Product Interoperability Matrix

Installation and Upgrade

Read the ESXi Installable and vCenter Server Setup Guide for step-by-step guidance on installing and configuring ESXi Installable and vCenter Server or the ESXi Embedded and vCenter Server Setup Guide for step-by-step guidance on setting up ESXi Embedded and vCenter Server.

After successful installation of ESXi Installable or successful boot of ESXi Embedded, several configuration steps are essential. In particular, licensing, networking, and security configuration steps are necessary. Refer to the following guides in the vSphere documentation for guidance on these configuration tasks.

If you have VirtualCenter 2.x installed, see the vSphere Upgrade Guide for instructions about installing vCenter Server on a 64-bit operating system and preserving your VirtualCenter database.

Management Information Base (MIB) files related to ESXi are not bundled with vCenter Server. Only MIB files related to vCenter Server are shipped with vCenter Server 4.0.x. All MIB files can be downloaded from the VMware Web site at http://www.vmware.com/download.

Upgrading VMware Tools

VMware ESXi 4.1 Update 3 contains the latest version of VMware Tools. VMware Tools is a suite of utilities that enhances the performance of the virtual machine's guest operating system. Refer to the VMware Tools Resolved Issues for a list of issues resolved in this release of ESXi related to VMware Tools.

To determine an installed VMware Tools version, see Verifying a VMware Tools build version (KB 1003947).
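
A quick check from inside a Linux guest, as a minimal sketch (this assumes the vmware-toolbox-cmd utility is present in the installed Tools build; the KB article above remains the authoritative method):

    vmware-toolbox-cmd -v    # prints the installed VMware Tools version string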

Upgrading or Migrating to ESXi 4.1 Update 3

ESXi 4.1 Update 3 offers the following options for upgrading:

  • VMware vCenter Update Manager. vSphere module that supports direct upgrades from ESXi 3.5 Update 5, ESXi 4.0.x, ESXi 4.1, ESXi 4.1 Update 1, and ESXi 4.1 Update 2 to ESXi 4.1 Update 3.
  • vihostupdate. Command-line utility that supports direct upgrades from ESXi 4.0, ESXi 4.1 Update 1, and ESXi 4.1 Update 2 to ESXi 4.1 Update 3. This utility requires the vSphere CLI. For more details, see the vSphere Upgrade Guide. To apply the VEM bundle, perform the workaround of using the vihostupdate utility. This enables you to add an ESXi 4.1 Update 3 Embedded host to a Cisco Nexus 1000V AV.2 vDS.

Supported Upgrade Paths for Host Upgrade to ESXi 4.1 Update 3:

  • Upgrade deliverable: upgrade-from-ESXi3.5-to-4.1_update03.800380.zip
    Supported upgrade tools: VMware vCenter Update Manager with host upgrade baseline
    Supported upgrade paths: ESXi 3.5 Update 5 (Yes); ESXi 4.0, including Updates 1, 2, 3, and 4 (No); ESXi 4.1, including Updates 1 and 2 (No)

  • Upgrade deliverable: upgrade-from-esxi4.0-to-4.1-update03-800380.zip
    Supported upgrade tools: VMware vCenter Update Manager with host upgrade baseline; vihostupdate
    Supported upgrade paths: ESXi 3.5 Update 5 (No); ESXi 4.0, including Updates 1, 2, 3, and 4 (Yes); ESXi 4.1, including Updates 1 and 2 (No)

  • Upgrade deliverable: update-from-esxi4.1-4.1_update03.zip
    Supported upgrade tools: VMware vCenter Update Manager with patch baseline; vihostupdate
    Supported upgrade paths: ESXi 3.5 Update 5 (No); ESXi 4.0, including Updates 1, 2, 3, and 4 (No); ESXi 4.1, including Updates 1 and 2 (Yes)

  • Upgrade deliverable: ESXi 4.1 to 4.1.x using patch definitions downloaded from the VMware portal (online)
    Supported upgrade tools: VMware vCenter Update Manager with patch baseline
    Supported upgrade paths: ESXi 3.5 Update 5 (No); ESXi 4.0, including Updates 1, 2, 3, and 4 (No); ESXi 4.1, including Updates 1 and 2 (Yes)


Upgrading vSphere Client

After you upgrade vCenter Server or the ESX/ESXi host to vSphere 4.1 Update 3, you must upgrade the vSphere Client to vSphere Client 4.1 Update 3. Use the upgraded vSphere Client to access vSphere 4.1 Update 3.

Patches Contained in this Release

This release contains all bulletins for ESXi that were released prior to the release date of this product. See the VMware Download Patches page for more information about the individual bulletins.

In addition to being distributed in ZIP file format, the ESXi 4.1 Update 3 release (both Embedded and Installable) is distributed as a patch that can be applied to existing installations of ESXi 4.1 software.

Patch Release ESXi410-Update03 contains the following individual bulletins:

ESXi410-201208201-UG: Updates the ESXi 4.1 Firmware
ESXi410-201208202-UG: Updates the ESXi 4.1 Tools

Patch Release ESXi410-Update03 Security-only contains the following individual bulletins:

ESXi410-201208101-SG: Updates ESXi 4.1 Security-only Firmware
ESXi410-201208102-SG: Updates the ESXi 4.1 Tools
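
These bulletins can be applied with the vihostupdate utility from the vSphere CLI. A minimal sketch, assuming the vSphere CLI is installed and <offline-bundle>.zip stands in for the downloaded patch bundle (the placeholder is not an actual file name from this release):

    vihostupdate.pl --server <ESXi host> --query                                                  # list bulletins already installed on the host
    vihostupdate.pl --server <ESXi host> --install --bundle <offline-bundle>.zip --bulletin ESXi410-201208101-SG   # apply one bulletin from the bundle; repeat for additional bulletins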

Resolved Issues

This section describes resolved issues in this release in the following subject areas:

CIM and API

  • The present upper limit of 256 file descriptors for Emulex CIM provider is insufficient
    The Emulex CIM provider exceeds the SFCB allocation of 256 file descriptors, resulting in the exhaustion of socket resources on ESXi hosts.

    This issue is resolved by increasing the socket limit and optimizing the preallocated socket pairs.
  • ESXi 4.1 Update 2 System Event Log (SEL) is empty on certain servers
    The System Event Log in the vSphere Client might be empty if ESXi 4.1 Update 2 is run on certain physical servers. The host's IPMI logs (/var/log/ipmi/0/sel) might also be empty.
    An error message similar to the following might be written to /var/log/messages:
    Dec 8 10:36:09 esx-200 sfcb-vmware_raw[3965]: IpmiIfcSelReadAll: failed call to IpmiIfcSelReadEntry cc = 0xff

    This issue is resolved in this release.

Guest Operating System

  • SMP virtual machine fails with monitor panic while running kexec
    When a Linux kernel crashes, the linux kexec feature might be used to enable booting into a special kdump kernel and gathering crash dump files. An SMP Linux guest configured with kexec might cause the virtual machine to fail with a monitor panic during this reboot. Error messages such as the following might be logged:

    vcpu-0| CPU reset: soft (mode 2)
    vcpu-0| MONITOR PANIC: vcpu-0:VMM fault 14: src=MONITOR rip=0xfffffffffc28c30d regs=0xfffffffffc008b50


    This issue is resolved in this release.
  • Guest operating system of a virtual machine reports Kernel panic error when you try to install Solaris 10 with default memory size on some ESXi versions
    When you try to install Solaris 10 on ESXi, the guest operating system of the virtual machine reports a kernel panic error with the following message:
    panic[cpu0]/thread=fffffffffbc28340 ..page_unlock:...

    This issue is resolved in this release by increasing the default memory size to 3GB.

Miscellaneous

  • On ESXi the iSCSI initiator login timeout value allocated for software iSCSI and dependent iSCSI adapters is insufficient
    When multiple logins are attempted simultaneously on an ESXi host, the login process fails because the login timeout value is insufficient.

    This issue is resolved by allowing users to configure the login timeout value.
  • ESXi host does not capture vdf output for the vm-support utility on visorfs file systems
    The option to capture vdf output is not available in ESXi. Without this option, users might not be able to determine ramdisk space usage.

    This issue is resolved by including the output of the vdf -h command in vm-support.
  • ESXi host becomes unresponsive due to USB log spew for IBM devices
    An ESXi host might become unresponsive due to constant spew of USB log messages similar to the following for non-passthrough IBM devices such as RSA2 or RNDIS/CDC Ether. This issue occurs even if no virtual machine is configured to use the USB passthrough option.

    USB messages: usb X-Y: usbdev_release : USB passthrough device opened for write but not in use: 0, 1

    This issue is resolved in this release.
  • Hot-removal of SCSI disk fails with error
    After you hot-add a SCSI disk successfully, hot-removing the same disk might fail with a disk not present error. Error messages similar to the following are written to the vmx log file:

    2010-06-22T19:40:26.214Z| vmx| scsi2:11: Cannot retrieve shares: A one-of constraint has been violated (-19)
    2010-06-22T19:40:26.214Z| vmx| scsi2:11: Cannot retrieve sched/limit/: A one-of constraint has been violated (-19)
    2010-06-22T19:40:26.214Z| vmx| scsi2:11: Cannot retrieve sched/bandwidthCap/: A one-of constraint has been violated (-19)
    2010-06-22T19:40:33.285Z| vmx| [msg.disk.hotremove.doesntexist] scsi2:11 is not present.
    2010-06-22T19:40:33.285Z| vmx| [msg.disk.hotremove.Failed] Failed to remove scsi2:11.


    This issue is resolved in this release.
  • Cannot join ESXi host to Active Directory when DNS domain suffix differs from the Active Directory domain name

    This issue is resolved in this release.

Networking

  • Virtual machine network limits do not work correctly when the limit is set to a value higher than 2048 Mbps
    On an ESXi host, if you configure Network I/O Control (NetIOC) to set the Host Limit for Virtual Machine Traffic to a value higher than 2048 Mbps, the bandwidth limit is not enforced.

    This issue is resolved in this release.
  • ESXi host fails with a purple screen after a failed vMotion operation
    An ESXi host might fail with a purple diagnostic screen that displays an Exception 14 error after a failed vMotion operation.

    @BlueScreen: #PF Exception 14 in world 4362:vemdpa IP 0x418006cf1edc addr 0x588
    3:06:49:28.968 cpu8:4362)Code start: 0x418006c00000 VMK uptime: 3:06:49:28.968
    3:06:49:28.969 cpu8:4362)0x417f80857ac8:[0x418006cf1edc]Port_BlockNotify@vmkernel:nover+0xf stack: 0x4100afa10000
    3:06:49:28.969 cpu8:4362)0x417f80857af8:[0x418006d5c81d]vmk_PortLinkStatusSet@vmkernel:nover+0x58 stack: 0x417fc88e3ad8
    3:06:49:28.970 cpu8:4362)0x417f80857b18:[0x41800723a310]svs_set_vnic_link_state@esx:nover+0x27 stack: 0x4100afb3f530
    3:06:49:28.971 cpu8:4362)0x417f80857b48:[0x418007306a9f]sf_platform_set_link_state@esx:nover+0x96 stack: 0x417f80857b88
    3:06:49:28.971 cpu8:4362)0x417f80857b88:[0x41800725a31e]sf_set_port_admin_state@esx:nover+0x149 stack: 0x41800000002c
    3:06:49:28.972 cpu8:4362)0x417f80857cb8:[0x4180072bb5f0]sf_handle_dpa_call@esx:nover+0x657 stack: 0x417f80857cf8


    This issue has been observed in environments where the Cisco Nexus 1000v switch is used.

    This issue is resolved in this release.
  • IP address range is not displayed for VLANs
    If you run the esxcfg-info command, Network Hint does not display some VLAN IP address ranges on a physical NIC. The IP address range is also not displayed on the vCenter Server UI. An error message similar to the following is written to vmkernel.log:
    Dec 17 03:38:31 vmmon2 vmkernel: 8:19:26:44.179 cpu6:4102)NetDiscover: 732: Too many vlans for srcPort 0x2000002; won't track vlan 273

    This issue is resolved in this release.

  • PCI device driver e1000e does not support alternate MAC address feature on Intel 82571EB Serializer-Deserializer
    The PCI device Intel 82571EB Serializer-Deserializer with Device ID 1060 supports the alternate MAC address feature; however, the e1000e device driver for this device does not support the feature.

    This issue is resolved in this release.
  • IBM server fails with purple diagnostic screen while trying to inject slow path packets
    If the metadata associated with a slowpath packet is copied without checking whether enough data is mapped, the metadata moves out of the frame-mapped area and causes a page fault. This issue is resolved by mapping the necessary data to include the metadata before copying it.
  • When you disable coalescing on ESXi, the host fails with a purple screen
    In ESXi, when vmxnet3 is used as the vNIC in some virtual machines and you turn off packet coalescing, the ESXi host fails with a purple screen while the virtual machine is booting up.

    This issue is resolved by correcting the coalescing check and assertion logic.
  • When Load Based Teaming changes vNIC port mapping, vmkernel fails to send Reverse Address Resolution Protocol packets
    If Route based on physical NIC load is the teaming policy of a dvPort group, and the vNIC-to-pNIC mapping is changed while some pNICs are saturated, the VMkernel fails to send out RARP packets to notify the physical switch of this change, which results in virtual machines losing network connectivity.

    This issue is resolved in this release.
  • vSwitch configuration appears blank on ESXi host
    The networking configuration for an ESXi host might appear blank on the vSphere Client. Running the command esxcfg-vswitch -l from the local Tech Support Mode console fails with the error:

    Failed to read advanced option subtree UserVars: Error interacting with configuration file
    /etc/vmware/esx.conf: Unlock of ConfigFileLocker failed : Error interacting with configuration file /etc/vmware/esx.conf: I am being asked to delete a .LOCK file that I'm not sure is mine. This is a bad thing and I am going to fail. Lock should be released by (0)


    Error messages similar to the following are written to hostd.log:

    [2011-04-28 14:22:09.519 49B40B90 verbose 'App'] Looking up object with name = "firewallSystem" failed.
    [2011-04-28 14:22:09.610 49B40B90 verbose 'NetConfigProvider'] FetchFn: List of pnics opted out
    [2011-04-28 14:22:09.618 49B40B90 info 'HostsvcPlugin'] Failed to read advanced option subtree UserVars: Error interacting with configuration file /etc/vmware/esx.conf: Unlock of ConfigFileLocker failed : Error interacting with configuration file /etc/vmware/esx.conf: I am being asked to delete a .LOCK file that I'm not sure is mine. This is a bad thing and I am going to fail. Lock should be released by (0)


    This issue is resolved in this release.
  • Network connectivity to a virtual machine configured to use IPv6 might fail after installing VMware Tools
    Network connectivity to guest operating systems using kernel versions 2.6.34 and higher, and configured to use IPv6 might not work after you install VMware Tools.

    This issue is resolved in this release.
  • vSphere Client might not display IPv6 addresses on some guest operating systems
    On some guest operating systems, IPv6 addresses might not be displayed in the vSphere Client or in the output of the vmware-vim-cmd command.

    This issue is resolved in this release.

  • Running the esxcli network connection list command on an ESXi host results in an error message
    The esxcli network connection list command might result in an error message similar to the following when the ESXi host is running raw IP connections, such as vSphere HA (FDM) agent and ICMP ping:

    terminate called after throwing an instance of 'VmkCtl::Lib::SysinfoException' what(): Sysinfo error on operation returned status : Bad parameter. Please see the VMkernel log for detailed error information Aborted

    This issue is resolved in this release.

Security

  • Update to ThinPrint agent removes DLL call
    This update removes a call to a non-existing ThinPrint DLL as a security hardening measure.
    VMware would like to thank Moshe Zioni from Comsec Consulting for reporting this issue to us.

Server Configuration

  • ESXi host on which page-sharing is disabled fails with a purple screen
    If you perform a VMotion operation to an ESXi host on which the boot-time option page-sharing is disabled, the ESXi host might fail with a purple screen.
    Disabling page-sharing severely affects performance of the ESXi host. Because page-sharing should never be disabled, starting with this release, the page-sharing configuration option is removed.
  • ESXi host logs an incorrect C1E state
    The vmkernel.log file and the dmesg command might show the message C1E enabled by the BIOS. The message might also be displayed when C1E has been disabled in the BIOS, and might not be displayed when C1E has been enabled in the BIOS.

Storage

  • Storage log messages for PSA components need some enhancement
    The error logging mechanism on the ESXi host does not log all storage error messages, which makes troubleshooting storage issues difficult.

    This issue is resolved by enhancing the log messages for the PSA components.
  • Reverting to a snapshot fails when virtual machines reference a shared VMDK file
    In an environment with two powered on virtual machines on the same ESXi host that reference a shared VMDK file, attempts to revert to a snapshot on either virtual machine might fail and the vSphere Client might display a File lock error. This issue occurs with both VMFS and NFS datastores.

    This issue is resolved in this release.
  • Corrupt VMFS volume causes VMFS heap memory exhaustion
    When an ESXi host encounters a corrupt VMFS volume, VMFS driver might leak memory causing VMFS heap exhaustion. This stops all VMFS operations causing orphaned virtual machines and missing datastores. vMotion operations might not work and attempts to start new virtual machines might fail with errors about missing files and memory exhaustion. This issue might affect all ESXi hosts that share the corrupt LUN and have running virtual machines on that LUN.

    This issue is resolved in this release.
  • VirtualCenter Agent Service fails during cold migration
    VirtualCenter Agent Service (vpxa) might fail during cold migration of a virtual machine. Error messages similar to the following are written to vpxd.log:

    [2011-11-02 12:06:34.557 03296 info 'App' opID=CFA0C344-00000198] [VpxLRO] -- BEGIN task-342826 -- vm-2851 -- vim.VirtualMachine.relocate -- 8D19CD22-FD15-44B9-9384-1DB4C1A7F7A2(ED8C34F5-CE61-4260-A8C1-D9CA5C2A1C4B)
    [2011-11-02 12:20:05.509 03296 error 'App' opID=CFA0C344-00000198] [VpxdVmprovUtil] Unexpected exception received during NfcCopy
    [2011-11-02 12:20:05.535 03296 error 'App' opID=CFA0C344-00000198] [migrate] (SINITAM02) Unexpected exception (vmodl.fault.HostCommunication) while relocating VM. Aborting.


    This issue is resolved in this release.
  • VMW_SATP_LSI plug-in timeout issue results in path thrashing
    Under certain circumstances, Logical Units (LUs) on storage controllers claimed by the VMW_SATP_LSI plug-in might not respond to path evaluation commands issued by the plug-in within the plug-in timeout period of 5 seconds. When two or more vSphere hosts share access to the affected LUs, this might result in path thrashing (see Understanding Path Thrashing).

    In this release, the timeout value in the VMW_SATP_LSI plug-in is increased to 10 seconds. Before installing this update, consult your storage vendor to determine the guest operating system I/O timeout value.
  • Cannot create greater than 2TB-512B datastore on ESXi 4.x host using vSphere Client
    Prior to this release, it was possible to create a datastore greater than 2TB-512B by using the vSphere Command-Line Interface. However, this is not a supported configuration.

    Now an attempt to create a datastore greater than 2TB-512B by using the vSphere CLI fails gracefully.
  • Warning messages are logged during heartbeat reclaim operation
    VMFS might issue I/Os to a volume when a VMFS heartbeat reclaim operation is in progress or a virtual reset operation is performed on an underlying device. As a result, warning messages similar to the following are logged:

    WARNING: ScsiDeviceIO: 2360: Failing WRITE command (requiredDataLen=512 bytes) to write-quiesced partition naa.9999999999

    Further, an alert message is reported on the ESX console.

    These warnings and alerts are harmless and can be ignored.

    In this release, the alert messages are removed and warnings are changed to log messages.
  • Updated: Installing certain versions of VMware Tools results in log spew
    When you install certain versions of VMware Tools such as version 8.3.7, a spew of messages similar to the following might be written to vmkernel.log:

    Nov 22 11:55:06 [hostname] vmkernel: 107:01:39:59.667 cpu12:21263)VSCSIFs: 329: handle 9267(vscsi0:0):Invalid Opcode (0xd1)
    Nov 22 11:55:06 [hostname] vmkernel: 107:01:39:59.687 cpu5:9487)VSCSIFs: 329: handle 9261(vscsi0:0):Invalid Opcode (0xd1)


    This issue is resolved in this release.
  • Default SATP Plugin is changed for ALUA supported LSI Arrays
    On ESXi 4.1 Update 2 hosts, the default Storage Array Type Plugin (SATP) for LSI arrays was VMW_SATP_LSI, which did not support Asymmetric Logical Unit Access (ALUA) functionality. Starting with this release, the default SATP for LSI arrays that support ALUA is changed to VMW_SATP_ALUA, so that TPGS/ALUA arrays are automatically claimed by the default VMW_SATP_ALUA plug-in. The following storage arrays are claimed by VMW_SATP_ALUA:
     Vendor   Model          Description
     LSI      INF-01-00
     IBM      ^1814*         DS4000
     IBM      ^1818*         DS5100/DS5300
     IBM      ^1746*         IBM DS3512/DS3524
     DELL     MD32xx         Dell MD3200
     DELL     MD32xxi        Dell MD3200i
     DELL     MD36xxi        Dell MD3600i
     DELL     MD36xxf        Dell MD3600f
     SUN      LCSM100_F
     SUN      LCSM100_I
     SUN      LCSM100_S
     SUN      STK6580_6780   Sun StorageTek 6580/6780
     SUN      SUN_6180       Sun Storage 6180
     SGI      IS500          SGI InfiniteStorage 4000/4100
     SGI      IS600          SGI InfiniteStorage 4600
  • ESXi host might report a corrupted VMFS volume when you delete files from directories that have more than 468 files
    An attempt to delete a file from a directory with more than 468 files, or to delete the directory itself, might fail, and the ESXi host might erroneously report that the VMFS is corrupted. The ESXi host logs error messages similar to the following to /var/log/messages:

    cpu10:18599)WARNING: Fil3: 10970: newLength 155260 but zla 2
    cpu10:18599)Fil3: 7054: Corrupt file length 155260, on FD <70, 93>, not truncating

    This issue is resolved in this release.
  • ESXi host stops responding when VMW_SATP_LSI module runs out of heap memory
    This issue occurs on servers that have access to LUNs which are claimed by VMW_SATP_LSI module. A memory leak that exists in VMW_SATP_LSI module forces the module to run out of memory. Error messages similar to the following are logged to vmkernel.log file:

    Feb 22 14:18:22 [host name] vmkernel: 2:03:59:01.391 cpu5:4192)WARNING: Heap: 2218: Heap VMW_SATP_LSI already at its maximumSize. Cannot expand.
    Feb 22 14:18:22 [host name] vmkernel: 2:03:59:01.391 cpu5:4192)WARNING: Heap: 2481: Heap_Align(VMW_SATP_LSI, 316/316 bytes, 8 align) failed. caller: 0x41800a9e91e5
    Feb 22 14:18:22 [host name] vmkernel: 2:03:59:01.391 cpu5:4192)WARNING: VMW_SATP_LSI: satp_lsi_IsInDualActiveMode: Out of memory.


    The memory leak in the VMW_SATP_LSI module has been resolved in this release.
  • ESXi host might fail with a purple screen while resignaturing a VMFS volume
    An ESXi host might fail with a purple diagnostic screen that displays error messages similar to the following during a VMFS volume resignaturing operation.

    #DE Exception 0 in world 20519269:helper22-6 @ 0x418024b26a33
    117:05:20:07.444 cpu11:20519269)Code start: 0x418024400000 VMK uptime: 117:05:20:07.444
    117:05:20:07.444 cpu11:20519269)0x417f84b2f290:[0x418024b26a33]Res3_ExtendResources@esx:nover+0x56 stack: 0x4100ab400040
    117:05:20:07.445 cpu11:20519269)0x417f84b2f2e0:[0x418024af9a58]Vol3_Extend@esx:nover+0x9f stack: 0x0
    117:05:20:07.445 cpu11:20519269)0x417f84b2f4f0:[0x418024afd3f6]Vol3_Open@esx:nover+0xdc9 stack: 0x417f84b2f668
    117:05:20:07.446 cpu11:20519269)0x417f84b2f6a0:[0x4180246225d1]FSS_Probe@vmkernel:nover+0x3ec stack: 0x417f84b2f6f0
    117:05:20:07.446 cpu11:20519269)0x417f84b2f6f0:[0x41802463d0e6]FDS_AnnounceDevice@vmkernel:nover+0x1dd stack: 0x3133306161336634


    This issue is resolved in this release.
  • ESXi host fails with purple screen and the Out of memory for timers error message during VMware View recompose operation
    An ESXi host might fail with a purple diagnostic screen that displays error messages and stack trace similar to the following when you perform a recompose operation on VMware View:

    @BlueScreen: Out of memory for timers
    0:20:06:44.618 cpu38:4134)Code start: 0x418033600000 VMK uptime: 0:20:06:44.618
    0:20:06:44.619 cpu38:4134)0x417f80136cf8:[0x418033658726]Panic@vmkernel:nover+0xa9 stack: 0x417f80136d78
    0:20:06:44.619 cpu38:4134)0x417f80136d28:[0x41803367958e]TimerAlloc@vmkernel:nover+0x10d stack: 0x9522bf175903
    0:20:06:44.619 cpu38:4134)0x417f80136d78:[0x418033679fbb]Timer_AddTC@vmkernel:nover+0x8a stack: 0x4100b8317660
    0:20:06:44.620 cpu38:4134)0x417f80136e08:[0x41803384d964]SCSIAsyncDeviceCommandCommon@vmkernel:nover+0x2f7 stack: 0x41037db8c
    0:20:06:44.620 cpu38:4134)0x417f80136e58:[0x41803383fbed]FDS_CommonAsyncIO@vmkernel:nover+0x48 stack: 0x410092dea0e8


    This issue is resolved in this release.
  • ESXi host might fail with a purple diagnostic screen due to an issue in the VMFS module
    An ESXi host might fail with a purple diagnostic screen that displays error messages similar to the following because of an issue in the VMFS module.

    @BlueScreen: #PF Exception 14 in world 8008405:vmm0:v013313 IP 0x418001562b6d addr 0x28
    34:15:27:55.853 cpu9:8008405)Code start: 0x418000e00000 VMK uptime: 34:15:27:55.853
    34:15:27:55.853 cpu9:8008405)0x417f816af398:[0x418001562b6d]PB3_Read@esx:nover+0xf0 stack: 0x41000e1c9b60
    34:15:27:55.854 cpu9:8008405)0x417f816af468:[0x4180015485df]Fil3ExtendHelper@esx:nover+0x172 stack: 0x0
    34:15:27:55.854 cpu9:8008405)0x417f816af538:[0x41800154ded4]Fil3_SetFileLength@esx:nover+0x383 stack: 0xa00000001
    34:15:27:55.854 cpu9:8008405)0x417f816af5a8:[0x41800154e0ea]Fil3_SetFileLengthWithRetry@esx:nover+0x6d stack: 0x417f816af5e8
    34:15:27:55.854 cpu9:8008405)0x417f816af638:[0x41800154e38b]Fil3_SetAttributes@esx:nover+0x246 stack: 0x41027fabeac0
    34:15:27:55.854 cpu9:8008405)0x417f816af678:[0x41800101de7e]FSS_SetFileAttributes@vmkernel:nover+0x3d stack: 0x1000b000
    34:15:27:55.855 cpu9:8008405)0x417f816af6f8:[0x418001434418]COWUnsafePreallocateDisk@esx:nover+0x4f stack: 0x4100a81b4668
    34:15:27:55.855 cpu9:8008405)0x417f816af728:[0x418001434829]COWIncrementFreeSector@esx:nover+0x68 stack: 0x3
    34:15:27:55.855 cpu9:8008405)0x417f816af7b8:[0x418001436b1a]COWWriteGetLBNAndMDB@esx:nover+0x471 stack: 0xab5db53a0
    34:15:27:55.855 cpu9:8008405)0x417f816af908:[0x41800143761f]COWAsyncFileIO@esx:nover+0x8aa stack: 0x41027ff88180
    34:15:27:55.855 cpu9:8008405)0x417f816af9a8:[0x41800103d875]FDS_AsyncIO@vmkernel:nover+0x154 stack: 0x41027fb585c0
    34:15:27:55.856 cpu9:8008405)0x417f816afa08:[0x4180010376cc]DevFSFileIO@vmkernel:nover+0x13f stack: 0x4100077c3fc8


    This issue is resolved in this release.
  • Data corruption occurs on Emulex LPe12000 driver when dealing with 4G DMA boundary address
    On an ESXi host, when the Emulex LPe12000 driver fails to set the dma_boundary value in the host template, the dma_boundary value is set to zero. This causes the SG list addresses to go beyond the address boundary defined for the driver, resulting in data corruption.

    This issue is resolved in this release.

Supported Hardware

  • Cannot change power policy of an ESXi host on IBM BladeCenter HX5 UEFI server
    When you try to change the power policy of an ESXi host running on an IBM BladeCenter HX5 UEFI server, the Power Management Settings pane in the vSphere Client displays the following message:

    Technology: Not Available
    Active Policy: Not Supported.


    This issue is resolved in this release.

vCenter Server, vSphere Client, and vSphere Web Access

  • Hostd and vpxa services fail and ESXi host disconnects from vCenter Server
    An sfcb-vmware_base TIMEOUT error might cause the hostd and vpxa services to fail and the ESXi host to disconnect intermittently from vCenter Server. Error messages similar to the following are written to /var/log/messages:

    Jan 30 12:25:17 sfcb-vmware_base[2840729]: TIMEOUT DOING SHARED SOCKET RECV RESULT (2840729)
    Jan 30 12:25:17 sfcb-vmware_base[2840729]: Timeout (or other socket error) waiting for response from provider
    Jan 30 12:25:17 sfcb-vmware_base[2840729]: Request Header Id (1670) != Response Header reqId (0) in request to provider 685 in process 3. Drop response.
    Jan 30 12:25:17 vmkernel: 7:19:02:45.418 cpu32:2836462)User: 2432: wantCoreDump : hostd-worker -enabled : 1


    This issue is resolved in this release.
  • vSphere Client displays incorrect data for a virtual machine
    The vSphere Client overview performance charts might display data for a virtual machine even for the period when the virtual machine was powered off.

    This issue is resolved in this release.

Virtual Machine Management

  • VMX file might become corrupted during quiesced snapshot operation
    When you create a quiesced snapshot of a virtual machine using either the VSS service, VMware Tools SYNC driver or a backup agent, hostd writes to the .vmx file. As a result, the .vmx file becomes blank.

    This issue is resolved in this release.
  • Virtual machine fails with monitor panic if paging is disabled
    An error message similar to the following is written to vmware.log:

    Aug 16 14:17:39.158: vcpu-0| MONITOR PANIC: vcpu-1:VMM64 fault 14: src=MONITOR rip=0xfffffffffc262277 regs=0xfffffffffc008c50

    This issue is resolved in this release.
  • Windows 2003 virtual machine on ESXi host with NetBurst-based CPU takes a long time to restart
    Restarting a Windows 2003 Server virtual machine that has shared memory pages takes approximately 5 to 10 minutes if you have a NetBurst-based CPU installed on the ESXi host. However, you can shut down and power on the same virtual machine without experiencing any delay.

    This issue is resolved in this release.
  • Sometimes the reconfiguration task of a virtual machine fails due to a deadlock
    In some scenarios, the reconfiguration task of a virtual machine fails due to a deadlock. The deadlock occurs while the reconfigure and datastore change operations are being executed.

    This issue is resolved in this release.
  • Deleting a virtual machine results in removal of unassociated virtual disks
    If you create a virtual machine snapshot and later delete the virtual machine, independent or non-independent virtual disks that were detached from the virtual machine earlier might also be deleted.

    This issue is resolved in this release.
  • PCI configuration space values for VMDirectIO between ESXi and virtual machine are inconsistent
    When you set the VMDirectIO path for a network interface adapter in pass-through mode and assign it to a virtual machine, the state of the Device Control register's Interrupt Disable bit (INTx) is displayed as enabled for the virtual machine and disabled for ESXi. This is incorrect, because the INTx value should be in the enabled state in both cases.

    This issue is resolved in this release.
  • Bridge Protocol Data Unit frames sent from bridged NIC disable physical uplink
    When you enable BPDU guard on the physical switch port, BPDU frames sent from a bridged virtual NIC cause the physical uplink to be disabled, and as a result the uplink goes down.
    Identify the host that sent out the BPDU packets and set esxcfg-advcfg -s 1 /Net/BlockGuestBPDU on that host. This filters out and blocks BPDU packets from that virtual NIC. Power on the virtual machines with bridged virtual NICs only after this filter is turned on, so that the filter takes effect (see the sketch after this list).

    This issue is resolved in this release.
  • Cannot remove the extraConfig settings for a virtual machine through API
    This issue is resolved in this release.
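
For the BPDU filtering entry above, a minimal sketch of the host-side commands from the Tech Support Mode console (the -g query is included only as a convenience check):

    esxcfg-advcfg -s 1 /Net/BlockGuestBPDU    # enable filtering of BPDU frames from guest (bridged) virtual NICs
    esxcfg-advcfg -g /Net/BlockGuestBPDU      # confirm that the option is now set to 1

Power the affected virtual machines back on after setting the option so that the filter takes effect.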

VMware HA and Fault Tolerance

  • Secondary FT virtual machine running on ESXi host might fail
    On an ESXi host, a secondary Fault Tolerance virtual machine installed with VMXNET 3 adapter might fail. Error messages similar to the following are written to vmware.log:

    Dec 15 16:11:25.691: vmx| GuestRpcSendTimedOut: message to toolbox timed out.
    Dec 15 16:11:25.691: vmx| Vix: [115530 guestCommands.c:2468]: Error VIX_E_TOOLS_NOT_RUNNING in VMAutomationTranslateGuestRpcError(): VMware Tools are not running in the guest
    Dec 15 16:11:30.287: vcpu-0| StateLogger::Commiting suicide: Statelogger divergence
    Dec 15 16:11:31.295: vmx| VTHREAD watched thread 4 "vcpu-0" died


    This issue does not occur on a virtual machine installed with E1000 adapter.

    This issue is resolved in this release.

vMotion and Storage vMotion

  • Quiesced snapshots fail when you live migrate Windows 2008 virtual machines from ESX 4.0 to ESX 4.1 and then perform a Storage vMotion operation
    A Storage vMotion operation on ESXi 4.1 by default sets disk.enableUUID to true for a Windows 2008 virtual machine, thus enabling application quiescing. Subsequent quiesced snapshot operations fail until the virtual machine undergoes a power cycle.

    This issue is resolved in this release.

VMware Tools

  • VMware Snapshot Provider service (vmvss) is not removed while uninstalling VMware Tools on Windows 2008 R2 guest operating systems

    This issue is resolved in this release.
  • Certain SLES virtual machines do not restart after VMware Tools upgrade
    After you upgrade VMware Tools on certain SLES virtual machines such as SLES 10 SP4 and SLES 11 SP2, attempts to restart the virtual machine might fail with a waiting for sda2....... not responding error message. This issue occurs because the INITRD_MODULES options in /etc/sysconfig/kernel are deleted during the VMware Tools uninstall process.

    This issue is resolved in this release. However, the issue might still occur if you upgrade from an earlier version of VMware Tools to the version of VMware Tools available in this release. See Technical Information Document (TID) 7005233 on the Novell website.
  • VMware Tools upgrade times out on ESXi 4.1 Update 1
    On virtual machines running on ESXi 4.1 Update 1, attempts to upgrade VMware Tools might time out. Error messages similar to the following are written to vmware.log:

    Nov 30 15:36:34.839: vcpu-0| TOOLS INSTALL finished copying upgrader binary into guest. Starting Upgrader in guest.
    Nov 30 15:36:34.840: vcpu-0| TOOLS INSTALL Sending "upgrader.create 1"
    Nov 30 15:36:34.902: vcpu-0| TOOLS INSTALL Received guest file root from upgrader during unexpected state...ignoring.
    Nov 30 15:36:34.904: vcpu-0| GuestRpc: Channel 6, unable to send the reset rpc.
    Nov 30 15:36:34.905: vcpu-0| GuestRpc: Channel 6 reinitialized.


    This issue is resolved in this release.
  • VMware Tools service fails while starting Windows 2008 R2 virtual machine
    The VMware Tools service (vmtoolsd.exe) fails during the Windows 2008 R2 guest operating system start up process. However, you can start this service manually after the operating system start up process is complete.

    This issue is resolved in this release.
  • Esxtop fails while attempting a batch capture on a server with 128 CPUs
    When you attempt a batch capture on a server with 128 logical CPUs, esxtop fails. This happens because of the limited buffer size of the header. This issue is resolved by increasing the buffer size of the header.
  • Uninstalling or upgrading VMware Tools removes custom entries in modprobe.conf file
    Any changes that you make to the /etc/modprobe.conf file might be overwritten when you uninstall or upgrade VMware Tools.

    This issue is resolved in this release.
  • Windows Server 2008 R2 64-bit Remote Desktop IP virtualization might not work on ESXi 4.0 Update 1
    IP virtualization, which allows you to allocate unique IP addresses to RDP sessions, might not work on Windows Server 2008 R2 64-bit running on ESXi 4.0 Update 1. This happens because the vsock DLLs were registered by separate 32-bit and 64-bit executable files, which causes the catalog IDs for the vSock LSP to be out of sync between the 32-bit and 64-bit Winsock catalogs.

    This issue is resolved in this release.
  • VMware Tools upgrade does not replace VMCI driver required for Remote Desktop IP virtualization
    When you upgrade VMware Tools from an earlier version to a later version, IP virtualization fails. This happens because the ESXi host fails to check for the new VMCI driver version and is unable to install the vsock DLL files.


Known Issues

This section describes known issues in this release in the following subject areas:

Known issues not previously documented are marked with the * symbol.

CIM and API

  • The configuration item /UserVars/CIMoemProviderEnabled is not deleted when you upgrade to ESXi 4.1 Update 3
    Workaround: Delete /UserVars/CIMoemProviderEnabled by running the command:

    esxcfg-advcfg -L /UserVars/CIMoemProviderEnabled

  • OEM ProviderEnabled configuration items are enabled by default when you upgrade to ESXi 4.1 Update 3
    Workaround:
    1. Run the following command to disable OEM Providers:
       esxcfg-advcfg -s 0 /UserVars/CIMoem-<originalname>ProviderEnabled  
    2. Restart the sfcbd service by running the command:
        /etc/init.d/sfcbd-watchdog restart

  • SFCC library does not set the SetBIOSAttribute method in the generated XML file
    When the Small Footprint CIM Client (SFCC) library tries to run the SetBIOSAttribute method of the CIM_BIOSService class through SFCC, an XML file containing the following error is returned by SFCC: ERROR CODE="13" DESCRIPTION="The value supplied is incompatible with the type". This issue occurs because the old SFCC library does not support setting the method parameter type in the generated XML file. Due to this issue, you cannot invoke the SetBIOSAttribute method. The SFCC library in ESXi 4.1 hosts does not set the method parameter type in the socket stream XML file that is generated.

    A few suggested workarounds are:
    • IBM updates the CIMOM version
    • IBM patches the CIMOM version with this fix
    • IBM uses their own version of SFCC library

Guest Operating System

  • Installer window is not displayed properly during RHEL 6.1 guest operating system installation (KB 2003588).

  • Guest operating system might become unresponsive after you hot-add memory to more than 3GB
    A RedHat 5.4-64 guest operating system might become unresponsive if you start it with an IDE device attached and perform a hot-add operation to increase memory from less than 3GB to more than 3GB.

    Workaround: Do not use hot-add to change the virtual machine's memory size from less than or equal to 3072MB to more than 3072MB. Power off the virtual machine to perform this reconfiguration. If the guest operating system is already unresponsive, restart the virtual machine. This problem occurs only when the 3GB mark is crossed while the operating system is running.
  • Windows NT guest operating system installation error with hardware version 7 virtual machines
    When you install Windows NT 3.51 in a virtual machine that has hardware version 7, the installation process stops responding. This happens immediately after the blue startup screen with the Windows NT 3.51 version appears. This is a known issue in the Windows NT 3.51 kernel. Virtual machines with hardware version 7 contain more than 34 PCI buses, and the Windows NT kernel supports hosts that have a limit of 8 PCI buses.

    Workaround: If this is a new installation, delete the existing virtual machine and create a new one. During virtual machine creation, select hardware version 4. You must use the New Virtual Machine wizard to select a custom path for changing the hardware version. If you created the virtual machine with hardware version 4 and then upgraded it to hardware version 7, use VMware vCenter Converter to downgrade the virtual machine to hardware version 4.
  • Installing VMware Tools OSP packages on SLES 11 guest operating systems displays a message stating that the packages are not supported
    When you install VMware Tools OSP packages on a SUSE Linux Enterprise Server 11 guest operating system, an error message similar to the following is displayed:
    The following packages are not supported by their vendor.

    Workaround: Ignore the message. The OSP packages do not contain a tag that marks them as supported by the vendor. However, the packages are supported.
  • Compiling modules for VMware kernel is supported only for the running kernel
    VMware currently supports compiling kernel modules only for the currently running kernel.

    Workaround: Boot the kernel before compiling modules for it.


  • No network connectivity after deploying and powering on a virtual machine
    If you deploy a virtual machine created by using the Customization Wizard on an ESXi host, and power on the virtual machine, the virtual machine might lose network connectivity.

    Workaround:
    After deploying each virtual machine on the ESXi host, select the Connect at power on option in the Virtual Machine Properties window before you power on the virtual machine.

Miscellaneous

  • An ESX/ESXi 4.1 U2 host with vShield Endpoint 1.0 installed fails with a purple diagnostic screen mentioning VFileFilterReconnectWork (KB 2009452).

  • Running resxtop or esxtop for extended periods might result in memory problems
    Memory usage by resxtop or esxtop might increase over time depending on what happens on the ESXi host being monitored. That means that if a default delay of 5 seconds between two displays is used, resxtop or esxtop might shut down after around 14 hours.

    Workaround: Although you can use the -n option to change the total number of iterations, you should consider running resxtop only when you need the data. If you do have to collect resxtop or esxtop statistics over a long time, shut down and restart resxtop or esxtop periodically instead of running one resxtop or esxtop instance for weeks or months (see the sketch at the end of this list).
  • Group ID length in vSphere Client shorter than group ID length in vCLI
    If you specify a group ID using the vSphere Client, you can use only nine characters. In contrast, you can specify up to ten characters if you specify the group ID by using the vicfg-user vCLI command.

    Workaround: None


  • Warning message appears when you run esxcfg-pciid command
    When you try to run the esxcfg-pciid command to list the Ethernet controllers and adapters, you might see a warning message similar to the following:
    Vendor short name AMD Inc does not match existing vendor name Advanced Micro Devices [AMD]
    kernel driver mapping for device id 1022:7401 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown
    kernel driver mapping for device id 1022:7409 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown
    kernel driver mapping for device id 1022:7411 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown
    kernel driver mapping for device id 1022:7441 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown


    This issue occurs when both the platform device-descriptor files and the driver-specific descriptor files contain descriptions for the same device.

    Workaround: You can ignore this warning message.
  • Adding ESXi 4.1.x Embedded host into Cisco Nexus 1000V Release 4.0(4)SV1(3a) fails
    You might not be able to add an ESXi 4.1.x Embedded host to a Cisco Nexus 1000V Release 4.0(4)SV1(3a) through vCenter Server.

    Workaround
    To add an ESXi 4.1.x Embedded host into Cisco Nexus 1000V Release 4.0(4)SV1(3a), use the vihostupdate utility to apply the VEM bundle on ESXi hosts.
    Perform the following steps to add an ESXi 4.1.x Embedded host:
    1. Set up Cisco Nexus 1000V Release 4.0(4)SV1(3a).
    2. Set up vCenter Server with VUM plug-in installed.
    3. Connect Cisco Nexus 1000V Release 4.0(4)SV1(3a) to vCenter Server.
    4. Create a datacenter and add ESXi 4.1.x Embedded host to vCenter Server.
    5. Add ESXi 4.1.x compatible AV.2 VEM bits to an ESXi host by running the following command from vSphere CLI:
      vihostupdate.pl --server <Server IP> -i -b <VEM offline metadata path>
      The following prompts will be displayed on the vCLI:
      Enter username:
      Enter password:
      Please wait patch installation is in progress ...
    6. After the update of patches, navigate to Networking View in vCenter Server, and add the host in Cisco Nexus 1000V Release 4.0(4)SV1(3a).
    7. Verify that ESXi 4.1.x host is added to Cisco Nexus 1000V Release 4.0(4)SV1(3a).
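
For the resxtop/esxtop memory issue above, a minimal sketch of a bounded batch capture using the vSphere CLI resxtop (the host name, delay, and iteration count are placeholders to adjust):

    # 1440 samples at a 5-second delay is roughly 2 hours of data in batch (CSV) format;
    # restart the capture periodically instead of leaving one instance running for weeks
    resxtop --server <ESXi host> -b -d 5 -n 1440 > esxtop-batch.csv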

Networking

  • Certain versions of VMXNET 3 driver fail to initialize the device when the number of vCPUs is not a power of two (KB 2003484).
  • Network connectivity and system fail while control operations are running on physical NICs
    In some cases, when multiple X-Frame II s2io NICs are sharing the same PCI-X bus, control operations, such as changing the MTU, on the physical NIC cause network connectivity to be lost and the system to fail.

    Workaround: Avoid having multiple X-Frame II s2io NICs in slots that share the same PCI-X bus. In situations where such a configuration is necessary, avoid performing control operations on the physical NICs while virtual machines are doing network I/O.
  • Poor TCP performance might occur in traffic-forwarding virtual machines with LRO enabled
    Some Linux modules cannot handle LRO-generated packets. As a result, having LRO enabled on a VMXNET2 or VMXNET3 device in a traffic forwarding virtual machine running a Linux guest operating system can cause poor TCP performance. LRO is enabled by default on these devices.

    Workaround: In traffic-forwarding virtual machines running Linux guest operating systems, set the module load time parameter for the VMXNET2 or VMXNET3 Linux driver to include disable_lro=1 (a guest-side sketch appears at the end of this list).
  • Memory problems occur when a host uses more than 1016 dvPorts on a vDS
    Although the maximum number of allowed dvPorts per host on vDS is 4096, memory problems can start occurring when the number of dvPorts for a host approaches 1016. When this occurs, you cannot add virtual machines or virtual adapters to the vDS.

    Workaround: Configure a maximum of 1016 dvPorts per host on a vDS.
  • Reconfiguring VMXNET3 NIC might cause virtual machine to wake up
    Reconfiguring a VMXNET3 NIC while Wake-on-LAN is enabled and the virtual machine is asleep causes the virtual machine to resume.

    Workaround: Put the virtual machine back into sleep mode manually after reconfiguring (for example, after performing a hot-add or hot-remove) a VMXNET3 vNIC.
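
For the LRO workaround above, a guest-side sketch, assuming a Linux guest that loads the vmxnet3 module through modprobe (the file path and module name vary with the distribution and with whether a VMXNET2 or VMXNET3 device is used):

    # /etc/modprobe.d/vmxnet3.conf inside the guest operating system
    options vmxnet3 disable_lro=1
    # reload the module or reboot the guest for the parameter to take effect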

Storage

  • Cannot configure iSCSI over NIC with long logical-device names
    Running the command esxcli swiscsi nic add -n from a vSphere Command-Line Interface (vCLI) does not configure iSCSI operation over a VMkernel NIC whose logical-device name exceeds 8 characters. Third-party NIC drivers that use vmnic and vmknic names that contain more than 8 characters cannot work with the iSCSI port binding feature in ESXi hosts and might display exception error messages in the remote command line interface. Commands such as esxcli swiscsi nic list, esxcli swiscsi nic add, and esxcli swiscsi vmnic list from the vCLI fail because they are unable to handle the long vmnic names created by the third-party drivers.

    Workaround: Third-party NIC drivers need to restrict their vmnic names to 8 bytes or fewer to be compatible with the iSCSI port binding requirement.
    Note: If the driver is not used for iSCSI port binding, the driver can still use names of up to 32 bytes. This also works with iSCSI without the port binding feature.


  • Large number of storage-related messages in /var/log/messages log file
    When ESXi starts on a host with several physical paths to storage devices, the VMkernel log file records a large number of storage-related messages similar to the following:

    Nov 3 23:10:19 vmkernel: 0:00:00:44.917 cpu2:4347)Vob: ImplContextAddList:1222: Vob add (@&!*@*@(vob.scsi.scsipath.add)Add path: %s) failed: VOB context overflow

    The system might log similar messages during storage rescans. The messages are expected behavior and do not indicate any failure. You can safely ignore them.

    Workaround: Turn off logging if you do not want to see the messages.
  • Persistent reservation conflicts on shared LUNs might cause ESXi hosts to take longer to boot
    You might experience significant delays while starting hosts that share LUNs on a SAN. This might be because of conflicts between the LUN SCSI reservations.

    Workaround: To resolve this issue and speed up the boot process, change the timeout for synchronous commands during boot time to 10 seconds by setting the Scsi.CRTimeoutDuringBoot parameter to 10000.

    To modify the parameter from the vSphere Client:
    1. In the vSphere Client inventory panel, select the host, click the Configuration tab, and click Advanced Settings under Software.
    2. Select SCSI.
    3. Change the Scsi.CRTimeoutDuringBoot parameter to 10000.
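
    The same change can also be sketched from the local Tech Support Mode console, assuming the usual mapping of the Scsi.CRTimeoutDuringBoot advanced option to an esxcfg-advcfg path (verify the path on your host before relying on it):

    esxcfg-advcfg -s 10000 /Scsi/CRTimeoutDuringBoot    # set the boot-time synchronous command timeout to 10 seconds
    esxcfg-advcfg -g /Scsi/CRTimeoutDuringBoot          # confirm the new value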

Supported Hardware

  • ESXi might fail to boot when allowInterleavedNUMANodes boot option is FALSE
    On an IBM eX5 host with a MAX 5 extension, ESXi fails to boot and displays a SysAbort message. This issue might occur when the allowInterleavedNUMANodes boot option is not set to TRUE. The default value for this option is FALSE.

    Workaround: Set the allowInterleavedNUMANodes boot option to TRUE. See KB 1021454 for more information about how to configure the boot option for ESXi hosts. A hedged command-line sketch appears at the end of this list.
  • PCI device mapping errors on HP ProLiant DL370 G6
    When you run I/O operations on the HP ProLiant DL370 G6 server, you might encounter a purple screen or see alerts about Lint1 Interrupt or NMI. The HP ProLiant DL370 G6 server has two Intel I/O hubs (IOHs), and a BIOS defect in the ACPI Direct Memory Access remapping (DMAR) structure definitions causes some PCI devices to be described under the wrong DMA remapping unit. Any DMA access by such incorrectly described PCI devices triggers an IOMMU fault, and the device receives an I/O error. Depending on the device, this I/O error might result either in a Lint1 Interrupt or NMI alert message, or in a system failure with a purple screen.


    Workaround: Update the BIOS to 2010.05.21 or a later version.
  • ESXi installations on HP systems require the HP NMI driver
    ESXi 4.1 instances on HP systems (G7 and earlier) require the HP NMI driver to ensure proper handling of non-maskable interrupts (NMIs). The NMI driver ensures that NMIs are properly detected and logged to IML. Without this driver, NMIs, which signal hardware faults, are ignored on HP systems running ESXi.
    Caution: Failure to install this driver might result in NMI events being ignored by the OS. Ignoring NMI events may lead to system instability.

    Workaround: Download and install the NMI driver. The driver is available as an offline bundle from the HP Web site. Also, see KB 1021609.
  • Virtual machines might become read-only when run on an iSCSI datastore deployed on EqualLogic storage
    Virtual machines might become read-only if you use an EqualLogic array with an earlier firmware version. The firmware might occasionally drop I/O from the array queue, causing virtual machines to become read-only after marking the I/O as failed.


    Workaround: Upgrade EqualLogic Array Firmware to version 4.1.4 or later.
  • After you upgrade a storage array, the status for hardware acceleration in the vSphere Client changes to supported after a short delay
    When you upgrade a storage array's firmware to a version that supports VAAI functionality, vSphere 4.1 does not immediately register the change. The vSphere Client temporarily displays Unknown as the status for hardware acceleration.


    Workaround: This delay is harmless. The hardware acceleration status changes to supported after a short period of time.
  • Slow performance during virtual machine power-on or disk I/O on ESXi on the HP G6 Platform with P410i or P410 Smart Array Controller
    Some hosts might show slow performance during virtual machine power-on or while generating disk I/O. The major symptom is degraded I/O performance, causing large numbers of error messages similar to the following to be logged to /var/log/messages:

    Mar 25 17:39:25 vmkernel: 0:00:08:47.438 cpu1:4097)scsi_cmd_alloc returned NULL
    Mar 25 17:39:25 vmkernel: 0:00:08:47.438 cpu1:4097)scsi_cmd_alloc returned NULL
    Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)NMP: nmp_CompleteCommandForPath: Command 0x28 (0x410005060600) to NMP device
    "naa.600508b1001030304643453441300100" failed on physical path "vmhba0:C0:T0:L1" H:0x1 D:0x0 P:0x0 Possible sense data: 0x
    Mar 25 17:39:26 0 0x0 0x0.
    Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)WARNING: NMP: nmp_DeviceRetryCommand: Device
    "naa.600508b1001030304643453441300100": awaiting fast path state update for failoverwith I/O blocked. No prior reservation
    exists on the device.
    Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)NMP: nmp_CompleteCommandForPath: Command 0x28 (0x410005060700) to NMP device
    "naa.600508b1001030304643453441300100" failed on physical path "vmhba0:C0:T0:L1" H:0x1 D:0x0 P:0x0 Possible sense data: 0x
    Mar 25 17:39:26 0 0x0 0x0


    This issue is caused by the lack of a battery-backed cache module in the host.
    Without the battery-backed cache module, the controller operates in Zero Memory Raid mode, severely limiting the number of simultaneous commands that can be processed by the controller.

    Workaround: Install the HP 256MB P-series Cache Upgrade module from the HP website.
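
For the allowInterleavedNUMANodes entry above, a hedged sketch of setting the VMkernel boot option from the Tech Support Mode console; the -k (VMkernel load-time option) flag is an assumption here, so confirm the exact syntax against KB 1021454 before using it:

    esxcfg-advcfg -k TRUE allowInterleavedNUMANodes    # assumed syntax: persist the boot option for the next reboot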

Upgrade and Installation

  • Multi-path upgrade from ESXi 3.5 to ESXi 4.0.x to ESXi 4.1 Update 3 using VMware vCenter Update Manager fails
    After you upgrade from ESXi 3.5 to ESXi 4.0.x using VMware vCenter Update Manager, attempts to upgrade the ESXi installation to ESXi 4.1 Update 3 fail with an error message similar to the following:

    VMware vCenter Update Manager had an unknown error. Check the Tasks and Events tab and log files for details

    The upgrade fails for the following upgrade paths:

    • ESXi 3.5 to ESXi 4.0 Update 1 to ESXi 4.1 Update 3
    • ESXi 3.5 to ESXi 4.0 Update 2 to ESXi 4.1 Update 3
    • ESXi 3.5 to ESXi 4.0 Update 3 to ESXi 4.1 Update 3
    • ESXi 3.5 to ESXi 4.0 Update 4 to ESXi 4.1 Update 3
    • ESXi 3.5 to ESXi 4.0 to ESXi 4.1 Update 3

    Workaround: Restart the host after upgrading to ESXi 4.0.x and then upgrade to ESXi 4.1 Update 3.

  • Host upgrade to ESX/ESXi 4.1 Update 1 fails if you upgrade by using Update Manager 4.1 (KB 1035436)

  • Installation of the vSphere Client might fail with an error
    When you install vSphere Client, the installer might attempt to upgrade an out-of-date Microsoft Visual J# runtime. The upgrade is unsuccessful and the vSphere Client installation fails with the error: The Microsoft Visual J# 2.0 Second Edition installer returned error code 4113.

    Workaround: Uninstall all earlier versions of Microsoft Visual J#, and then install the vSphere Client. The installer includes an updated Microsoft Visual J# package.
  • Simultaneous access to two ESXi installations on USB flash drives causes the system to display panic messages
    If you boot a system that has access to multiple installations of ESXi with the same build number on two different USB flash drives, the system displays panic messages.

    Workaround: Detach one of the USB flash drives and reboot the system.

vMotion and Storage vMotion

  • vMotion is disabled after a reboot of ESXi 4.1 host
    If you enable vMotion on an ESXi host and reboot the ESXi host, vMotion is no longer enabled after the reboot process is completed.


    Workaround: To resolve the issue, reinstall the latest version of ESXi image provided by your system vendor.

  • Hot-plug operations fail after the swap file is relocated
    Hot-plug operations fail for powered-on virtual machines in a DRS cluster or on a standalone host, and result in the error failed to resume destination; VM not found after the swap file location is changed.

    Workaround: Perform one of the following tasks:
    • Reboot the affected virtual machines to register the new swap file location with them, and then perform the hot-plug operations.
    • Migrate the affected virtual machines using vMotion.
    • Suspend the affected virtual machines.

VMware Tools

  • Unable to use VMXNET network interface card after installing VMware Tools in RHEL3 with the latest errata kernel on ESXi 4.1 U1
    Some drivers in VMware Tools pre-built with RHEL 3.9 modules do not function correctly with the 2.4.21-63 kernel because of ABI incompatibility. As a result, some device drivers, such as vmxnet and vsocket, do not load when you install VMware Tools on RHEL 3.9.

    Workaround: Boot into the 2.4.21-63 kernel. Install the kernel-source and gcc packages for the 2.4.21-63 kernel. Run the command vmware-config-tools.pl --compile. This compiles the modules for this kernel, and the resulting modules should work with the running kernel (see the sketch at the end of this list).

  • Windows guest operating systems display incorrect NIC device status after a virtual hardware upgrade
    When you upgrade an ESXi host from ESXi 3.5 to ESXi 4.1 and also upgrade the virtual hardware version from 4 to 7, Windows guest operating systems display the device status of the NIC as This hardware device is not connected to the computer (Code 45).

    Workaround: Uninstall and reinstall the NIC. Also uninstall any corresponding NICs that are displayed as ghosted in Device Manager when following the steps mentioned in: http://support.microsoft.com/kb/315539.

  • VMware Tools does not perform auto upgrade when a Microsoft Windows 2000 virtual machine is restarted
    When you configure VMware Tools to upgrade automatically during a power cycle by selecting the Check and upgrade Tools before each power-on option on the Advanced pane of the Virtual Machine Properties window, VMware Tools does not perform the automatic upgrade in Microsoft Windows 2000 guest operating systems.


    Workaround:
    Manually upgrade VMware Tools in the Microsoft Windows 2000 guest operating system.
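
For the RHEL 3 item above, a minimal sketch of the compile workaround, run inside the guest after booting the 2.4.21-63 kernel (installing the matching kernel-source and gcc packages is distribution-specific and not shown):

    uname -r                           # confirm that the running kernel is 2.4.21-63
    vmware-config-tools.pl --compile   # rebuild the VMware Tools kernel modules against this kernel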

 
