VMware

VMware ESXi 4.1 Update 1 Release Notes

ESXi 4.1 Update 1 Installable | 10 February 2011 | 348481
ESXi 4.1 Update 1 Embedded | 10 February 2011 | 348481
VMware Tools | 10 February 2011 | Build 341836

Last Document Update: 18 July 2011

These release notes include the following topics:

What's New

The following information describes some of the enhancements available in this release of VMware ESXi:

  • Enablement of Trusted Execution Technology (TXT). ESXi 4.1 Update 1 can be configured to boot with Intel Trusted Execution Technology (TXT). This boot option can protect ESXi in some cases where system binaries are corrupted or have been tampered with. TXT is currently available on Intel Xeon processor 5600 series servers. For more information, see KB 1033811.
  • Improvement in scalability — ESXi 4.1 Update 1 supports up to 160 logical processors.
  • Support for additional guest operating systems. ESXi 4.1 Update 1 provides support for RHEL 6, RHEL 5.6, SLES 11 SP1 for VMware, Ubuntu 10.10, and Solaris 10 Update 9 guest operating systems. For a complete list of guest operating systems supported in this release, see the VMware Compatibility Guide.
  • Inclusion of additional drivers. ESXi 4.1 Update 1 includes the 3ware SCSI 2.26.08.036vm40 and Neterion vxge 2.0.28.21239-p3.0.1.2 drivers. For earlier releases, these drivers were available only as separate downloads.

Resolved Issues. In addition, this release delivers a number of bug fixes that are documented in the Resolved Issues section.


Prior Releases of ESXi 4.1 Update 1

The release prior to ESXi 4.1 Update 1 is ESXi 4.1. Features and known issues of ESXi 4.1 are described in the release notes available at VMware vSphere 4.1 Release Notes—ESXi Edition.


Before You Begin

ESXi, vCenter Server, and vSphere Client Version Compatibility

The VMware vSphere Compatibility Matrixes provide details of the compatibility of current and earlier versions of VMware vSphere components, including ESXi, vCenter Server, the vSphere Client, and other VMware products.

ESXi, vCenter Server, and VDDK Compatibility

Virtual Disk Development Kit (VDDK) 1.2 adds support for ESXi 4.1 Update 1 and vCenter Server 4.1 Update 1 releases. For more information about VDDK, see http://www.vmware.com/support/developer/vddk/.

Hardware Compatibility

  • Learn about hardware compatibility

    The Hardware Compatibility Lists are available in the Web-based Compatibility Guide. The Web-based Compatibility Guide is a single point of access for all VMware compatibility guides and provides the option to search the guides and save the search results in PDF format. For example, with this guide, you can verify whether your server, I/O, storage, and guest operating systems are compatible.

    Subscribe to the Compatibility Guide RSS feed to be notified of Compatibility Guide updates.

  • Learn about vSphere compatibility:

    VMware vSphere Compatibility Matrixes (PDF)

Installation and Upgrade

Read the ESXi Installable and vCenter Server Setup Guide for step-by-step guidance on installing and configuring ESXi Installable and vCenter Server or the ESXi Embedded and vCenter Server Setup Guide for step-by-step guidance on setting up ESXi Embedded and vCenter Server.

After successful installation of ESXi Installable or successful boot of ESXi Embedded, several configuration steps are essential, in particular licensing, networking, and security configuration. Refer to the guides in the vSphere documentation for guidance on these configuration tasks.

If you have VirtualCenter 2.x installed, see the vSphere Upgrade Guide for instructions about installing vCenter Server on a 64-bit operating system and preserving your VirtualCenter database.

Management Information Base (MIB) files related to ESXi are not bundled with vCenter Server. Only MIB files related to vCenter Server are shipped with vCenter Server 4.0.x. All MIB files can be downloaded from the VMware Web site at http://www.vmware.com/download.

Upgrading VMware Tools

VMware ESXi 4.1 Update 1 contains the latest version of VMware Tools. VMware Tools is a suite of utilities that enhances the performance of the virtual machine's guest operating system. Refer to the VMware Tools Resolved Issues section for a list of VMware Tools issues resolved in this release of ESXi.

To determine an installed VMware Tools version, see Verifying a VMware Tools build version (KB 1003947).

Upgrading or Migrating to ESXi 4.1 Update 1

ESXi 4.1 Update 1 offers the following options for upgrading:

  • VMware vCenter Update Manager. vSphere module that supports direct upgrades from ESXi 3.5 Update 5, ESXi 4.0.x, and ESXi 4.1 to ESXi 4.1 Update 1.
  • vihostupdate. Command-line utility that supports direct upgrades from ESXi 4.0 to ESXi 4.1 Update 1. This utility requires the vSphere CLI. For more details, see the vSphere Upgrade Guide. To add an ESXi 4.1 Update 1 Embedded host to a Cisco Nexus 1000V AV.2 vDS, apply the VEM bundle by using the vihostupdate utility, as shown in the example below.
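    For example, the following is a minimal sketch of applying the ESXi 4.0 upgrade bundle with vihostupdate from a machine that has the vSphere CLI installed. The connection options and the local path to the downloaded ZIP bundle are placeholders, and the --query invocation to list installed bulletins is assumed to be available in your vSphere CLI version:
    vihostupdate.pl --server <ESXi host IP> --username root -i -b <local path>/upgrade-from-esxi4.0-to-4.1-update01-348481.ZIP
    vihostupdate.pl --server <ESXi host IP> --username root --query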

Supported Upgrade Paths for Host Upgrade to ESXi 4.1 Update 1:

  • upgrade-from-ESXi3.5-to-4.1_update01.348481.ZIP
    Supported upgrade tools: VMware vCenter Update Manager with host upgrade baseline
    Supported upgrade paths: from ESXi 3.5 Update 5: Yes; from ESXi 4.0 (including ESXi 4.0 Update 1, Update 2, and Update 3): No; from ESXi 4.1: No

  • upgrade-from-esxi4.0-to-4.1-update01-348481.ZIP
    Supported upgrade tools: VMware vCenter Update Manager with host upgrade baseline; vihostupdate
    Supported upgrade paths: from ESXi 3.5 Update 5: No; from ESXi 4.0 (including ESXi 4.0 Update 1, Update 2, and Update 3): Yes; from ESXi 4.1: No

  • update-from-esxi4.1-4.1_update01.ZIP
    Supported upgrade tools: VMware vCenter Update Manager with patch baseline; vihostupdate
    Supported upgrade paths: from ESXi 3.5 Update 5: No; from ESXi 4.0 (including ESXi 4.0 Update 1, Update 2, and Update 3): No; from ESXi 4.1: Yes

  • ESXi 4.1 to ESXi 4.1.x using the patch definitions downloaded from the VMware portal (online)
    Supported upgrade tools: VMware vCenter Update Manager with patch baseline
    Supported upgrade paths: from ESXi 3.5 Update 5: No; from ESXi 4.0 (including ESXi 4.0 Update 1, Update 2, and Update 3): No; from ESXi 4.1: Yes

Notes:

Patches Contained in this Release

This release contains all bulletins for ESXi that were released prior to the release date of this product. See the VMware Download Patches page for more information about the individual bulletins.

In addition to ZIP file format, the ESXi 4.1 Update 1 release, both embedded and installable, is distributed as a patch that can be applied to existing installations of ESXi 4.1 software.

Patch Release ESXi410-Update01 contains the following individual bulletins:

ESXi410-201101201-SG Updates ESXi 4.1 Firmware
ESXi410-201101202-UG Updates ESXi 4.1 VMware Tools

ESXi 4.1 Update 1 also contains all fixes in the following previously released bundles:

ESXi410-201010401-SG Updates Firmware
ESXi410-201010402-BG Updates VMware Tools

See the documentation listed on the download page for more information on the contents of each patch.

Resolved Issues

This section describes resolved issues in this release in the following subject areas:

Resolved issues previously documented as known issues are marked with the † symbol.

Backup

  • Cannot take quiesced snapshots of Microsoft Windows Server 2008 R2 virtual machine running vCenter Server 4.1
    When creating a snapshot of a Microsoft Windows Server 2008 R2 virtual machine that has vCenter Server 4.1 installed, the snapshot operation might fail to complete. This issue occurs on Microsoft Windows Server 2008 R2 virtual machines when the ADAM database is installed. The issue is resolved in this release.

CIM and API

  • vCenter Server incorrectly reports the service tag of the blade chassis
    For blade servers that are running ESXi, vCenter Server incorrectly reports the service tag of the blade chassis instead of the service tag of the blade. On a Dell or IBM blade server that is managed by vCenter Server, the service tag number is listed in the System section of the Processors view under the Configuration tab in vCenter Server. This issue occurs due to an incorrect value for the SerialNumber property of the Fixed CIM OMC_Chassis instance. The issue is resolved in this release.

Guest Operating System

  • Windows guest operating system might fail with vmx_fb.dll error
    Windows guest operating systems installed with the VMware Windows XP display driver model (XPDM) driver might fail with a vmx_fb.dll error and display a blue screen. The issue is resolved in this release.
  • CPUID information returned by virtual hardware differs from CPUID of physical hardware
    Software running on guest operating systems might use CPUID information to determine characteristics of the underlying (virtual or physical) CPU hardware. In some instances, CPUID information returned by virtual hardware differs from physical hardware. Based on these differences, certain components of guest software might malfunction. In this release, the fix causes certain CPUID responses to more closely match those that physical hardware would return.
  • Installation of VMware Tools Operating System Specific Packages (OSPs) on SUSE Linux 9 results in screen resolution error
    When you install the VMware Tools OSP in SUSE Linux 9, the operating system configures the /etc/X11/XF86Config file of the X Window System and replaces the vesa video driver with the VMware video driver. If you are using a video driver other than vesa, the VMware Tools OSP does not replace it. Attempts to change the screen resolution by using the GUI or the Linux xrandr command fail. The issue is resolved in this release.
  • RPM installer for VMware Tools for RHEL3, RHEL4, and SLES9 fails to verify the package signature
    When you use the RPM Installer to install VMware Tools for RHEL3, RHEL4, or SLES9, signature verification fails with a V3 RSA/MD5 signature: NOKEY, key ID 66fd4949 warning message. This condition occurs because earlier versions of RPM cannot verify RSA signatures created by newer versions of RPM. To perform the signature verification, download the VMWARE-PACKAGING-GPG-DSA-KEY public key file from http://www.vmware.com/download/packages.html. After you import the key file, the warning message does not appear.
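    For example, a minimal sketch of importing the downloaded key and re-checking the package signature with standard rpm commands; the key file name and the package file name are placeholders for the files you downloaded:
    rpm --import VMWARE-PACKAGING-GPG-DSA-KEY.pub
    rpm --checksig VMwareTools-<version>.rpm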
  • Hot add feature might fail if the ESXi host has insufficient available physical memory
    When you try to hot-add memory or CPU to a virtual machine whose reserved memory or CPU is more than half the available physical memory or CPU of the host machine, the operation might fail. After applying this fix, hot-add might fail sometimes when the available physical memory on an ESXi host is less than twice the overhead memory of the virtual machine.

    • To view the available physical memory or CPU in vSphere Client, select an ESXi host and click the Resource Allocation link. The available physical memory is displayed under Memory > Available Capacity. The available CPU is displayed under CPU > Available Capacity.
    • To view the overhead memory of a virtual machine, select the virtual machine and click the Resource Allocation link. The overhead memory is displayed under Memory > Overhead.
  • Updates for the WDDM driver
    In this release, the Windows Display Driver Model (WDDM) driver is updated to fix some infrequent issues where a Windows virtual machine fails and displays a blue screen.


  • Memory sizes in the virtual machine default settings of RHEL 32-bit and 64-bit guest operating systems updated
    The minimum, default, and maximum recommended memory sizes in the virtual machine default settings for RHEL 32-bit and 64-bit guest operating systems are updated as per the latest RHEL 6 operating system specifications at http://www.redhat.com/.


  • DOS-based client software might fail to boot a virtual machine after upgrading to ESXi 4.1
    When a DOS-based client software (for example, Altiris Deployment Solution) uses PXE to boot a DOS image, the PXE boot sequence might fail while bootstrapping the image from the server, and the boot loader might display a Status: 0xc0000001 error. The issue is resolved in this release.


  • SCSI WRITE_SAME CDB issued from guest operating systems fails even though storage array supports CDB
    Applications running on guest operating systems on an ESXi 4.1 host fail after displaying error messages. This issue occurs only on ESXi 4.1 when applications use the SCSI WRITE_SAME CDB. Performance is degraded when applications use an alternate write command instead. The issue is resolved in this release.

Miscellaneous

  • Warning displayed in /var/log/vmkwarning when ESXi starts
    Warning messages similar to the following might appear in /var/log/messages when an ESXi host starts:
    WARNING: AcpiShared: 194 SSDTd+: Table length 11108 > 4096
    This warning is generated when an ESXi host reads an ACPI table from the BIOS where the table size is more than 4096 bytes. This warning is harmless and you can ignore it. This fix downgrades the warning to a log message.
  • Performance chart data for networking displays incorrect information
    The stacked per-virtual machine performance chart data for networking displays incorrect information. You can access the chart from Chart Options in the Advanced Settings on the Performance tab. The network transmit and receive statistics of a virtual machine connected to the Distributed Virtual Switch (DVS) are interchanged and displayed incorrectly.
    The fix in ESXi 4.1 Update 1 ensures that the host agent on ESXi hosts collects the correct statistics and passes them to the performance charts UI. This fix also resolves receive, transmit, and usage network statistics at the host level. Before this fix, the values reported for each of these statistics were zero.
  • Asynchronous drivers added to ESXi 4.1 Update 1 patches
    Asynchronous drivers are integrated into the ESXi 4.1 Update 1 ISO image and patch bundles.


  • ESXi host machines with more than 64 GB memory fail to boot when trusted boot is enabled
    ESXi 4.1 host machines with more than 64GB memory fail to boot if Trusted Platform Module (TPM) and Trusted Execution Technology (TXT) are available and enabled from the BIOS. The issue is resolved in this release.

  • vSphere Client displays incorrect BIOS version and release date
    The vSphere Client displays incorrect BIOS version and release date on the Processors page in the Configuration tab. The issue is resolved in this release.

Networking

  • ESXi hosts might not boot or some devices might become inaccessible when more than 6 bnx2 ports are used
    An error message similar to the following is displayed in /var/log/messages of ESXi:
    CPU10:4118 - intr vector: 290:out of interrupt vectors
    Before applying this fix, bnx2 devices in MSI-X mode and jumbo frame configuration support only 6 ports. The issue is resolved in this release: the bnx2 driver now allocates only 1 RX queue in MSI-X mode, supports 16 ports, and saves memory resources.
  • ESXi might fail on HP systems with loading 32.networking-drivers error
    ESXi might fail on some HP systems, such as the DL 980 G7 containing HP NC522SFP Dual Port 10GbE Gigabit Server Adapters, and display a message similar to loading 32.networking-drivers. This issue occurs when ESXi starts the NetXen driver, usually during the boot up of ESXi hosts or during the installation of network drivers, and depends on some HP system configurations. After applying this fix, you can use NetXen NICs for Gigabit Ethernet connectivity with ESXi hosts.
  • e1000e 1.1.2-NAPI driver added
    In previous releases, Intel e1000e 1.1.2-NAPI driver was not bundled with ESXi but provided separately for download. In this release, e1000e 1.1.2-NAPI driver is bundled with ESXi.
  • ESXi hosts might fail with bnx2x
    If you use ESXi 4.1 with Broadcom bnx2x (in-box driver version 1.54.1.v41.1-1vmw), you might see the following symptoms:
    • The ESXi host might frequently disconnect from the network.
    • The ESXi host might stop responding with a purple diagnostic screen that displays messages similar to the following:
      [0x41802834f9c0]bnx2x_rx_int@esx:nover: 0x184f stack: 0x580067b28, 0x417f80067b97, 0x
      [0x418028361880]bnx2x_poll@esx:nover: 0x1cf stack: 0x417f80067c64, 0x4100bc410628, 0x
      [0x41802825013a]napi_poll@esx:nover: 0x10d stack: 0x417fe8686478, 0x41000eac2b90, 0x4
    • The ESXi host might stop responding with a purple diagnostic screen that displays messages similar to the following:
      0:18:56:51.183 cu10:4106)0x417f80057838:[0x4180016e7793]PktContainerGetPkt@vmkernel:nover+0xde stack: 0x1
      0:18:56:51.184 pu10:4106)0x417f80057868:[0x4180016e78d2]Pkt_SlabAlloc@vmkernel:nover+0x81 stack: 0x417f800578d8
      0:18:56:51.184 cpu10:4106)0x417f80057888:[0x4180016e7acc]Pkt_AllocWithUseSizeNFlags@vmkernel:nover+0x17 stack: 0x417f800578b8
      0:18:56:51.185 cpu10:4106)0x417f800578b8:[0x41800175aa9d]vmk_PktAllocWithFlags@vmkernel:nover+0x6c stack: 0x1
      0:18:56:51.185 cpu10:4106)0x417f800578f8:[0x418001a63e45]vmklnx_dev_alloc_skb@esx:nover+0x9c stack: 0x4100aea1e988
      0:18:56:51.185 cpu10:4106)0x417f80057918:[0x418001a423da]__netdev_alloc_skb@esx:nover+0x1d stack: 0x417f800579a8
      0:18:56:51.186 cpu10:4106)0x417f80057b08:[0x418001b6c0cf]bnx2x_rx_int@esx:nover+0xf5e stack: 0x0
      0:18:56:51.186 cpu10:4106)0x417f80057b48:[0x418001b7e880]bnx2x_poll@esx:nover+0x1cf stack: 0x417f80057c64
      0:18:56:51.187 cpu10:4106)0x417f80057bc8:[0x418001a6513a]napi_poll@esx:nover+0x10d stack: 0x417fc1f0d078
    • The bnx2x driver or firmware sends panic messages and writes a backtrace with messages similar to the following messages in the /var/log/vmkernel log file:
      vmkernel: 0:00:34:23.762 cpu8:4401)<3>[bnx2x_attn_int_deasserted3:3379(vmnic0)]MC assert!
      vmkernel: 0:00:34:23.762 cpu8:4401)<3>[bnx2x_attn_int_deasserted3:3384(vmnic0)]driver assert
      vmkernel: 0:00:34:23.762 cpu8:4401)<3>[bnx2x_panic_dump:634(vmnic0)]begin crash dump


    The issue is resolved in this release.
  • ESXi hosts might not boot or might cause some devices to become inaccessible when using NetXen 1G NX3031 devices or multiple 10G NX2031 devices
    When using a NetXen 1G NX3031 device or multiple 10G NX2031 devices, you might see an error message similar to the following written to the logs of the ESXi 4.1 host after you upgrade from ESXi 4.0:
    Out of Interrupt vectors
    On ESXi hosts where NetXen 1G and NX2031 10G devices do not support NetQueue, the ESXi host might run out of MSI-X interrupt vectors. This issue can render ESXi hosts unbootable or other devices (such as storage devices) inaccessible. The issue is resolved in this release.

Security

  • Updates the ESXi userworld OpenSSL library version to 0.9.8p
    In this release, the version of ESXi userworld OpenSSL library is updated to 0.9.8p. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2010-3864 and CVE-2010-2939 to the issues addressed in this update.


  • Update to userworld version of cURL
    The userworld version of cURL is updated. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2010-0734 to the issue addressed in this update.


  • ESXi 4.1 Update Installer might introduce a SFCB Authentication flaw
    Under certain conditions, the ESXi 4.1 installer that upgrades an ESXi 3.5 or ESXi 4.0 host to an ESXi 4.1 host incorrectly handles the SFCB authentication mode. The result is that SFCB authentication can allow login with any combination of username and password.

    An ESXi 4.1 host is affected if all of the following conditions apply:
    • ESXi 4.1 host was upgraded from an ESXi 3.5 or an ESXi 4.0 host.
    • The SFCB configuration file at /etc/sfcb/sfcb.cfg was modified prior to the upgrade.
    • The sfcbd daemon is running (sfcbd runs by default).

    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2010-4573 to this issue addressed in this update.

Server Configuration

  • System clock reports inaccurate time
    In rare cases, the system clock on ESXi hosts reports incorrect time. The issue is resolved in this release.

Storage

  • Creation of large .vmdk files on NFS might fail
    When you create a virtual disk (.vmdk file) with a large size, for example, more than 1TB, on NFS storage, the creation process might fail with an error:
    A general system error occurred: Failed to create disk: Error creating disk
    This issue occurs when the NFS client does not wait long enough for the NFS storage array to initialize the virtual disk after the RPC parameter of the NFS client times out. By default, the timeout value is 10 seconds. This fix provides the configuration option to tune the RPC timeout parameter by using the esxcfg-advcfg -s <Timeout> /NFS/SetAttrRPCTimeout command.
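    For example, a minimal sketch of raising the timeout and confirming the setting from the host console or vSphere CLI; the value of 30 seconds is illustrative only:
    esxcfg-advcfg -s 30 /NFS/SetAttrRPCTimeout
    esxcfg-advcfg -g /NFS/SetAttrRPCTimeout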
  • Not supported SCSI warning messages logged in vmkernel
    SCSI warnings similar to the following are written to /var/log/vmkernel:
    Apr 29 04:10:55 localhost vmkernel: 0:00:01:08.161 cpu0:4096)WARNING: ScsiHost: 797: SCSI command failed on handle 1072: Not supported.
    You can ignore these messages. Such messages appear because certain SCSI commands are not supported by the storage array. In this release, the warning messages are suppressed in /var/log/vmkwarning to reduce support calls.
  • Messages logged in VMkernel log files when storage arrays are rescanned from vSphere Client
    ESXi hosts might log messages similar to the following in the VMkernel log files for LUNs not mapped to ESXi hosts: 0:22:30:03.046 cpu8:4315)ScsiScan: 106: Path 'vmhba0:C0:T0:L0': Peripheral qualifier 0x1 not supported. Such messages are logged either when ESXi hosts start, or when you initiate a rescan operation of the storage arrays from the vSphere Client, or every 5 minutes after ESXi hosts boot. In this release, the messages are no longer logged.
  • Error messages logged when scanning for LUNs from iSCSI storage array
    ESXi hosts might fail with a NOT_REACHED bora/modules/vmkernel/tcpip2/freebsd/sys/support/vmk_iscsi.c:648 message on a purple screen when you scan for LUNs from an iSCSI storage array through the vSphere Client (Inventory > Configuration > Storage Adapters > iSCSI Software Adapter). This issue might occur if the tcp.window.size parameter in /etc/vmware/vmkiscsid/iscsid.conf is modified manually. This fix resolves the issue and also logs warning messages in /var/log/messages for ESXi if the tcp.window.size parameter is modified to a value lower than its default.
  • ESXi hosts might fail when using LSI SAS HBAs connected to SATA disks
    Data loss might occur on ESXi hosts using LSI SAS HBAs connected to SATA disks. This issue occurs when the maximum I/O size is set to more than 64KB in mptsas driver and LSI SAS HBAs are connected to SATA disks. The issue is resolved in this release.
  • VMW_PSP_RR set as default path selection policy for NetApp storage arrays that support SATP_ALUA
    The VMW_PSP_RR policy is set as the default path selection policy for NetApp storage arrays that support SATP_ALUA. You can set this policy by using vCenter Server or through esxcli.
    To set this policy through vCenter Server:
    1. Click the Configuration tab.
    2. In the left panel under Hardware Adapters, select Storage Adapters.
    3. On the right panel, select the vmhba that connects to the NetApp LUNs.
    4. Right-click the LUN whose path policy you want to change, and select Manage Paths.
    5. In the resulting dialog box, under Policy, set Path Selection to Round Robin.

    To set this policy, through esxcli, run the following commands from a remote host setup for vSphere remote management:
    # esxcli <conn_options> nmp satp addrule --satp="VMW_SATP_ALUA" --psp="VMW_PSP_RR" --claim-option="tpgs_on" --vendor="NETAPP" --description="NetApp arrays with ALUA support"
    # esxcli <conn_options> corestorage claimrule load
    # esxcli <conn_options> corestorage claimrule run
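    To confirm that the new rule is present, you can list the SATP rules; this assumes the esxcli nmp satp listrules command is available in your vSphere CLI version:
    # esxcli <conn_options> nmp satp listrules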

  • VMW_PSP_RR set as default path selection policy for IBM 2810XIV storage arrays
    The VMW_PSP_RR policy is set as the default path selection policy for IBM 2810XIV storage arrays. You can set this policy by using vCenter Server or through esxcli.
    To set this policy through vCenter Server:
    1. Click the Configuration tab.
    2. In the left panel under Hardware Adapters, select Storage Adapters.
    3. On the right panel, select the vmhba that connects to the IBM LUNs.
    4. Right-click the LUN whose path policy you want to change, and select Manage Paths.
    5. In the resulting dialog box, under Policy, set Path Selection to Round Robin.

    To set this policy through esxcli, run the following commands from a remote host setup for vSphere remote management:
    # esxcli <conn_options> nmp satp addrule --satp="VMW_SATP_ALUA" --psp="VMW_PSP_RR" --claim-option="tpgs_on" --vendor="IBM" --model="2810XIV" --description="IBM 2810XIV arrays with ALUA support"
    # esxcli <conn_options> nmp satp addrule --satp="VMW_SATP_DEFAULT_AA" --psp="VMW_PSP_RR" --claim-option="tpgs_off" --vendor="IBM" --model="2810XIV" --description="IBM 2810XIV arrays without ALUA support"
    # esxcli <conn_options> corestorage claimrule load

    # esxcli <conn_options> corestorage claimrule run
  • Target information for some LUNs is missing in the vCenter Server UI
    Target information for LUNs is sometimes not displayed in the vCenter Server UI.
    In releases earlier than ESXi 4.1 Update 1, some iSCSI LUNs do not show the target information. To view this information in the Configuration tab, perform the following steps:

    1. Click Storage Adapters under Hardware.
    2. Click iSCSI Host Bus Adapter in the Storage Adapters pane.
    3. Click Paths in the Details pane.
  • txtsetup.oem file on floppy disk points to incorrect location of PVSCSI driver
    Installation of Microsoft Windows Server 2003 guest operating systems on a VMware Paravirtual SCSI (PVSCSI) virtual hard disk fails with the following error:
    Insert the disk labeled:
    VMware PVSCSI Controller Disk
    into drive A:
    The error occurs because the txtsetup.oem file on a floppy disk points to the incorrect location of the PVSCSI driver. In this release, the location is corrected.
  • Some virtual machines stop responding during storage rescan operation when any LUN on the host is in an all-paths-down (APD) state
    During a storage rescan operation, some virtual machines stop responding when any LUN on the host is in an all-paths-down (APD) state. For more information, see KB 1016626 at http://kb.vmware.com/kb/1016626. To work around the problem described in the KB, manually set the advanced configuration option /VMFS3/FailVolumeOpenIfAPD to 1 before issuing the rescan, and then reset it to 0 after the rescan operation completes. The issue is resolved in this release. You no longer need to apply the workaround of setting and resetting the advanced configuration option around the rescan operation. Virtual machines on non-APD volumes no longer fail during a rescan operation, even if some LUNs are in an all-paths-down state.
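    For hosts that do not yet have this fix, the KB workaround can be applied from the host console or vSphere CLI. A minimal sketch, with the option path taken from the text above:
    esxcfg-advcfg -s 1 /VMFS3/FailVolumeOpenIfAPD
    (perform the storage rescan)
    esxcfg-advcfg -s 0 /VMFS3/FailVolumeOpenIfAPD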

  • Consistent decrease in the size of the available memory when a virtual machine is powered on and powered off
    Powering on and powering off a single virtual machine, or running I/O on the virtual machine, causes a consistent decrease in the available memory. The VMkernel log contains memory allocation error messages. The issue is resolved in this release.

Supported Hardware

  • Power consumption graph does not display information on some ESXi hosts
    The power consumption graph displayed from the vSphere Client for ESXi 4.1 hosts does not appear on some ESXi hosts from certain vendors; the chart shows consumption as 0 watts. To view the power consumption graph from the vSphere Client, select the host, click the Performance tab, and select Power from the drop-down menu. In this release, the power consumption graph is updated to support additional hosts from Bull, Dell, HP, Mitsubishi, NEC, and Toshiba.


  • Additional support for Dell iDRAC devices
    In this release, the Dell iDRAC device (device ID 413C:a102) is supported.


  • Enhancements to Tboot and SINIT error reporting and handling
    If Tboot fails to boot an ESXi host in secure mode, an error similar to the following is displayed:
    Intel TXT boot failed on a previous boot attempt. TXT.ERRORCODE:<error code>. <description>
    The issue is resolved in this release.

Upgrade and Installation

  • esxupdate dependency resolution is not consistent with bulletin policy
    In the esxupdate utility, the dependency resolution does not match the bulletin-building policy. While installing on ESXi hosts, you do not see any errors or warning messages. The bulletin delivery policy should use the lowest version of the VIB needed, so that you can control the fixes you install and avoid unknown or unexpected updates to ESXi hosts.


  • DNS server settings are lost after second ESXi reboot
    After ESXi is installed using the scripted installation method, the DNS server settings are lost when ESXi reboots for the second time. The settings are correct after the initial reboot following the installation but are lost after the second reboot. The issue is resolved in this release.

  • Unable to install ESXi 4.1 using scripted installation from a USB drive
    When you try to install ESXi 4.1 by using a script saved on a USB drive, or if the installation media is on a USB drive, the ESXi 4.1 installation stops and displays the following message:
    Total number of sectors not a multiple of sectors per track!
    Add mtools_skip_check=1 to your .mtoolsrc file to skip this test.

    The issue is resolved in this release.

Virtual Machine Management

  • Virtual machines fail to power on in some cases even when swap space exists on ESXi 4.1 hosts
    Powering on virtual machines running on an ESXi 4.1 host fails and logs an Insufficient COS swap to power on error message in /var/log/vmware/hostd.log, even though the machine has free space available. After applying this fix, you can power on the virtual machines.
  • After a virtual machine is migrated, USB devices on the destination host might incorrectly show up as assigned to the virtual machine
    After you migrate a virtual machine to a destination host that contains USB devices and then add additional USB devices to the migrated virtual machine on the destination host, USB devices on the destination host might show up as assigned to the virtual machine even though they have not been assigned to it.
    The issue is resolved in this release.

  • Cannot perform pass-through of certain devices
    A warning message similar to the following is logged in the VMkernel log:
    x:x:x.x: Cannot change ownership to PASSTHRU (non-ACS capable switch in hierarchy), where x:x:x.x is the PCI device address.
    This warning message is logged because certain devices cannot perform pass-through when you perform a direct assignment of a device to a virtual machine. Access Control Services (ACS) is introduced by PCI SIG to address potential data corruption with direct assignment of devices. In this release, pass-through of devices that are behind PCI Express (PCIe) switches and without ACS capability is not allowed.

  • Resuming a 64-bit Windows virtual machine from suspended state might cause the applications running on the virtual machine to stop responding
    If a 64-bit Windows virtual machine is resumed from the suspended state or is migrated to an ESXi 4.1 host, the applications running on the virtual machine might stop responding, and the Microsoft Windows Event Logs might display error messages similar to the following:
    .NET Runtime
    .NET Runtime version * Fatal Execution Engine Error *
    Application Error:
    Faulting application name: oobe.exe *
    Faulting module name: mscorwks.dll *
    Exception code: 0xc00000005

    The issue is resolved in this release.

vMotion and Storage vMotion

  • Cannot revert to snapshots created on ESXi 3.5 hosts
    ESXi hosts cannot revert virtual machines to an earlier snapshot after you upgrade from ESXi 3.5 Update 4 to ESXi 4.1 Update 1. The following message might be displayed in vCenter Server:
    The features supported by the processor(s) in this machine are different from the features supported by the processor(s) in the machine on which the checkpoint was saved. Please try to resume the snapshot on a machine where the processors have the same features.
    This issue might occur when you create virtual machines on ESX 3.0 hosts, perform vMotion and suspend the virtual machines on ESXi 3.5 hosts, and resume them on ESXi 4.x hosts. In this release, you can revert to snapshots created on ESXi 3.5 hosts and resume the virtual machines on ESXi 4.x hosts.
  • Swap file of virtual machine increases in size after completion of storage vMotion
    After a virtual machine running with a memory reservation is moved to a different datastore by using storage vMotion, the virtual machine is seen to have a swap file equal in size to the configured memory. Messages similar to the following might be logged in the vmware.log file of the virtual machine:
    May 25 16:42:38.756: vmx| FSR: Decreasing CPU reservation by 750 MHz, due to atomic CPU reservation transfer of that amount. New reservation is 0 MHz.FSR: Decreasing memory reservation by 20480 MB, due to atomic memory reservation transfer of that amount. New reservation is 0 pages. CreateVM: Swap: generating normal swap file name.
    When ESXi hosts perform storage vMotion, the swap file size of virtual machines increases to memsize. With this release, the swap file size remains the same after storage vMotion.
  • ESXi hosts might fail when storage vMotion task is cancelled when relocating a powered on virtual machine
    Canceling a storage vMotion task when relocating a powered-on virtual machine containing multiple disks on the same datastore to a different datastore on the same host might cause the ESXi 4.1 hosts to fail with the following error: Exception: NOT_IMPLEMENTED bora/lib/pollDefault/pollDefault.c:2059. The issue is resolved in this release.

VMware Tools

  • Error displayed when VMware Tools is installed with Print Spooler service stopped
    If you install VMware Tools on a virtual machine on which the Print Spooler service is stopped (Administrative Tools > Services > Print Spooler), and if you select the Thin Print feature (Install VMware Tools > Typical or Custom and select Thin Print under Custom Setup > VMware Device Drivers), uninstalling VMware Tools results in the following error message:
    Runtime Error! Program: C:\Program Files\VMware\VMware Tools\TPVCGateway.exe. This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information.
    Click OK to remove the error message and uninstall VMware Tools. In this release, the error message does not appear.
  • Update Tools button in VMware Tools Properties window disabled for performing VMware Tools upgrade
    The Update Tools button for performing a VMware Tools upgrade from a Windows guest operating system is disabled for non-administrators. The Update Tools button is available under the Options tab of the VMware Tools Properties window. Also, options in the Shrink and Scripts tabs in the VMware Tools Properties window are disabled for non-administrators. To block VMware Tools upgrades for all users, set the isolation.tools.autoinstall.disable parameter to TRUE in the VMX file. This release contains only a UI change that disables the Update Tools button for non-administrators, and does not block upgrades from custom applications.
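    To block upgrades for all users, the parameter named above can be added to the virtual machine's configuration (.vmx) file while the virtual machine is powered off; a minimal sketch of the entry:
    isolation.tools.autoinstall.disable = "TRUE"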
  • Installation of vmware-open-vm-tools-xorg-utilities might fail
    If the /usr and / directories are mounted on different devices and you are installing vmware-open-vm-tools-xorg-utilities on guest operating systems, an error similar to the following is displayed:
    failed: ln: creating hard link `/usr/lib/vmware-tools/libconf/etc/fonts/fonts.conf' => `/etc/fonts/fonts.conf': Invalid cross-device link
    For example, if you run zypper (the SLES package manager) to install vmware-open-vm-tools-xorg-utilities, you might see the error on the screen. When vmware-open-vm-tools-xorg-utilities tries to create a hard link to /etc/fonts/fonts.conf, a cross-device link issue might occur if the /usr and / directories are mounted on different devices. After applying this fix, you can install vmware-open-vm-tools-xorg-utilities.
  • Creation of quiesced snapshots might not work on non-English versions of Microsoft Windows guest operating systems
    The issue occurs when a Windows known folder path contains non-ASCII characters, for example, in the case of the application data folder in Czech Windows guest operating systems. This issue causes the snapshot operation to fail. The issue is resolved in this release.

  • Creation of quiesced snapshots might fail on some non-English versions of Windows guest operating systems
    Quiesced snapshots might fail on some non-English versions of Windows guest operating systems, such as French versions of Microsoft Windows Server 2008 R2 and Microsoft Windows 7 guest operating systems. This issue occurs because the VMware Snapshot Provider service does not get registered as a Windows service or as a COM+ application properly on some non-English versions of Microsoft Windows guest operating systems. This issue causes the whole snapshot operation to fail, and as a result, no snapshot is created. The issue is resolved in this release.


Known Issues

This section describes known issues in this release in the following subject areas:

Known issues not previously documented are marked with the * symbol.

CIM and API

  • SFCC library does not set the SetBIOSAttribute method in the generated XML file
    When the Small Footprint CIM Client (SFCC) library tries to run the SetBIOSAttribute method of the CIM_BIOSService class through SFCC, an XML file containing the following error is returned by SFCC: ERROR CODE="13" DESCRIPTION="The value supplied is incompatible with the type". This issue occurs because the old SFCC library does not support setting the method parameter type in the generated XML file. Due to this issue, you cannot invoke the SetBIOSAttribute method. The SFCC library in ESXi 4.1 hosts does not set the method parameter type in the socket stream XML file that is generated.

    A few suggested workarounds are:
    • IBM updates the CIMOM version
    • IBM patches the CIMOM version with this fix
    • IBM uses their own version of SFCC library

Guest Operating System

  • Guest operating system might become unresponsive after you hot-add memory to more than 3GB *
    RedHat 5.4-64 guest operating system might become unresponsive if you start with an IDE device attached, and perform a hot-add operation to increase memory from less than 3GB to more than 3GB.

    Workaround: Do not use hot-add to change the virtual machine's memory size from less than or equal to 3072MB to more than 3072MB. Power off the virtual machine to perform this reconfiguration. If the guest operating system is already unresponsive, restart the virtual machine. This problem occurs only when the 3GB mark is crossed while the operating system is running.
  • Windows NT guest operating system installation error with hardware version 7 virtual machines *
    When you install Windows NT 3.51 in a virtual machine that has hardware version 7, the installation process stops responding. This happens immediately after the blue startup screen with the Windows NT 3.51 version appears. This is a known issue in the Windows NT 3.51 kernel. Virtual machines with hardware version 7 contain more than 34 PCI buses, and the Windows NT kernel supports a limit of 8 PCI buses.

    Workaround: If this is a new installation, delete the existing virtual machine and create a new one. During virtual machine creation, select hardware version 4. You must use the New Virtual Machine wizard to select a custom path for changing the hardware version. If you created the virtual machine with hardware version 4 and then upgraded it to hardware version 7, use VMware vCenter Converter to downgrade the virtual machine to hardware version 4.
  • Installing VMware Tools OSP packages on SLES 11 guest operating systems displays a message stating that the packages are not supported *
    When you install VMware Tools OSP packages on a SUSE Linux Enterprise Server 11 guest operating system, an error message similar to the following is displayed:
    The following packages are not supported by their vendor.

    Workaround: Ignore the message. The OSP packages do not contain a tag that marks them as supported by the vendor. However, the packages are supported.
  • Compiling modules for VMware kernel is supported only for the running kernel *
    VMware currently supports compiling kernel modules only for the currently running kernel.

    Workaround: Boot the kernel before compiling modules for it.


  • No network connectivity after deploying and powering on a virtual machine
    If you deploy a virtual machine created by using the Customization Wizard on an ESXi host, and power on the virtual machine, the virtual machine might lose network connectivity.

    Workaround:
    After deploying each virtual machine on the ESXi host, select the Connect at power on option in the Virtual Machine Properties window before you power on the virtual machine.

Miscellaneous

  • Running resxtop or esxtop for extended periods might result in memory problems *
    Memory usage by resxtop or esxtop might increase over time, depending on what happens on the ESXi host being monitored. That means that if the default delay of 5 seconds between two displays is used, resxtop or esxtop might shut down after around 14 hours.

    Workaround: Although you can use the -n option to change the total number of iterations, you should consider running resxtop only when you need the data. If you do have to collect resxtop or esxtop statistics over a long time, shut down and restart resxtop or esxtop periodically instead of running one resxtop or esxtop instance for weeks or months. See the example below.
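    For example, a minimal sketch of collecting a bounded run of statistics in batch mode; the host name, delay, and iteration count are illustrative (720 iterations at 5-second intervals is about one hour):
    resxtop --server <ESXi host> -b -d 5 -n 720 > esxtop-stats.csv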
  • Group ID length in vSphere Client shorter than group ID length in vCLI *
    If you specify a group ID by using the vSphere Client, you can use only nine characters. In contrast, you can specify up to ten characters if you specify the group ID by using the vicfg-user vCLI command.

    Workaround: None


  • Warning message appears when you run esxcfg-pciid command
    When you try to run the esxcfg-pciid command to list the Ethernet controllers and adapters, you might see a warning message similar to the following:
    Vendor short name AMD Inc does not match existing vendor name Advanced Micro Devices [AMD]
    kernel driver mapping for device id 1022:7401 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown
    kernel driver mapping for device id 1022:7409 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown
    kernel driver mapping for device id 1022:7411 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown
    kernel driver mapping for device id 1022:7441 in /etc/vmware/pciid/pata_amd.xml conflicts with definition for unknown


    This issue occurs when both the platform device-descriptor files and the driver-specific descriptor files contain descriptions for the same device.

    Workaround: You can ignore this warning message.
  • Adding ESXi 4.1 Update 1 Embedded host into Cisco Nexus 1000V Release 4.0(4)SV1(3a) fails
    You might not be able to add an ESXi 4.1 Update 1 Embedded host to a Cisco Nexus 1000V Release 4.0(4)SV1(3a) through vCenter Server.

    Workaround
    To add an ESXi 4.1 Update 1 Embedded host into Cisco Nexus 1000V Release 4.0(4)SV1(3a), use the vihostupdate utility to apply the VEM bundle on ESXi hosts.
    Perform the following steps to add an ESXi 4.1 Update 1 Embedded host:
    1. Set up Cisco Nexus 1000V Release 4.0(4)SV1(3a).
    2. Set up vCenter Server with VUM plug-in installed.
    3. Connect Cisco Nexus 1000V Release 4.0(4)SV1(3a) to vCenter Server.
    4. Create a datacenter and add ESXi 4.1 Update 1 Embedded host to vCenter Server.
    5. Add ESXi 4.1 Update 1 compatible AV.2 VEM bits to an ESXi host by running the following command from vSphere CLI:
      vihostupdate.pl --server <Server IP> -i -b <VEM offline metadata path>
      The following prompts will be displayed on the vCLI:
      Enter username:
      Enter password:
      Please wait patch installation is in progress ...
    6. After the update of patches, navigate to Networking View in vCenter Server, and add the host in Cisco Nexus 1000V Release 4.0(4)SV1(3a).
    7. Verify that ESXi 4.1 Update 1 host is added to Cisco Nexus 1000V Release 4.0(4)SV1(3a).

Networking

  • Network connectivity and system fail while control operations are running on physical NICs *
    In some cases, when multiple X-Frame II s2io NICs are sharing the same PCI-X bus, control operations, such as changing the MTU, on the physical NIC cause network connectivity to be lost and the system to fail.

    Workaround: Avoid having multiple X-Frame II s2io NICs in slots that share the same PCI-X bus. In situations where such a configuration is necessary, avoid performing control operations on the physical NICs while virtual machines are doing network I/O.
  • Poor TCP performance might occur in traffic-forwarding virtual machines with LRO enabled *
    Some Linux modules cannot handle LRO-generated packets. As a result, having LRO enabled on a VMXNET2 or VMXNET3 device in a traffic forwarding virtual machine running a Linux guest operating system can cause poor TCP performance. LRO is enabled by default on these devices.

    Workaround: In traffic-forwarding virtual machines running Linux guest operating systems, set the module load time parameter for the VMXNET2 or VMXNET3 Linux driver to include disable_lro=1.
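    For example, a minimal sketch of loading the Linux VMXNET3 driver with the parameter named above; the module name vmxnet3 and the persistent-configuration file path are assumptions for a typical Linux guest:
    modprobe -r vmxnet3
    modprobe vmxnet3 disable_lro=1
    To make the setting persistent, an options vmxnet3 disable_lro=1 line can be added to a file under /etc/modprobe.d/.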
  • Memory problems occur when a host uses more than 1016 dvPorts on a vDS *
    Although the maximum number of allowed dvPorts per host on vDS is 4096, memory problems can start occurring when the number of dvPorts for a host approaches 1016. When this occurs, you cannot add virtual machines or virtual adapters to the vDS.

    Workaround: Configure a maximum of 1016 dvPorts per host on a vDS.
  • Reconfiguring VMXNET3 NIC might cause virtual machine to wake up *
    Reconfiguring a VMXNET3 NIC while Wake-on-LAN is enabled and the virtual machine is asleep causes the virtual machine to resume.

    Workaround: Put the virtual machine back into sleep mode manually after reconfiguring (for example, after performing a hot-add or hot-remove) a VMXNET3 vNIC.

Storage

  • Cannot configure iSCSI over NIC with long logical-device names
    Running the esxcli swiscsi nic add -n command from the vSphere Command-Line Interface (vCLI) does not configure iSCSI operation over a VMkernel NIC whose logical-device name exceeds 8 characters. Third-party NIC drivers that use vmnic and vmknic names that contain more than 8 characters cannot work with the iSCSI port binding feature in ESXi hosts and might display exception error messages in the remote command line interface. Commands such as esxcli swiscsi nic list, esxcli swiscsi nic add, and esxcli swiscsi vmnic list from the vCLI fail because they are unable to handle the long vmnic names created by the third-party drivers.

    Workaround: The third-party NIC driver needs to restrict their vmnic names to less than or equal to 8 bytes to be compatible with iSCSI port binding requirement.
    Note: If the driver is not used for iSCSI port binding, the driver can still use names of up to 32 bytes. This also works with iSCSI without the port binding feature.


  • Large number of storage-related messages in /var/log/messages log file *
    When ESXi starts on a host with several physical paths to storage devices, the VMkernel log file records a large number of storage-related messages similar to the following:

    Nov 3 23:10:19 vmkernel: 0:00:00:44.917 cpu2:4347)Vob: ImplContextAddList:1222: Vob add (@&!*@*@(vob.scsi.scsipath.add)Add path: %s) failed: VOB context overflow
    The system might log similar messages during storage rescans. The messages are expected behavior and do not indicate any failure. You can safely ignore them.

    Workaround: Turn off logging if you do not want to see the messages.
  • Persistent reservation conflicts on shared LUNs might cause ESXi hosts to take longer to boot *
    You might experience significant delays while starting hosts that share LUNs on a SAN. This might be because of conflicts between the LUN SCSI reservations.

    Workaround: To resolve this issue and speed up the boot process, change the timeout for synchronous commands during boot time by setting the Scsi.CRTimeoutDuringBoot parameter to 1.

    To modify the parameter from the vSphere Client:
    1. In the vSphere Client inventory panel, select the host, click the Configuration tab, and click Advanced Settings under Software.
    2. Select SCSI.
    3. Change the Scsi.CRTimeoutDuringBoot parameter to 1.

    Also, see KB 1016106 at http://kb.vmware.com/kb/1016106.
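    The same parameter can also be set from the host console or vSphere CLI; a minimal sketch, with the advanced option path inferred from the parameter name above:
    esxcfg-advcfg -s 1 /Scsi/CRTimeoutDuringBoot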

Supported Hardware

  • ESXi might fail to boot when allowInterleavedNUMANodes boot option is FALSE
    On an IBM eX5 host with a MAX 5 extension, ESXi fails to boot and displays a SysAbort message. This issue might occur when the allowInterleavedNUMANodes boot option is not set to TRUE. The default value for this option is FALSE.

    Workaround: Set the allowInterleavedNUMANodes boot option to TRUE. See KB 1021454 at http://kb.vmware.com/kb/1021454 for more information about how to configure the boot option for ESXi hosts.
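    A minimal sketch of the procedure, assuming the one-time boot-option prompt (Shift+O) and the esxcfg-advcfg -k syntax for VMkernel boot options apply to your build:
    During boot, press Shift+O and append: allowInterleavedNUMANodes=TRUE
    To make the setting persistent from the console: esxcfg-advcfg -k TRUE allowInterleavedNUMANodes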
  • PCI device mapping errors on HP ProLiant DL370 G6 *
    When you run I/O operations on the HP ProLiant DL370 G6 server, you might encounter a purple screen or see alerts about Lint1 Interrupt or NMI. The HP ProLiant DL370 G6 server has two Intel I/O hubs (IOHs) and a BIOS defect in the ACPI Direct Memory Access remapping (DMAR) structure definitions, which causes some PCI devices to be described under the wrong DMA remapping unit. Any DMA access by such incorrectly described PCI devices triggers an IOMMU fault, and the device receives an I/O error. Depending on the device, this I/O error might result either in a Lint1 Interrupt or NMI alert message, or in a system failure with a purple screen.


    Workaround: Update the BIOS to 2010.05.21 or a later version.
  • ESXi installations on HP systems require the HP NMI driver *
    ESXi 4.1 instances on HP systems require the HP NMI driver to ensure proper handling of non-maskable interrupts (NMIs). The NMI driver ensures that NMIs are properly detected and logged. Without this driver, NMIs, which signal hardware faults, are ignored on HP systems running ESXi.
    Caution: Failure to install this driver might result in silent data corruption.

    Workaround: Download and install the NMI driver. The driver is available as an offline bundle from the HP Web site. Also, see KB 1021609 at http://kb.vmware.com/kb/1021609.
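    Because the driver ships as an offline bundle, it can be applied in the same way as other offline bundles, for example with vihostupdate from the vSphere CLI; the bundle path below is a placeholder for the file downloaded from HP:
    vihostupdate.pl --server <ESXi host IP> --username root -i -b <path to HP NMI driver offline bundle ZIP>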
  • Virtual machines might become read-only when run on an iSCSI datastore deployed on EqualLogic storage *
    Virtual machines might become read-only if you use an EqualLogic array with an earlier firmware version. The firmware might occasionally drop I/O from the array queue, causing virtual machines to become read-only after marking the I/O as failed.


    Workaround: Upgrade EqualLogic Array Firmware to version 4.1.4 or later.
  • After you upgrade a storage array, the status for hardware acceleration in the vSphere Client changes to supported after a short delay *
    When you upgrade a storage array's firmware to a version that supports VAAI functionality, vSphere 4.1 does not immediately register the change. The vSphere Client temporarily displays Unknown as the status for hardware acceleration.


    Workaround: This delay is harmless. The hardware acceleration status changes to supported after a short period of time.
  • Slow performance during virtual machine power-on or disk I/O on ESXi on the HP G6 Platform with P410i or P410 Smart Array Controller *
    Some hosts might show slow performance during virtual machine power-on or while generating disk I/O. The major symptom is degraded I/O performance, causing large numbers of error messages similar to the following to be logged to /var/log/messages:

    Mar 25 17:39:25 vmkernel: 0:00:08:47.438 cpu1:4097)scsi_cmd_alloc returned NULL
    Mar 25 17:39:25 vmkernel: 0:00:08:47.438 cpu1:4097)scsi_cmd_alloc returned NULL
    Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)NMP: nmp_CompleteCommandForPath: Command 0x28 (0x410005060600) to NMP device
    "naa.600508b1001030304643453441300100" failed on physical path "vmhba0:C0:T0:L1" H:0x1 D:0x0 P:0x0 Possible sense data: 0x
    Mar 25 17:39:26 0 0x0 0x0.
    Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)WARNING: NMP: nmp_DeviceRetryCommand: Device
    "naa.600508b1001030304643453441300100": awaiting fast path state update for failoverwith I/O blocked. No prior reservation
    exists on the device.
    Mar 25 17:39:26 vmkernel: 0:00:08:47.632 cpu1:4097)NMP: nmp_CompleteCommandForPath: Command 0x28 (0x410005060700) to NMP device
    "naa.600508b1001030304643453441300100" failed on physical path "vmhba0:C0:T0:L1" H:0x1 D:0x0 P:0x0 Possible sense data: 0x
    Mar 25 17:39:26 0 0x0 0x0


    Workaround: Install the HP 256MB P-series Cache Upgrade module from http://h30094.www3.hp.com/product.asp?mfg_partno=462968-B21&pagemode=ca&jumpid=in_r3924/kc.

Upgrade and Installation

  • New: Host upgrade to ESX/ESXi 4.1 Update 1 fails if you upgrade by using Update Manager 4.1 * (KB 1035436)

  • Installation of the vSphere Client might fail with an error *
    When you install vSphere Client, the installer might attempt to upgrade an out-of-date Microsoft Visual J# runtime. The upgrade is unsuccessful and the vSphere Client installation fails with the error: The Microsoft Visual J# 2.0 Second Edition installer returned error code 4113.

    Workaround: Uninstall all earlier versions of Microsoft Visual J#, and then install the vSphere Client. The installer includes an updated Microsoft Visual J# package.
  • Simultaneous access to two ESXi installations on USB flash drives causes the system to display panic messages
    If you boot a system that has access to multiple installations of ESXi with the same build number on two different USB flash drives, the system displays panic messages.

    Workaround: Detach one of the USB flash drives and reboot the system.

vMotion and Storage vMotion

  • vMotion is disabled after a reboot of ESXi 4.1 host
    If you enable vMotion on an ESXi host and reboot the ESXi host, vMotion is no longer enabled after the reboot process is completed.


    Workaround: To resolve the issue, reinstall the latest version of the ESXi image provided by your system vendor.

  • Hot-plug operations fail after the swap file is relocated *
    Hot-plug operations fail for powered-on virtual machines in a DRS cluster or on a standalone host, and result in the error failed to resume destination; VM not found after the swap file location is changed.

    Workaround: Perform one of the following tasks:
    • Reboot the affected virtual machines to register the new swap file location with them, and then perform the hot-plug operations.
    • Migrate the affected virtual machines using vMotion.
    • Suspend the affected virtual machines.

VMware Tools

  • VMware Tools does not perform auto upgrade when a Microsoft Windows 2000 virtual machine is restarted
    When you configure VMware Tools for auto upgrade during a power cycle by selecting the Check and upgrade Tools before each power-on option under the Advanced pane of the Virtual Machine Properties window, VMware Tools does not perform the auto upgrade in Microsoft Windows 2000 guest operating systems.


    Workaround:
    Manually upgrade VMware Tools in the Microsoft Windows 2000 guest operating system.

 
